Subj : Re: STL Containers w/ Virtually Zero-Overhead Lock-Free Reads...
To   : (Usenet)
From : David Abrahams
Date : Thu Sep 22 2005 11:38 am

Joe Seigh writes:

> David Abrahams wrote:
>> Maciej Sobczak writes:
>>
>>> Chris Thomasson wrote:
>>>
>>>> Perhaps I should create a demo virtually
>>>> zero-overhead container class...
>>>
>>> Please do, it would be a perfect feasibility study.
>>>
>>> But be careful to implement the existing interface exactly and
>>> completely, especially the guarantees about iterator lifetime and
>>> validity - this will be needed to actually apply some algorithms to it.
>>
>> I could be missing something important, but it seems to me that if you
>> want to "actually apply some algorithms to it" then a lock-free
>> container is of little use, since you get synchronization at the wrong
>> level of granularity. You'd need algorithm-level synchronization,
>> rather than container-level synchronization, in that case. In other
>> words, the SGI argument against putting mutexes in the containers
>> applies just as well to the question of whether lock-free container
>> operations are a good idea.
>
> It's not like putting synchronization wrappers around an existing
> STL container. It involves different implementations

I know that.

> and possibly subtle or not so subtle semantic differences,
> particularly in the area of iterators and handling of "temporary"
> raw references.

Well, yeah. But either that's not what I'm talking about or it's a
very low-level way to say the same thing.

Lock-free seems to make sense for things like memory allocators,
reference counts, and queues because they represent "virtually uniform
and unordered" resources -- if somebody gets in while you're trying to
do your work, it's no big deal; you can just start over. It doesn't
matter whether you get this memory block or the next one; it doesn't
matter if some other thread gets on the queue ahead of you, as long as
you get on reasonably soon.

But not everything is like that. We could make vector::push_back
lock-free at a complexity cost (O(N) instead of amortized O(1)), and
then vectors could *almost* be used as queues. But you still don't
have the right interface to remove elements from such a queue, because
you need to get the element value and remove it in one operation.
Even if we solve that, vectors are general-purpose containers; it's
not possible to make sensible semantic guarantees for arbitrary
combinations of operations.

Suppose we had lock-free vectors. Now one thread calls
sort(v.begin(), v.end()) while another calls v.push_back(5). What
does either thread see? It seems to me that even if it's possible to
come up with coherent semantics for this case (which in itself is
debatable), it's impossible to come up with _useful_ semantics.

If containers are to be shared across threads, their state will almost
always be just a small piece of some larger invariant, in which case
you'll need to synchronize or "lock-free-ize" at a different level.
And once you start handing out forward/bidirectional/random-access
iterators, references, and pointers into a data structure, the
implementor of the data structure loses _all_ control over the kind of
atomicity you need when removing elements from queues.

So, this doesn't look like a subtle problem to me, not at all.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this!    ]
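
A note on the atomicity point in the post: the sketch below is not code
from the thread; the names shared_queue and try_pop are invented for
illustration, and a mutex stands in for whatever lock-free mechanism
would combine the same two steps with a CAS. It shows why reading the
front element and removing it must happen as one operation when the
container is shared.

    // Illustrative sketch only.  With separate front()/pop_front() calls,
    // another thread can pop between the two, so two consumers may see
    // the same value or pop from an empty container.  Combining the read
    // and the removal into one call closes that window.
    #include <deque>
    #include <mutex>
    #include <optional>

    template <class T>
    class shared_queue {                 // hypothetical name
        std::deque<T> q_;
        std::mutex    m_;
    public:
        void push(T v) {
            std::lock_guard<std::mutex> lk(m_);
            q_.push_back(std::move(v));
        }

        // Read the value and remove it under a single critical section.
        std::optional<T> try_pop() {
            std::lock_guard<std::mutex> lk(m_);
            if (q_.empty()) return std::nullopt;
            T v = std::move(q_.front());
            q_.pop_front();
            return v;
        }
    };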
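
The sort/push_back question can also be written out as code. This is a
hedged illustration rather than anything from the post: with a plain
std::vector the concurrent access below is simply a data race (undefined
behaviour), and the argument is that even a hypothetical lock-free
vector could not give the outcome a useful meaning.

    // Illustrative sketch only: two threads racing on one vector.
    #include <algorithm>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<int> v{3, 1, 4, 1, 5, 9, 2, 6};

        std::thread sorter([&v] { std::sort(v.begin(), v.end()); });
        std::thread pusher([&v] { v.push_back(5); });  // may reallocate,
                                                       // invalidating the
                                                       // sorter's iterators

        sorter.join();
        pusher.join();

        // Is the 5 sorted into place, appended after a sorted prefix, or
        // lost?  No container-level guarantee makes the answer useful to
        // either thread.
        return 0;
    }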