Subj : Re: STL Containers w/ Virtually Zero-Overhead Lock-Free Reads...
To   : (Usenet)
From : David Abrahams
Date : Sat Sep 24 2005 10:38 pm

"Chris Thomasson" <_no_spam_cristom@no_spam_comcast._net> writes:

>>> I could be missing something important, but it seems to me that if
>>> you want to "actually apply some algorithms to it" then a lock-free
>>> container is of little use,
>
> Not true.
>
>>> since you get synchronization at the wrong level of granularity.
>>> You'd need algorithm-level synchronization, rather than
>>> container-level synchronization, in that case.
>
> I believe you might be misunderstanding what I mean by virtually
> zero-overhead lock-free reads.

I don't think so.

> User-defined algorithms are not restricted from using their own
> form(s) of mutual exclusion to protect "critical-sequences" of
> operations that are required to be executed atomically:
>
>   Thread_A {
>     user_app::mutex::t_guard lock( m_lock );
>     sort( v.begin(), v.end() );
>   }
>
>   Thread_B {
>     user_app::mutex::t_guard lock( m_lock );
>     v.push_back(5);
>   }
>
> This is perfectly fine; a well-built generic lock-free read container
> doesn't need to rely on internally defined forms of mutual exclusion
> in order to provide "read-only access" that doesn't use any nasty
> atomic operations and/or StoreLoad-style memory barriers...

Of course.  But the effort you put into making your vector lock-free
is not needed to make the above work, so in effect it goes to waste.
And even lock-free primitives like CAS are significantly more
expensive than their "plain old" counterparts, so making the vector
lock-free actually imposes an efficiency cost for no benefit in this
case.  Anyway, "read-only access" usually works just fine even with a
completely thread-unaware container, so I'm not sure what you're
driving at here.

>> Suppose we had lock-free vectors.  Now one thread calls
>> sort(v.begin(), v.end()) while another calls v.push_back(5).
>
> These would be considered writes.

Yes.

>> What does either thread see?
>
> It depends on which thread executes first; remember, writes are
> mutually excluded.

The individual writes to the vector are mutually excluded (when you
have control over them -- which you don't, since the sort algorithm
gets random access iterators and can thus grab addresses and
references into the vector), but the call to sort consists of many
individual writes to the vector.  Even if you _could_ mutually
exclude each one (which you can't, for the reasons I just described),
there's no reason to think that push_back doesn't sneak in between
two of them.
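To make that concrete, here is a minimal sketch, assuming a
hypothetical SyncVector that mutually excludes every individual write
(and using the C++11 <thread>/<mutex> facilities for brevity, which
postdate this discussion).  Every single mutation is atomic, yet the
contents still need not end up sorted, because the other thread's
push_back can land before, after, or in between any two of the
sorting thread's writes:

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Hypothetical container: every individual mutation takes an
    // internal lock, i.e. "writes are mutually excluded" at
    // container granularity.
    class SyncVector {
    public:
        void push_back(int x) {
            std::lock_guard<std::mutex> g(m_);   // one write, one lock
            v_.push_back(x);
        }
        void swap_elems(std::size_t i, std::size_t j) {
            std::lock_guard<std::mutex> g(m_);   // one write, one lock
            std::swap(v_[i], v_[j]);
        }
        std::vector<int> snapshot() {
            std::lock_guard<std::mutex> g(m_);   // consistent copy
            return v_;
        }
    private:
        std::mutex m_;
        std::vector<int> v_;
    };

    int main() {
        SyncVector sv;
        for (int i = 10; i > 0; --i) sv.push_back(i);  // 10, 9, ..., 1

        // Thread A: a toy bubble "sort" of the first 10 elements,
        // expressed as many individually-locked writes.
        std::thread a([&sv] {
            for (int pass = 0; pass < 10; ++pass)
                for (std::size_t i = 0; i + 1 < 10; ++i) {
                    std::vector<int> s = sv.snapshot();
                    if (s[i] > s[i + 1]) sv.swap_elems(i, i + 1);
                }
        });

        // Thread B: one perfectly atomic push_back, which may land
        // between any two of A's writes.
        std::thread b([&sv] { sv.push_back(0); });

        a.join();
        b.join();

        // Each *write* was atomic, but the *algorithm* was not: the 0
        // that B appended sits wherever it landed (here, beyond the
        // range A ever touches), so the result need not be sorted.
        for (int x : sv.snapshot()) std::cout << x << ' ';
        std::cout << '\n';
    }

The only fix is the algorithm-level lock shown in the
Thread_A/Thread_B example above; no amount of container-level
cleverness substitutes for it.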
>> It seems to me that even if it's possible to come up with coherent
>> semantics for this case (which in itself is debatable), it's
>> impossible to come up with _useful_ semantics.  If containers are
>> to be shared across threads, their state will almost always be just
>> a small piece of some larger invariant, in which case you'll need
>> to synchronize or "lock-free-ize" at a different level.
>
> This is completely compatible with virtually zero-overhead lock-free
> reads.

Yes, but it renders wasteful any special measures taken or cycles
spent to do lock-free reads.

> Many existing application-defined algorithms that decided to use STL
> containers in multithreaded environments use mutexes to protect
> everything anyway; therefore they are already compatible with the
> requirements of the highly flexible lock-free algorithm I am using.
> Application-defined algorithms can literally pick and choose exactly
> which of their "read-only operations" can be executed without a
> lock.  For instance, a thread that makes frequent read-only
> const_iterations over a shared collection would be a great candidate
> to "lock-free-ize"...

I thought you were "lock-free-izing" the container.  If so, you have
done nothing to protect the entire process of iterating over it from
being disturbed by writes.  If not, maybe you should revise your
description of what you're proposing.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]