Subj : Re: A question about atomic_ptr
To   : comp.programming.threads
From : Joe Seigh
Date : Wed Apr 13 2005 12:32 pm

On Wed, 13 Apr 2005 15:41:01 +0300, Peter Dimov wrote:
> Joe Seigh wrote:
....
> I don't believe that C++ code can benefit from a traditional deferred
> counting scheme (which avoids counting the stack and the registers and
> only counts heap-allocated references.)
>
> Your local_ptr can possibly benefit from deferring the counting somewhat,
> but I'm not sure whether it will be worth doing.
>
> Weighted reference counting is another interesting algorithm (on paper).
> I think that it is also not worth doing. The idea there is to keep a
> local count (weight) in each pointer. The pointer starts with 65536 (say)
> and on each copy the weight is shared between the source and the copy.
> When a pointer is destroyed its weight is subtracted from the reference
> count (its initial value is also 65536).

To make pointers copyable you have to divide up the weight somehow. If you
divide by two, after as few as 16 copies the resulting pointer becomes
uncopyable, since halving 65536 sixteen times leaves a weight of 1. There's
another weighted reference counting scheme that uses infinite precision
fractions for the weight. It keeps extending the precision at the expense
of space boundedness. (A sketch of the basic halving scheme follows below.)

atomic_ptr/local_ptr is a modified weighted reference counting scheme.
local_ptr looks a lot like pure weighted reference counting.

>> Actually, I have time, just not time to do pointless things. I mean,
>> just how many more GC schemes do *I* need?
>
> From a C++ viewpoint, I don't believe that you need anything else beyond
> (atomic|shared)_ptr-style counting and hazard pointers. But of course I
> may be wrong. ;-)

And RCU of course. :) And it depends on what trade-offs you want to make.
I have other unimplemented schemes I can use if I get into a situation
where I need a different set of trade-offs.

--
Joe Seigh
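
For the record, here is a minimal, single-threaded C++ sketch of the plain
weighted scheme Peter describes above, using the halving-on-copy rule. The
names weighted_ptr and control_block are made up for illustration; this is
not atomic_ptr/local_ptr. A real implementation would also need atomic
operations on the shared count and some way of replenishing an exhausted
weight, which is exactly the problem the modified and fraction-based
schemes address.

    #include <cstdio>

    // Control block shared by all pointers to one object. Only destruction
    // touches it, so copying a pointer never contends on the shared count.
    template<class T>
    struct control_block {
        T*   object;
        long total_weight;   // starts at 65536; object dies when this hits 0
    };

    template<class T>
    class weighted_ptr {
        control_block<T>* cb_;
        long weight_;        // this pointer's local share of the total weight
    public:
        explicit weighted_ptr(T* p)
            : cb_(new control_block<T>{p, 65536}), weight_(65536) {}

        // Copying splits the source's weight in half -- no shared access.
        // Assignment is omitted to keep the sketch short.
        weighted_ptr(weighted_ptr& other)
            : cb_(other.cb_), weight_(other.weight_ / 2) {
            other.weight_ -= weight_;
        }

        // A weight of 1 can't be split any further.
        bool copyable() const { return weight_ > 1; }

        ~weighted_ptr() {
            cb_->total_weight -= weight_;   // the only shared-count update
            if (cb_->total_weight == 0) {
                delete cb_->object;
                delete cb_;
            }
        }

        T& operator*() const { return *cb_->object; }
    };

    // Keep copying from the newest copy until the weight runs out.
    void chain(weighted_ptr<int>& p, int copies)
    {
        if (!p.copyable()) {
            std::printf("weight exhausted after %d copies\n", copies);
            return;
        }
        weighted_ptr<int> q(p);   // q takes half of p's remaining weight
        chain(q, copies + 1);
    }

    int main()
    {
        weighted_ptr<int> p(new int(42));
        chain(p, 0);              // prints: weight exhausted after 16 copies
        return 0;
    }

The chain() demo shows the point made above: starting from 65536 and
halving on every copy, the sixteenth copy is left holding a weight of 1
and can no longer be copied without touching the shared count.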