Subj : Re: A question about atomic_ptr
To   : comp.programming.threads
From : Chris Thomasson
Date : Fri Apr 15 2005 11:03 pm

>> Ok, Joe said that "It needs the virtual storage to always be valid so
>> the load locked won't segfault."
>>
>> Not sexy.
>> [...]

> BTW, it's the same restriction lock-free LIFO stack using DWCAS w/ version
> counts has.

So atomic_ptr for PPC-64 needs to have some sort of static/tracked storage
wrt atomic_ptr_ref's? You can get around all of that by allocating static or
"collected" arrays of atomic_ptr_ref's, and then using the index of an
atomic_ptr_ref as the "pointer" to it.

Here is a quick sketch of a 32-bit solution for a CAS-based atomic_ptr
using Win32 atomic ops:

#include <windows.h>
#include <limits.h>

#define BLOCK_DEPTH ( USHRT_MAX )
#define BLOCK_NULL BLOCK_DEPTH

template< typename T >
class atomic_ptr_ref_block
{
  // The anchor packs a 16-bit ABA counter and a 16-bit front index
  // into one 32-bit word so a single CAS can update both.
  union stack_anchor_t
  {
    struct
    {
      USHORT m_aba;
      USHORT m_front;
    } internal;

    LONG m_value;
  };

public:
  struct block_t
  {
    atomic_ptr_ref< T > m_ref;
    USHORT m_next;
    USHORT m_this_index;
  };

private:
  block_t m_blocks[BLOCK_DEPTH];
  stack_anchor_t m_stack;

public:
  typedef block_t *block_ptr_t;

public:
  atomic_ptr_ref_block()
  {
    m_stack.internal.m_front = BLOCK_NULL;

    for ( unsigned short i = 0; i < BLOCK_DEPTH; ++i )
    {
      m_blocks[i].m_this_index = i;
      m_blocks[i].m_next = m_stack.internal.m_front;
      m_stack.internal.m_front = i;
    }
  }

public:
  block_ptr_t pop()
  {
    register block_ptr_t block;
    register stack_anchor_t old, cmp, xchg;

    old.m_value = m_stack.m_value;
    // read_barrier_depends(); needs a dependent load here

    do
    {
      if ( old.internal.m_front == BLOCK_NULL ) { return 0; }

      block = &m_blocks[old.internal.m_front];

      xchg.internal.m_aba = old.internal.m_aba + 1;
      xchg.internal.m_front = block->m_next;

      cmp.m_value = old.m_value;

      old.m_value = InterlockedCompareExchangeAcquire
        ( &m_stack.m_value, xchg.m_value, cmp.m_value );

    } while ( cmp.m_value != old.m_value );

    return block;
  }

  void push( block_ptr_t block )
  {
    register stack_anchor_t old, cmp, xchg;

    xchg.internal.m_front = block->m_this_index;

    old.m_value = m_stack.m_value;

    do
    {
      xchg.internal.m_aba = old.internal.m_aba;
      block->m_next = old.internal.m_front;

      cmp.m_value = old.m_value;

      old.m_value = InterlockedCompareExchangeRelease
        ( &m_stack.m_value, xchg.m_value, cmp.m_value );

    } while ( cmp.m_value != old.m_value );
  }
};

> Not sexy but that doesn't stop anyone from using it.

The above scheme could easily be used as a portable lock-free method for
atomic_ptr to allocate its atomic_ptr_ref's from. It would work fine on any
32/64-bit system that has a "normal" CAS. A 64-bit anchor could hold a much
larger aba count, and that's pretty sexy... :)

You could even protect the "blocks" with shared_ptr's... ;)
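Since the scheme only needs a "normal" single-word CAS, the same index-as-pointer free list can be expressed portably. Below is a minimal sketch in C++11 std::atomic terms; the names (node_pool, node_t) and the pool depth are mine, not from atomic_ptr, and the per-node payload (the atomic_ptr_ref) is elided:

```cpp
// Portable sketch of the index-based lock-free free list: a fixed node
// array plus a LIFO free stack whose anchor packs a 16-bit ABA counter
// and a 16-bit node index into one 32-bit atomic word.
#include <atomic>
#include <cstdint>

class node_pool
{
public:
    static const std::uint16_t DEPTH = 256;    // pool size (assumption)
    static const std::uint16_t NIL   = 0xFFFF; // "null" index

    struct node_t
    {
        std::uint16_t m_next;        // index of next free node
        std::uint16_t m_this_index;  // this node's own index
        // ... payload (e.g. an atomic_ptr_ref) would live here ...
    };

private:
    node_t m_nodes[DEPTH];
    std::atomic<std::uint32_t> m_anchor; // high 16 bits: aba, low 16: front

    static std::uint32_t pack(std::uint16_t aba, std::uint16_t front)
    { return (std::uint32_t(aba) << 16) | front; }

    static std::uint16_t aba_of(std::uint32_t a)
    { return std::uint16_t(a >> 16); }

    static std::uint16_t front_of(std::uint32_t a)
    { return std::uint16_t(a & 0xFFFF); }

public:
    node_pool()
    {
        // Thread every node onto the free stack; node DEPTH-1 ends up
        // on top, so it is the first one pop() hands out.
        std::uint16_t front = NIL;
        for (std::uint16_t i = 0; i < DEPTH; ++i)
        {
            m_nodes[i].m_this_index = i;
            m_nodes[i].m_next = front;
            front = i;
        }
        m_anchor.store(pack(0, front), std::memory_order_relaxed);
    }

    node_t *pop()
    {
        std::uint32_t old = m_anchor.load(std::memory_order_acquire);
        for (;;)
        {
            if (front_of(old) == NIL) return 0; // pool exhausted

            node_t *n = &m_nodes[front_of(old)];

            // Bump the aba count so a concurrent pop/push of the same
            // node is detected and the CAS retried.
            std::uint32_t xchg = pack(aba_of(old) + 1, n->m_next);

            // On failure, compare_exchange_weak reloads 'old' for us.
            if (m_anchor.compare_exchange_weak(old, xchg,
                    std::memory_order_acquire))
                return n;
        }
    }

    void push(node_t *n)
    {
        std::uint32_t old = m_anchor.load(std::memory_order_relaxed);
        for (;;)
        {
            n->m_next = front_of(old);
            std::uint32_t xchg = pack(aba_of(old), n->m_this_index);
            if (m_anchor.compare_exchange_weak(old, xchg,
                    std::memory_order_release))
                return;
        }
    }
};
```

Handing out indices instead of raw pointers is what keeps the loads safe on architectures like PPC-64: every index dereference lands inside the statically allocated array, so the storage is always valid.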