Subj : Re: Memory Barriers, Compiler Optimizations, etc.
To   : comp.programming.threads
From : SenderX
Date : Thu Feb 03 2005 04:58 pm

> My concern wrt volatile was that treatments of memory issues refer to
> "program order" as if it's the same as "source code order," but with
> compilers moving stuff around prior to code generation, "source code
> order" may be quite different from "program order." At least in C++,
> if I want to ensure that the relative order of these reads is
> preserved, declaring x and y volatile will do it. Compilers can still
> move the reads around wrt reads and writes of non-volatile data, but
> to remain compliant with the C++ standard, x must be read before y in
> the generated code, i.e., in program order.

I use volatile for source code documentation only. That's about how
useful it really is wrt this kind of stuff. ;(...

> However, if compilers recognize and respect the semantics of membars,
> the need for volatile goes away, because I can just stick a membar
> between the reads (which I need anyway), and the problem is solved.

...."if compilers recognize and respect the semantics of membars"...
         ^^^^^^^^^^^^^^^^^^^^^

It would be nice to have a compiler that could advertise: "We handle
calls to any memory barrier or critical function in a safe and
effective manner."

Something simple and magical like this would be sort of a start:

/* full fence barrier */
extern void my_mb_fence( void );

/* any other functions that are critical... */
extern void my_mutex_lock( void );
extern void my_mutex_unlock( void );
[etc...]

Now we use some magical #pragma's to inform the compiler of our own
barriers and critical functions:

/* Indicate to the compiler that my_mb_fence is actually a memory
   barrier. Now the compiler would have some critical information. */
#pragma memory_barrier( "my_mb_fence" );

/* Indicate to the compiler that my_mutex_lock is actually the lock
   portion of a custom mutex. */
#pragma mutex_lock_function( "my_mutex_lock" );

/* Indicate to the compiler that my_mutex_unlock is actually the unlock
   portion of a custom mutex. */
#pragma mutex_unlock_function( "my_mutex_unlock" );

What do you think about this "simple" strategy???

As for volatile, it should probably be dropped in favor of something
like this:

__attribute__( (shared_variable) ) int shared_var;

Humm... Compiler writers REALLY need to get in on this!

> Incidentally, I understand how compiler intrinsics like Microsoft's
> _ReadWriteBarrier are recognized by compilers, but from what I've read
> in this group, there seems to be the assumption that calling an
> externally defined function containing assembler will prevent code
> motion across calls to the function, because compilers must
> pessimistically assume that calls to the function affect all memory
> locations. With increasingly aggressive cross-module inlining
> technology available, this seems like a bet that gets worse and worse
> with time.

Yup. It's basically all we have for now. ;(... (A rough sketch of the
usual workaround is at the bottom of this post.)

My AppCore library relies on externally assembled functions to "attempt
to reduce" the number of chances a rogue compiler would have to reorder
a "critical sequence" of loads, stores, and function calls.

After somebody reads its documentation, and follows the links contained
in it to this thread (and others), nobody will want to use the damn
thing!!!! :O

sh$T#@$ lol.
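
P.S. -- a minimal sketch of what we're stuck with today, in the absence
of any such pragmas. The macro and function names here are made up, the
inline asm is GCC-specific, and the mfence instruction is x86-only; an
externally assembled function doing the same thing is just the (shakier)
portable cousin of this trick:

/* compiler-only barrier: the "memory" clobber tells GCC it may not
   move or cache memory accesses across this statement; no code is
   emitted for it. */
#define MY_COMPILER_BARRIER() __asm__ __volatile__ ( "" : : : "memory" )

/* full fence: compiler barrier plus an actual hardware fence (x86). */
#define MY_MB_FENCE() __asm__ __volatile__ ( "mfence" : : : "memory" )

static int x, y;

int read_x_then_y( void )
{
  int a = x;

  /* keep both the compiler AND the cpu from reordering the two loads */
  MY_MB_FENCE();

  int b = y;
  return a + b;
}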