Originally posted by: oligny
On a multiple-processor system, can one CPU (running thread A) update memberA to 10, and then the other CPU (running thread B) update memberB to 60, without worrying about overwriting memberA with an old/outdated value? I would be doing this without mutex protection.
Cache coherency exists to avoid exactly this kind of issue; your example would run fine without any risk of overwriting.
What will happen is:
1) Thread A reads the cache line containing the structure: to do so it "snoops" every other cache in the system to find out whether anyone else holds a copy; if none does, the value is read from memory;
2) Thread B reads the same cache line: it "snoops" every other cache; in this example the processor running thread A will respond, the cache line is copied from A's cache to B's, and in both it is marked as "shared";
3) Thread A writes to memberA: since the line is marked as "shared", before writing it sends a message telling every other cache in the system to invalidate its copy of that cache line, so the only valid copy is now in thread A's cache;
4) Thread B writes to memberB: since it no longer has the cache line, it snoops the others, gets the updated version from A, and then sends its own invalidate message; now thread B holds the only valid copy.
So yes, your example would run correctly, but it would also suffer a performance penalty from "false sharing" of the cache line, i.e., the two threads snooping and invalidating each other's caches even though they are not actually sharing any variable.
A better approach is to place variables that are not shared, but are accessed concurrently by different threads, on different cache lines.