
oligny
Journeyman III

Multiprocessor cache-line scope coherency

Can spill-over corruption occur if two processors write to nearby memory without mutex protection?

I learned last month that when the CPU reads from memory, it reads a whole cache line of 64 bytes (the size can differ depending on the architecture).

Does it do the same when writing?

If so, do I have to take care not to write to nearby variables without mutex protection? Example:

typedef struct object_S
{
    int memberA; // writable only by thread A (running on CPU 0)
    int memberB; // writable only by thread B (running on CPU 1)
    int memberC;
    int memberD;
} object_T;


ThreadA:

object->memberA = 10;

ThreadB:

object->memberB = 60;

On a multiprocessor system, can one CPU (running thread A) update memberA to 10, and then the other CPU (running thread B) update memberB to 60, without worrying about overwriting memberA with an old/outdated value? I would be doing this without mutex protection.
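
For reference, here is a minimal self-contained version of the scenario using POSIX threads. The thread functions, the global object pointer, and the final printout are illustrative additions, not part of the original fragments:

#include <pthread.h>
#include <stdio.h>

typedef struct object_S
{
    int memberA; /* writable only by thread A (running on CPU 0) */
    int memberB; /* writable only by thread B (running on CPU 1) */
    int memberC;
    int memberD;
} object_T;

static object_T storage;
static object_T *object = &storage;

static void *threadA_fn(void *arg)
{
    (void)arg;
    object->memberA = 10; /* unprotected write, no mutex */
    return NULL;
}

static void *threadB_fn(void *arg)
{
    (void)arg;
    object->memberB = 60; /* unprotected write, no mutex */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, threadA_fn, NULL);
    pthread_create(&b, NULL, threadB_fn, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    /* The question: is memberA guaranteed to still be 10 here? */
    printf("memberA = %d, memberB = %d\n", object->memberA, object->memberB);
    return 0;
}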

eduardoschardong
Journeyman III

Originally posted by: oligny
On a multiprocessor system, can one CPU (running thread A) update memberA to 10, and then the other CPU (running thread B) update memberB to 60, without worrying about overwriting memberA with an old/outdated value? I would be doing this without mutex protection.



Cache coherency exists to avoid this kind of issue; your example would run fine, with no risk of overwriting.

What will happen is:
1) Thread A reads the cache line containing the structure: to do so, its CPU "snoops" every other cache in the system to see whether anyone else has a copy; if no one does, the value is read from memory.
2) Thread B reads the cache line of the structure: its CPU "snoops" every other cache; in this example the processor running Thread A will respond, the cache line will be copied from A's cache to B's, and in both caches it will be marked as "shared".
3) Thread A writes to memberA: since the line is marked "shared", before writing it sends a message to every cache in the system to invalidate their copies of that cache line, so the only valid copy is in Thread A's cache.
4) Thread B writes to memberB: since it no longer has the cache line, it snoops the others and gets the updated version from A (with memberA already set to 10), then sends the invalidate message; now Thread B's cache holds the only valid copy.
yeyang
Journeyman III

Yup, your example would run correctly, but it would also suffer a performance penalty from "false sharing" of the cache line, i.e., the two threads keep snooping and invalidating each other's cached copies even though they do not actually share any variable.

A better way is to put non-shared variables that are accessed concurrently by different threads into different cache lines, as in the sketch below.
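
A minimal sketch of one way to do that, assuming C11 and the 64-byte line size quoted in the question (explicit padding bytes achieve the same thing on older compilers):

#include <stdalign.h>

#define CACHE_LINE 64 /* line size quoted in the question; adjust per CPU */

typedef struct object_S
{
    alignas(CACHE_LINE) int memberA; /* thread A's own cache line */
    alignas(CACHE_LINE) int memberB; /* thread B's own cache line */
    int memberC;                     /* these two land on memberB's line;    */
    int memberD;                     /* pad them too if another thread writes them */
} object_T;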