Tuesday, July 21, 2015

Different threads see different order of execution

This is the code from http://ift.tt/1krieqi

#include <thread>
#include <atomic>
#include <cassert>

std::atomic<bool> x = {false};
std::atomic<bool> y = {false};
std::atomic<int> z = {0};

void write_x()
{
    x.store(true, std::memory_order_seq_cst);
}

void write_y()
{
    y.store(true, std::memory_order_seq_cst);
}

void read_x_then_y()
{
    while (!x.load(std::memory_order_seq_cst))
        ;
    if (y.load(std::memory_order_seq_cst)) {
        ++z;
    }
}

void read_y_then_x()
{
    while (!y.load(std::memory_order_seq_cst))
        ;
    if (x.load(std::memory_order_seq_cst)) {
        ++z;
    }
}

int main()
{
    std::thread a(write_x);
    std::thread b(write_y);
    std::thread c(read_x_then_y);
    std::thread d(read_y_then_x);
    a.join(); b.join(); c.join(); d.join();
    assert(z.load() != 0);  // never fires: seq_cst imposes a single total order on the stores to x and y
}

Now, suppose the stores performed by threads a and b used a different memory order argument (such as std::memory_order_relaxed) while the loads in c and d stayed std::memory_order_seq_cst. How would it then be possible for threads c and d to observe the stores to x and y in different orders? (Or would every access have to be std::memory_order_relaxed for that to be allowed?)

What causes two different reader threads to observe the stores in different orders? Which architectures allow this? Is it because cache updates do not propagate to all cores in the same order?
