Can I base a mission-critical application on the result of this test, namely that 100 threads reading a pointer set a billion times by the main thread never see a tear?
I've made pvTearTest volatile only to ensure that the main thread's write loop actually performs the writes.
Are there any other potential problems with doing this, besides tearing?
Does volatile make accesses to this variable slower, for instance main-memory slow? And assuming the variable really is being written, do I even need to make it volatile?
Here's a stand-alone demo that compiles with g++ -g tear.cxx -o tear -pthread.
#include <atomic>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>
using namespace std;

// The pointer under test: written by the main thread, read by the workers.
volatile void* pvTearTest;
atomic<int> iTears( 0 );

void TearTest( void ) {
    while (1) {
        void* pv = (void*) pvTearTest;
        intptr_t i = (intptr_t) pv;

        // The writer always stores a value whose upper and lower 32 bits
        // match, so any mismatch here is a torn read.
        if ( ( i >> 32 ) != ( i & 0xFFFFFFFF ) ) {
            printf( "tear: pv = %p\n", pv );
            iTears++;
        }
        if ( ( i >> 32 ) == 999999999 )
            break;
    }
}

int main( int argc, char** argv ) {
    printf( "\n\nTEAR TEST: are normal pointer read/writes atomic?\n" );
    vector<thread> athr;

    // Create lots of threads and have them do the test simultaneously.
    for ( int i = 0; i < 100; i++ )
        athr.emplace_back( TearTest );

    // Write a billion values whose two 32-bit halves are identical.
    for ( int i = 0; i < 1000000000; i++ )
        pvTearTest = (volatile void*) (intptr_t)
            ( ( i % (1L<<32) ) * 0x100000001 );

    for ( auto& thr: athr )
        thr.join();

    if ( iTears )
        printf( "%d tears\n", iTears.load() );
    else
        printf( "\n\nTEAR TEST: SUCCESS, no tears\n" );
}
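For comparison, and as a sanity check on the "atomic<> is slow" worry, here's roughly what the same test looks like with std::atomic<void*> and memory_order_relaxed. The standard guarantees those loads and stores can't tear, and on x86-64 I'd expect them to compile to the same plain MOV instructions as the volatile version. This is only a sketch I haven't benchmarked:

#include <atomic>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

// Same test, but with an atomic pointer and relaxed ordering: the standard
// guarantees these loads and stores are indivisible (no tearing).
std::atomic<void*> pvAtomicTest{ nullptr };

void AtomicTearTest( void ) {
    while (1) {
        intptr_t i = (intptr_t) pvAtomicTest.load( std::memory_order_relaxed );
        if ( ( i >> 32 ) != ( i & 0xFFFFFFFF ) )
            printf( "tear: %p\n", (void*) i );
        if ( ( i >> 32 ) == 999999999 )
            break;
    }
}

int main() {
    std::vector<std::thread> athr;
    for ( int i = 0; i < 100; i++ )
        athr.emplace_back( AtomicTearTest );
    for ( intptr_t i = 0; i < 1000000000; i++ )
        pvAtomicTest.store( (void*) (intptr_t) ( ( i % (1L<<32) ) * 0x100000001 ),
                            std::memory_order_relaxed );
    for ( auto& thr : athr )
        thr.join();
}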
The actual application is a malloc()'ed and sometimes realloc()'d array (the size is a power of two; realloc doubles the storage) that many child threads will absolutely be hammering, in a mission-critical but also highly performance-critical way.
From time to time a thread will need to add a new entry to the array, and will do so by setting the next array entry to point to something, then incrementing an atomic<int> iCount. Finally it will add data to some data structures that would cause other threads to attempt to dereference that cell.
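For reference, here's roughly the publish step I mean, spelled out with an explicit release store and acquire load. This is only a sketch: the array name g_apvEntries is illustrative, not my real code, and it assumes only one thread adds entries at a time.

#include <atomic>

void** g_apvEntries;               // illustrative: the shared entry array
std::atomic<int> iCount( 0 );      // number of valid entries

// Writer (one at a time): fill the slot first, then publish it by bumping
// the count with a release store, so any thread that sees the new count
// via an acquire load also sees the slot's contents.
void AddEntry( void* pv ) {
    int i = iCount.load( std::memory_order_relaxed );
    g_apvEntries[ i ] = pv;                          // non-atomic write to the slot
    iCount.store( i + 1, std::memory_order_release );
}

// Reader: acquire the count; entries [0, n) are then safe to dereference.
void* GetEntry( int iWanted ) {
    int n = iCount.load( std::memory_order_acquire );
    return ( iWanted < n ) ? g_apvEntries[ iWanted ] : nullptr;
}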
It all seems fine (except I'm not positive the increment of the count is assured of happening before the following non-atomic updates)... except for one thing: realloc() will typically change the address of the array, and furthermore frees the old one, a pointer to which is still visible to other threads.
OK, so instead of realloc() I malloc() a new array, manually copy the contents, and set the pointer to the new array. I would free the old array, but I realize other threads may still be accessing it: they read the array base; I free the base; a third thread allocates it and writes something else there; the first thread then adds the indexed offset to the base and expects a valid pointer. I'm happy to leak those old arrays, though. (Given the doubling growth, all the old arrays combined are about the same size as the current array, so the overhead is simply an extra 16 bytes per item, and it's memory that is soon never referenced again.)
So, here's the crux of the question: once I allocate the bigger array, can I write its base address with a non-atomic write, in utter safety? Or, despite my billion-access test, do I actually have to make it an atomic<> and thus slow every worker thread that reads it?
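If the answer turns out to be that the base pointer must be an atomic<>, here's roughly the grow-and-publish step I have in mind. Again just a sketch: names like g_ppvBase and g_iCapacity are illustrative, growth is assumed to be serialized to one thread at a time, and g_ppvBase is assumed to have been initialized at startup to a malloc()'ed array of g_iCapacity entries.

#include <atomic>
#include <cstdlib>
#include <cstring>

std::atomic<void**> g_ppvBase{ nullptr };   // illustrative: base of the entry array
static int          g_iCapacity = 1024;     // current capacity, touched only by the grower

// Grower: double the array, copy, publish the new base, leak the old one.
void Grow( void ) {
    void** ppvOld = g_ppvBase.load( std::memory_order_relaxed );
    void** ppvNew = (void**) malloc( sizeof(void*) * g_iCapacity * 2 );
    memcpy( ppvNew, ppvOld, sizeof(void*) * g_iCapacity );
    g_iCapacity *= 2;

    // Release store: any thread that sees the new base also sees the copied
    // contents.  The old array is deliberately leaked, since other threads
    // may still be holding its address.
    g_ppvBase.store( ppvNew, std::memory_order_release );
}

// Reader: one acquire load of the base, then plain indexed reads.
void* GetEntry( int i ) {
    void** ppv = g_ppvBase.load( std::memory_order_acquire );
    return ppv[ i ];
}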
(As this is surely environment dependent, we're talking 2012-or-later Intel, g++ 4 to 9, and Red Hat of 2012 or later.)