I have the following function template that takes a GameName (a std::string) and a begin/end pair of iterators over a collection of GameTime (size_t) values. It iterates over the range, adds the GameTime values together, and returns a tuple (GameStats) of the game name, the total game time and the average game time:
// requires <algorithm>, <numeric>, <string>, <tuple>, <iostream>
template<typename InputIt>
GameStats calculateGameStats(const GameName& _gameName, const InputIt _begin, const InputIt _end)
{
    // Debug output: list the elements we are about to iterate over
    std::string logMessage = "Started process for game " + _gameName + ":[ ";
    std::for_each(_begin, _end, [&logMessage](GameTime e){ logMessage += std::to_string(e) + ','; });
    std::clog << logMessage + " ]\n";

    // Sum the range, counting the elements as a side effect
    size_t itemCount = 0;
    GameTime gameTime =
        std::accumulate(_begin, _end, GameTime{0},
                        [&itemCount](const GameTime _lhs, const GameTime _rhs)
                        {
                            ++itemCount;
                            return _lhs + _rhs;
                        });

    // Debug output: list the same elements again after accumulating
    logMessage = "Ended process for game " + _gameName + ":[ ";
    std::for_each(_begin, _end, [&logMessage](GameTime e){ logMessage += std::to_string(e) + ','; });
    std::clog << logMessage + " ]\n";

    return std::make_tuple(_gameName, gameTime, gameTime / itemCount);
}
(For debugging purposes I also list the elements we iterate over, once at the beginning and once at the end. They should obviously be identical each time; however, see below.)
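For reference, the aliases used above are declared roughly as follows (GameStats holds the name, the total time and the average time):

// Type aliases, roughly as described above
using GameName  = std::string;
using GameTime  = size_t;
using GameStats = std::tuple<GameName, GameTime, GameTime>; // name, total time, average time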
A similar version of this function taking a reference to a std::vector worked fine; however, this new one seems to be messing up its iterators when multiple instances of it are run in parallel.
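For comparison, that earlier version looked roughly like this (simplified from memory; parameter names may not match exactly):

// Earlier, working version (roughly): takes the whole vector instead of an iterator pair
GameStats calculateGameStats(const GameName& _gameName, const std::vector<GameTime>& _entries)
{
    GameTime gameTime = std::accumulate(_entries.cbegin(), _entries.cend(), GameTime{0});
    return std::make_tuple(_gameName, gameTime, gameTime / _entries.size());
}

Here's the code that starts a process for each game: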
// Start processing each game's stats in parallel
std::vector< std::future<GameStats> > processVector;
processVector.reserve(gameCount);
for(const std::pair<GameName, std::vector<GameTime> >& entryListPair : gameEntries)
{
    const std::string& gameName = entryListPair.first;
    const std::vector<GameTime>& entryList = entryListPair.second;

    processVector.push_back(std::async(std::launch::async,
                                       &calculateGameStats<decltype(entryList.cbegin())>,
                                       gameName,
                                       entryList.cbegin(),
                                       entryList.cend()));
    assert((processVector.cend()-1)->valid());
}
(GameEntries is a std::map type mapping GameName to a std::vector of GameTime.)
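In other words, roughly:

// How the container above is declared (it is filled in elsewhere)
using GameEntries = std::map<GameName, std::vector<GameTime> >;
GameEntries gameEntries;
const size_t gameCount = gameEntries.size();   // used for the reserve() above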
Here is the relevant part of the output from running the program:
Started process for game CoD:[ 182,1264, ]
Ended process for game CoD:[ 606,1667, ]
Started process for game DotA:[ 606,1667, ]
Ended process for game DotA:[ 606,1667, ]
Started process for game GTAV:[ 606, ]
Ended process for game GTAV:[ 606, ]
Started process for game HotS:[ 606, ]
Ended process for game HotS:[ 606, ]
Started process for game LoL:[ 1277,193, ]
Ended process for game LoL:[ 1277,193, ]
Started process for game MC:[ 857,193, ]
Ended process for game MC:[ 857,193, ]
Started process for game OW:[ 0, ]
Note: 7 games in map, created 7 processes.
Ended process for game OW:[ 140377361861512, ]
Writing entry: CoD 2273 1136
Writing entry: DotA 2273 1136
Writing entry: GTAV 606 606
Writing entry: HotS 606 606
Writing entry: LoL 1470 735
Writing entry: MC 1050 525
Writing entry: OW 650759048 650759048
After processing: CoD:[ 1354,1442,]
After processing: DotA:[ 2137,1264,]
After processing: GTAV:[ 182,]
After processing: HotS:[ 2551,]
After processing: LoL:[ 606,1667,]
After processing: MC:[ 1277,193,]
After processing: OW:[ 857,]
Done!
Running the program more than once yields different results, ranging from correct results for some games to completely erroneous numbers everywhere. I also list all GameTime entries for each game after the program is done, to make sure they haven't been modified in case that's the problem, but the vectors all come out of it unscathed.
However, as seen from the output, iterating from the (supposedly constant and unmodified) beginning to the end within the same function yields different results each time. This only happens when the tasks are run in parallel. If they are run sequentially, by calling wait() on each future before launching the next one (sketched below), the program runs correctly, so my guess is that each thread is somehow invalidating the others' iterators, even though they are input iterators into different vectors and were all passed by value.
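The sequential variant is just the same launch loop with a wait() added after each push_back (simplified):

// Sequential variant: block on each task before launching the next one
for(const std::pair<GameName, std::vector<GameTime> >& entryListPair : gameEntries)
{
    const std::string& gameName = entryListPair.first;
    const std::vector<GameTime>& entryList = entryListPair.second;

    processVector.push_back(std::async(std::launch::async,
                                       &calculateGameStats<decltype(entryList.cbegin())>,
                                       gameName,
                                       entryList.cbegin(),
                                       entryList.cend()));
    processVector.back().wait();   // force this task to finish before continuing
}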
I would like to know what causes this interference and how I could get them to work in parallel.