I have a large number of objects that can each be serialized into an std::array<double, 5>; call the collection stuff. I would like to merge all of these objects into one large std::vector<double>, called data, to be used for message passing when synchronizing discrete local/remote nodes.
I thought that using move semantics to move the data from stuff into data would give me a huge performance boost over copying, but in my tests it actually performs considerably slower in debug mode and slightly slower in release mode. Is there a standard way of doing this for maximum performance? The following are the implementations I used (a bulk-copy idea is sketched after them as variant 5):
std::vector<std::array<double, 5> > stuff(2000);
std::vector<double> data;
data.reserve(10000);
1)
for (auto & b : stuff) {
    data.insert(data.end(), std::make_move_iterator(b.begin()),
                std::make_move_iterator(b.end()));
}
2)
for (auto & b : stuff) {
    for (auto & item : b) {
        data.emplace_back(std::move(item));
    }
}
3)
for (auto & b : stuff) {
    std::move(b.begin(), b.end(), std::back_inserter(data));
}
4)
for (auto & b : stuff) {
    for (const auto & item : b) {
        data.emplace_back(item);
    }
}
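Since double is trivially copyable, I also wonder whether a single bulk copy over the whole block is the "standard way" here. A minimal sketch of that idea, assuming std::array<double, 5> carries no padding so that stuff is one contiguous run of doubles (the static_assert checks that assumption):

5)

#include <cstring>  // std::memcpy

// Assumption: no padding, so the vector of arrays is one contiguous
// block of stuff.size() * 5 doubles.
static_assert(sizeof(std::array<double, 5>) == 5 * sizeof(double),
              "padding would break the bulk copy");

data.resize(stuff.size() * 5);
std::memcpy(data.data(), stuff.data(), stuff.size() * sizeof(stuff[0]));

This hands the compiler the whole 10000-double block in one call instead of 2000 five-element chunks.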
P.S.: I am compiling with g++ and the flags -O3 -march=native -mavx.