Is there a way to explicitly set/limit the degree of parallelism (= the number of separate threads) used by std::async
and related classes?
Perusing the thread support library hasn’t turned up anything promising.
As far as I could figure out, std::async implementations (usually?) use a thread pool internally. Is there a standardised API to control this?
For background: I’m in a setting (a shared cluster) where I have to manually limit the number of cores used. If I fail to do this, the load-sharing scheduler throws a fit and I’m penalised. In particular, std::thread::hardware_concurrency() holds no useful information, since the number of physical cores is irrelevant for the constraints I’m under.
Here’s a relevant piece of code (which, in C++17 with the Parallelism TS, would probably be written using a parallel std::transform):
#include &lt;algorithm&gt;
#include &lt;future&gt;
#include &lt;string&gt;
#include &lt;vector&gt;

auto read_data(std::string const&) -> std::string;

auto multi_read_data(std::vector<std::string> const& filenames, int ncores = 2) -> std::vector<std::string> {
    auto futures = std::vector<std::future<std::string>>{};

    // Haha, I wish.
    std::thread_pool::set_max_parallelism(ncores);

    for (auto const& filename : filenames) {
        futures.push_back(std::async(std::launch::async, read_data, filename));
    }

    auto ret = std::vector<std::string>(filenames.size());
    std::transform(futures.begin(), futures.end(), ret.begin(),
                   [](std::future<std::string>& f) { return f.get(); });
    return ret;
}
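
For completeness, the closest manual workaround I can think of in plain C++11 is to launch the tasks in batches of ncores and wait for each batch to finish before starting the next. This is just a sketch of the idea I’d rather not have to write by hand (the _batched function name is mine, not a standard facility), and it obviously wastes cores whenever file sizes are uneven:

#include &lt;algorithm&gt;
#include &lt;cstddef&gt;
#include &lt;future&gt;
#include &lt;string&gt;
#include &lt;vector&gt;

auto read_data(std::string const&) -> std::string;

// Workaround sketch: run at most ncores std::async tasks at a time and wait
// for the whole batch before launching the next one.
auto multi_read_data_batched(std::vector<std::string> const& filenames, int ncores = 2) -> std::vector<std::string> {
    auto ret = std::vector<std::string>{};
    ret.reserve(filenames.size());

    for (std::size_t i = 0; i < filenames.size(); i += ncores) {
        auto const end = std::min(filenames.size(), i + static_cast<std::size_t>(ncores));
        auto futures = std::vector<std::future<std::string>>{};
        for (std::size_t j = i; j < end; ++j) {
            futures.push_back(std::async(std::launch::async, read_data, filenames[j]));
        }
        for (auto& f : futures) {
            ret.push_back(f.get()); // blocks until this batch member is done
        }
    }
    return ret;
}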
From a design point of view I’d have expected the std::execution::parallel_policy class (from the Parallelism TS) to allow specifying that (in fact, this is how I did it in the framework I designed for my master’s thesis), but this doesn’t seem to be the case.
Ideally I’d like a solution for C++11 but if there’s one for later versions I would still like to know about it (though I can’t use it).
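
Failing a standard knob, the fallback I’m considering is rolling a minimal fixed-size pool by hand: exactly ncores worker threads drain a shared queue of packaged tasks. Again, this is only a sketch (all names are mine; nothing here is a standard facility beyond what C++11 already provides):

#include &lt;future&gt;
#include &lt;mutex&gt;
#include &lt;queue&gt;
#include &lt;string&gt;
#include &lt;thread&gt;
#include &lt;vector&gt;

auto read_data(std::string const&) -> std::string;

// Hand-rolled pool sketch: exactly ncores worker threads pull packaged tasks
// off a shared queue, so at most ncores files are read concurrently.
auto multi_read_data_pooled(std::vector<std::string> const& filenames, int ncores = 2) -> std::vector<std::string> {
    std::queue<std::packaged_task<std::string()>> tasks;
    auto futures = std::vector<std::future<std::string>>{};

    for (auto const& filename : filenames) {
        std::packaged_task<std::string()> task{[filename] { return read_data(filename); }};
        futures.push_back(task.get_future());
        tasks.push(std::move(task));
    }

    std::mutex queue_mutex;
    auto workers = std::vector<std::thread>{};
    for (int i = 0; i < ncores; ++i) {
        workers.emplace_back([&] {
            for (;;) {
                std::packaged_task<std::string()> task;
                {
                    std::lock_guard<std::mutex> lock{queue_mutex};
                    if (tasks.empty()) return;
                    task = std::move(tasks.front());
                    tasks.pop();
                }
                task(); // runs read_data; the result lands in the matching future
            }
        });
    }
    for (auto& worker : workers) worker.join();

    auto ret = std::vector<std::string>{};
    ret.reserve(futures.size());
    for (auto& f : futures) {
        ret.push_back(f.get()); // rethrows here if read_data threw
    }
    return ret;
}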