This question relates to this previous question.
I've implemented the code Richard Hodges published there, and it works for me when I compile with g++ (Debian 4.8.4-1) 4.8.4.
However, the implementation is part of a CUDA library, and I am stuck with CUDA 6.5, which only unofficially supports C++11 features.
When I use the code Richard posted:
template <class F>
void submit(F&& f)
{
    std::unique_lock<std::mutex> lock(_cvm);
    ++_tasks;
    lock.unlock();
    _io_service.post(
        [this, f = std::forward<F>(f)]
        {
            f();
            reduce();
        });
}
I get an error referring to the lambda line: error: expected a "]". This makes me think that the header is not being parsed properly.
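My suspicion is the f = std::forward<F>(f) capture: if I understand correctly, that is a C++14 generalized lambda capture, which nvcc 6.5's front end may simply not recognise in C++11 mode. A C++11-only variant that I would expect to parse (a sketch on my part, untested under nvcc 6.5) would capture by value instead:

template <class F>
void submit(F f)
{
    std::unique_lock<std::mutex> lock(_cvm);
    ++_tasks;
    lock.unlock();
    // Capture the callable by value; C++11 lambdas cannot move-capture.
    // Assumes the callable is copyable.
    _io_service.post([this, f]() mutable
    {
        f();
        reduce();
    });
}

With that in mind, I also tried without the template, just passing a reference to my worker class, and without the forwarding: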
void submit(trainer & job)
{
    std::unique_lock<std::mutex> lock(_cvm);
    ++_tasks;
    lock.unlock();
    _io_service.post([this,&]
    {
        job();
        reduce();
    });
}
That gave me: error: an enclosing-function local variable cannot be referenced in a lambda body unless it is in the capture list.
So I explicitly added both this and job:
void submit(trainer & job)
{
    std::unique_lock<std::mutex> lock(_cvm);
    ++_tasks;
    lock.unlock();
    _io_service.post([this,&job]
    {
        job();
        reduce();
    });
}
At that point I got stuck at the following error, which points at the in-class initialiser of _work:
error: could not convert ‘{{((cuANN::trainer_pool*)this)->cuANN::trainer_pool::_io_service}}’ from ‘<brace-enclosed initializer list>’ to ‘boost::asio::io_service::work’
    boost::asio::io_service::work _work { _io_service };
For reference, cuANN::trainer_pool corresponds to worker_pool in Richard's example (it is my thread pool implementation), and _io_service is simply a member of trainer_pool:
class trainer_pool
{
public:
    trainer_pool(unsigned int max_threads);
    void start();
    void wait();
    void stop();
    void thread_proc();
    void reduce();
    void submit(trainer & job);
private:
    unsigned int _max_threads_;
    boost::asio::io_service _io_service;
    boost::asio::io_service::work _work { _io_service };
    std::vector<std::thread> _threads;
    std::condition_variable _cv;
    std::mutex _cvm;
    size_t _tasks = 0;
};
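In case the in-class brace initialiser of _work is what g++ 4.8 / nvcc 6.5 trips over, one variant I am considering (a sketch, not yet tested) moves that initialisation into the constructor's initialiser list:

class trainer_pool
{
public:
    // Initialise _work in the constructor instead of with an in-class brace initialiser.
    trainer_pool(unsigned int max_threads)
    : _max_threads_(max_threads), _work(_io_service)
    {}
    // ... rest of the interface unchanged ...
private:
    unsigned int _max_threads_;
    boost::asio::io_service _io_service;  // declared before _work, so it is constructed first
    boost::asio::io_service::work _work;
    // ... remaining members unchanged ...
};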
- What am I doing wrong?
- In case the problem is with CUDA 6.5 and lambdas, how can I post the work using std::bind instead?
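For clarity, what I have in mind for the bind route is roughly the following (a sketch, untested with nvcc 6.5; run_and_reduce is a hypothetical helper of mine, not part of Richard's code):

// needs <functional> for std::bind / std::ref
void run_and_reduce(trainer & job)
{
    // Hypothetical helper: run the job, then do the bookkeeping the lambda did.
    job();
    reduce();
}

void submit(trainer & job)
{
    std::unique_lock<std::mutex> lock(_cvm);
    ++_tasks;
    lock.unlock();
    // Bind the member function and keep a reference to the job (std::ref avoids copying it).
    _io_service.post(std::bind(&trainer_pool::run_and_reduce, this, std::ref(job)));
}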