I have a project that makes heavy, high-frequency use of a small set of key linear algebra operations such as matrix multiplication, matrix inversion, and addition. These operations are provided by a handful of linear algebra libraries that I would like to benchmark against each other, without having to recompile the business-logic code to accommodate each library's idiosyncrasies.
I'm interested in figuring out the smartest way of writing a wrapper class as an abstraction over all of these libraries, in order to standardize these operations against the rest of my code. My current approach relies on the Curiously Recurring Template Pattern (CRTP) and on the fact that GCC (with C++11) can devirtualize and inline virtual function calls under the right circumstances.
This is the wrapper interface that will be available to the business logic:
#include <cstdint>

template <class T>
class ITensor {
public:
    virtual ~ITensor() {}
    virtual void initZeros(uint32_t dim1, uint32_t dim2) = 0;
    virtual void initOnes(uint32_t dim1, uint32_t dim2) = 0;
    virtual void initRand(uint32_t dim1, uint32_t dim2) = 0;
    virtual T mult(T& t) = 0;
    virtual T add(T& t) = 0;
};
And here is an implementation of that interface using, e.g., Armadillo:
#include <armadillo>

template <typename precision>
class Tensor : public ITensor<Tensor<precision> >
{
public:
    Tensor() {}
    Tensor(arma::Mat<precision> mat) : M(mat) {}
    ~Tensor() {}

    inline void initOnes(uint32_t dim1, uint32_t dim2) override final
    { M = arma::ones<arma::Mat<precision> >(dim1, dim2); }

    inline void initZeros(uint32_t dim1, uint32_t dim2) override final
    { M = arma::zeros<arma::Mat<precision> >(dim1, dim2); }

    inline void initRand(uint32_t dim1, uint32_t dim2) override final
    { M = arma::randu<arma::Mat<precision> >(dim1, dim2); }

    inline Tensor<precision> mult(Tensor<precision>& t1) override final
    {
        return Tensor<precision>(M * t1.M);
    }

    inline Tensor<precision> add(Tensor<precision>& t1) override final
    {
        return Tensor<precision>(M + t1.M);
    }

    arma::Mat<precision> M;
};
Questions:
- Does it make sense to use CRTP and inlining in this scenario?
- Can this design be improved to further optimize performance?