Thursday, June 29, 2017

Passing std::vector of float values to a function changes values a bit

I have a very simple little function that is supposed to verify the values in the passed vector:

#include <cmath>
#include <iomanip>
#include <iostream>
#include <utility>
#include <vector>

bool Verify(std::vector<std::pair<float, float>> const & source) {
    // The lookup table must be changing linearly on input
    float delta_x{};
    for (unsigned i = 1; i < source.size(); i++) {
        auto input_delta = std::abs(source[i].first - source[i-1].first);
        if (i == 1) {
            delta_x = input_delta;
        } else {
            std::cout << "LUT"
                      << "Expected step: '" << std::setprecision(10) << delta_x << "'."
                      << " measured step[" << i << "]: '"
                      << std::setprecision(10) << input_delta << "'"
                      << (input_delta - delta_x)
                      << ":" << source[i].first << "-" << source[i-1].first << std::endl;
            if (delta_x != input_delta) {
                return false;
            }
        }
    }

    return true;
}
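
As an aside (not part of the original code): the check delta_x != input_delta requires the two float deltas to be bit-for-bit identical. If the steps are allowed to differ by a rounding error, a tolerance-based comparison is the usual alternative; a minimal sketch:

#include <algorithm>
#include <cmath>
#include <limits>

// Sketch: treat two float deltas as equal when their difference is within a
// small relative tolerance (the factor 4 is an arbitrary choice here).
bool NearlyEqual(float a, float b) {
    const float tol = 4.0f * std::numeric_limits<float>::epsilon();
    return std::abs(a - b) <= tol * std::max(std::abs(a), std::abs(b));
}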

It seems fairly straightforward; however, it fails when I pass a simple vector like this:

std::vector<std::pair<float, float>> kDecodeLut{
    {  -1.326, 101.3974 },
    {   6.174,  96.0049 },
    {  13.674,  91.5644 },
    {  21.174,  87.5549 },
    {  28.674,  83.7873 },
    {  36.174,  80.1683 },
    {  43.674,  76.6441 },
    {  51.174,  73.1802 },
    {  58.674,  69.7524 }};
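
For reference, the call itself just passes the vector by const reference, so no copy of the elements is made at the call; a minimal sketch of the call site (the actual call appears in the edit below):

// Sketch of the call site: kDecodeLut binds to `source` by const reference,
// so Verify sees exactly the float values stored in the vector.
bool ok = Verify(kDecodeLut);
std::cout << std::boolalpha << ok << std::endl;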

The values that the Verify method sees are not exactly the same as in the vector. Here is the printout:

LUTExpected step: '7.5'. measured step[2]: '7.5'0:13.67399979-6.173999786
LUTExpected step: '7.5'. measured step[3]: '7.5'0:21.17399979-13.67399979
LUTExpected step: '7.5'. measured step[4]: '7.5'0:28.67399979-21.17399979
LUTExpected step: '7.5'. measured step[5]: '7.5'0:36.17399979-28.67399979
LUTExpected step: '7.5'. measured step[6]: '7.5'0:43.67399979-36.17399979
LUTExpected step: '7.5'. measured step[7]: '7.5'0:51.17399979-43.67399979
LUTExpected step: '7.5'. measured step[8]: '7.5'0:58.67399979-51.17399979

It appears as if some number conversion happened between the original kDecodeLut vector and the values Verify sees internally.

Was there really some sort of conversion? I didn't intend for the kDecodeLut vector to be copied or modified in any form. What else can explain this behavior?
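
One thing worth checking independently of Verify (a sketch added here for illustration, not from the original post): a float carries only about 7 significant decimal digits, so a literal such as 6.174 cannot be stored exactly, and printing it at 10 significant digits shows the same trailing digits as in the printout above:

#include <iomanip>
#include <iostream>

int main() {
    float  f = 6.174f;   // nearest float to 6.174
    double d = 6.174;    // nearest double to 6.174
    std::cout << std::setprecision(10) << f << '\n'        // prints 6.173999786
              << std::setprecision(10) << d << std::endl;  // prints 6.174
}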


Edit: I've added code to print the values just before calling Verify, and they come out as expected.

for (unsigned i = 1; i < kDecodeLut.size(); i++) {
    std::cout << "LUT"
              << ":" << kDecodeLut[i].first << "-" << kDecodeLut[i-1].first << std::endl;
}
Verify(kDecodeLut);

Here is the output:

LUT:6.174--1.326
LUT:13.674-6.174
LUT:21.174-13.674
LUT:28.674-21.174
LUT:36.174-28.674
LUT:43.674-36.174
LUT:51.174-43.674
LUT:58.674-51.174
LUTExpected step: '7.5'. measured step[2]: '7.5'0:13.67399979-6.173999786
LUTExpected step: '7.5'. measured step[3]: '7.5'0:21.17399979-13.67399979
LUTExpected step: '7.5'. measured step[4]: '7.5'0:28.67399979-21.17399979
LUTExpected step: '7.5'. measured step[5]: '7.5'0:36.17399979-28.67399979
LUTExpected step: '7.5'. measured step[6]: '7.5'0:43.67399979-36.17399979
LUTExpected step: '7.5'. measured step[7]: '7.5'0:51.17399979-43.67399979
LUTExpected step: '7.5'. measured step[8]: '7.5'0:58.67399979-51.17399979
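
A note on why the two printouts above disagree (an observation added for clarity): std::cout defaults to 6 significant digits, which rounds 6.173999786 back up to 6.174, so the plain print loop hides exactly what Verify's std::setprecision(10) makes visible. A minimal sketch:

#include <iomanip>
#include <iostream>

int main() {
    float f = 6.174f;
    std::cout << f << '\n'                                 // default precision (6): 6.174
              << std::setprecision(10) << f << std::endl;  // precision 10: 6.173999786
}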
