Monday, September 2, 2019

Linux time measurement problem! std::chrono, QueryPerformanceCounter, clock_gettime

I use clock_gettime() on Linux and QueryPerformanceCounter() on Windows to measure time. While measuring, I ran into an interesting case.

First, I calculate DeltaTime in an infinite while loop. This loop calls some update functions. Because those update functions are still empty, the program busy-waits for 40 milliseconds inside an Update function so there is something to measure.

Then I measure DeltaTime in the program compiled as Win64-Debug. It is approximately 0.040f, and it stays that way as long as the program runs (Win64-Release behaves the same way). It runs correctly.

But in the program compiled as Linux64-Debug or Linux64-Release, there is a problem.

When the program starts running, everything is normal: DeltaTime is approximately 0.040f. But after a while, DeltaTime comes out as 0.12XXf or 0.132XX, and immediately afterwards it is back to 0.040f. And so on.

I thought I was using QueryPerformanceCounter correctly and clock_gettime() incorrectly. Then I decided to try the standard library's std::chrono::high_resolution_clock instead, but it behaves the same. No change.

#include <chrono>
#include <cstdint>
#include <cstdio>

#define MICROSECONDS (1000*1000)

auto prev_time = std::chrono::high_resolution_clock::now();
decltype(prev_time) current_time;

while (1)
{
    current_time = std::chrono::high_resolution_clock::now();

    // Elapsed time since the previous iteration, in microseconds.
    int64_t deltaTime = std::chrono::duration_cast<std::chrono::microseconds>(current_time - prev_time).count();

    printf("DeltaTime: %f\n", deltaTime / (float)MICROSECONDS);

    NetworkManager::instance().Update();

    prev_time = current_time;
}


void NetworkManager::Update()
{
    auto start = std::chrono::high_resolution_clock::now();
    decltype(start) end;

    // Busy-wait until 40 ms (0.040 s) have elapsed.
    while (1)
    {
        end = std::chrono::high_resolution_clock::now();

        int64_t y = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();

        if (y / (float)MICROSECONDS >= 0.040f)
            break;
    }

    return;
}
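One thing I may try instead of the busy-wait above: sleeping for the interval rather than spinning on the clock. This is only a sketch of an alternative, not a confirmed fix. A spinning loop burns a full CPU core and can get preempted by the scheduler mid-spin, which could plausibly produce occasional oversized deltas; sleep_for yields the CPU, at the cost of wake-up precision limited by the OS timer:

```cpp
#include <chrono>
#include <thread>

// Hypothetical replacement for the spin loop in NetworkManager::Update():
// block for 40 ms instead of polling the clock in a tight loop.
void WaitUpdate()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(40));
}
```

sleep_for guarantees at least the requested duration, so the measured DeltaTime would be 0.040f or slightly more, never less.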
