I have been, for a while, successfully using the following scheme to put a thread to sleep for an amount of time given by intervalMs:
alarm = tsAdd( alarm, intervalMs);
clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &alarm, NULL);
Where alarm is a timespec holding a monotonic time that was initialized at some point in the past, and intervalMs is another timespec holding the interval.
Recently, I began running some of my software in a virtual machine. While I haven't yet tested whether or not the software is affected by the virtual machine, I am finding that clock_nanosleep often appears not to wake up in time. This becomes obvious when I take the current time at the point where the interval is added to the alarm:
void wait()
{
    alarm = tsAdd( alarm, intervalMs );

    timespec now;
    clock_gettime( CLOCK_MONOTONIC, &now );

    // tsCompare = timespec comparison function
    if ( tsCompare( now, alarm ) > 0 )
    {
        // tsShow displays formatted timespec
        printf( "TimeNow=%s AlarmTime=%s\n", tsShow( now ), tsShow( alarm ) );
    }

    clock_nanosleep( CLOCK_MONOTONIC, TIMER_ABSTIME, &alarm, NULL );
}
I sporadically find that the print statement fires, i.e. the alarm time has already passed before clock_nanosleep is even called.
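For reference, tsAdd and tsCompare are simple timespec helpers; a minimal sketch of what they might look like (assuming, as described above, that the interval is itself a timespec):
// Sketch only: add two timespecs, carrying nanosecond overflow into seconds
timespec tsAdd( timespec a, timespec b )
{
    timespec r;
    r.tv_sec  = a.tv_sec + b.tv_sec;
    r.tv_nsec = a.tv_nsec + b.tv_nsec;
    if ( r.tv_nsec >= 1000000000L )
    {
        r.tv_sec  += 1;
        r.tv_nsec -= 1000000000L;
    }
    return r;
}

// Sketch only: return <0, 0 or >0 as a is earlier than, equal to or later than b
int tsCompare( const timespec& a, const timespec& b )
{
    if ( a.tv_sec != b.tv_sec )
        return ( a.tv_sec < b.tv_sec ) ? -1 : 1;
    if ( a.tv_nsec != b.tv_nsec )
        return ( a.tv_nsec < b.tv_nsec ) ? -1 : 1;
    return 0;
}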
This function is used in a thread that is required to operate on a precise interval. The usage looks like this:
void run()
{
    Interval interval;
    interval.setIntervalMs( 20 );

    for (;;)
    {
        //do some stuff here
        interval.wait(); //< Where this calls the function as shown above
    }
}
I thoroughly examined all of the functions in the for loop and found that no function exceeded the intervalMs time.
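A sketch of the kind of measurement this involves, timing one pass of the loop body with CLOCK_MONOTONIC (illustrative only, with hypothetical variable names; the 20 ms threshold matches the intervalMs set in run() above):
// Illustrative only: wraps the work inside the for loop, before interval.wait()
timespec bodyStart, bodyEnd;
clock_gettime( CLOCK_MONOTONIC, &bodyStart );
//do some stuff here
clock_gettime( CLOCK_MONOTONIC, &bodyEnd );

// elapsed time of the loop body in milliseconds
long elapsedMs = ( bodyEnd.tv_sec  - bodyStart.tv_sec  ) * 1000L
               + ( bodyEnd.tv_nsec - bodyStart.tv_nsec ) / 1000000L;
if ( elapsedMs > 20 )   // 20 ms = intervalMs
    printf( "loop body took %ld ms\n", elapsedMs );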
I have since switched to using the following:
void wait()
{
    // alarm is a std::chrono::time_point<std::chrono::steady_clock>
    // interval is a std::chrono::milliseconds
    alarm += interval;
    std::this_thread::sleep_until( alarm );
}
I have found this to be far more accurate: there are far fewer occasions where it reports that it has overstepped its anticipated wake-up time.
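For context, a self-contained sketch of the chrono-based approach; only the body of wait() comes from the code above, the surrounding Interval class is filled in here for illustration:
#include <chrono>
#include <thread>

class Interval
{
public:
    void setIntervalMs( int ms )
    {
        interval = std::chrono::milliseconds( ms );
        alarm = std::chrono::steady_clock::now();
    }

    void wait()
    {
        alarm += interval;
        std::this_thread::sleep_until( alarm );
    }

private:
    std::chrono::time_point<std::chrono::steady_clock> alarm;
    std::chrono::milliseconds interval{ 0 };
};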
While I still have to maintain the old code for backwards-compatibility reasons, can anyone explain what the real difference is between these two methods? If they are both supposed to draw from the same monotonic clock, why do I see different results?