Scenario:
I am developing a new module to generate protocol files in a special format; it will be integrated into a bigger piece of software. The main software has several parts, some in C# .NET and some in C++ with MFC (including multi-threading), and the API I am using includes a .c file as well.
My test application is a C++ console program with MFC and ATL support in VS2017 Professional.
Although the main software is multi-threaded, I do not want to get involved with threads in my test app.

I would like the fastest (from a performance point of view) and most accurate / smallest-resolution timestamp I can use to calculate deltas / execution times.
The possibility of needing the deltas in another part of the program has not gone away, so I would like to store the value in a variable just in case. This is why I am doing the measurements in code instead of using external performance tools.

What I have tried:

I have tried several approaches:

1)
C++
time_t GetActualTime()
{
  _tzset();          // sync time-zone settings from the environment

  time_t myTime;
  time(&myTime);     // whole seconds since the Unix epoch

  return myTime;
}
I suppose this is the fastest option (in execution time), but it returns seconds. For other parts of my program that is more than enough, but not for what I want to check right now.

2)
With FILETIME I should be able to get down to 100-nanosecond units, but the problem is the "workaround" I am using to get the timestamp. With this code:
C++
ULARGE_INTEGER GetTimeFile100xNanoSec(int iNr)
{
  FILETIME timeCreation = { 0 };  // value in 100-nanosecond units
  ULARGE_INTEGER uli100xNanoSec;

  CString strFileName;
  strFileName.Format(_T("D:\\Temp\\myDummyFile_%03d.txt"), iNr); // unique name to avoid the tunneling effect of reusing a name
  CStringW strFileNameW(strFileName);
  LPCWSTR lpFileName = strFileNameW;

  HANDLE hFile = CreateFileW(lpFileName, GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL);
  if (hFile != INVALID_HANDLE_VALUE)
  {
    GetFileTime(hFile, &timeCreation, NULL, NULL); // creation time of the temp file
    CloseHandle(hFile);
    DeleteFileW(lpFileName);
  }

  uli100xNanoSec.LowPart = timeCreation.dwLowDateTime;
  uli100xNanoSec.HighPart = timeCreation.dwHighDateTime;

  return uli100xNanoSec;
}
Using it in a loop like this:
C++
ULARGE_INTEGER lluOld;
ULARGE_INTEGER lluNew;

lluOld = GetTimeFile100xNanoSec(999);
for (int i=0; i<100; i++)
{
  lluNew = GetTimeFile100xNanoSec(i);
  wprintf(_T("\nIn Loop [%d], New - old = %llu"), i, lluNew.QuadPart - lluOld.QuadPart);
  lluOld.QuadPart = lluNew.QuadPart;
}

I am getting values from 10001 (≈1 ms) up to 60006 (≈6 ms); the average over the 100 tries is almost 2.5 ms. Creating and deleting the temp files is so expensive that the measured deltas land in the millisecond range, which makes the fine 100-ns resolution pointless.
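For reference, the same 100-nanosecond FILETIME units can be read straight from the system clock, with no temp file involved, via GetSystemTimePreciseAsFileTime (available on Windows 8 / Server 2012 and later). A minimal sketch:
C++
#include <windows.h>

// Reads the system time in the same 100-ns units as the file workaround,
// but directly from the clock (sub-microsecond resolution, no file I/O).
ULARGE_INTEGER GetTimePrecise100xNanoSec()
{
  FILETIME ft;
  GetSystemTimePreciseAsFileTime(&ft);

  ULARGE_INTEGER uli100xNanoSec;
  uli100xNanoSec.LowPart = ft.dwLowDateTime;
  uli100xNanoSec.HighPart = ft.dwHighDateTime;
  return uli100xNanoSec;
}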


3)
With SYSTEMTIME I can only go down to milliseconds. I have not checked its performance yet, but I will do so later; if I can get a stable 1 ms step, I suppose I will use this, since #2 is not stable enough to be reliable.
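For reference, the SYSTEMTIME route would look roughly like this; note that the underlying clock typically ticks only every 10-16 ms, so the wMilliseconds field does not guarantee a stable 1 ms step:
C++
#include <windows.h>

// Current UTC time; SYSTEMTIME resolves no finer than milliseconds.
SYSTEMTIME GetActualSystemTime()
{
  SYSTEMTIME st;
  GetSystemTime(&st);  // clock granularity is typically 10-16 ms
  return st;
}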


Any suggestions for getting below the millisecond mark in a reliable and reusable way? By reusable I mean getting the value into a variable that can be evaluated elsewhere.
Solution 1

I use a little wrapper class around the QueryPerformanceCounter API function. Here is a stripped-down version of it:
C++
class CElapsed
{
public :
   CElapsed()   // constructor
   {
        // get the frequency of the performance counter and its period in seconds

        LARGE_INTEGER li = { 0 };
        m_Period = QueryPerformanceFrequency( &li ) ? 1.0 / (double)li.QuadPart : 0;
   }

   // get the current performance counter value, convert it
   // to seconds, and return the difference from begin in seconds

   double TimeSince( double begin=0 )
   {
       LARGE_INTEGER endtime;
       QueryPerformanceCounter( &endtime );
       return ( endtime.QuadPart * m_Period ) - begin;
   }

   // returns true if the counter is available

   bool IsAvailable()     { return m_Period != 0; }

   // return the counter frequency

   double GetFrequency()  { return 1.0 / m_Period; }

protected :
   double  m_Period;
};
Here is an example of how to use it:
C++
CElapsed et;
double start = et.TimeSince( 0 );

// code to time goes here

double elapsed = et.TimeSince( start );
_tprintf( _T( "elapsed time was %.3f seconds\n" ), elapsed );
I recommend reading up on QueryPerformanceCounter to see what its properties are. The counter frequency is machine-dependent but is nearly always in the megahertz range, so its resolution is in the microsecond range, though the call does have some overhead. I have never needed timing with a resolution under a millisecond, so this is adequate for my purposes.
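To see what a particular machine offers, the frequency can be queried directly; a quick sketch:
C++
#include <windows.h>
#include <tchar.h>

int _tmain()
{
   LARGE_INTEGER freq = { 0 };
   if (QueryPerformanceFrequency( &freq ))
      _tprintf( _T( "Counter frequency: %lld Hz (period %.1f ns)\n" ),
                freq.QuadPart, 1.0e9 / (double)freq.QuadPart );
   else
      _tprintf( _T( "High-resolution counter not available\n" ) );
   return 0;
}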

There are many variations of this kind of timer class. I like this implementation because it does NOT retain the starting value. This allows one timer object to be used for many things simultaneously. In fact, one timer object can be used for an entire application if you want to do that. Construction is also minimal so instances can be created quickly and easily with a minimum of overhead.
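For example, because each caller keeps its own start value, two overlapping measurements can share a single instance; a small usage sketch:
C++
CElapsed et;                       // one shared timer object

double startA = et.TimeSince();    // measurement A begins
// ... some work ...
double startB = et.TimeSince();    // measurement B begins while A is still running
// ... more work ...
double elapsedB = et.TimeSince( startB );
double elapsedA = et.TimeSince( startA );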
 
 
Comments
Nelek 1-Mar-19 16:00pm    
I didn't read about it because it is on the same page as SetTimer and WaitableTimerObject. I didn't see the use of those two, so I just ignored the rest (wrong... I know).

Looks nice. I will have a look and do some tests. Thank you
[no name] 2-Mar-19 6:05am    
Also: https://www.codeproject.com/Questions/480201/whatplusQueryPerformanceFrequencyplusfor-3f
Rick York 2-Mar-19 11:56am    
Yes, except he has the terminology backwards. QueryPerformanceFrequency gives the frequency of the counter. One over the frequency is the period of the counter in seconds. One thousand over the frequency is the period in milliseconds.
Nelek 4-Mar-19 7:06am    
Hi Rick,

I have tested your wrapper and I like it very much, but I find it a bit annoying that execution is still so variable :( I mean, I tested it like this:

CElapsed et;
double start = et.TimeSince(0.0);
double absolute = 0.0;
double delta = 0.0;
double old = 0.0;
wprintf(_T("\nPrior loop: start TimeSince (0.0) = %lf\n"), start);
for (int j = 0; j < 100; j++)
{
    absolute = et.TimeSince(start);
    delta = absolute - old;
    wprintf(_T("\nIn Elapsed Delta ABSOLUTE [%d] = %lf"), j, absolute);
    wprintf(_T("\nIn Elapsed Delta RELATIVE [%d] = %lf"), j, delta);
    wprintf(_T("\n----------"));
    old = absolute;
}

and it still varies from 0.000271 s (271 µs), the fastest tick in all my tests (10 executions of 100 iterations each), to 0.006823 s (6.8 ms), the slowest.
It is usually fast at the beginning of the loop (around 0.7 ms), gets slower after some iterations (between 1 and 4 ms), stays that way for a while, and gets fast again towards the end (between 0.5 and 0.8 ms).

I don't know if I am using it wrong, but I would have expected the deltas to be a bit more constant.
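A likely cause of the spread is that the wprintf calls sit inside the timed region, so console I/O dominates the deltas. A variant that records first and prints afterwards (a sketch, reusing the CElapsed class above) would isolate the timer itself:

CElapsed et;
double deltas[100] = { 0 };
double old = et.TimeSince();
for (int j = 0; j < 100; j++)
{
    double now = et.TimeSince();
    deltas[j] = now - old;   // no I/O inside the timed region
    old = now;
}
for (int j = 0; j < 100; j++)
    wprintf(_T("\nDelta [%d] = %lf"), j, deltas[j]);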
Nelek 4-Mar-19 8:25am    
I have reduced the loop to 10 iterations. After 20 executions, 95% of the values were below 0.5 ms (the slowest was 0.8 ms).

That should cover my needs.

Thank you a lot for the information
Solution 2

See here: GetTickCount function | Microsoft Docs and here: About Timers - Windows applications | Microsoft Docs

The resolution is not fixed: it varies by system, so what you get on two different PCs running the same software may differ.
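If millisecond granularity is enough, a delta with GetTickCount64 (which avoids GetTickCount's 49.7-day wraparound) is about as simple as it gets, though it shares the typical 10-16 ms system tick:
C++
#include <windows.h>
#include <tchar.h>

ULONGLONG t0 = GetTickCount64();   // milliseconds since system start

// ... code to time goes here ...

ULONGLONG elapsedMs = GetTickCount64() - t0;
_tprintf( _T( "elapsed time was %llu ms\n" ), elapsedMs );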
 
 
Comments
Nelek 1-Mar-19 16:13pm    
Not sure if the error was on my side, but I already did some tests with GetTickCount and I would say it was not returning "the number of milliseconds that have elapsed since the system was started". I'll have a look again.
About the timers... to be honest, I dismissed the page because the timer operations and WaitableTimerObject don't fit what I want to do. But seeing Solution #2, I should have read the whole page a bit more carefully.

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


