
Introduction
First, I suggest you run my small program. I don't know what your system is; on my computer, it prints a line about every 1500 milliseconds, and your result should be close to that as long as you run it on Windows NT, 2000, or XP. Then open your browser and visit http://finance.yahoo.com/ (the Yahoo! Finance page, not the Yahoo! home page). Suddenly, the small program runs much faster: on my computer, it prints a line every 200 ms, roughly 8 times faster. As soon as you leave the page, the program returns to its normal pace. Magic, isn't it? So how does Yahoo! speed up my program?
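The original source isn't reproduced here, but based on the description in the Analysis section below, the program likely looked something like this minimal sketch (timing via GetTickCount; press Ctrl+C to stop):

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        for (;;)   // loop forever so you can watch the timing change live
        {
            DWORD start = GetTickCount();
            for (int i = 0; i < 100; ++i)
                Sleep(1);    // asks for 1 ms, but may sleep much longer
            printf("100 x Sleep(1) took %lu ms\n", GetTickCount() - start);
        }
        return 0;
    }

Keep it running while you open and close the Yahoo! Finance page, and watch the printed interval change.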
Background
This is a real situation I faced several years ago, and it puzzled me for quite a while. Even today, when I recall it, I still find it interesting enough to be worth sharing. At the time, I was developing a server on top of an RPC framework. The QA team told us that our server hit its maximum capacity without using a very high percentage of the CPU. The network was not the issue, because the server ran on a high-speed local network. One day, another developer demonstrated to me that as soon as he opened the Yahoo! Finance page, our server suddenly reached a level of throughput it had never achieved before, and as soon as he left that page, the server slowed down again. "What the heck is going on? What should we do? Tell the customer to open this page while the server is running?" he yelled.
Analysis
After a few hours of deep analysis, the mystery was unveiled. The RPC framework calls Sleep(1) while receiving every packet, in order to give some free time to the higher-level application code, and that is what caused the issue. To explain it clearly, let's go back to my sample. It calls Sleep(1) 100 times, then prints a line, so each line should take about 100 ms, or a little more for overhead, right? But it takes about 1500 ms, roughly 15 times slower. Why? Sleep(1) usually does not sleep for just 1 ms, but for 10-15 ms, depending on your system. That interval is the period of the system clock interrupt, and it is also your minimum timer resolution. Some multimedia applications, however, do need a higher-resolution timer, as accurate as the hardware allows. Microsoft provides this functionality in the Multimedia SDK: you can call the API timeBeginPeriod(1) to lower the minimum timer resolution to 1 ms. However, this affects the whole system; once any process has done it, Sleep(1) really sleeps for about 1 ms in every application. The Yahoo! Finance page contains a Macromedia Flash control that happens to call timeBeginPeriod().
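To see the effect directly, without relying on the Flash control, here is a minimal sketch of my own (not from the original server code) that raises the timer resolution itself. timeBeginPeriod and timeEndPeriod are declared in mmsystem.h and live in winmm.lib:

    #include <windows.h>
    #include <mmsystem.h>
    #include <stdio.h>
    #pragma comment(lib, "winmm.lib")   // timeBeginPeriod/timeEndPeriod

    int main()
    {
        // Raise the system-wide timer resolution to 1 ms.
        // Note: this affects every process on the machine,
        // exactly as the Flash control did.
        timeBeginPeriod(1);

        DWORD start = GetTickCount();
        for (int i = 0; i < 100; ++i)
            Sleep(1);    // now each Sleep(1) really takes about 1 ms
        printf("100 x Sleep(1) took %lu ms\n", GetTickCount() - start);

        // Always pair timeBeginPeriod with a matching timeEndPeriod.
        timeEndPeriod(1);
        return 0;
    }

Run this and the loop finishes in roughly 100 ms, the same speedup the Yahoo! Finance page produced as a side effect.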
Conclusion
Don't misunderstand my point. I'm not telling you to call timeBeginPeriod() in all your applications. My suggestion is: don't call Sleep(n) with n > 0 in your code unless you really know how long you want to wait. A hard-coded delay of 5 ms, or even 100 ms, is rarely appropriate in every situation, and the side effects can accumulate and be much harder to find than in my trivial program. If your code just wants to give other runnable threads a chance, simply call Sleep(0). The operating system will check whether a context switch is needed; if not, your thread continues to run with little overhead, as in the sketch below.
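As a minimal sketch of this advice (the packetReady predicate is a hypothetical stand-in for the real framework's receive logic):

    #include <windows.h>

    // Hypothetical receive loop: instead of Sleep(1) between packets,
    // yield with Sleep(0) so other ready threads can run, then continue.
    void ReceiveLoop(bool (*packetReady)())
    {
        for (;;)
        {
            while (!packetReady())
                Sleep(0);   // give up the time slice only if another
                            // thread of equal priority is ready to run
            // ... process the packet ...
        }
    }

Unlike Sleep(1), Sleep(0) never imposes a 10-15 ms penalty: if no other thread needs the CPU, it returns almost immediately and your thread keeps running.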
I work at Trangosoft, a division of Siemens Canada, as a Senior Programmer/Analyst.