As a developer who has spent a fair amount of time tweaking code for performance optimization, I'm having some issues with this article.
First of all, who exactly has "generally accepted" 10M iterations as the standard for performance testing? Unless your native code happens to call a function thousands or millions of times, the only time I've found it worth performance-testing a function is when it takes more than about a tenth of a second to complete.
If a function takes 0.1 seconds and you call it 10M times, you'd be waiting roughly 11 and a half days for the test to complete.
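To put numbers on that, here's the back-of-envelope arithmetic as a quick Python sketch (purely illustrative, using the figures above):

```python
# Rough wall-clock cost of benchmarking a slow function with 10M iterations.
iterations = 10_000_000
seconds_per_call = 0.1  # a function that takes a tenth of a second

total_seconds = iterations * seconds_per_call
total_days = total_seconds / 86_400  # 86,400 seconds in a day

print(f"{total_days:.1f} days")  # roughly 11.6 days
```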
Secondly, there's no consideration of what type of function is being tested. The data used inside the function may be cached, either by the function itself or by its parent object. Another major factor is whether the function calls some sort of (relational) database. Many database servers cache the results of a given query, so the first call may take many times longer than the subsequent 9,999,999 calls. I work with a financial data processing system on a regular basis, where a given query will often take more than 30 seconds to run the first time around, but can be re-run a second time in about 5 seconds.
Obviously, averaging 10M of those tests is going to produce an answer approaching 5.0 seconds. In the real world, though, an end-user who calls this function a few times a week is going to experience a result more on the order of 30 seconds.
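You can see the skew in a toy Python sketch. The `query` function below is a stand-in that fakes the cache behavior, using the 30-second/5-second figures from the example above (the numbers are simulated, not measured):

```python
# Illustrative only: a fake "query" whose first run is slow (cold cache)
# and whose re-runs are fast, mimicking a database result cache.
def make_query():
    seen = set()
    def query(sql):
        if sql in seen:
            return 5.0    # cached re-run (simulated seconds)
        seen.add(sql)
        return 30.0       # cold first run (simulated seconds)
    return query

query = make_query()
calls = 10_000
times = [query("SELECT ...") for _ in range(calls)]

average = sum(times) / calls
print(f"average: {average:.4f}s, first call: {times[0]:.1f}s")
# The average lands just above 5.0s, yet a real user's occasional
# call still pays the 30-second cold-cache price.
```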
A related topic the author didn't touch on is that many times you don't know which function is causing the problem. In that case, it's often useful to take an epoch timestamp (in milliseconds) and generate a series of timestamps throughout the flow of the code. That lets you analyze which sets of functions are performing the most poorly. Once you know where the pain points are, you can begin analyzing specific calls.
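That checkpoint approach might look something like this in Python (the `mark`/`report` helpers and the stand-in workloads are my own illustration, not from the article):

```python
import time

# Coarse "checkpoint" profiling: record an epoch timestamp (ms) at each
# stage of the flow, then inspect the deltas to find the slow region.
checkpoints = []

def mark(label):
    checkpoints.append((label, time.time() * 1000))  # epoch milliseconds

def report():
    for (a, t0), (b, t1) in zip(checkpoints, checkpoints[1:]):
        print(f"{a} -> {b}: {t1 - t0:.1f} ms")

mark("start")
sum(i * i for i in range(200_000))    # stand-in for "load data"
mark("loaded")
sorted(range(100_000), reverse=True)  # stand-in for "process data"
mark("processed")
report()
```

Once `report()` shows which span dominates, you can drill into the individual calls inside that span with finer-grained timing.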
I was recently involved in a situation where a proud developer was very pleased that a particular call was taking only about 0.25 seconds to execute. It seems perfectly acceptable for an end-user to wait a quarter of a second to retrieve data before moving on to the next "screen". What he failed to take into account was that he was calling this function 8-15 times between screens, which turned a simple 0.25-second delay into a 2-4 second pause. This particular application was an audio-response application, and for someone sitting on the phone waiting for the application to come back to them, it was making the end-users crawl out of their skin.