I've written a small benchmark program to test some languages I've been looking at. It doesn't say a lot about a language, but it gives me a bit of hands-on experience and a rough measure of performance. The same algorithm is used in every version. I've used it with C++, Rust, Go, Crystal, "V", Java, Dart, and JS. I'm really only interested in a compiled back-end language, but I included some others out of interest. The benchmark simply counts the prime numbers up to a given limit. The algorithm is probably not the fastest, but it is the same for all programs.
To calculate the prime numbers up to 10 million on my i7 PC, almost all of these languages take less than 13 seconds; Java takes about 14 seconds.
However, when I wrote the same program in "C", it took about 675 seconds, and I don't know why. I'm not a "C" programmer, but I am experienced with a few languages. What I want to know is what I have done wrong in the program. It is extremely strange to me that it works as expected in "C" on Linux, but not on Win10.
I've posted the code here:
https://controlc.com/3179b32a
#include <stdio.h>
#include <stdlib.h>
#include <string.h>   /* strlen */
#include <windows.h>  /* GetTickCount (Windows-only) */

long fnCalcSqrt(long);

int main(void) {
    printf("\nCalculate number of prime integers from 2 to n million");
    long iMillions = 0;
    char sMillions[10];
    while (iMillions < 1 || iMillions > 100) {
        printf("\nEnter number of millions (1 to 100): ");
        /* EOF or an empty line exits the program */
        if (fgets(sMillions, sizeof(sMillions), stdin) == NULL ||
            strlen(sMillions) < 2) {
            return 0;
        }
        iMillions = atoi(sMillions);
        if (iMillions < 1 || iMillions > 100) {
            printf("Must be from 1 to 100");
        }
    }
    long iEndVal = iMillions * 1000000;
    long iCurrent, iDiv, iSqrt, tfPrime, iPrimeTot = 0;
    printf("Started: calculating primes from 2 to %ld,000,000 .......\n", iMillions);
    long iElapsedMs = GetTickCount();   /* milliseconds since boot */
    for (iCurrent = 2; iCurrent <= iEndVal; iCurrent++) {
        /* test 2 itself, then only odd candidates */
        if (iCurrent % 2 != 0 || iCurrent == 2) {
            iSqrt = fnCalcSqrt(iCurrent);
            tfPrime = 1;
            /* trial division up to the integer square root */
            for (iDiv = 2; iDiv <= iSqrt; iDiv++) {
                if ((iCurrent % iDiv) == 0) {
                    if (iDiv != iCurrent) {
                        tfPrime = 0;
                    }
                    break;
                }
            }
            iPrimeTot += tfPrime;
        }
    }
    iElapsedMs = GetTickCount() - iElapsedMs;
    printf("Finished. prime total = %ld\n", iPrimeTot);
    /* %03ld zero-pads the milliseconds, e.g. 12050 ms prints as 12.050 */
    printf("Elapsed = %ld.%03ld seconds\n", iElapsedMs / 1000, iElapsedMs % 1000);
    return 0;
}
/* Integer square root: start from a guess of n / 10, square it, and
   step the guess up or down by half the previous step until the step
   size drops below 2. */
long fnCalcSqrt(long iCurrent)
{
    long iProd = 0, iPrevDiv = 0, iDiff = 0;
    long iDiv = iCurrent / 10;   /* initial guess */
    if (iDiv < 2) {
        iDiv = 2;
    }
    while (1) {
        iProd = iDiv * iDiv;
        if (iPrevDiv < iDiv) {
            iDiff = (iDiv - iPrevDiv) / 2;
        } else {
            iDiff = (iPrevDiv - iDiv) / 2;
        }
        iPrevDiv = iDiv;
        if (iProd < iCurrent) {
            /* guess is too low: move up by at least 1 */
            if (iDiff < 1) {
                iDiff = 1;
            }
            iDiv += iDiff;
        } else {
            /* guess is high enough: done once the step is this small */
            if (iDiff < 2) {
                return iDiv;
            }
            iDiv -= iDiff;
        }
    }
}
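Rereading the code, one thing I haven't verified is whether long is the same width under both compilers, since everything here (including the iDiv * iDiv product in fnCalcSqrt) is a long. A minimal diagnostic sketch, separate from the benchmark itself, just to check this suspicion:

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Windows compilers are LLP64, where long is 32 bits; Linux on
       x86-64 is LP64, where long is 64 bits, so the same source can
       behave differently on the two platforms. */
    printf("sizeof(long) = %u bits\n", (unsigned)(sizeof(long) * 8));
    printf("LONG_MAX     = %ld\n", LONG_MAX);

    /* fnCalcSqrt's first guess for n = 10,000,000 is n / 10 = 1,000,000;
       squaring it gives 10^12, which cannot fit in a 32-bit long
       (signed overflow, so the printed value is meaningless there). */
    long iDiv = 1000000;
    printf("iDiv * iDiv  = %ld\n", iDiv * iDiv);
    return 0;
}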
What I have tried:
I presumed that I had made a mistake (and I still presume that I have), but I couldn't find where. So I set up WSL2 on my PC, compiled and ran the program on Ubuntu, and it took 12.5 seconds. This is the exact same program that takes 675 seconds on Win10 - about 50 times longer. I then presumed it must be the compiler on Windows, so I tried different compilers - MS, GCC, Tiny C, Pelles C. The result was the same with all of them: 675 seconds or more. Next I took my JS version and tried transpiling it to "C", but the output did not compile, and if I had modified it to work I would have ended up with essentially my own "C" program anyway. Thinking that it might be my PC, I tried another Win10 machine, but had essentially the same problem.
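One wrinkle with comparing builds from the same source: GetTickCount is Windows-only, so the Linux build can't use it as-is. A portable sketch of just the timing, using standard C clock() (which measures processor time rather than wall-clock time, but that should be close for a single-threaded loop like this), with the prime loop omitted:

#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t tStart = clock();

    /* ... the prime-counting loop from the program above goes here ... */

    clock_t tEnd = clock();
    /* clock() counts processor time in CLOCKS_PER_SEC ticks */
    long iElapsedMs = (long)((tEnd - tStart) * 1000 / CLOCKS_PER_SEC);
    printf("Elapsed = %ld.%03ld seconds\n", iElapsedMs / 1000, iElapsedMs % 1000);
    return 0;
}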