As an experiment: run WinZip or some other packaging utility on one of the computers (locally, not across the network!) to turn everything into a single, big file. Then transfer that file, and finally unpack it on the other computer. Time each of these three steps. The transfer itself will be much quicker; the overall process may or may not be faster once you include the packing and unpacking time.
You didn't mention the most important thing: what drives are you using? That's the bottleneck in these things.
Yesterday I moved a 60GB file across the network between two reasonably well-spec'd machines and it took 12 minutes. I then copied it across to a SAN built with the fastest bits we could buy and it took 25s. Doing the same to the portable USB drive I use for backups can take forever.
Specs seem fine. Are you RAIDing or mirroring the target drive? Do you have a lot of other network congestion going through whatever switch/router you're using? Do you have any kinks or knots in the cables? (OK, I'm kidding...)
My domain controller is running Windows Server 2003 Enterprise and my clients are running Windows XP Pro. I applied the GPO that enables "Allow incoming echo request". I found that all clients apply this setting, but when I try to ping some clients, the request still times out, even though I found the remote machine up and running as normal. Any ideas?
I'm not talking about your corporate firewall. Every machine runs its own Windows Firewall (Start -> Run -> Services.msc -> Windows Firewall), or some other firewall package, possibly bundled with anti-virus software.
Also, your corporate routers/switches may be filtering ICMP packets on particular interfaces, or even down at the individual port level.
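If you want to separate "the GPO didn't take" from "something in between is dropping ICMP", you can also send an echo directly from code via the Win32 ICMP helper API. A minimal, untested sketch with a placeholder target address (link against iphlpapi.lib and ws2_32.lib):

#include <winsock2.h>
#include <iphlpapi.h>
#include <icmpapi.h>
#include <cstdio>
#include <cstdlib>

int main()
{
    HANDLE hIcmp = IcmpCreateFile();
    if (hIcmp == INVALID_HANDLE_VALUE)
        return 1;

    char sendData[] = "echo-test";
    DWORD replySize = sizeof(ICMP_ECHO_REPLY) + sizeof(sendData);
    void* replyBuf  = malloc(replySize);

    IPAddr target = inet_addr("192.168.0.10");   // placeholder client IP
    DWORD count = IcmpSendEcho(hIcmp, target, sendData, (WORD)sizeof(sendData),
                               NULL, replyBuf, replySize, 1000 /* ms timeout */);
    if (count > 0)
    {
        ICMP_ECHO_REPLY* r = (ICMP_ECHO_REPLY*)replyBuf;
        printf("reply: status=%lu rtt=%lums\n", r->Status, r->RoundTripTime);
    }
    else
    {
        // No reply: the host may be up, but its local firewall (or a
        // switch/router in between) could be dropping ICMP.
        printf("no reply, error %lu\n", GetLastError());
    }

    free(replyBuf);
    IcmpCloseHandle(hIcmp);
    return 0;
}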
Question - We have a number of automatic print jobs that get printed to our network printer. Is there a way to change the setup of a printer so that any jobs printed to that particular network printer are saved as a soft copy on a shared drive instead of printing a hard copy?
The speed setting only dictates which options are suggested to be turned off. It is not a throttle on the actual speed of the data going over the wire.
What will speed up rendering of the screen image you see is turning off various options. But over a 1Gbit connection this won't help at all, since your connection is so fast to begin with. You can turn everything on and still not see any degradation in performance.
Is there a way to clear (or selectively remove files from) Windows' file read cache?
My app uses a lot of really large data files (~16GB worth); most of them are used to look up small chunks of data via a binary search, so there are a lot of seeks and small reads. This can't be changed; it's just the nature of the problem. And my input data is fixed: I can't invent new test data, I can only use what was given to me. So I end up running the same data through the system, over and over.
What this means is that the first run of the morning is slow: every lookup goes to the file, and all the seeks and reads hit the actual disk. But every subsequent run is very fast, because all those little reads for the lookups from the first run have already been cached in Windows' file read cache. While the speed is nice for testing, it's misleading. End users will not be running the same data through over and over, so they won't get the benefit of the read cache.
So, for performance testing/optimization, I really need to be able to force all that cached data out of the Windows file cache after each run, so that the next run has to go back to the disk.
I've done a lot of Googling but haven't found any good answer to this. (No, SysInternals' CacheSet doesn't work.)
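For concreteness, the lookup pattern is roughly like this (a simplified sketch; the Record layout and names are made up, not the real format):

#include <windows.h>
#include <cstdint>

// Hypothetical fixed-size record; the real layout is an assumption here.
struct Record
{
    uint64_t key;
    char     payload[56];
};

// Binary search over a file of sorted Records: one seek + one small read
// per probe, which is exactly the access pattern the file cache absorbs
// on the second and later runs.
bool FindRecord(HANDLE hFile, uint64_t key, Record* out)
{
    LARGE_INTEGER size;
    if (!GetFileSizeEx(hFile, &size))
        return false;

    int64_t lo = 0, hi = size.QuadPart / (int64_t)sizeof(Record) - 1;
    while (lo <= hi)
    {
        int64_t mid = lo + (hi - lo) / 2;
        LARGE_INTEGER pos;
        pos.QuadPart = mid * (int64_t)sizeof(Record);
        SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN);   // seek

        Record rec;
        DWORD  read = 0;
        if (!ReadFile(hFile, &rec, sizeof(rec), &read, NULL) || read != sizeof(rec))
            return false;                                 // small read

        if (rec.key == key) { *out = rec; return true; }
        if (rec.key <  key) lo = mid + 1; else hi = mid - 1;
    }
    return false;
}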
Good question, for which I don't know of (and have never seen) a solution other than a reboot. I'll be interested in anything that comes up, and not just for Win7.
FWIW, some ideas that may or may not apply; I can't tell, as you didn't say much about the app itself:
- Have several identical copies of the data files; with N copies you could execute N cold runs.
- Reorganize the data file to take better advantage of the fact that you binary-search a lot; as it stands, it typically means you read whole sectors/clusters just to check a few bytes and dismiss the data. Using the smallest possible nodes (each with a pointer to the actual data, read only if you really want it) improves locality of data, and may dramatically improve performance, hence reduce the warm/cold problem (see the sketch after this list).
- Use a database instead of a large file.
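To illustrate the second idea, here's a minimal sketch of what I mean by small nodes (the names and the 16-byte entry size are assumptions about your data, not facts): keep a compact sorted (key, offset) index in memory, binary-search that, and hit the big file only once per successful lookup.

#include <windows.h>
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical compact index node: just the key plus the offset of the
// full record in the big data file. At 16 bytes per entry, millions of
// keys fit in memory, so the binary search itself touches no disk.
struct IndexEntry
{
    uint64_t key;
    uint64_t offset;   // position of the full record in the data file
};

// Binary-search the in-memory index, then do a single seek+read for a hit.
bool Lookup(const std::vector<IndexEntry>& index, HANDLE hData,
            uint64_t key, void* record, DWORD recordSize)
{
    auto it = std::lower_bound(index.begin(), index.end(), key,
        [](const IndexEntry& e, uint64_t k) { return e.key < k; });
    if (it == index.end() || it->key != key)
        return false;

    LARGE_INTEGER pos;
    pos.QuadPart = (LONGLONG)it->offset;
    if (!SetFilePointerEx(hData, pos, NULL, FILE_BEGIN))
        return false;

    DWORD read = 0;
    return ReadFile(hData, record, recordSize, &read, NULL) && read == recordSize;
}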
Nope, but this[^] SO answer looks promising (see the answer about FILE_FLAG_NO_BUFFERING).
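Something like this minimal sketch, assuming a placeholder path and a 4KB sector size (GetDiskFreeSpace reports the real value); note that with FILE_FLAG_NO_BUFFERING the file offset, the read length, and the buffer address must all be multiples of the sector size:

#include <windows.h>
#include <cstdio>

int main()
{
    // Placeholder path. FILE_FLAG_NO_BUFFERING makes reads bypass the
    // Windows file cache entirely.
    HANDLE h = CreateFileW(L"D:\\data\\lookup.bin", GENERIC_READ,
                           FILE_SHARE_READ, NULL, OPEN_EXISTING,
                           FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    const DWORD chunk = 4096;                  // assumed sector size
    // VirtualAlloc returns page-aligned memory, which satisfies the
    // buffer-alignment requirement of unbuffered I/O.
    void* buf = VirtualAlloc(NULL, chunk, MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE);

    LARGE_INTEGER pos;
    pos.QuadPart = 0;                          // offset must be sector-aligned
    SetFilePointerEx(h, pos, NULL, FILE_BEGIN);

    DWORD read = 0;
    if (ReadFile(h, buf, chunk, &read, NULL))  // length must be sector-aligned
        printf("read %lu bytes, uncached\n", read);

    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
    return 0;
}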
Wouldn't it roughly be measuring the same lookup over and over? How would that differ from doing the run once and multiplying the result by the number of desired runs? If you really need to test different lookups, then you'd have to take into account that some lookups may take longer than others.
Hmm, interesting. I always assumed that NO_BUFFERING thing meant there was no read-ahead buffer to assist small sequential reads, not that the OS doesn't put the data into its file system cache. Time to re-investigate.
I thought of that too; however, I rejected the idea because I expect the removable disk to be slower, which I guess is not what you want. There must be many ways to make it slower, or even awfully slow, so that all cold/hot aspects vanish...