Joan Murt wrote: 3138 files and 114 folders
I think that's where the problem is.
As an experiment: run WinZip or some other packaging utility on the one computer (locally, not across the network!) to turn it all into a single big file. Then transfer that file, and finally unpack it on the other computer, timing each of the three steps. The transfer itself will be much quicker; the overall process may or may not be faster too.
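If you want numbers rather than a stopwatch, here is a minimal sketch of the experiment; the 7-Zip archiver and the C:\data and \\server\share paths are just assumptions, substitute your own:

#include <chrono>
#include <cstdio>
#include <cstdlib>

// Runs a shell command and reports how long it took.
static void timeStep(const char* label, const char* cmd)
{
    auto t0 = std::chrono::steady_clock::now();
    std::system(cmd);
    auto t1 = std::chrono::steady_clock::now();
    std::printf("%-8s %.1f s\n", label,
                std::chrono::duration<double>(t1 - t0).count());
}

int main()
{
    // 1. pack locally into one big archive
    timeStep("pack:",   "7z a C:\\temp\\bundle.7z C:\\data\\*");
    // 2. push the single file across the network
    timeStep("copy:",   "copy C:\\temp\\bundle.7z \\\\server\\share\\bundle.7z");
    // 3. unpack (in practice, run this step on the target machine)
    timeStep("unpack:", "7z x \\\\server\\share\\bundle.7z -oD:\\data");
    return 0;
}

Comparing the middle number against your current per-file copy time tells you how much of the slowdown is per-file overhead.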
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
|
You didn't mention the most important thing: what drives are you using? That's the bottleneck in these things.
Yesterday I moved a 60GB file across the network and it took 12 minutes between two reasonably well-spec'd machines. I then copied it across to a SAN with the fastest bits we could buy and it took 25s. Doing the same to the portable USB drive I use for backups can take forever.
cheers,
Chris Maunder
The Code Project | Co-founder
Microsoft C++ MVP
|
Copy
from: disk[^]
to: disk[^]
Full specs now...
Thank you in advance for taking a look at it!
|
Specs seem fine. Are you RAIDing or mirroring the target drive? Do you have a lot of other network congestion going through whatever switch/router you're using? Do you have any kinks or knots in the cables? (OK, I'm kidding...)
cheers,
Chris Maunder
The Code Project | Co-founder
Microsoft C++ MVP
|
No, no and no...
It is as if the PCs were configured to go slow...
The cabling is Cat5e...
|
OK, what colour are the cables? Have you tried moving the servers closer together, or placing the source server slightly higher than the target server. Electrons do have mass, after all...
Does the issue happen only between the current source and target? Do you get any speed difference to or from different boxes?
cheers,
Chris Maunder
The Code Project | Co-founder
Microsoft C++ MVP
|
Chris Maunder wrote: OK, what colour are the cables? Have you tried moving the servers closer together, or placing the source server slightly higher than the target server. Electrons do have mass, after all...
I've even shaken them a little, in case the shy electrons didn't want to go through that strange, unknown cable...
It happens with all the computers... so I guess it must be something with the source computer then...
|
My domain controller is running Windows Server 2003 Enterprise, and my clients are running Windows XP Pro. I applied a GPO to enable "Allow incoming echo request". I found that all clients apply this setting, but when I try to ping some clients I still get "Request timed out", even though the remote machine is up and running normally. Any idea?
|
No, it is not a firewall configuration. It is within my internal LAN. It should be working, since some machines work but some do not. Any idea?
|
panda-kh wrote: It is within my internal LAN.
I'm not talking about your corporate firewall. Every machine runs its own Windows Firewall: Start -> Run -> services.msc -> Windows Firewall. Or some other package, possibly bundled with anti-virus software.
Also, your corporate routers/switches may be filtering ICMP packets on interfaces or even down to port-level.
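If you want to rule out name resolution and test raw ICMP reachability from code, a minimal sketch using the Win32 ICMP helper API might look like this (the 192.168.1.10 address is just a placeholder):

#include <winsock2.h>
#include <iphlpapi.h>
#include <icmpapi.h>
#include <cstdio>
#include <cstdlib>
// link with: Iphlpapi.lib Ws2_32.lib

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    HANDLE icmp = IcmpCreateFile();
    if (icmp == INVALID_HANDLE_VALUE) { std::printf("IcmpCreateFile failed\n"); return 1; }

    IPAddr dest = inet_addr("192.168.1.10");    // placeholder target
    char payload[] = "echo test";
    DWORD replySize = sizeof(ICMP_ECHO_REPLY) + sizeof(payload);
    void* replyBuf = std::malloc(replySize);

    // Sends one echo request with a 1000 ms timeout.
    DWORD count = IcmpSendEcho(icmp, dest, payload, sizeof(payload),
                               NULL, replyBuf, replySize, 1000);
    if (count > 0)
    {
        ICMP_ECHO_REPLY* reply = (ICMP_ECHO_REPLY*)replyBuf;
        std::printf("reply: status=%lu rtt=%lu ms\n",
                    reply->Status, reply->RoundTripTime);
    }
    else
        std::printf("no reply (timed out or blocked)\n");

    std::free(replyBuf);
    IcmpCloseHandle(icmp);
    WSACleanup();
    return 0;
}

If this times out against a box you can reach by other means, the ICMP packets are being dropped somewhere below the application layer, which points back at a host firewall or a filtering switch/router.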
|
Question - We have a number of automatic print jobs that get printed to our network printer. Is there a way to change the setup of a printer so that any jobs printed to that particular network printer are saved as a soft copy on a shared drive instead of printing a hard copy?
|
Probably the easiest way would be to install one of the PDF printers that are out there; that way you'll be able to set it up to print to a specific location and keep the file format.
Take a look at, e.g., PDFCreator[^].
HTH!
|
Hello all,
I've seen a remote desktop connection with a wrong setup in terms of network speed:
on a 1 Gb LAN it has been configured as 256 kbps.
Does this improve the speed, since fewer fancy things are being done, or is it the other way around, with everything going slower because the speed is wrongly configured?
Could you give me a hint? I mean, something from experience...
Thank you in advance.
|
The speed setting only dictates which options are suggested to be turned off. It is not a throttle on the actual speed of the data going over the wire.
What will speed up rendering of the screen image you see is turning off various options. But over a 1 Gb connection this won't help at all, since your connection is so fast to begin with. You can turn everything on and still not see any degradation in performance.
|
It's what I thought, since changing the setting lets you see that some checkboxes become checked/unchecked.
But I wanted to be sure that it worked that way.
Thank you for your answer! :thumbsup:
|
is there a way to clear (or selectively remove files from) Windows' file read cache ?
my app uses a lot of really large data files (~16GB worth) - most of them are used to look up small chunks of data via a binary search. so, a lot of seeks and small reads. this can't be changed - it's just the nature of the problem. and, my input data is fixed - i can't invent new test data, i can only use what was given to me. so, i end up running the same data through the system, over and over.
what this means is that the first run of the morning is slow - every lookup goes to the file, all seeks and reads are on the actual file. but every subsequent run is very fast because all those little reads for the lookups from the first run have already been cached in Windows' file read cache. while the speed is nice for testing, it's misleading. end users will not be running the same data through, over and over, so they won't get the benefit of the read cache.
so, for performance testing / optimization, i really need to be able to force all that cached data out of Windows file cache after each run, so that it will have to go back to the disk for the next run.
i've done a lot of Googling, but haven't found any good answer to this. (no, SysInternals' CacheSet doesn't work)
anybody know how to do this?
|
Hi Chris,
good question, for which I don't know (and never saw) a solution, other than a reboot. I'll be interested in anything that comes up, and not just for Win7.
FWIW, some ideas that may or may not apply; I can't tell, as you didn't say much about the app itself:
- have several identical copies of the files; with N copies you could execute N cold runs.
- reorganize the data file to take better advantage of all that binary searching; typically you read whole sectors/clusters just to check a few bytes and dismiss the data. Using the smallest possible nodes (each with a pointer to the actual data if you really want it) improves locality of data, and may dramatically improve performance, hence reduce the warm/cold problem. See the sketch after this list.
- use a database instead of a large file.
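To illustrate the second idea, here is a minimal sketch of a compact side index searched entirely in memory, so the big file is seeked exactly once per lookup. The {key, offset} layout and the index.bin/data.bin names are assumptions, not your actual format:

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// One tiny node per record: the search key plus the offset of the full
// record in the big data file. 16 bytes instead of a whole record.
struct IndexNode
{
    std::uint64_t key;
    std::uint64_t offset;
};

int main()
{
    // Load the (much smaller) index file fully into memory once.
    // index.bin is assumed to be sorted by key.
    std::vector<IndexNode> index;
    if (FILE* f = std::fopen("index.bin", "rb"))
    {
        IndexNode n;
        while (std::fread(&n, sizeof n, 1, f) == 1)
            index.push_back(n);
        std::fclose(f);
    }

    std::uint64_t wanted = 12345;   // placeholder lookup key

    // The binary search touches only the compact in-memory nodes...
    auto it = std::lower_bound(index.begin(), index.end(), wanted,
        [](const IndexNode& n, std::uint64_t k) { return n.key < k; });

    if (it != index.end() && it->key == wanted)
    {
        // ...so the big file is hit exactly once, for the real record.
        if (FILE* data = std::fopen("data.bin", "rb"))
        {
            _fseeki64(data, (long long)it->offset, SEEK_SET);
            char record[256];
            std::fread(record, 1, sizeof record, data);
            std::fclose(data);
            std::printf("found record for key %llu\n",
                        (unsigned long long)wanted);
        }
    }
    return 0;
}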
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
|
Chris Losinger wrote: anybody know how to do this?
Nope, but this[^] SO-answer looks promising (see the answer on FILE_FLAG_NO_BUFFERING).
Wouldn't it roughly be measuring the same lookup over and over? How would that differ from doing the run once and multiplying by the number of desired runs? If you really need to test different lookups, then you'd have to take into account that some lookups may take longer than others.
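For what it's worth, a minimal sketch of opening a file with that flag; the 4096-byte sector size and the data.bin name are assumptions. With FILE_FLAG_NO_BUFFERING, the buffer address, the read size and the file offset must all be multiples of the sector size:

#include <windows.h>
#include <malloc.h>
#include <cstdio>

int main()
{
    // Open for unbuffered reads: data bypasses the Windows file cache.
    HANDLE h = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING,
                           FILE_FLAG_NO_BUFFERING | FILE_FLAG_RANDOM_ACCESS,
                           NULL);
    if (h == INVALID_HANDLE_VALUE)
    {
        std::printf("open failed: %lu\n", GetLastError());
        return 1;
    }

    const DWORD sector = 4096;                   // assumed; query GetDiskFreeSpace for the real value
    void* buf = _aligned_malloc(sector, sector); // buffer must be sector-aligned

    LARGE_INTEGER pos;
    pos.QuadPart = 0;                            // offsets must be sector-aligned too
    SetFilePointerEx(h, pos, NULL, FILE_BEGIN);

    DWORD got = 0;
    if (ReadFile(h, buf, sector, &got, NULL))
        std::printf("read %lu uncached bytes\n", got);

    _aligned_free(buf);
    CloseHandle(h);
    return 0;
}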
I are Troll
|
Eddy Vluggen wrote: Wouldn't it roughly be measuring the same lookup over and over?
i have only one input data set, but it's 2300 different records. that's enough to hit most of the code paths.
Eddy Vluggen wrote: this[^] SO-answer looks promising
hmm. interesting. i always assumed that NO_BUFFERING thing meant that there was no read-ahead buffer to assist in small sequential reads, not that the OS doesn't put the data into its file system cache. time to re-investigate.
|
Luc Pattyn wrote: Hmm. I always thought FILE_FLAG_NO_BUFFERING was making the file accesses slower for that file
I didn't even know such a feature existed until today. I'll be reading the article tonight; thanks for the link!
|
if you can put the files on a removable drive, removing the drive will clear the files from Windows' file cache.
|
I thought of that too; however, I rejected the idea because I expect the removable disk to be slower, which I guess is not what you want. There must be many ways to make it slower, or even awfully slow, so that all cold/hot aspects vanish...
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
|
slow isn't bad. as long as it's consistent. and, slow testing increases the incentive to make the process even faster!