|
Gary R. Wheeler wrote: Do you think it's possible the reason file I/O is very low is because I/O is being satisfied from cache a large amount of the time? Accessing disk in large chunks is certainly more efficient than doing it in small chunks, whoever does it (the application, OS, disc driver or disc firmware). As long as you access a contiguous sequence of disk sectors, the time cost and disk load are almost constant, independent of data volume. Before disks with RAM caches were common, and before the OS did much buffering, a high performance application might read 64 KiByte at a time and gain a lot of speed. (And for that purpose, keeping your FAT disk defragmented was essential!)
DOS did no buffering; the upper 384 KiByte of the 1 MiByte address space was already reserved (for BIOS, video memory and the like), leaving 640 KiByte to the application (a well known fellow allegedly considered that it "ought to be enough for anyone"...).
The law of diminishing returns soon comes into play. Reading beyond the end of the file fragment is a waste (your application or OS won't do it, but the disc cache has no awareness of fragment limits). If you have flagged your NTFS file as encrypted or compressed, it is processed in 64 KiByte chunks anyhow. At least some RAID solutions do their striping in 64 KiByte chunks. Quite a few files - by number, not by total volume - are less than 64 KiByte in size, or not very much more (in particular in software development environments). The performance benefit of reading in chunks of up to 64 KiByte may be significant, but for overall system performance, it drops off rapidly beyond that.
Today, RAM is so cheap that we buffer uncritically, whether beneficial or not. The benefit of OS prefetching (i.e. transferring large chunks) has diminished a lot over the last few years, due to a couple of other fairly recent (on a historical scale) developments:
First: Nowadays, most system disks (and almost all new ones), and an increasing share of data disks, are solid state - still slower than RAM, but the factor is more like one to ten than one to ten thousand. If you turned off all buffering and always read a single page at a time, the flash disk slowdown would hardly be noticeable in application performance; speed would be almost as before.
Second: Most new magnetic discs have on-disk RAM buffers, reading an entire track (or a significant portion of it) into their own RAM, whether asked to or not. On the next single-page request from the OS, data goes from one RAM buffer (in the disc) to another RAM buffer (in OS managed memory), at a speed usually limited only by the disk interface.
Certainly: If your application makes 16 single-page (4 KiByte) disc accesses rather than a single 64 KiByte access, the management work done by the CPU is higher. If the OS doesn't find the pages in its own buffer, it may have to make up to 16 separate disc accesses, which takes some CPU capacity as well. Yet you never see the CPU load rocket when you access the disk; the CPU load is insignificant.
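A rough way to put numbers on this yourself - a Python sketch (the scratch file, its size and the chunk sizes are arbitrary choices for the demo, not from the post) that reads the same file in 4 KiByte and 64 KiByte chunks with Python's own buffering turned off, so each read() goes to the OS. On a modern machine the OS cache flattens the difference considerably, which is rather the point:

```python
import os
import tempfile
import time

def read_in_chunks(path, chunk_size):
    """Read the whole file sequentially in fixed-size chunks; return bytes read."""
    total = 0
    # buffering=0 bypasses Python's own buffer, so every read() is a real
    # request to the OS (which may still serve it from its cache).
    with open(path, "rb", buffering=0) as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            total += len(block)
    return total

# Create a 16 MiB scratch file to read back.
fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(16 * 1024 * 1024))
os.close(fd)

for size in (4 * 1024, 64 * 1024):
    t0 = time.perf_counter()
    n = read_in_chunks(path, size)
    print("%3d KiB chunks: %d bytes in %.4f s"
          % (size // 1024, n, time.perf_counter() - t0))

os.remove(path)
```

The 4 KiByte run makes 16 times as many calls, yet because the OS cache absorbs most of them, the wall-clock gap is usually modest.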
Opening or creating a file may require quite a few disc accesses, for accessing / updating its directory, reading or allocating an MFT entry, updating the allocation bit map (create, write), ... These are file system structures that the OS repeatedly accesses, and can benefit from caching. But they are OS owned data, not user data.
I could see performance improvements from a large cache even for typical consumer apps like web browsers: video, images, large data chunks, and so on. Tuning web caching may be quite different from tuning disc caching, but they do share some characteristics. For video and large data chunks: How often do you watch the same video again while it is still in memory? I'd say: Not very often. You download some huge software - say, a new OS image. How often do you repeat that download before the first one is out of the cache? Web caching saves a lot of tiny little transfers, such as logos or icons used on every page presented by a web site. But first: That cache is maintained in the file system, by the browser - not in RAM by the OS.
Second: HTTP allows an expiry time for a chunk of data (such as a logo), but many web sites are lazy about setting this properly, so web browsers commonly make a request anyway, asking if the logo has been updated recently. If it hasn't, there is no need to transfer those two hundred bytes again. Maybe it took a couple of thousand bytes to save the transfer of two hundred...
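That revalidation round-trip can be demonstrated with Python's standard library alone - a sketch where the local server, the file name and the 200-byte "logo" are all made up for the demo. The first request transfers the full body; the second sends If-Modified-Since and gets a bodyless 304 Not Modified back:

```python
import functools
import http.server
import os
import tempfile
import threading
import urllib.error
import urllib.request

# Serve a scratch directory containing a 200-byte stand-in for a logo.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "logo.txt"), "wb") as f:
    f.write(b"x" * 200)

handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=tmpdir)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/logo.txt" % server.server_address[1]

# First fetch: a full 200 OK, body transferred.
with urllib.request.urlopen(url) as resp:
    body = resp.read()
    last_modified = resp.headers["Last-Modified"]

# Revalidation: ask "has it changed since then?" - the server answers
# 304 Not Modified and sends no body (urllib raises HTTPError on 304).
req = urllib.request.Request(url, headers={"If-Modified-Since": last_modified})
try:
    with urllib.request.urlopen(req):
        status = 200
except urllib.error.HTTPError as e:
    status = e.code

print(len(body), status)
server.shutdown()
```

The headers of the second exchange still cost a few hundred bytes - exactly the trade-off described above.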
Images are somewhere in between (and they are usually cached by the browser): I have tried timing web newspaper front pages with a couple of dozen photos, before and after a complete cleaning of the browser cache. The first access was measurably slower, when all the logos and icons had to be retransferred. On the second access to the front page, the speed was the same as before the cleanup. So the caching obviously gave some speedup - not much, but measurable.
Yet the trend today is exactly the opposite: Lots of sites presenting scores of images in a huge display deliberately do not fetch all of them, but only those currently visible in the browser window. Do not waste resources on retrieving anything that might not be needed after all!
This may be justifiable, considering the speed of an Internet connection vs. that of a flash disk transfer. Also, the probability of a disk file user accessing the entire file may be higher than that of a web user wanting to see the complete display of all pictures. And the difference between an application (the web browser) managing a cache in disk files and the OS managing a cache in RAM is quite significant. So several considerations regarding caching cannot be transferred directly from the one area to the other.
|
|
|
|
|
In the days when I used Windows for development, I saw very good improvement from using a large amount of memory - so the last Windows machine I built had 8 GB for each core (totalling 64 GB)... It gave me the ability to run a virtual machine for each server (SQL, IIS, app server and so on) with maximum efficiency... Memory usage did go up to 85-95% at full load, but the machine remained very fast.
It's probably right that I could have done some memory optimizing to recycle unused memory and save some, but in those days memory was much cheaper than the bother...
About two years after I built that monster I moved to Linux, and things changed dramatically. Even in the first phase, when I still ran some Windows in a VM, I could drop half of the memory; but today, with no traces of Windows left, I rarely go over 4 GB (!!!), while still doing mostly the same things...
But! At work I have a Windows machine with 16 GB, and while developing I do hit memory problems occasionally... It seems that some software just eats up memory without a second thought... The main culprits are VS (which can eat several GB of memory when left open overnight) and node.js (used to compile Angular on the fly)...
So if there is a problem with memory, it is that some of us have come to see memory as an endless resource that need not be taken care of... And this is something you see from the OS level up to the end product... It is very simple to write a few lines of code that will use any memory they can get.
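A trivial illustration of those few lines - capped at 100 MiB here so it is safe to run; remove the cap and the loop will happily grow until the OS refuses:

```python
# Grab memory greedily, 1 MiB at a time. The cap is only there to make
# the demo safe to run; without it, the loop runs until allocation fails.
CAP = 100 * 1024 * 1024

hog = []       # holding a reference to every chunk keeps it all alive
grabbed = 0
while grabbed < CAP:
    hog.append(bytearray(1024 * 1024))
    grabbed += 1024 * 1024

print(grabbed // (1024 * 1024), "MiB held")
```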
"The only place where Success comes before Work is in the dictionary." Vidal Sassoon, 1928 - 2012
|
|
|
|
|
PC performance is a great market for placebo and snake oil. Lots of myths, lots of lack of understanding, and very few actual measurements to verify any claims.
One of my friends insists that he must vacuum clean his PC regularly to keep it from slowing down. He also has a theory to explain it: Dust accumulates on the fans, making them less effective, so the CPU lowers the clock speed to avoid overheating. I once challenged him to do some real speed measurements before and after the spring clean: Of course there was no measurable difference. Seeing the results, he claimed that this time the fans were far less dusty than they used to be; he couldn't explain why. But that must be The Explanation for the measurements not supporting his subjective feeling of higher PC performance after vacuuming.
History has seen numerous similar reports. The first lengthy quarrel I remember is from back in the days of the 286 / 386sx, when floating point arithmetic was delivered on a separate chip (287 / 387): A guy who had upgraded his PC with a 387 insisted stubbornly that boot-up was now much faster. Guys from MS shook their heads: No. The boot process, and Windows in general, made no use of floating point whatsoever, so adding a 387 could in no way affect the speed of any Windows code! (Maybe Windows today uses FP - at that time, it didn't.)
When hearing statements like "I have a feeling that the PC is now a lot faster", I usually just nod and make no further comment. If there is an indisputable speedup, I would like to investigate the entire machine, both before and after the upgrade/modification: You may claim some explanation that turns out not to be the real reason. Say, if your VM software actually manages to utilize all eight cores fully, while the setup of the non-virtualized machine for all practical purposes ran everything on a single core, then 64 GByte of RAM may not be the real reason for the speedup. Maybe the speed would be the same with 32 GByte of shared RAM (not statically distributed among the cores).
But again: As I said in my first posting, computing center experts will know how to monitor and balance resources. I am not talking about those users. If you have 8 VMs on your PC, each running some heavy server, then you are halfway to a computing center. You are far beyond the single-user desktop PC and the non-computer-professional (or one who essentially knows application programming, not system tuning) who has been told that his PC will perform so much better if he doubles the RAM. There are a hundred times as many users in that group as there are users who know how to use system monitoring and tuning tools.
Finally: From your description, it looks like you used to run 8 VMs each requiring 8 GByte. Today you run the same 8 tasks with an average of half a GByte each, and you have the same performance. Did you ever consider that maybe the Windows performance might have been the same with far less RAM? Did you ever reduce the amount of RAM to, say, half as much, to see whether paging went through the ceiling? Most PCs have a LED flashing when a physical disk transfer is made. If it flashes only every now and then, you do not have excessive paging. (When you start a new application, there is of course disc activity to get the code and data segments into memory!)
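For machines without a disc LED, the same check can be done in software. A Unix-only sketch using Python's resource module (Windows users would watch the Memory\Pages/sec counter in Performance Monitor instead): major page faults are the ones that required a physical disc transfer, so if that counter barely moves, you are not paging excessively.

```python
import resource

def major_faults():
    """Major page faults for this process so far - each one meant a
    physical disc transfer to bring a page into RAM (Unix only)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_majflt

before = major_faults()
data = bytearray(32 * 1024 * 1024)   # touch 32 MiB of fresh memory
after = major_faults()

# With enough free RAM this difference stays at or near zero: fresh
# pages cause minor faults (no disc involved), not major ones.
print("major faults during allocation:", after - before)
```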
|
|
|
|
|
I first read that as "flushing to toilet". I had to read it 3 more times before I saw "paging disk".
|
|
|
|
|
If you provide the phone number of your shrink, we may call him to suggest where your problems may be seated ...
|
|
|
|
|
|
|
Agreed. The secret is having enough when you need it that you don't do anything that lands you on CNN.
Software Zen: delete this;
|
|
|
|
|
On the more serious side: Throw in the seeds from a cardamom fruit when you grind the coffee beans. Do not use two fruits, or so much that you notice it as cardamom; it should just modify the taste, not be the taste.
Adding ground cardamom to the beans after grinding is a cheap trick that won't give the true aroma of the freshly ground cardamom seeds, ground with the coffee beans.
|
|
|
|
|
I've brewed coffee onto (whole) star anise[^] for years - it takes the bitter edge off if it's not such great coffee and makes it taste very full. The stars are reusable for many cups/pots, depending upon how you perk it, because, due to their woody nature, only a fraction of the essence is removed each time.
Now - if you're really intent on grinding stuff in with the coffee, cardamom [^] is popular in the Middle East, but you need not extend your search for coffee brighteners that far: anise seed [^] (not related to star anise at all) can be ground in for a similar enriching and smoothing effect, or, if you really like it, add a bunch for anise flavoring. Both of the others seem to be much less expensive than cardamom.
Probably many other variations on these are known.
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
|
|
|
|
I prefer coffee to be black and bitter -- it sets the proper tone for my work day.
|
|
|
|
|
You, perhaps, need a position with more job satisfaction. Let's see:
Doctor, Lawyer, Indian Chief; Rich Man, Poor Man, Beggar Man, Thief, Policeman; Gigolo.
Well - there's lots to choose from to brighten your day and sweeten your coffee.
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
|
|
|
|
W∴ Balboos, GHB wrote: Well - there's lots to choose from to brighten your day and sweeten your coffee. Nahhh ... IT works for me!
|
|
|
|
|
BryanFazekas wrote: Nahhh ... IT works for me! Gigolo works for she.
- just sayin' .
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
|
|
|
|
From my profile:
"Member since Sat 11 Dec 2010
(10 years, 1 month)"
I've passed the ten years mark!
It's been a real pleasure, too!
Here's to the next ten years
|
|
|
|
|
Yay! Keep on codin'!
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Now you are a bit decade-nt
|
|
|
|
|
|
Sander Rossel wrote: It's been a real pleasure, too!
It's been a pleasure hanging out with you!
Also, allow me to take this opportunity to thank you for your contributions here...articles, qa, sotw, and lounge content. Keep up the good work!
"Go forth into the source" - Neal Morse
"Hope is contagious"
|
|
|
|
|
Congratulations!
I had my 30th anniversary at work last fall, and I'll reach the 20 year mark on CP in March of this year. Egads, I'm an old .
Software Zen: delete this;
|
|
|
|
|
Happy anniversary!
I am soon reaching my 15th.
I hope CP reaches 100 and that we can see it and celebrate together
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
Nelek wrote: I am soon getting the 15th.
Just did that yesterday.
|
|
|
|
|
Congratulations
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
I’ve been here for 11 years and 7 months. Time flies.
What do you get when you cross a joke with a rhetorical question?
The metaphorical solid rear-end expulsions have impacted the metaphorical motorized bladed rotating air movement mechanism.
Do questions with multiple question marks annoy you???
|
|
|
|
|
Such as that novel: "Hole in the mattress", by Mr. Completely.
|
|
|
|
|