|
There isn't really. It's all highly situational.
For example, I do dithering and automatic color matching in my graphics library so that I can load a full-color JPG onto, say, a 7-color e-paper display. It will match any red it gets with the nearest red the e-paper can support, and then, if possible, dither it with another color to get it closer.
It takes time. I cache the color matching and dithering results in a hash table as I load the page. The hit rate is extremely high; it's very rare that a pixel of a particular color appears only once. That's close to ideal. The cache is discarded all at once, as soon as the frame is rendered, so its lifetime is also easy to determine.
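The shape of that per-frame cache can be sketched roughly like this. This is a minimal illustration, not the gfx library's actual API; the palette values, struct names, and distance metric are all assumptions for the example.

```cpp
#include <array>
#include <cstdint>
#include <unordered_map>

// Hypothetical pixel type and 7-color e-paper palette
// (black, white, red, green, blue, yellow, orange).
struct rgb { uint8_t r, g, b; };

static const std::array<rgb, 7> palette = {{
    {0, 0, 0}, {255, 255, 255}, {255, 0, 0}, {0, 255, 0},
    {0, 0, 255}, {255, 255, 0}, {255, 128, 0}
}};

static int distance_sq(rgb a, rgb b) {
    int dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return dr * dr + dg * dg + db * db;
}

// Cache keyed on the packed 24-bit source color.
// Discarded in one shot after each frame is rendered.
using match_cache = std::unordered_map<uint32_t, uint8_t>;

uint8_t nearest(rgb c, match_cache& cache) {
    uint32_t key = (uint32_t(c.r) << 16) | (uint32_t(c.g) << 8) | c.b;
    auto it = cache.find(key);
    if (it != cache.end()) return it->second; // cache hit - the common case
    // Miss: brute-force nearest-color scan over the small palette.
    uint8_t best = 0;
    int best_d = distance_sq(c, palette[0]);
    for (uint8_t i = 1; i < palette.size(); ++i) {
        int d = distance_sq(c, palette[i]);
        if (d < best_d) { best_d = d; best = i; }
    }
    cache[key] = best;
    return best;
}
```

Because a typical photo reuses the same colors thousands of times, the expensive scan runs once per distinct color rather than once per pixel.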
Naturally, for a web site, things look much different, and the considerations change. Your cache hit rate probably won't be as ideal as in my example, simply because few real-world workloads match a general algorithm's design that closely.
At the end of the day though, you don't need a silver bullet to make it worthwhile, luckily for us - you just need to win more than you lose, once all the chips are counted.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
honey the codewitch wrote: At the end of the day though, you don't need a silver bullet to make it worthwhile, luckily for us - you just need to win more than you lose, once all the chips are counted.
This. So much this.
|
|
|
|
|
honey the codewitch wrote: When your CPU core(s) aren't performing tasks, they are idle hands.
When your RAM is not allocated, it's doing no useful work. (Still drawing power though!)
While your I/O was idle, it could have been preloading something for you.
Sounds like the wife complaining about her hubby.
|
|
|
|
|
Anything above 80-85% utilization will quickly start thrashing that particular resource. Up to that point you're spot on.
|
|
|
|
|
I wouldn't say *anything*, but I do hear you.
Certainly thrashing is a concern with something like virtual memory, but I'm not necessarily even talking about vmem here. With the memory example, my point was simply about a hypothetical ideal. It takes the same amount of power to run 32GB of allocated memory as it does 32GB of unallocated memory, so if you're not using that memory for something, it's effectively being wasted. In the standard case this would be an OS responsibility, and if an OS wanted to approach that ideal, it might use something like an internal ramdisk to preload commonly used apps and data, for example. May as well; it's not being used for anything else. If you run out, you just start dumping the ramdisk, and only after it's gone do you start going to vmem. Something like that. It's just an idea; there are a million ways to use RAM.
I/O (to storage) is really where your thrashing occurs, and historically there was literal thrashing due to the moving parts involved, even though that's often no longer the case.
But again, the idea is that in an ideal "typical" situation, an OS would manage that, run any preloads at idle time, and make them lower priority than anything else.
In effect, as long as everything you're doing on top of idling is basically "disposable", thrashing won't be much of a concern.
The CPU is a bit of an animal, in that you'll need about 10% of it to run the scheduler effectively, and without that, everything else falls apart. So yeah, with a CPU it's more like 80-90% utilization, although 100% is acceptable for bursts. In any case, I worded my post carefully to say that the CPU should be utilized when it has something to do. It's not that I'd necessarily want to "find" things for it to do the way I would with RAM; it's that when it does need to do something, it expands like a lil puffer fish and uses all of its threading power toward the task - again, in the ideal scenario. The reason for the discrepancy versus, say, RAM is power: RAM draws the same regardless, while a CPU's draw varies with the task, so it should be allowed to idle when that makes sense.
I hope this clears things up rather than making it worse.
|
|
|
|
|
|
Maybe I'm misinformed, or maybe DDR5 does something previous RAM doesn't to save power. Neither would surprise me.
|
|
|
|
|
I am pretty sure all RAM requires more energy to read/write than to just be powered up, similar to an SSD or NVMe drive.
|
|
|
|
|
My understanding is that DRAM needs a constant periodic refresh to maintain its data:
Memory refresh - Wikipedia[^]
So it's not the act of reading or writing that costs, like on an NVMe. It works kind of like an LCD does, in that the charge is sent to the panel over and over with whatever the data is at that point.
In effect, the writes are always happening regardless, at least to my understanding.
|
|
|
|
|
The refresh occurs at a far lower frequency than the ordinary reading/writing. You can do several hundred r/w accesses for each refresh.
|
|
|
|
|
Ah, good to know.
|
|
|
|
|
Whenever you experience a bottleneck, inspect your system for other, significantly under-utilized resources, and be honest with yourself: OK, I wasted too many resources on that, and on that! I could have gotten by with a lot less on those parts.
|
|
|
|
|
Sure, absolutely. Getting a utilization profile can uncover a lot about your system.
I actually had fun sourcing my PC components to be well matched, so that I didn't hit unavoidable bottlenecks in what I use it for. But it's more than that, of course; that doesn't even cover the software angle.
Why is my zip decompression only using 30% of my I/O? Is my CPU too slow? That sort of thing. It's interesting, too.
|
|
|
|
|
When my AMD A4 / 8 GB RAM / 128 GB SSD rig suffered 100% CPU utilization / 99% RAM utilization for minutes (SQL, some Python) while my boss stood behind me, he asked whether I urgently needed a new box. I said no: the company owners are happy now, we use every bit of the kit they provided us, with no money invested in idle, quickly aging tech.
|
|
|
|
|
At one time I was interested in how a computer used resources.
The learning curve was too great; with no background in electrical engineering, I gave up
and was just happy if it worked.
BUT I remember reading about "bank switching".
So is this still a concept used in a personal computer today?
And if so, is it good design to manage system resources on personal computers?
Quote: What's that old saw? Idle hands are the devil's playground. Your computer is like that.
Why ask a machine to run full tilt if it only needs 50% of its resources to do the job?
If the job gets bigger, or another job needs resources, there is a reserve.
If this is an illogical thought process, I'm happy to hear why I should have stayed out of this conversation.
|
|
|
|
|
Because there is usually more work a computer *could* be doing.
|
|
|
|
|
Choroid wrote: BUT I remember reading about "Bank Switching"
So is this still a concept used in a personal computer today? No. 32- or 64-bit logical addressing removed the need for it.
The problem in the old PCs was the addressing range. Most single-chip CPUs were 8-bit, with 16-bit addresses, so they could only handle 2^16 = 64 Ki bytes of RAM; there was no way to address any more. Then came the LIM standard for banking: you could set up your system with, say, 48 Ki of plain RAM, with the upper 16 Ki of the address space handled by a LIM card providing several 16 Ki blocks (or "banks") of physical RAM, of which you could use only one at a time. You had to tell the LIM card (through I/O instructions) which of the banks to enable.
You would usually put code, not data, in the banked part: in the un-banked (lower 48 Ki) part, you put a stub that is called from other places. This stub tells the LIM card to enable the right bank where the actual code is placed, and jumps to the code address in that bank. If this function called functions in other banks, it would have to call via an unbanked stub to switch to the right bank. Upon return, the previous bank had to be enabled again to let the caller continue. It did not lead to blazingly fast performance.
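The stub mechanism can be modeled in a few lines of portable code. This is a toy illustration of the save/switch/call/restore pattern, not real LIM code: a real LIM card was programmed through I/O port writes, and every name here is invented for the example.

```cpp
#include <cstdint>

// Mirrors the LIM card's bank-select latch.
static uint8_t current_bank = 0;

// Stand-in for the OUT instruction that programs the LIM card.
static void select_bank(uint8_t b) {
    current_bank = b;
}

// A routine living in a 16 Ki bank, identified by (bank, entry point).
using banked_fn = int (*)(int);
struct banked_target { uint8_t bank; banked_fn fn; };

// The unbanked stub: save the caller's bank, enable the callee's bank,
// call into it, then restore the caller's bank before returning.
int call_banked(const banked_target& t, int arg) {
    uint8_t saved = current_bank;
    select_bank(t.bank);
    int result = t.fn(arg);  // runs "inside" the selected bank
    select_bank(saved);      // let the caller continue in its own bank
    return result;
}
```

The cost is visible even in the model: every cross-bank call pays for two bank switches plus the indirection, which is why it "did not lead to blazingly fast performance."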
Catching data access through a similar stub is not that easy. In an OO world, you could have an object in unbanked RAM with a huge data part in banked RAM, the object knowing the bank number and address and channeling all access through accessor functions (set, get), but OO was little known in the PC world at that time. I never saw anyone do this.
LIM was a PC concept. On larger machines, you could see memory overlays, which were also based on routine stubs, but the stub would read a code block from disk into RAM, overwriting whatever was there. You had much more flexibility, but PC disks were so slow in those days that it would have been next to unusable.
You could say that bank switching is a relative of paging mechanisms. I know of at least one family of 16-bit minis that provided 64 Ki words (128 Ki bytes) of address space to each user, but could handle up to 32 Mi bytes of physical RAM. So the 64 terminals hooked up to the mini could each have their full address space resident in RAM, with no paging. When the CPU switched its attention from one user to another, it replaced the page table contents to point to the new user's pages - not that different from telling the LIM board to switch to another bank.
You could also say that the 8086 solution, with its segment registers, is a close relative of banking: it allowed direct addressing of up to a mebibyte. This required each memory access to identify a segment register (a parallel to the bank number), so it worked even for 'banked' data, not just instructions. (There is a well-known quote from the discussions in one big company about how to split that mebibyte between operating system and user segments: a famous industry leader was quoted as wanting 384 Ki of that space reserved for the OS, the remaining 640 Ki being enough for any user program running in 1 Mi of RAM.)
Today, almost every general-purpose machine has at least 32-bit addresses (4 Gi bytes), and 64 bits is becoming the new standard. Then there is no need for either banking or overlaying. Rather, the paging system is used to pack the pieces of the logical address space that are actually used into a physical address space much smaller than the logical one the program sees.
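That packing can be illustrated with a toy page table. This is a deliberately simplified model (no permissions, no eviction, arbitrary page size); it only shows how a sparse map lets a huge logical space occupy a few physical frames.

```cpp
#include <cstdint>
#include <unordered_map>

constexpr uint64_t PAGE_SIZE = 4096;  // illustrative page size

struct page_table {
    // Only the logical pages the program actually touches get an entry,
    // so physical use is proportional to pages touched, not address range.
    std::unordered_map<uint64_t, uint64_t> frames;  // logical page -> frame

    // Translate a logical address, mapping a fresh frame on first touch.
    uint64_t translate(uint64_t logical) {
        uint64_t page = logical / PAGE_SIZE;
        auto it = frames.find(page);
        if (it == frames.end())
            it = frames.emplace(page, frames.size()).first;  // next free frame
        return it->second * PAGE_SIZE + logical % PAGE_SIZE;
    }
};
```

A program can scatter accesses across a 64-bit logical space, yet the table above only ever allocates as many frames as distinct pages touched - the same economy the mini achieved by swapping page-table contents between users.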
For special purposes - almost all of them falling under 'embedded' - small 16-bit addresses are seen even today. No more than 6-7 years ago, I was working with an 8051, extended with banking hardware for four 16 Ki banks. I believe that chip is still available. (I moved on to another project, so I am not sure.)
|
|
|
|
|
Double WOW, and thank you for the time you put into this reply.
A lot of computer design information here.
Don't Google "LIM card" - the only hit I found was a "LIM Card Computer" that is a door relay??
Makes me wonder how the industry made FASTER PC disks.
The quote from the "famous industry leader" I think tells me why Windows 11
needed a new hardware configuration: the OS got too big, so let the end users buy new machines
if they want our OS.
I have a Dell T7600 Precision Workstation (refurbished)
with Xeon E5 2.9GHz, 2 processors, 64 GB RAM, 64-bit.
Graphics card: Nvidia P400, only 2 GB. The OEM card died; thank goodness for Plug & Play.
Windows 7 Pro.
It works; I do not make any large demands other than VS 2019 and a SQLite DB.
|
|
|
|
|
My current utilization is: 3 monitors (one playing internet TV); 2 Edge browsers; Outlook; 2 file explorers; 3 different graphics programs; 3 open PDFs; Character Map; 4 VS 2022 windows; Snipping Tool; 2 image viewers; and Task Manager.
63% memory (out of 16GB)
4% CPU (Ryzen 7)
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
With my 'smart' meter and a few days away from home, I've established that my ambient electricity use averages out at a constant 210W, by which I mean the amount of power I'm using while not really doing anything at all.
There could be quite a few culprits - fridge-freezer, two security cameras, broadband fibre pick up, router, network switch, roughly eight Alexas (lost count), some smart light-switches and sockets, sockets with USB sockets in them, a smart meter and heaven knows how many things on standby. All only a few watts each but I guess it adds up. Probably the biggest draw is a small weedy PC I use as a home server running Exchange for a load of email accounts.
Anyway, when you add it all up, that works out at about £560/year in electricity before you add standing charges and everything else, which would be fair enough if it were my total bill, but this is just the 'before you get started' amount.
I was hoping for some comparisons, and knowing CodeProject is filled with the type of people who know exactly what their baseline energy usage is, thought I'd ask here.
Regards,
Rob Philpott.
|
|
|
|
|
I had a (so-called) smart meter* but since it didn't tell me anything useful I switched it off. So I probably saved 50p a year. As for all those devices in your house - we have two laptops and a router on the phone line.
*I thought the actual meter was the smart bit.
|
|
|
|
|
And a fridge, and lots of other things which don't have a proper isolated off switch surely?
I like the smart meter; I can look at it, shake my head, and then wander around the house turning all the unused lights off, grumbling as I go, like my dad used to.
Regards,
Rob Philpott.
|
|
|
|
|
I really don't need a machine (yet) to tell me when to switch lights on or off.
|
|
|
|
|
Rob Philpott wrote: £560/year
Ha! At our old house, gas and electric combined was close to that per month!
We're currently on a £100/month DD for electricity alone - no mains gas in these parts! - and it looks like we used around 350kWh in August, which is the only full month we've been here.
On the plus side, we now have 20 solar panels, and we inherited the "feed-in tariff" which pays based on how much you generate rather than how much you feed back in to the national grid. During the summer months, it looks like we're generating over 480kWh per month, so they're paying us over £340/month for that. Which is nice.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
modified 8-Sep-23 3:13am.
|
|
|
|
|
Richard Deeming wrote: Ha! At our old house, gas and electric combined was close to that per month!
Ouch. That's serious.
Richard Deeming wrote: so they're paying us over £570/month for that
That's insane, and quite interesting. I know a few people who have panels, and the gist of it seems to be that they can sell 1 kWh for 10p or something, then buy it back later at 35p.
Regards,
Rob Philpott.
|
|
|
|
|