|
Hello all,
I have a customer with a 25-year-old robot that has an industrial computer running Windows 95.
That robot computer came with a big chunky 60GB HDD.
Years ago, I replaced that HDD with a Fujitsu SSD of the same size.
Now the customer wants an extra backup (clever) and asked me to buy another SSD to get an image stored there "just in case".
The smallest SSD I've found is +/- 240GB.
I know OSes have limits. Is it possible to partition the SSD to fool the computer and make it work even on 32-bit Windows 95? Is that even necessary? Would that not be a solution? And if not, is there another solution?
Thank you all in advance...
|
|
|
|
|
Yes. I've done it for years. Wait one...
Charlie Gilley
“Microsoft is the virus..."
"the problem with socialism is that eventually you run out of other people's money"
|
|
|
|
|
On one of my embedded projects, the controller OS could only handle DOS-16 - basically nothing larger than 2GB. The project started in 2003, and it's still deployed. We started with 64MB compact flash cards. Obviously, a couple of decades later, you simply cannot find such small-capacity cards. So I dug around and came up with the information for resizing the drives.
I mainly used this on Compact Flash cards that were either 4 or 8GB. I have not used it on an SSD, but I cannot think of a reason why it would not work.
Hope this helps.
------------------------------------------------------------------
Before starting, please remember that you can seriously mess things up if you make a mistake using command line disk management. Please make 150% sure you are selecting the correct disk so you don’t format your hard drive. You are solely responsible for anything that happens as a result of using this code 🙂
Instructions for reducing the partition size of a compact flash (CF) card:
Open a command window (Windows -> Start -> cmd)
Type diskpart
A new window will open up with a “diskpart>” prompt. Note: if there happen to be network drives and you are not on the network, this command can take some time.
list disk
select disk n (where n is the number of your CF card)
list volume
select volume n (where n is the number of the CF card volume)
clean all (this completely wipes the disk by zeroing it out – it will take a while and may appear to hang, but be patient)
create partition primary (this gives the newly cleaned CF card a partition so it can be resized)
shrink querymax
This will tell you how much reclaimable space is currently available on your CF card. Subtract the final disk size you want (in MB) from this number, then add 1.
For my 4GB disk, shrink querymax returns:
“The maximum number of reclaimable bytes is: 3824MB”
I wanted a final disk size of 2GB, which a Google search told me is 1954MB, so 3824 - 1954 + 1 = 1871.
shrink desired=1871 (this tells diskpart to try to shrink the disk by 1871 MB)
Now that the disk is the right size, you can format the partition…
format fs=fat label="volumelabel"
That’s all it takes.
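For the OP's 240GB SSD, the same recipe should scale up. A hedged sketch with made-up numbers - every figure below is illustrative, so substitute whatever list disk and shrink querymax actually report on your hardware:
select disk 1 (assuming the 240GB SSD shows up as disk 1 - triple check!)
clean all
create partition primary
shrink querymax (suppose it reports 228000 MB reclaimable)
shrink desired=170781 (228000 - 57220 + 1, targeting a roughly 60GB final partition)
format fs=fat32 label="backup"
One caveat worth verifying first: fs=fat means FAT16, which tops out at 2GB per partition. A 60GB partition that Windows 95 can read needs FAT32, which only Windows 95 OSR2 and later support - so check how the customer's original 60GB drive is formatted before choosing the filesystem.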
More info on diskpart commands here:
http://technet.microsoft.com/en-us/library/cc766465(WS.10).aspx
Charlie Gilley
“Microsoft is the virus..."
"the problem with socialism is that eventually you run out of other people's money"
|
|
|
|
|
Wordle 1,204 3/6*
⬜⬜⬜🟨⬜
⬜🟩🟩🟩⬜
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 1,204 4/6
⬜🟨⬜🟨🟨
🟨🟨⬜🟨⬜
⬜🟩🟩🟩🟩
🟩🟩🟩🟩🟩
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
|
|
|
|
|
Wordle 1,204 3/6*
⬜🟨⬜⬜🟨
⬜⬜🟩🟩🟩
🟩🟩🟩🟩🟩
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Wordle 1,204 4/6*
⬛⬛⬛🟨🟨
⬛⬛🟨🟨🟨
🟨🟩⬛🟩🟨
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 1,204 4/6
⬛⬛⬛🟨🟨
🟨🟨⬛🟨🟨
🟨⬛🟩🟩🟩
🟩🟩🟩🟩🟩
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
Generally, don't optimize until you have to.
But that can sometimes backfire, like when an optimization requires an architecture change.
To save memory and remain flexible, my SVG engine writes its SVGs out using callbacks with color runs and coordinates.
The exception is when
A) your canvas bounds are the same as the bounding box of the bound bitmap
B) the bitmap is RGBA8888 (32-bit pixels)
I just added an optimization to do direct writes in lieu of using callbacks. It greatly speeds up rendering - if you have the memory for it.
Despite it being a totally different way of rendering, I exposed it through a relatively common interface, and I automatically direct-bind to targets that fulfill A and B:
template<typename Destination>
struct xdraw_canvas_binder {
    static gfx_result canvas(Destination& destination, const srect16& bounds, ::gfx::canvas* out_canvas) {
        ::gfx::canvas result((size16)bounds.dimensions());
        gfx_result gr = result.initialize();
        if(gr!=gfx_result::success) {
            return gr;
        }
        // heap-allocate the callback state; the canvas owns it via ::free
        using st_t = xdraw_canvas_state<Destination>;
        st_t* st = (st_t*)malloc(sizeof(st_t));
        if(st==nullptr) {
            result.deinitialize();
            return gfx_result::out_of_memory;
        }
        st->dest = &destination;
        st->dim = (size16)bounds.dimensions();
        st->offset = bounds.point1();
        *out_canvas=gfx_move(result);
        // the general path: every read and write goes through callbacks
        out_canvas->on_write_callback(xdraw_canvas_write_callback<Destination>,st,::free);
        out_canvas->on_read_callback(xdraw_canvas_read_callback<Destination>,st,::free);
        return gfx_result::success;
    }
};
template<>
struct xdraw_canvas_binder<bitmap<rgba_pixel<32>>>
{
    static gfx_result canvas(bitmap<rgba_pixel<32>>& destination, const srect16& bounds, ::gfx::canvas* out_canvas) {
        ::gfx::canvas result((size16)bounds.dimensions());
        gfx_result gr = result.initialize();
        if(gr!=gfx_result::success) {
            return gr;
        }
        *out_canvas=::gfx::helpers::gfx_move(result);
        if(bounds==(srect16)destination.bounds()) {
            // A and B hold: bind straight to the bitmap's memory
            gr=out_canvas->direct_bitmap_rgba32(&destination);
            if(gr!=gfx_result::success) {
                return gr;
            }
        } else {
            // partial binds still need the callback state
            // (only allocated here, so the direct path above can't leak it)
            using st_t = xdraw_canvas_state<bitmap<rgba_pixel<32>>>;
            st_t* st = (st_t*)malloc(sizeof(st_t));
            if(st==nullptr) {
                out_canvas->deinitialize();
                return gfx_result::out_of_memory;
            }
            st->dest = &destination;
            st->dim = (size16)bounds.dimensions();
            st->offset = bounds.point1();
            out_canvas->on_write_callback(xdraw_canvas_write_callback<bitmap<rgba_pixel<32>>>,st,::free);
            out_canvas->on_read_callback(xdraw_canvas_read_callback<bitmap<rgba_pixel<32>>>,st,::free);
        }
        return gfx_result::success;
    }
};
Here I have the general binder and a specialization that binds directly if your pixel format is rgba_pixel<32>.
I can then invoke the appropriate method with this lil guy:
template<typename Destination>
static gfx_result canvas(Destination& destination, const srect16& bounds, ::gfx::canvas* out_canvas) {
    return xdraw_canvas_binder<Destination>::canvas(destination,bounds,out_canvas);
}
Because I did that, I don't need to change any code that uses ::gfx::draw::canvas<>
Radically different ways of writing, and template specializations to the rescue once again.
A little planning up front led me here, and paid for itself in spades.
Now I have transparent optimization without changing the surface area of my API, even while hiding a radical departure in terms of what the draw target is (callbacks vs bitmap writes).
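If the mechanics are unclear, here's a self-contained miniature of the same trick - stand-in types of my own, not the actual gfx ones:
#include <cstdio>

// primary template: the general (callback) path
template<typename Destination>
struct binder {
    static void bind(Destination&) { std::puts("callback path"); }
};

// stand-in for bitmap<rgba_pixel<32>>
struct rgba32_bitmap {};

// full specialization: the direct-write path
template<>
struct binder<rgba32_bitmap> {
    static void bind(rgba32_bitmap&) { std::puts("direct path"); }
};

// the forwarding function callers actually use; the compiler
// picks the right binder at compile time based on Destination
template<typename Destination>
void bind(Destination& d) { binder<Destination>::bind(d); }

int main() {
    int plain = 0;
    rgba32_bitmap fast;
    bind(plain); // prints "callback path"
    bind(fast);  // prints "direct path"
}
Same shape as the real thing: callers only ever see bind(), and the compiler routes rgba32_bitmap to the specialized path with zero runtime dispatch.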
Anyway, as a general rule, I do an optimization pass during many of my design iterations (not version iterations, but process iterations) to make sure I'm not painting myself into a corner. When I do this I assume everything in a critical codepath will need to be optimized. Then I ask myself, "what would that look like and what impact would it have on the final architecture", and I design with that in mind, as I did here.
It's part knack, part luck, and part experience, but optimization shouldn't be entirely ignored during the design phase IMO. Don't optimize yet, but ask yourself what optimization would do to the design if it has to happen, and then mitigate that in the design.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
The best optimizations are usually design changes. Any optimization that requires an architectural change is a design change.
|
|
|
|
|
I didn't mean to imply otherwise.
Edit: To add to that, the ones that require design changes are usually "the best" (as in, have the most impact) because they are often what I call algorithmic changes: you change the way something works, as opposed to doing last-mile bit twiddling, which tends to (except in some cases, like my SVG direct writes) yield less spectacular results. What I'm talking about in the OP is ways to identify potential areas where a design change might be in order for optimization purposes later, and then designing such that you can shoehorn it in without upending the whole thing.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Lowercase variables require less space.
|
|
|
|
|
Oh'Really?
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
Yep, lowercase tends to use less space on a line when using a variable width font.
|
|
|
|
|
What vitamins do you take? Do you hear voices in your head? You scare me.
<-- in case you missed the humor.
That's some impressive stuff. Templates have always scared me - I don't understand them (confession), but they seem like C++'s version of macros on steroids and PCP. I need to go learn them.
Charlie Gilley
“Microsoft is the virus..."
"the problem with socialism is that eventually you run out of other people's money"
|
|
|
|
|
They are basically that.
I think of them as kind of a mail merge with "smart" (typed) arguments.
A template is a source code generator. The C++ compiler processes the arguments using its type info, and then effectively emits more C++ code as a result. That result is then compiled in turn, much like with a preprocessor macro - except type-aware.
Where templates get really confusing is template specializations, but that's where their real power is, and it's what I'm using above.
The two struct templates are part of the same overall template. The second one is a specialization for when the draw destination is a bitmap<rgba_pixel<32>>. When the compiler sees that type, it generates the alternate version.
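To make the "generator" idea concrete, here's about the smallest self-contained example I can think of (my own, not from the library):
// what you write:
template<typename T>
T twice(T x) { return x + x; }

// calling twice(3) and twice(1.5) makes the compiler generate two
// separate functions, roughly as if you had typed:
//   int    twice(int x)    { return x + x; }
//   double twice(double x) { return x + x; }

#include <iostream>
int main() {
    std::cout << twice(3) << " " << twice(1.5) << "\n"; // prints: 6 3
}
Specialization just lets you swap in a hand-written variant for one particular T, which is exactly what the rgba_pixel<32> binder above does.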
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Yup - I understand the basic concept. I think a developer's attitude toward templates (and macros, for that matter) depends on their initial exposure to them. I've seen templates used in a) books and b) one project that I support. The book examples tend not to really demonstrate the problem they are trying to solve - it's dry, it gets complicated, and most of the time I hit the point of asking "what's the point? How is this really helping me?"
As for the second example: templates in use have to be done correctly if you want to be able to understand the problem they solve. I've seen templates used in code that just make everything more complicated and add unnecessary complexity, or where the template approach just doesn't make any sense - more of a "hey, let's try a template approach..."
I need to go grab some older but much cheaper books on the subject to revisit.
Charlie Gilley
“Microsoft is the virus..."
"the problem with socialism is that eventually you run out of other people's money"
|
|
|
|
|
I've taught myself C++ without picking up a book since the 1990s (I had exactly one, and it was crap), so there are holes in my knowledge, but what I have learned is practical.
Where templates really open the language up and make it do things no other major language can is metaprogramming.
Metaprogramming is incredibly powerful.
Metaprogramming is confusing, because C++ wasn't designed to do it. Rather, it was discovered and accomplished by stretching the intended purpose of the template keyword well beyond what it was initially designed to do.
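For a tiny taste of that stretching - a computation the compiler carries out entirely during compilation (the classic textbook example, not mine):
// template "recursion" in lieu of a loop: each instantiation
// triggers the next until the specialization terminates it
template<unsigned N>
struct factorial {
    static constexpr unsigned value = N * factorial<N - 1>::value;
};

// the specialization acts as the base case
template<>
struct factorial<0> {
    static constexpr unsigned value = 1;
};

// checked during compilation; no runtime cost at all
static_assert(factorial<5>::value == 120, "evaluated at compile time");

int main() {}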
Try this on for size - my terminology might be a little off, but the concepts therein are sound:
Metaprogramming in C++: A Gentle Introduction[^]
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Nordic just released a WiFi 6 capable embedded radio. The Espressif ESP line of chips has WiFi, albeit older.
The trajectory seems to be connected IoT devices doing more and more.
HTML5 is relatively simple. It can be processed top down for the most part. They did a good job cleaning up HTML4 and making the spec more coherent. It's already what I'd consider mostly appropriate for embedded.
CSS, not so much, and it's where I'd like to see some kind of formalized standard for a CSS subset compatible with forward-only processing.
What's the value in this? Frankly, being able to run web interfaces on sub-ARM-Cortex-A chips opens the web up to far more affordable hardware. An ESP32 devkit with an integrated screen costs $18 on Amazon. An RPi with no screen is in the ballpark of $100. They're different kits entirely, but I'm talking about the little guys.
Currently, making a connectible device talk to the web requires a dedicated UI for that web app, and usually REST- or MQTT-based communication on the back end. What if you could serve your simple UI from online sources, so that all devices update immediately when the back end changes? Just for example.
CSS is the big blocker, in my experience, because of the DOM requirement. Simply choosing the right subset of selector syntax for embedded would be wonderful, but they can probably remove some of the heavier features such as font and image embedding as well.
I think what will *probably* happen instead is that the price on WebKit-capable devices will bottom out, and then everything will have 128MB of DDR3 or better instead of 512KB of SRAM. That will solve the problem too, but it is much more power hungry.
I think it's nice when a capabilities problem can be largely solved by pruning and spinning off an existing standard. I think it's possible with web stuff.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
An interesting thought.
Are you suggesting dropping the JavaScript virtual machine from the client environment as well?
A real back-to-basics web page with no CSS and no interactivity via JavaScript?
|
|
|
|
|
I think so, short of IoT MCUs being able to interpret JS**, which isn't very realistic.
Still, having it presented declaratively and on the web as HTML5/CSS, as stripped down as it may be for embedded:
A) Should reduce the cost of development, as you can hire people to do basic HTML and CSS (EDIT: see notes) instead of needing C or C++
B) Can potentially be a server-templatized/dynamized page spun off of a full-fledged website when an embedded device hits it. This is already often done for smartphones
C) Would provide pain-free UI updates that instantly roll out across all devices.
There are probably other benefits that aren't occurring to me right now, but those are some major ones I see.
** it has been done, but I think it's precompiled and uploaded, or at least semi-compiled.
CSS notes: CSS can be forward-only processed if you limit the CSS selector syntax to specifying ids and classes, or maybe only slightly more complicated than that. In theory you could do something like .foo/bar/baz and it could still be forward-only, but it would be more complicated. I'd be inclined to ditch that feature, but if it was in there it wouldn't be a performance killer. Parent references would be, as would forward searches in many if not all cases (I haven't baked it all out in my head at this point).
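To illustrate what I mean by forward-only, here's a throwaway sketch of mine (not how gfx or any real engine does it): if selectors are just ids and classes, styling is a hash lookup per element, and nothing parsed so far ever needs to be revisited:
#include <iostream>
#include <string>
#include <unordered_map>
#include <utility>

struct style { std::string color; };

int main() {
    // the "stylesheet", pre-parsed into id and class lookup tables
    std::unordered_map<std::string, style> by_id{{"header", {"blue"}}};
    std::unordered_map<std::string, style> by_class{{"warn", {"red"}}};

    // elements arrive as a forward-only stream of (id, class) pairs
    std::pair<std::string, std::string> stream[] = {
        {"header", ""}, {"", "warn"}, {"", ""}};
    for (auto& [id, cls] : stream) {
        style s{"black"}; // default style
        auto c = by_class.find(cls);
        if (c != by_class.end()) s = c->second;
        auto i = by_id.find(id);
        if (i != by_id.end()) s = i->second; // id outranks class
        std::cout << "render element in " << s.color << "\n";
        // the element can be discarded here - no DOM is retained
    }
}
Parent references break this because styling an element would then depend on ancestors you've already thrown away.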
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
You would have to eliminate all HTML events (like onclick, onblur, onfocus, etc.), right?
|
|
|
|
|
I would eliminate JS, and "active CSS" features that deal with the DOM, yeah - so no DOM events.
The whole idea is to make it renderable without keeping the entire document in memory.
Basically forward-only: render as you go, then toss what you've rendered.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
OK, got it.
Reminds me of the old IBM block-mode terminals, which had a physical map (with all the literal fields, color specs, input specs, and protection specs) and a data map containing only the fields that returned values. Both were sent to the terminal, which used forward-only processing to paint the screen and enable the unprotected fields. Any action key on the terminal transmitted only the data map back to the invoking program. It was up to the program to split the data map back into fields and do any validation and processing of the data. (Very efficient use of transmission bandwidth in the old days.)
|
|
|
|
|
Anyone who hates it doesn't really know it and cannot come up with a better solution for a declarative UI descriptor. And it's muuuuuch easier to hate than to learn, now ain't it?
Also, anyone who is intentionally provocative just to spew hate (out of boredom or some other psychological issue) only wants to drag people down to their level, because it's very, very low and misery loves company. They probably have lousy relationships in real life and are generally not liked... except by other hateful people.
Happy Friday! May the winners in life have an awesome weekend. You guys rock.
Jeremy Falcon
|
|
|
|
|