|
There was a time I was in the white pages, but I matured.
My mobile is still Californian even though I last lived there 15 years ago.
|
Greg Utas wrote: There was a time I was in the white pages, but I matured. So, this is what I find fascinating. I don't recall anyone ever complaining about a huge book of names, addresses, and phone numbers being handed out for free.
Now, with the internet, people are scared of data that can't even be linked to them being shared with someone.
It's not a knock on you, and no, I am not trolling. I am genuinely amazed at how that has shifted and am very curious as to why. No one seems to know what dangers there might be in linking an IP address with a ZIP code, yet so many people, especially technologically educated people, are so worried about it. I don't understand it, which is maybe why it fascinates me.
Greg Utas wrote: My mobile is still Californian even though I last lived there 15 years ago. That explains why you don't answer when I call.
Social Media - A platform that makes it easier for the crazies to find each other.
Everyone is born right handed. Only the strongest overcome it.
Fight for left-handed rights and hand equality.
|
I think what has changed is stuff like identity theft, hacking, surveillance, and stalking.
The only identity theft I heard about back in the day was people who wanted to disappear. They'd steal the identity of someone who died in childhood, get a birth certificate, apply for a social security number, and establish a completely new identity.
|
Remember the good old 8086 and how it used its segment registers? Essentially I'm doing something like that on an 8 bit computer right now, which lets me address up to 16 megabytes. Unfortunately my segment registers will not be part of the CPU, so there will be no automatic address calculations during interrupts, DMA, subroutine calls or even normal branching. The computer is still confined to its regular 64k address space.
That's why I have no choice but to slice up this address space into smaller segments which can be switched individually. With a little care, the segments can then be switched around without an instant crash.
The big question now is what sorts of segments are most practical and how big they should be. Let's look at the 8086 segment registers for inspiration.
The stack segment: As I said, the stack should not be switched away. Let's also throw in interrupt routines and DMA buffers. Would that not be a waste of expanded memory if only one page in this segment is used? No. Every process would get its own page and (when interrupts and DMA are disabled) the OS can map in I/O ports, video memory or keep track of its processes and memory management.
The code segment: I can emulate a full MMU in software when calling subroutines. That means I can call code in any page of the code segment at any time without problems. The code in every page would have the character of a module, a DLL if you want. I could call the subroutines by an ordinal instead of addresses, add memory management on the code segment and (with a proper storage device) even implement virtual memory, just in time loading of these modules or even just in time compilation. Quite advanced for a little 8 bit computer.
The data segment: Very nice to have, but only at a cost. Pros: Your programs get access to much more memory than they would without it. Also, this segment is a good alternative to access I/O and video memory if you can't do that on the stack segment for some reason. Con: The code segment gets smaller.
This is my current mix:
0000 - 7FFF code segment (32k)
8000 - BFFF data segment (16k)
C000 - FFFF stack segment (16k)
Any ideas which might work better? Have I forgotten something? If I make a bad decision now, I will have to live with it for a long time.
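For the curious, that mix can be sketched in C. This is a toy model, not the real hardware: I'm assuming the external segment registers simply hold physical page numbers (a 32 KiB code page needs 9 bits to span 16 MiB, the 16 KiB pages need 10).

```c
#include <stdint.h>

/* Hypothetical external segment registers (they are not part of this CPU):
   each holds a physical page number for its window.  Sizes follow the
   layout above: 32 KiB code, 16 KiB data, 16 KiB stack. */
static uint16_t code_page;   /* one of 512 32 KiB pages in 16 MiB */
static uint16_t data_page;   /* one of 1024 16 KiB pages          */
static uint16_t stack_page;  /* one of 1024 16 KiB pages          */

/* Translate a 16-bit CPU address into a 24-bit physical address. */
uint32_t translate(uint16_t addr)
{
    if (addr < 0x8000)            /* 0000-7FFF: code segment  */
        return ((uint32_t)code_page << 15) | addr;
    if (addr < 0xC000)            /* 8000-BFFF: data segment  */
        return ((uint32_t)data_page << 14) | (addr & 0x3FFF);
    /* C000-FFFF: stack segment */
    return ((uint32_t)stack_page << 14) | (addr & 0x3FFF);
}
```

The code window keeps its full 15-bit offset; the data and stack windows mask off the top two address bits before the page number is spliced in.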
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
I'd throw in one (or more likely two) more: system code / system data - this holds the interrupt handlers, system stack and "bios" code that you need to handle the MMU. It doesn't have to be a big segment (depending on your coding abilities / compiler, if one is used), but having it always available and always in the same place is a big bonus.
If I recall correctly (and I last used an MMU twenty years ago) that's how I had my memory organised in the Z180 / HD64180 MMU, and it allows for flexibility and speed in response to "system events".
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
AntiTwitter: @DalekDave is now a follower!
|
I have thought about doing something like that, but decided against it. This way the other segments get bigger and also a larger share of the expanded memory. I would put only the master interrupt routine into the stack segment. This interrupt routine would then call separate interrupt routines in some module loaded into the code segment. There would be some register saving and bank switching involved, but it should work.
The same goes for normal OS calls. The processor calls and returns from subroutines with the help of small procedures that do the proper bookkeeping on the stack. Or stacks, in my case. I use a separate call and parameter stack. By saving the content of the code segment and the data segment registers in the call procedure and restoring them in the return procedure I can call code anywhere and continue where I left off upon returning. These two small procedures also have to go to the stack segment.
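A toy C model of those two resident stubs might look like this. The names and sizes are illustrative, not taken from the real system; the point is only that the caller's segment registers ride on the separate call stack.

```c
#include <stdint.h>

/* Toy model of the two resident stub routines described above. */
static uint8_t code_page = 0, data_page = 0;   /* current segment registers */

typedef struct { uint8_t code_page, data_page; } saved_t;
static saved_t call_stack[32];                 /* the separate call stack */
static int sp = 0;

/* CALL stub: save the caller's segment registers, then bank-switch to
   the callee's pages before jumping to it. */
void far_call(uint8_t callee_code, uint8_t callee_data)
{
    call_stack[sp].code_page = code_page;
    call_stack[sp].data_page = data_page;
    sp++;
    code_page = callee_code;
    data_page = callee_data;
}

/* RETURN stub: restore the caller's mapping and continue where we left off. */
void far_return(void)
{
    sp--;
    code_page = call_stack[sp].code_page;
    data_page = call_stack[sp].data_page;
}
```

Because the bookkeeping lives in these two procedures rather than in fixed call/return instructions, calls can nest across pages to any depth the call stack allows.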
Did you ever use FORTH? It works in a similar way, organizing the code into a dictionary with 'pages' and 'words'. The pages are all about memory management and I really see no problem adapting it to a paged memory model. That's also why FORTH is an interpreter and a (just in time) compiler at the same time, just as it is operating system and programming language at the same time. Just implement more words and add pages to the dictionary, then let the memory management worry about where and when to load them. FORTH always was a hype waiting to happen, but I can see why it is too exotic to ever become mainstream.
|
Similar to OG, I was wondering where you are putting your kernel (hypervisor?) and page tables/state.
... you mentioned the stack segment may be underused - would the above go there? (I/O ports too?)
... even then, with the state settings/info, you may find 16M becoming optimistic; 8M may be doable.
pestilence [ pes-tl-uh ns ] noun
1. a deadly or virulent epidemic disease, especially bubonic plague.
2. something that is considered harmful, destructive, or evil.
Synonyms: pest, plague, CCP
|
Look at my reply to OG. FORTH has always handled these things quite well, on tiny microcontrollers, single board computers and even modern systems. You can even use assembly subroutines or compiled C functions if you don't forget to tell FORTH that it should better not try to interpret or compile these new 'words'.
But don't worry, I get most of my 16 megabytes. On the hardware side I always have my 24 address lines. I will sacrifice 512k of that space by not installing the last memory chip and use its select signal as master select for the memory mapped I/O ports and video memory. With a little addressing trick (actually just ignoring A15 and A14) I can even select these things into any segment if the stack segment is not convenient.
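A sketch of that decoding trick, as I read it (the constants are my interpretation of the post, not a schematic): the chip select of the missing top 512 KiB doubles as the master I/O/video select, and ignoring A15/A14 mirrors the same registers into every 16 KiB quadrant of the CPU's address space.

```c
#include <stdint.h>

/* The top 512 KiB of the 16 MiB space has no RAM chip; its chip select
   acts as the master select for memory mapped I/O and video. */
int io_selected(uint32_t phys)    /* 24-bit physical address */
{
    return phys >= 0xF80000;      /* top 512 KiB of 16 MiB */
}

/* Ignoring A15/A14 makes the same I/O registers appear in every
   16 KiB quadrant, so any segment window can reach them. */
uint16_t io_register(uint16_t cpu_addr)
{
    return cpu_addr & 0x3FFF;     /* drop A15/A14 */
}
```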
|
What kind of software are you going to run on this "beast"?
How much control do you have over the software?
My experience with such small systems is that stacks are surprisingly shallow.
If my memory is correct, the 8051 stack is 256 bytes. With such tight limits, you soon learn to pass multiple arguments as a pointer to a struct with the individual values, and to structure your code to "get out of there" as quickly as possible. Avoid nested calls whenever you can do them as a sequence. Maybe each operation nests 2-3 levels deep, but not dozens of levels. You should strive to keep the stack shallow: e.g. initialization functions should return before starting the "real" application code. Even if you plan to run code that you cannot control, 16Ki is a lot.
Are you using languages requiring a static link? Or languages allowing pointers to locations in the calling routine (e.g. in frames deeper down on the stack)? If not, you could virtualize the stack segment as well, keeping only the topmost frame directly addressable. Then all function calls must go through a (resident) stub that unmaps the current frame and sets up the new one, and upon return remaps the caller's frame (as well as mapping in the code segment with the current function, if it is not the current one).
Years ago, I worked on a banked 8501: The upper 16K could be switched between four banks (but I believe that the architecture allowed extension to an arbitrary number). Banks were not pure data or code; they could have both. Each bank typically contained one module or functional subset: When you loaded a bank, you would usually stay in there for a major piece of work, with direct, "normal" calls within the bank. Libraries used by "everyone" were located in the common, lower 48Ki and could also be called directly; only a call from a function in one bank to a function in another bank went via a stub (in the lower 48Ki).
Globally static data, shared among banks, were located in the lower 48Ki; data relevant only to the functions in one bank was located in that bank. Care was required: You could not pass a pointer to data in a bank to a function in another bank. For such needs, the data had to be allocated in the common 48K.
If I were to design a new banked system, I guess I would stick to the same solution: A bank can have both code and data. I would probably increase the bank size to 32Ki; that would allow larger functionality in a single bank, reducing bank interactions and requiring fewer bank switches. Most likely, a fair share of banks would not be filled up. For really large systems, filling up physical RAM, it would be nice if a half-filled bank, 16Ki, would require only 16Ki in backing RAM. Allocating bank space in e.g. 1Ki increments would allow a larger number of banks.
The lower half would hold interrupt handlers, DMA buffers, data shared between banks, stubs for all exported banked functions (e.g. callable from an arbitrary bank), and buffers where one bank can place argument structs when calling functions in other banks. The stack would be in the lower 32Ki. If your stack grows downwards (which is quite common) and you have a Stack Limit register, why don't you fill the common 32Ki from the bottom, ending with setting Stack Limit to the current load address, and the stack bottom to 32Ki - then you can increase the stack space available when needed by moving more of the common code/data into yet another bank. For a stack growing upwards, you could of course let it grow from the end of common code/data, up to 32Ki, but there is a tradition for starting the stack at a "round" address.
The old open-source P4 Pascal compiler shared all unallocated address space between the stack and the heap, growing from each end. If a heap allocation could not find a free area below the current HeapTop, HeapTop was pushed upwards for the request, and compared to the current StackTop for an out-of-space check. Similarly, a function call would check the new StackTop against HeapTop, and report a collision if space was exhausted. So a program with deeply nested calls could not use as much heap space, and a program using the heap a lot would have to refrain from too deeply nested calls.
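The P4 scheme in miniature (a sketch with made-up sizes, not the compiler's actual code): heap and stack share one region and grow toward each other, and every allocation on either side checks for a collision.

```c
#include <stddef.h>

/* One shared region: heap grows up from 0, stack grows down from the top. */
enum { MEM_SIZE = 4096 };
static size_t heap_top  = 0;          /* first free byte above the heap */
static size_t stack_top = MEM_SIZE;   /* lowest used byte of the stack  */

/* Heap allocation: push heap_top upwards; fail on collision. */
int heap_alloc(size_t n)
{
    if (n > stack_top - heap_top) return 0;   /* out of space */
    heap_top += n;
    return 1;
}

/* Function call: push stack_top downwards; fail on collision. */
int stack_push(size_t frame)
{
    if (frame > stack_top - heap_top) return 0;
    stack_top -= frame;
    return 1;
}
```

Deep call nesting eats into heap headroom and vice versa, exactly the trade-off described above.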
I guess I would not try to virtualize (bank) the stack; that would require dynamic allocation of banks.
Another factor: This CPU, does it provide any hardware signal indicating whether it is fetching instructions (code), accessing data, whether the stack register is involved in the address calculation, or if you are in an interrupt handler? Many CPUs do, and you might use these signals as bank selectors. The 16 bit address could effectively be extended to 18 bits: One 64Ki code space, one 64Ki data space, one 64Ki stack space, and one 64Ki interrupt space. (So it would switch the lower 32Ki as well.) Where DMA fits in would depend on how the signals are set during DMA. Each of these four spaces might be banked. (I guess that stack and DMA would have only a single bank, but if it is treated as such, the banker should be able to map the DMA bank with data buffers as a data bank accessible to "ordinary" (non-DMA/interrupt) code.)
All of this of course depends on the available signals from the CPU, as well as the flexibility of your bank mapping mechanism (e.g. whether an interrupt can immediately switch to the interrupt bank, sufficiently fast for the interrupt handler), or whether the DMA can be hardwired to one specific lower 32Ki independent of the mapping of code, data and stack. If the hardware is already designed and implemented, I guess it is difficult to accommodate new proposals at this level.
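The bus-cycle-as-bank-selector idea boils down to treating the cycle type as two extra address bits. A minimal sketch, with a hypothetical decoding of the status signals:

```c
#include <stdint.h>

/* Bus-cycle type, as decoded from the (hypothetical) CPU status signals. */
typedef enum { CYCLE_CODE, CYCLE_DATA, CYCLE_STACK, CYCLE_INTERRUPT } cycle_t;

/* The cycle type supplies two extra address bits, giving four separate
   64Ki spaces - an 18-bit effective address. */
uint32_t extended_address(cycle_t cycle, uint16_t addr)
{
    return ((uint32_t)cycle << 16) | addr;
}
```

Each of the four 64Ki spaces could then be banked independently behind this 18-bit address.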
|
Ah, the 8051/8052. I have done really incredible stuff with it. It is amazing what you can squeeze out of 256 bytes of RAM, and in combination with 32 kbytes of EPROM you can do truly impressive things. Full blown control systems for nearly every type of compressor - it was absolutely fantastic.
Later I used versions from Silicon Labs which performed up to 100 MIPS with 64 kbytes of flash memory and a few kilobytes of external RAM; they were perfect for implementing all sorts of communication gateways.
Those really were the days.
|
Member 7989122 wrote: Another factor: This CPU, does it provide any hardware signal indicating whether it is fetching instructions (code), accessing data, whether the stack register is involved in the address calculation, or if you are in an interrupt handler? Many CPUs do, and you might use these signals as bank selectors. The 16 bit address could effectively be extended to 18 bits: One 64Ki code space, one 64Ki data space, one 64Ki stack space, and one 64Ki interrupt space. (So it would switch the lower 32Ki as well.) Where DMA fits in would depend on how the signals are set during DMA. Each of these four spaces might be banked. (I guess that stack and DMA would have only a single bank, but if it is treated as such, the banker should be able to map the DMA bank with data buffers as a data bank accessible to "ordinary" (non-DMA/interrupt) code.)
Interesting. I have to think about that. The processor indeed has such signals showing what kind of bus cycle it is currently executing. These are fetch, execute, interrupt and DMA. Unlike many other CPUs, it does not give up its bus for DMA. Instead, it acts as a DMA controller during DMA cycles and does the memory addressing itself. The device that requested the DMA just has to put its byte on the bus (or read it) as soon as the CPU responds with a DMA cycle.
Interrupt cycles are a little hairier. They are only used to acknowledge an interrupt; the interrupt routine itself is executed with regular fetch and execute cycles.
Fetch cycles themselves are unproblematic, but execute cycles are not. They can access the stack, data or even code (in branching instructions). And then there still is the matter of calling code on another page. It's like sawing off the branch on which you are sitting and usually results in a nice crash.
Maybe I can work out a hybrid of both ideas. At first glance this looks good for DMA, but adds more problems for the other bus cycles.
|
This reminds me of the horror that was "Large Model" programming, with FAR pointers and all of that. That was so annoying. We can still see lingering traces of it today in the Win32 API, with the variable types for pointers often prefixed with L. Just seeing those Ls is still annoying for me today, and I alias away every one of them that hasn't already been. There are still a few around.
That's what I was wondering about - how can this be handled automatically for software by you and/or the OS? I doubt the compiler for that CPU has any knowledge of segment registers or anything other than what was called "small model."
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
The basic problem is that of pointers.
The Windows design made a serious attempt at abandoning them, replacing them with handles, where the programmer (and the user as well) is unconcerned about where the object is physically located: A binary identifier, a synonym for the variable name. In the Win16 days, with 8086 CPUs, there was no hardware support, so we had to pin the object to a location while we were accessing it. But our box of objects could fill a hundred megabytes, even though the address space was limited to 1 megabyte: The Windows runtime "paged in" the objects to the address space when we pinned an object to use it.
Windows was created in an age when this was high fashion. There were lots of experimental CPU designs which fully supported handle addressing in hardware. Some tried to make it in the industrial, commercial world, such as the Intel iAPX 432, which was 100% object oriented: You provided a handle (aka capability ID) to an object and an offset within that object. The memory address was completely hidden and inaccessible to the programmer.
Java tried to introduce handles in software to be run on a virtual machine that could trap object references to page these in. Somewhat later C# arrived with references rather than pointers.
If software developers had fully embraced these concepts, abandoning pointers and programmer-known memory addresses, we could have avoided both far and near pointers. But we were not ready. We were thinking in terms of memory addresses, and to some degree we still are: Even in C#, you may have to relate to pointers when addressing Win32 functions - and a fair share of classical developers rejoice. Here they get something concrete, something solid that they can get a grasp on.
Pointerless software is slowly getting acceptance. Very slowly. For 30+ years we have had these performance pi**ing contests: My program runs 3% faster than yours! Then you cannot waste time on having hardware looking up a handle in a capability table, the hardware must work directly on direct addresses, pointers.
Today, for large application areas, CPU speeds are "high enough". A generation ago, video playback could saturate the CPU; today it might take one or two percent of one core. Measured over an hour of use, a typical home PC probably uses far below 1% of its processing capacity. Or its disk I/O capacity. We probably could afford (in terms of performance) the cost of getting rid of pointers, with truly object oriented CPUs.
The original iAPX 432 design was, from a functional viewpoint, fully satisfactory. Revitalizing it today would be a joke, e.g. with its limit of 8Ki objects per process. In 1980 it was like "640K should be enough for everybody". But the experience from the 432 taught Intel countless lessons for building the 386 memory management system (the segment tables have inherited a lot from the 432 capability table). I really wish that Intel would repeat this exercise, using 35 years of experience to develop a 432 Mark II implementing in hardware the dotNET object model, similar to the original 432 (but certainly different).
Since lots of dotNET software does p/invoke pointer-based code, I guess a 432 Mark II would require a co-implemented x64 core. Hopefully, x64 could be phased out with time: When DEC introduced the VAX 780, it could execute legacy PDP-11 code natively. Later models, such as the 8600, apparently could do the same, but the reality was that it required a DECwriter system console: PDP-11 code caused an interrupt so that VMS could ship that code module over the serial line to the system console for interpretation on the LSI-11 that processed keypresses on the console. Unplug the console, and the 8600 loses its ability to run PDP-11 code! If we got something like 432 Mark II machines, maybe five years later we will have to insert a USB stick with an x64 CPU running Win10 if the dotNET application on the main processor makes p/invoke calls to pointer based code.
I think this would be a great development. I always liked the security provided by capability based machines. In principle, I think that we are ready for it now - both the technology and the dotNET software developers. Yet I am not holding my breath. I am afraid that I will see nothing like that in my lifetime (and I am seriously planning to live for quite a few more years).
|
I remember the i432. It was somewhat interesting. I found the i860 far more interesting though. It was for chewing through numbers which I was and am still interested in except now I get to do it on the job.
I am not at all interested in a world of software without pointers. I can see the value of the concept in certain realms but, thankfully, I am not required to participate in those. I am quite happy where I am.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
Rick York wrote: That's what I was wondering about - how can this be handled automatically for software by you and/or the OS? I doubt the compiler for that CPU has any knowledge of segment registers or anything other than what was called "small model."
It's actually very easy. The processor comes from the time of home built single board computers. It was a little ahead of its time in several ways, but was very limited by slow and small memories and expensive storage devices. With more memory and some sort of mass storage to fill that memory, it can really eat any 8 bit processor of its time for breakfast and seriously stray into 16 bit territory.
It's true, the processor does not know anything about segment registers. That's ok for the stack segment. It should only be changed under certain conditions, so that will be done by the code and only when these conditions are met.
The data segment can and should be changed as needed. I have little choice but to leave that to the code as well.
At least I can do something for the code segment. Due to its RISC architecture, the processor does not have instructions to call subroutines or return from them. Instead, it loads the address of a subroutine into any one of its 16 working registers and make that register the new program counter. Returning is just as easy. Leave the original program counter alone in the subroutine and make it the program counter again.
Usually you have only two such simple procedures. One is used to call subroutines with a more elaborate protocol for passing parameters and saving registers. The other one handles returning from a subroutine, restoring the registers that were saved in the calling procedure and passing return values. Simply by modifying these procedures to save, change and restore the segment registers of the code and data segments, I can instantly call subroutines anywhere in the code segment. No other processor with fixed call/return instructions can do that.
As things were, you wrote machine code. An assembler was luxury and also wanted its share of your memory. There were various BASIC interpreters, but I never was really interested. They just were too limited and wasteful with the limited memory resources. The better ones at least tokenized the code, making the memory hunger a bit smaller and the parsing at runtime a little faster. There were other interpreters, but these languages usually suffered from similar problems. Compilers were not much of a thing at all, like on most 8 bit computers. The reason for this was again memory and mass storage.
There is one exception: FORTH. It scales and adapts very well, from tiny microcontrollers to modern processors. It also is quite fast, because it is an interpreter and a just in time compiler at the same time. It even solves the problem of what to do with the OS. In the old days there was none at all, and FORTH has a tendency to become the OS itself by keeping track of every bit of code you wrote. All that makes it a good candidate from the old times to adapt to my memory model.
Today we also have cross assemblers, a C compiler and an emulator/debugger. The cross assembler is open source and I might adapt it myself. The author of the emulator does his best to emulate all the little computers that use this old processor. I already had contact with him when he wanted my permission to include a little game that I wrote 40 years ago on the old computer. I think he will also include my new memory model in his emulator, once I have something to show.
And the C compiler? It needs a new project type, similar to compiling a DLL instead of an executable. And it has to use my modified call and return procedures.
|
I would probably do something similar to what DEC (Digital Equipment) did some 50 years ago in its PDP-11 product line and RSX11D/M operating system: build a small MMU (takes a few PALs or something similar) and support 8 segments of 8KB, each of them mappable into the physical memory space.
Of those 8 segments:
one is a permanent "system" segment holding small parts of OS code, OS data, OS stack (say 4KB); the code in there can do whatever it takes to get more OS code and more OS data mapped in and out.
the remaining 7 are "user" segments, that can be used for code, data or stack as required by each individual process.
So each process could dynamically choose concurrent access to 8/16/24/32 KB of code plus 8/16/24/32 KB of data plus 8/16... KB of stack totaling up to 56KB.
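That translation is cheap to sketch: the top 3 bits of the 16-bit address select one of the eight 8KB windows, and a per-process map supplies the physical frame. A minimal C model, in the spirit of the PDP-11 MMU (the register layout here is my own, not DEC's):

```c
#include <stdint.h>

/* Eight 8 KiB windows, each mapped to a physical 8 KiB frame. */
static uint16_t page_map[8];          /* window number -> frame number */

/* Translate a 16-bit virtual address to a physical address. */
uint32_t translate(uint16_t addr)
{
    unsigned window = addr >> 13;     /* top 3 bits select the window */
    return ((uint32_t)page_map[window] << 13) | (addr & 0x1FFF);
}
```

Reloading `page_map` on a context switch gives each process its own 8/16/24/... KB mix of code, data and stack, as described above.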
|
Does anyone here use the Pomodoro technique for coding? I've used it to excellent effect before and would like to get back to it.
That's why I'm asking for recommendations for a Pomodoro planning/tracking app. There are so many, and trying out individual ones is quite a hassle. What do you use/recommend?
"'Do what thou wilt...' is to bid Stars to shine, Vines to bear grapes, Water to seek its level; man is the only being in Nature that has striven to set himself at odds with himself."
—Aleister Crowley
|
Never heard of it before but did a quick scan...
It looks a bit... suspicious
Wouldn't it only work if you can split your tasks into equal segments of work? Let's say you have a 25 minute tomato... what happens when a piece of work takes 30 minutes, or 2 hours, etc?
|
The humor of that seems to have pasta you by.
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
W∴ Balboos, GHB wrote: pasta you by
Usually Spaghetti... what pasta you by?
|
I should have rotini my original post, orzo it seems.
|
Well that is puree a matter of paste.
|
If a Pomodoro takes more work than your timer, e.g. 25 minutes, either:
a) You haven't divided your main tasks into separate little tasks. This skill grows very quickly as you begin to practice this technique.
b) You stop working after 25 minutes, note that Pomodoro as unfinished, and after 5 minutes, you start a new Pomodoro and resume work on your ill-planned task.
"'Do what thou wilt...' is to bid Stars to shine, Vines to bear grapes, Water to seek its level; man is the only being in Nature that has striven to set himself at odds with himself."
—Aleister Crowley
|