You underestimate the power of AI.
Answer: "Ask Siri, she knows"
If you can keep your head while those about you are losing theirs, perhaps you don't understand the situation.
I won't have one of those shameless hussies in my home!
At the very least, I have a wife - one female-voiced entity awaiting an opportunity to pounce is all I can handle.
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
I got you, 3 to 1. I have 2 daughters. No, make it 4 to 1. Almost forgot the granddaughter.
Genius: The guy who put the mute button on hearing aids.
Here we see again how important artificial intelligence can be to the naturally dumb.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
W∴ Balboos, GHB wrote: Well - the only ones left will be the current Q&A Posters.
That's a scary thought.
That is so funny I nearly wet myself.
Not quite off topic - a few times in the past I posted "LSHIWM" and similar takeoffs on that horrid "LOL" - so clearly your mental faculties are careening towards a crash in the right direction.
Quote: A programmer braindamaged by years of programming Intel processors wants to have the instructions PUSH/POP also for the 68000. He solves the 'problem' in the following way:
I agree, Intel processors cause brain damage. I would even extend that to all CISC processors.
Push and Pop were always two of my favorites. Even had a POP manual: (370) Principles of Operation.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
In my case it's more about the little differences between RISC and CISC processors. I have a nice set of general purpose registers, any of which can be made the current stack pointer at any time. While this is a cross assembler, it still generally assumes a CISC processor. I have two stacks, a parameter stack and a call stack. For every call it must be sorted out where the parameters come from (other registers or, rarely, memory) and which stack they need to go to.
The coolest thing is that the processor does the same trick with the program counter. This opens up a neat possibility. It's a typical 8 bit processor which can only address up to 64k. Of course you can add a larger banked memory, but doing the bank switching in code will get complicated and error-prone.
By having multiple program counters, I can move the bank switching into the calling protocol. I can call any routine in any part of the banked memory without any complication.
CodeWraith wrote: I have a nice set of general purpose registers, of which any can be made the current stack pointer at any time. You could go a lot further. What was the name of that TI chip (9900?) that had all its registers in RAM, with only a single "register block pointer" in the CPU? So general performance was mediocre, but interrupt response was excellent! A single clock cycle to set the register block pointer, and the interrupt handler could start using its private registers, with no need to save anything at all. Same for processes/threads: They all had their private register sets.
A less extreme variant: One of the first CPUs I programmed had 16 register blocks (each consisting of 16 special purpose registers), one block for each of the 16 interrupt priority levels. When an interrupt signal arrived, the first instruction of the handler was executing 900 ns later, which was quite hefty for a "PDP-11 class" 16 bit mini in the mid 1970s. But this was limited to interrupt handling: All ordinary user processes shared a single register block.
Just look at the cheapest PIC microcontroller. They are called RISC, but they really are the good old Harvard architecture. What they call onboard RAM actually is a set of a few hundred to a few thousand 8 bit registers.
Some RISC processors can cause brain damage as well: try having a look at the "full Monty" ARM 7+ processor which describes itself as RISC.
It's actually a truly wonderful processor to work with, but by gawd it's complicated for RISC! Nearly every instruction has condition codes, and the addressing modes are ... extensive, shall we say.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
RISC is not what most people think it is.
The instruction set is not reduced to a minimal set of instructions. The scope of the instructions is reduced, so that they can be executed in two bus cycles: fetch, execute, fetch, execute. With a pipelined architecture and memory caches you can reduce that to one or two clocks of execution time for each instruction.
The origin of RISC definitely lies in Harvard architecture, from which it inherited the fetch/execute type of operation and the large array of general purpose registers. The registers were the only place these computers could keep their data.
Microprocessors, on the other hand, were always of Princeton (aka Von Neumann) architecture and were built around accessing RAM: both instructions and data were kept in external memory. At first, all microprocessors were CISC. They only had dedicated registers and many addressing modes. They were all about addressing their memory.
RISC reunited these two philosophies. RISC processors basically are the good old Harvard architecture, but their registers now also serve as memory pointers, so that they can fetch instructions from memory and, here and there, read or write some data to it.
Modern processors usually are hybrids between RISC and CISC, trying to get the best of both worlds. That comes at a price, because a pure RISC processor, even with caches and pipelines, is much simpler and needs fewer transistors. That means less heat and power consumption, and it also reduces the programmer's brain damage.
The so-called Advanced "RISC" Machine also has push and pop instructions that take a list of registers as operands to push or pop. I can't think of any definition of RISC that makes that make sense.
Pipelines and caches have watered down that principle a little. If they are efficient enough, you can afford such un-RISKy things while sacrificing the other principle of keeping the design simple. There are times when I wish I had such an instruction on my old CDP1802, but then again I can also run that processor on batteries for months. Simplicity has its advantages.
CodeWraith wrote: RISC is not what most people think it is. The instruction set is not reduced to a minimal set of instructions. That should be obvious from its name: "Reduced Instruction Set computer" can't possibly have anything to do with a reduced instruction set ...
Or rather: it started out that way. RISC became high fashion, a marketing concept. RISC is good! But after the first wave of architectures that had a truly reduced instruction set, they developed on, adding lots of sometimes very complex instructions (such as 4*4 matrix multiplication instructions). From a marketing point of view, admitting that "the instruction set has grown so complex that it can no longer be called a RISC" was totally out of the question. Rather, "RISC" was redefined to allow for arbitrarily complex instruction set architectures.
During the 80s and 90s, we saw a multitude of alternate RISC redefinitions. Coming to mind are "No microcode", "Single cycle instruction execution" (with the obvious exception for those that could not complete in a single cycle...), "A large number of general registers", "A regular instruction set where a given bit (group) serves the same function in all instructions", "No complex operand address formats", ... Oh, there were more. All of them to draw attention away from the ever more complex instruction set.
Actually, chips like the 6800, and later 68K, satisfied most RISC criteria. As did the PDP-11. But they had been branded as CISC (by RISC adherents), and never succeeded in washing the CISC stain off - they had to develop a new architecture under a new name. At least it helped them take a certain market share for a few years.
I never worked with IA64, but the x86 architecture is such a mess that I never understood how it could survive. And even less can I understand how they make it spin around that fast. No wonder it takes a billion transistors to do it. (The 68000 was said to have about that many transistors. Today it seems quite amazing that it didn't take more to implement a complete CISC architecture!)
Quote: Actually, chips like the 6800, and later 68K, satisfied most RISC criteria. I never used the 6800, but spent a lot of time with the 68000. It very much followed the beaten CISC path, but at least had arrays of data and address registers that you can freely use in these roles.
The only 8 bit processor that I can think of that really was a RISC processor was the old CDP1802. An 8 bit RISC processor is an unlikely thing, since you are stuck with 256 opcodes. They had to shoehorn in a few instructions at the expense of other practical but not essential instructions. The branching instructions that carried a full 16 bit address were also a problem, because they did not fit into the neat fetch/execute scheme. But look at the programming model! Very few dedicated registers, not even a dedicated program counter or stack pointer, but sixteen 16 bit registers that you can use as you wish. That little processor was a RISC processor before RISC was officially invented.
- No microcode? Check.
- Single cycle execution? Well, two cycles is the best you can do without pipelines. Check, except for the already mentioned 'long branch' instructions with three cycles.
- A large number of general registers? Check, but they did not go far enough for my taste. It would have been glorious if they had pulled off the same trick for the accumulator as they did for the program counter or the stack pointer.
- A regular instruction set where a given bit(group) serve the same function in all instructions? Check, except for the shoehorned instructions.
- No complex operand address formats? Check! To access memory, you had to load the address into any one of the working registers and use it as a memory pointer. How you got your address was your business.
Fits the description very well so far. Have a look: CDP1802 handbook[^]
Quote: but the x86 architecture is such a mess that I never understood how it could survive I think that's mostly because it was Intel that defined many people's first impression of what a microprocessor is supposed to be like. Let's begin with the 8080, add the very popular Z80 (which was an improved 8080 bootleg), and then look at the 8086. These processors were the standard; all others were seen as imitations. Not that Intel would ever try to keep that way of thinking alive.
Quote: No wonder it takes a billion transistors to do it. (The 68000 was said to have about that many transistors.) It's not that bad. If you can believe Wikipedia, then these numbers are more correct:
CDP1802: 5000 transistors. That's a little misleading, because it's a CMOS processor where the MOSFETs always come in pairs.
Intel 8080: 6000 transistors
68000: 68000 transistors (!)
Intel 8086: 29000 transistors
Intel 80286: 134000 transistors
Intel 80386: 275000 transistors
Intel 80486: 1180235 transistors
Pentium: 3100000 transistors
You have to go up to the later multicore processors to hit a billion or more transistors.
CodeWraith wrote: Quote: No wonder it takes a billion transistors to do it. (The 68000 was said to have about that many transistors. It's not that bad. If you can believe Wikipedia, then these numbers are more correct: What I intended to say was that "the 68000 was said to have about 68000 transistors" - as you confirm in your comment.
I am impressed that they managed to do the entire 68K architecture in 68K transistors, rather than a few million. Or a billion.
The number of transistors is not a good measure. For example, CMOS always uses transistors in pairs. One opens up, the other closes. A current only flows in the brief moment when they switch. That's why CMOS devices go much easier on your phone's battery.
The number of basic logic gates would be a far better metric for the complexity of a processor. Even that could be misleading. Theoretically you could build a processor just with NAND or NOR gates, but you would need more gates than if you went out of your way to use the right type of gate as needed.
From my very limited experience of Intel assembly compared to 68K I would say there wasn't much difference. Assembly always struck me as asking for trouble.
I don't know about trouble, but it certainly leaves you with no excuses - you can't blame the framework, the compiler, or the optimiser: it's exactly what you told it to do and you can't wriggle out of that!
And while that gives you no-one else to blame, it also gives you incredible freedom - and if you know what you are doing wonderful speed and code density to boot.
Yes, it's slower to develop (though debuggers are available for assembler these days, they weren't when I started) and slower to code (you've basically got 8 or 16 variables you actually want to use and the overhead when going beyond that can be extensive). It's certainly harder to maintain than a high level language!
But it's seriously rewarding. When you get a tight bit of assembler running maybe a thousand times faster than the best the compiler can manage, and it provides a true square wave as the data clock instead of the compiler's asymmetric clock, it's a big rush!
I don't code in it any more, but ... I do miss it some days.
Like much in life, it has its uses but can shoot you in the foot (or head, depending on what you are doing). It does give you freedom - in my experience, freedom to drink coffee and draw flow charts to figure out what is or isn't happening. C gives you freedom with only a minor performance hindrance.
It's like cars: Formula 1 has lots of speed but no airbag; a Bentley comes with a cup holder but won't do Monza at the same speed. The last time I coded in it was PIC assembler, for a product that was coded before the company bought a C compiler.
Nothing like building your own computer and then bringing it to life.
OriginalGriff wrote: Yes, it's slower to develop Slower? Not really. With a little discipline you can make your life much easier. I write libraries for anything and everything. That's a universal way to cut development time in any programming environment.
But I'm getting old. Doing more elaborate math in assembly is a mess. Too much going on with the stack and with calls to math routines - you can't see the formula for all the instructions anymore. Here I now use macros more than before. With all the calls and stack operations out of the way, it's much easier to concentrate on what you are actually trying to do.
Quote: though debuggers are available for assembler these days, they weren't when I started How old are you? Did the processors still need an oil change once in a while? Actually a debugger was the first software I ever bought, and that was in 1979. It cost me something like $15, almost a month's income back then.
Quote: It's certainly harder to maintain than a high level language! Discipline, libraries and macros, again.
Quote: But it's seriously rewarding. Yes, a 'I made fire' moment every five minutes. Extremely addictive.
I made fire![^]
It is slower to develop, if only because you have to type more, and have to think more carefully about what you are doing. For example, in C / Z80 code:
for (int i = 0; i < 10; i++)

becomes:

    PUSH BC
    LD B, 10
loop:
    ...
    DJNZ loop
    POP BC

And if the iteration count exceeds a byte, you have to think about that as well:
    PUSH BC
    LD BC, 1000
loop:
    ...
    DEC BC          ; DEC BC sets no flags,
    LD A, B         ; so test BC for zero by hand
    OR C
    JP NZ, loop
    POP BC
But on the other hand, you can copy a string of bytes in one instruction:

copy:
    LD HL, source
    LD DE, dest
    LD BC, count
    LDIR

deleteOneChar:
    LD HL, source+1
    LD DE, source
    LD BC, count
    LDIR

And so on.
Don't get me wrong, I still remember my assembler decades with great fondness - but from a general development POV a high level language lets you get a lot more done in a much shorter timescale, and with a much more maintainable result.
But if you have restricted RAM and ROM (and most of mine was done using 4K RAM, 8K ROM) then assembler is the best way to go!