|
The so-called Advanced "RISC" Machine also has push and pop instructions that take a list of registers as operands to push or pop. I can't think of any definition of RISC that makes that make sense.
|
|
|
|
|
Pipelines and caches have watered down that principle a little. If they are efficient enough, you can afford such un-RISKy things while sacrificing the other principle of keeping the design simple. There are times when I wish I had such an instruction on my old CDP1802, but then again I can also run that processor on batteries for months. Simplicity has its advantages.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: RISC is not what most people think it is. The instruction set is not reduced to a minimal set of instructions. That should be obvious from its name: "Reduced Instruction Set computer" can't possibly have anything to do with a reduced instruction set ...
Oh, it started out that way. RISC became high fashion, a marketing concept. RISC is good! But after the first wave of architectures that had a truly reduced instruction set, they developed onwards, adding lots of sometimes very complex instructions (such as 4x4 matrix multiplication instructions). From a marketing point of view, admitting that "the instruction set has grown so complex that it can no longer be called RISC" was totally out of the question. Rather, "RISC" was redefined to allow for arbitrarily complex instruction set architectures.
During the 80s and 90s, we saw a multitude of alternate RISC redefinitions. Coming to mind are "No microcode", "Single cycle instruction execution" (with the obvious exception for those that could not complete in a single cycle...), "A large number of general registers", "A regular instruction set where a given bit(group) serves the same function in all instructions", "No complex operand address formats", ... Oh, there were more. All of them to draw attention away from the ever more complex instruction set.
Actually, chips like the 6800, and later 68K, satisfied most RISC criteria. As did the PDP-11. But they had been branded as CISC (by RISC adherents), and never succeeded in washing the CISC stain off - they had to develop a new architecture under a new name. At least it helped them take a certain market share for a few years.
I never worked with IA64, but the x86 architecture is such a mess that I never understood how it could survive. And even less can I understand how they make it spin around that fast. No wonder it takes a billion transistors to do it. (The 68000 was said to have about that many transistors. Today it seems quite amazing that it didn't take more to implement a complete CISC architecture!)
|
|
|
|
|
Quote: Actually, chips like the 6800, and later 68K, satisfied most RISC criteria. I never used the 6800, but spent a lot of time with the 68000. It very much followed the beaten CISC path, but at least it had arrays of data and address registers that you could use freely in those roles.
The only 8-bit processor that I can think of that really was a RISC processor was the old CDP1802. An 8-bit RISC processor is an unlikely thing, since you are stuck with 256 opcodes. They had to shoehorn in a few instructions at the expense of other practical but not essential ones. Branch instructions with a full 16-bit address were also a problem, because they did not fit into the neat fetch/execute scheme. But look at the programming model! Very few dedicated registers, not even a fixed program counter or stack pointer, but sixteen 16-bit registers that you can use as you wish. That little processor was a RISC processor before RISC was officially invented.
- No microcode? Check.
- Single cycle execution? Well, two cycles is the best you can do without pipelines. Check, except for the already mentioned 'long branch' instructions with three cycles.
- A large number of general registers? Check, but they did not go far enough for my taste. It would have been glorious if they had pulled off the same trick for the accumulator as they did for the program counter or the stack pointer.
- A regular instruction set where a given bit(group) serves the same function in all instructions? Check, except for the shoehorned instructions.
- No complex operand address formats? Check! To access memory, you had to load the address into any one of the working registers and use it as a memory pointer. How you got your address was your business.
Fits the description very well so far. Have a look: CDP1802 handbook[^]
Quote: but the x86 architecture is such a mess that I never understood how it could survive I think that's mostly because it was Intel that defined many people's first impression of what a microprocessor is supposed to be like. Let's begin with the 8080, add the very popular Z80 (which was an improved 8080 bootleg) and then look at the 8086. These processors were the standard; all others were seen as imitations. Not that Intel would ever try to keep that way of thinking alive.
Quote: No wonder it takes a billion transistors to do it. (The 68000 was said to have about that many transistors. It's not that bad. If you can believe Wikipedia, then these numbers are more correct:
CDP1802: 5000 transistors. That's a little misleading, because it's a CMOS processor where the MOSFETs always come in pairs.
Intel 8080: 6000 transistors.
68000: 68000 transistors (!).
Intel 8086: 29000 transistors.
Intel 80286: 134000 transistors.
Intel 80386: 275000 transistors.
Intel 80486: 1180235 transistors.
Pentium: 3100000 transistors.
You have to go up to the later multicore processors to hit 1 billion or more transistors.
|
|
|
|
|
CodeWraith wrote: Quote: No wonder it takes a billion transistors to do it. (The 68000 was said to have about that many transistors. It's not that bad. If you can believe Wikipedia, then these numbers are more correct: What I intended to say was that "the 68000 was said to have about 68000 transistors" - as you confirm in your comment.
I am impressed that they managed to do the entire 68K architecture in 68K transistors, rather than a few million. Or a billion.
|
|
|
|
|
The number of transistors is not a good measure. For example, CMOS always uses transistors in pairs. One opens up, the other closes. A current only flows in the brief moment when they switch. That's why CMOS devices go much easier on your phone's battery.
The number of basic logic gates would be a far better metric for a processor's complexity. Even that could be misleading. Theoretically you could build a processor with nothing but NAND or NOR gates, but you would need more gates than if you used the right type of gate in each place.
|
|
|
|
|
From my very limited experience of Intel assembly compared to 68K, I would say there wasn't much difference. Assembly always struck me as asking for trouble.
|
|
|
|
|
I don't know about trouble, but it certainly leaves you with no excuses - you can't blame the framework, the compiler, or the optimiser: it's exactly what you told it to do and you can't wriggle out of that!
And while that gives you no-one else to blame, it also gives you incredible freedom - and if you know what you are doing wonderful speed and code density to boot.
Yes, it's slower to develop (though debuggers are available for assembler these days, they weren't when I started) and slower to code (you've basically got 8 or 16 variables you actually want to use and the overhead when going beyond that can be extensive). It's certainly harder to maintain than a high level language!
But it's seriously rewarding. When you get a tight bit of assembler running maybe a thousand times faster than the best the compiler can manage, and it provides a true square wave as the data clock instead of the compiler's asymmetric clock, it's a big rush!
I don't code in it any more, but ... I do miss it some days.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Like much in life, it has its uses but can shoot you in the foot (or head, depending on what you are doing). It gives you freedom - in my experience, the freedom to drink coffee and draw flow charts to figure out what is or isn't happening. C gives you freedom with only a minor performance hindrance.
It's like cars: Formula 1 has lots of speed and no airbag; a Bentley comes with a cup holder but won't do Monza at the same speed. The last time I coded in assembly was PIC assembler, for a product that was written before the company bought a C compiler.
|
|
|
|
|
Nothing like building your own computer and then bringing it to life.
OriginalGriff wrote: Yes, it's slower to develop Slower? Not really. With a little discipline you can make your life much easier. I write libraries for anything and everything. That's a universal way to cut development time in any programming environment.
But I'm getting old. Doing more elaborate math in assembly is a mess - too much going on with the stack and calls to math routines, and you can't see the formula for all the instructions anymore. Here I now use macros more than before. With all the calls and stack operations out of the way, it's much easier to concentrate on what you are actually trying to do.
Quote: though debuggers are available for assembler these days, they weren't when I started How old are you? Did the processors still need an oil change once in a while? Actually, a debugger was the first piece of software I ever bought, and that was in 1979. It cost me something like $15, almost a month's income back then.
Quote: It's certainly harder to maintain than a high level language! Discipline, libraries and macros, again.
Quote: But it's seriously rewarding. Yes, a 'I made fire' moment every five minutes. Extremely addictive.
I made fire![^]
|
|
|
|
|
It is slower to develop, if only because you have to type more, and have to think more carefully about what you are doing. For example, this C loop:

for (int i = 0; i < 10; i++)

becomes this in Z80 code:

PUSH BC
LD B, 10
loop:
    ...
DJNZ loop
POP BC

And if the iteration count exceeds a byte, you have to think about that as well, because DJNZ only counts down the 8-bit B register:

PUSH BC
LD BC, 1000
loop:
    ...
DEC BC
LD A, B
OR C
JP NZ, loop
POP BC
But on the other hand, you can copy a string or bytes in one instruction:

copy:
LD HL, source
LD DE, dest
LD BC, count
LDIR

deleteOneChar:
LD HL, source+1
LD DE, source
LD BC, count
LDIR

And so on.
Don't get me wrong, I still remember my assembler decades with great fondness - but from a general development POV a high level language lets you get a lot more done in a much shorter timescale, and with a much more maintainable result.
But if you have restricted RAM and ROM (and most of mine was done using 4K RAM, 8K ROM) then assembler is the best way to go!
|
|
|
|
|
My impression is that this concern for instruction sets among "ordinary" programmers (excluding those writing OS kernels and bottom-level drivers) comes from an idea that you will be more clever than the compiler, writing faster code.
You won't. In the proceedings of one "History of Programming Languages" conference (in the previous millennium), the developers of Fortran II - the first compiler piloting a large number of now "classical" optimization techniques - reported that they repeatedly spent hours figuring out: "How on earth did the compiler find out that it could do that?? But it works!"
While I still did some assembly coding, I also did extensive timing to see how much execution time I could save, compared to a high level language. True enough: sometimes, for a very simple, very tight loop, I might be able to increase execution speed for that loop by, say, 30%. But profiling the application typically showed that loop to take a percent or less of the total execution time. For the total run, I could almost never measure any difference between my "optimized" assembler and the HLL code. So I gave up assembly. If it can be done in a HLL, do it in a HLL!
Compilers of today do a lot more clever optimizing than Fortran II. You can't outsmart them. Assembly should be limited to those cases where you can't benchmark it against HLL - because the problem is impossible to solve in a HLL.
If you are not coding assembly, why would you be concerned about RISC vs. CISC? It affects your software about as much as the microcode word length. Or the semiconductor technology. Or the internal buses within the CPU chip. That is: not at all.
|
|
|
|
|
Wrong algorithm = slow program. No compiler can optimize that away, and that's where most of the optimization happens. Also, that's why I like my RISC processors so much. There often is only one way to do something, which makes things very straightforward. The clever part is your algorithm.
Quote: Compilers of today do a lot more clever optimizing than Fortran II. You can't outsmart them.
I already had that discussion with a professor long ago. At first he thought that I was an arrogant (censored). A few talks later on how I could possibly beat the compiler, he wanted me to write papers on these ideas. Most of them were quite evil hacks which go against many other holy commandments and dogmas. Used with caution, you can get away with them and go where no compiler can follow.
|
|
|
|
|
CodeWraith wrote: Wrong algorithm = slow program. No compiler can optimize that away. That's where most of the optimization happens. Amen. So code that algorithm in a HLL. I can't think of an algorithm that can be coded in assembly only. All algorithms can equally well be coded in a HLL.
Re discussions with professors: When I was teaching at a tech college, "readable" code was one of my main focuses. The basic "Computer Architecture" course was the only one with assembly coding, so that the students could see stuff like registers and instruction sets in real use.
To zero AX, you move a zero into AX, right? MOV AX, 0. One of the students insisted that "no real programmers" would do it that way; they would rather use XOR AX, AX, which is faster! That he knew for sure - he wouldn't sacrifice performance for readability! So I dug up the timing diagrams to show him that although he was right for the 8086, where the less readable code could save you a single clock cycle, for the 186, 286 and 386 (which was state of the art at that time) the two instructions required the same number of clock cycles. That didn't move him: he insisted on writing code that would run at maximum speed on the 8086, even though the 8086 was by then beyond obsolete.
For the hand-ins, this student provided two solutions: One large comment block with some of the dirtiest assembly code I have seen, headed by the text "This is how a REAL programmer would do it:", followed by a block of (not commented out) assembly code that was neat and readable, headed by "But this is how we are forced to write the code in this course:".
I found it kind of cute. Deep down in my old "archives", I still have a photocopy of that hand-in.
|
|
|
|
|
trønderen wrote: All algorithms can equally well be coded in a HLL. Not every device is a state of the art PC with the strongest processor and plenty of memory. Think of the other end of the spectrum, like microcontrollers. Things like serial communication with a terminal in software without a UART, just bit banging two I/O pins.
Or generating a video signal in software. On my oldest computer this is really done that way. The graphics chip only provides the correct timing and the CPU acts as interrupt and DMA controller to provide the video data on the bus just at the moment it is expected.
Such things require very careful timing and a HLL usually does not give you enough control over the resulting code to do that.
And not all processors do even fundamental things the same way. My old processor, for example, does not have any instructions to call or return from a subroutine. You have to use small procedures with two separate program counters to call a subroutine or to return.
How primitive, right? But the processor is just as flexible with the stack pointer as it is with the program counter, so let's use two stacks: a call stack and a parameter stack, to make passing parameters a little less complicated. Or let's add some memory management to dynamically load (or even compile just in time) the requested code module. By the way, that also opens the way to expanding memory far beyond the processor's usual addressing range. The page address is stored on the call stack along with the return address, and the processor does not notice anything.
You see where this is going. Most high level languages take the usual calling conventions for granted. They would not let me use this processor's abilities other than in the usual way. Leaving the beaten paths is one of the most interesting things a programmer can do and high level languages don't easily let me go there because they are built on these beaten paths.
|
|
|
|
|
CodeWraith wrote: Things like serial communication with a terminal in software without a UART, just bit banging two I/O pins. Or generating a video signal in software. Sure, but those are very good examples of the kind of software I was referring to when writing "(excluding those writing OS kernels and bottom level drivers)".
Those I/O pins that you are bit banging are not available in a HLL. You cannot solve that problem without help from low level assembly code. If you can't control timing with sufficient precision (is that a question of algorithm?) in a HLL, then a HLL is not a viable option.
If the timing precision is the only argument against using a HLL, then you claim that it is impossible for a compiler to generate the same instructions as those you handcraft in assembler. I would like to see the arguments for that claim. If you say "that won't happen in practice, so the code generated by the compiler doesn't realize the same algorithm as the one I code in assembly", then you have made the definition of the algorithm dependent on the compiler: a fully optimizing compiler makes a different algorithm than a non-optimizing one, from the same HLL code. That does not agree with my idea of an algorithm.
CodeWraith wrote: And not all processors do even fundamental things the same way. I once read something by a fellow named Turing, but he could of course be wrong
I still maintain: If assembly and HLL are both viable choices, don't go for assembly for performance reasons, use a HLL. You won't beat the compiler.
If it can't be done in a HLL, then don't code it in HLL.
|
|
|
|
|
Quote: If the timing precision is the only argument against using a HLL, then you claim that it is impossible for a compiler to generate the same instructions as those you handcraft using an assembler. I would like to see the arguments for defending that claim.
I can try, but it will not be easy to show you all the traps in this code that a compiler would have to avoid.
This is the datasheet of the ancient CDP1861 graphics chip: Datasheet[^]
It's just a year younger than the famous Altair, and the ability to add graphics to your computer for about $20 was a small wonder. Just hook up this chip to your bus, send the output signals to a composite monitor, include a small interrupt routine and you are ready to go. Of course that only works if you have a CDP1802 processor, because these ICs work together closely via interrupt and DMA.
You will find these interrupt routines on the last pages of the datasheet.
The upper part is all about initialisation. It already has some pitfalls, the worst of which is that the graphics chip gives us only a certain number of bus cycles before it starts requesting display data via DMA. If the initialisation is not complete by then, we are already out of sync before we have begun. How is a compiler to know this? Will it read the datasheet? Other devices may give us more or less time.
The real problem comes in the second half, from the DISP label on. The graphics chip has begun to display graphics data line by line. It gets these bytes via DMA, but the CPU never gives up control of the bus. Instead, it adds an additional DMA bus cycle at the end of the current instruction and does the memory addressing itself. The CPU acts as its own DMA controller and uses register 0 as the DMA pointer.
The lower part of the interrupt routine is about reducing the vertical resolution. The graphics chip always requests 128 lines every frame. If you repeat each line two or four times, you can reduce the memory requirements of the video buffer and also get a better aspect ratio for the pixels.
Again, we must execute an exact number of bus cycles per line and at the same time manipulate register 0 while it's also altered by exactly 8 DMA requests per line. Do you know any compiler that could deal with this? Why is it even important? When hardware and software interact this closely, all the knowledge of the instruction set in the world is not enough.
And yes, this CDP1861 is obsolete and has been out of production for 30 years now. It's a museum piece. That did not keep some people from building their own replacements, some even with higher resolutions. But nobody ever even tried to implement any of the interrupt routines in a high level language. And I recently posted them a little graphics library with modified interrupt routines that support double buffering and configurable vertical resolution. And sprites, text output...
Yes, we have a C compiler that I could have used for that. The performance was ok, but the compiled code was about 1/4 - 1/3 longer. Not acceptable on a computer with as little as 4k memory. Just as I said before: not everything you can program has the resources of a state of the art PC.
|
|
|
|
|
I have been working with embedded processors for about ten years, and was heavily involved in the core bare-bones software when we switched from the 8051 to the Cortex M0. Even on the 8051, only a few core functions for hardware interfacing were assembly code; less than a handful of coders managed it. The rest was C. The M0 was similar: very few of the developers of e.g. ANT or Bluetooth protocols ever touched assembly functions.
As we progressed to more advanced ARM variants, and even more so: More advanced on-chip peripherals, the tiny group of programmers handling assembly coded core functions stayed the same. The protocol and application group grew quite a lot, but none of them need to know the instruction set of the M33/M4s we are using nowadays.
We are currently in a transition from our proprietary bare-bones monitor, written almost entirely in C, to an open-source embedded OS written in C, with only very low-level, architecture dependent drivers in assembly. I would guess that 99+% of our system-on-chip code is C. And 99.99% of the application code for the SoC is C, C++ or other HLLs.
We are still talking about SoCs with 64Ki RAM, 256Ki flash - but not 30 years obsolete 4Ki/16Ki units. Nor are we talking about the need for the CPU to regularly refresh dynamic RAM, relate to magnetic core memory or synchronize to mercury memory tubes.
Where do we draw the limit for what is relevant today? At mercury tubes? At 74xx chips? Should the 74xx series be forgotten, but the CDP18xx taken as a relevant influence on the choice of assembly vs. HLL code development?
There are two primary ways of getting old. Either you can turn into a grumpy old man, like Jeff Dunham's Walter, or you may lean back, saying, "Oh well, if that is the way the next generation wants it, then let'em!" So let them have agile and github and google appstore and facebook and whathaveyou. For the part which is software development, it is HLL, whether you condone or condemn it.
My practical experience is that for embedded code, once you have got the (very limited) assembly functions required for hardware interfacing, C and other HLLs are most certainly suitable even for embedded programming.
|
|
|
|
|
Quote: Where do we draw the limit for what is relevant today? At mercury tubes? At 74 chips? Should 74 be forgotten, but CPD18xx taken as relevant influence on the choice of assembly vs. HLL code development? Draw the line at the day Moore's law finally fails. Technology may stagnate, the expectations will not. Many old approaches come back when there is no more easy way out. It ain't over until the fat lady sings.
Quote: There are two primary ways of getting old. Either you can turn into a grumpy old man, like Jeff Dunham's Walter, or you may lean back, saying, "Oh well, if that is the way the the next generation wants it, then let'em!" So let them have agile and github and google appstore and facebook and whathaveyou. For the part which is software development, it is HLL, whether you condone or condemn it. I have often enough profited from those who are helpless without their tools, frameworks and compilers. So, produce more of them by all means.
|
|
|
|
|
I beg to differ: here are the execution times of the same functions, in microseconds, with the same algorithm programmed in C and in assembler using SSE, on an i5-3610 (averages over 10000 repetitions with adequate cache clearing between tests):
Function          C    Assembler
--------------------------------
F1          333.297      209.641
F2          804.771      219.726
F3         1441.889      280.273
F4         1452.625      281.373
F5         1435.306      658.708
F6         1450.495      663.955
F7         1439.217      596.668
F8         1454.818      612.861
The only one with just a minor improvement is the first, which is a simple memcpy. The code was compiled with VS2008, but tests with 2015, 2017 and even Intel's own compiler gave the exact same running times.
And this is on a modern CPU. I won't even talk about embedded programming in realtime systems, where you have a 40 MHz microcontroller managing the PWM control of a three-phase motor with a resolution of 125 microseconds AND handling communication on the CAN bus plus the control system.
GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
And how much was the speedup of your application, running from start to end?
Another question of importance: do you have an absolute guarantee that the assembly code and the C code implement exactly the same algorithm? If you let me have the assembly source code, so that I could, in very simple C, write exactly the same flow, do exactly the same tests etc. as in the assembly code, would the speed difference be the same?
Does the C compiler make use of the same hardware - here: The same set of instructions? If assembly code makes use of instructions that the compiler is not aware of, then you have a shortcoming of the compiler, not of HLL per se.
Does the C compiler handle a lot of stuff that is omitted from your assembly code? Or are you comparing apples with oranges? If you use a "C" compiler that is really a C++ compiler, handling stuff like exceptions and memory allocation and whatever, then turning off these facilities could make a great impact.
den2k88 wrote: I won't even talk about embedded programming in realtime systems, where you have microcontrollers managing the pwm control of a triphase motor with a resolution of 125 microseconds AND manage communication on the CAN bus plus the control system on a 40 Mhz microcontroller. If you won't talk about it: Note that I did, when writing:
(excluding those writing OS kernels and bottom level drivers).
You are perfectly right: If the problem can't be solved in a HLL, then don't use a HLL.
|
|
|
|
|
trønderen wrote: And how much was the speedup of your application, running from start to end?
The application processed images in real time to eject damaged products from a live production line. The average processing window (aka the max time the software had to perform its analysis and reach a decision) was 50 ms, and we had about 26 algorithms running. Shaving a millisecond off one routine was priceless, as that was the average running time of many of the algorithms. The ones I optimized had to run 2-3 times per window, meaning 2-3 ms out of every 50.
trønderen wrote: Another question of importance: Do you have an absolute guarantee that the assembly code and the C code implements exactly the same algorithm? If you let me have the assembly source code, so that I could in very simple C write exactly the same flow, do exactly the same tests etc. as in the assembly code, would the speed difference be the same?
Does the C compiler make use of the same hardware - here: The same set of instructions? If assembly code makes use of instructions that the compiler is not aware of, then you have a shortcoming of the compiler, not of HLL per se.
The algorithm is the same (there are not many ways you can rotate a 16bpp buffer by 90° plus optional horizontal and/or vertical mirroring), and the compiler does have access to the xmm registers.
Using assembler meant that I could develop my own algorithm that reads contiguous strips of memory, maximizing cache usage, performs the rotation in register space and writes the result to the required place in the destination image. There is no way to do that in pure C.
In another instance I cut the Sobel calculation of an image down by a factor of 3, using the XMM registers as contiguous memory space and building on the symmetrical, scale-independent nature of the Sobel matrix. Using that Sobel calculation and injecting XMM code to calculate integrals over 32-line blocks instead of doing it in C, I brought an algorithm from 12 ms in the worst case down to 4 ms, which meant it went from unusable to always enabled.
trønderen wrote: If you won't talk about it: Note that I did, when writing:
(excluding those writing OS kernels and bottom level drivers).
Yup, I kind of forgot that programming microcontrollers is quite low level.
|
|
|
|
|
In the good old days before decimalisation, our currency consisted of pounds, shillings and pence. 12 pence to the shilling, 20 shillings to the pound. The first computer I worked on contained a special register which allowed it to perform calculations in that monetary system.
|
|
|
|
|
Makes a lot of sense if you have a lot of money counting to do.
|
|
|
|
|
We produced bills for all of the petrol stations, and agencies, across the UK, who sold Shell or BP products.
|
|
|
|
|