|
My first programming education at university level was in Pascal. The professor required us to model all data as RECORD types ('struct' in C lore) and to define all operations on these records as a coherent set of functions/subroutines, each taking the record instance as its first parameter. (He also required input parameters to precede the output/var parameters in the parameter list, but that's a different matter.) We structured subtypes as multi-level RECORD types, or as variant records ('union' in C lore).
This was in 1977, eight years before C++. Simula 67 was ten years old, but the professor never referred to it, or for that matter to OOP at all. He just did OO programming, without labeling it as such.
Having that background when OO appeared, the change was limited to a few syntactic details. We had to move the first parameter ahead of the function name and add a full stop. We had to add braces around the RECORD type and its associated functions (and start calling them 'methods' rather than 'functions'), but those are details. The code needed no fundamental restructuring. The way we thought about the problem solution didn't really change much. The addition of objects/OOP to TP 5.5 was undramatic. If your previous experience was with Fortran or Cobol, I guess it would have been different.
|
|
|
|
|
Very cool experience! I started in 1975 with FORTRAN II on punched cards; back then, Engineering was very into FORTRAN. I tried COBOL once, but that was a clear dead end. When Turbo Pascal came out, it was the only compiler I could afford (MS products were so obscenely expensive it wasn't even a consideration), and having been programming in Paradox, it was a breath of fresh air! The 5.5 version introduced OOP, and it was so well documented in what we used to call 'User Manuals' that the learning curve was trivial. The only stumbling block for me was the concept of serializing data to disk; I was used to the random access approaches common at the time. Still, it worked out great. Then it was over... the Dark Age of cryptic languages and made-up terminology began.
Will Rogers never met me.
|
|
|
|
|
I never got to pick, so why fight it? Make the most of what you've got to work with.
Each language has its strengths and weaknesses. I have loved writing code in every language I have tried. If the language didn't support something in the best way, it was a chance to create libraries of helpers. That's fun, too! And makes you appreciate the language that does that thing well.
My only regret is that I rarely got to work in Assembly. What a blast that was. Like having legos at the molecular level. You could make anything you could think of.
|
|
|
|
|
YAML - Somebody layered Python on top of JSON and stripped out most of the punctuation.
I’ve given up trying to be calm. However, I am open to feeling slightly less agitated.
|
|
|
|
|
Most disgusting, über-lenient syntax ever.
"If we don't change direction, we'll end up where we're going"
|
|
|
|
|
An honest answer would be to select only from the languages you know. For me that list would include C, C++, assembly, and scripting languages (Windows "batch").
I can't say any of them are my least favorite. For a given problem one of them will be the proper choice and the others will not. That doesn't mean I dislike the other languages or would refuse to use them if they were available.
Software Zen: delete this;
|
|
|
|
|
I think the point of the question was:
Given access to all of these, if starting a new project, which would be LAST on your list of choices?
I struggled in that direction. Assembly, for example, is fine on Motorola chips and the old DEC VAX systems with the MACRO-11 assembler. But Z80 instructions... Shoot me now! LOL. I wish Apple had chosen Intel and IBM had chosen Motorola! It would be a different world. I think they would still be working on the original Mac OS!
|
|
|
|
|
Oh yeah, DEC assembler is nice! Though it's ancient, it was chosen for us to learn at university because of its rich addressing system and overall good structure. I liked it.
Texas Instruments' DSP assembler is also a masterpiece, I should say.
|
|
|
|
|
I worked with an assembler where a register load was written as 'W3 := <loc>', store as 'W3 =: <loc>'. Basic math was like 'F2 + <loc>', 'F4 * <const>' and so on. Turn off (most) interrupts (unprivileged, for at most 256 clock cycles): 'SOLO', back on: 'TUTTI'. Memory swap, halfword: 'H SWAP <loc1>, <loc2>'. Conditional jump: 'IF <cond> GO <dest>'. malloc/free: 'W1 GETB <size>' / 'FREEB <size>'.
You could read the assembly code without learning a gazillion acronyms!
Going from this assembler to C was a small step for a man ... and, I'd say, no large leap for mankind, either.
|
|
|
|
|
When I look at a piece of code, I want to be able to "grok" it, i.e., understand it at the deepest level, in a few seconds.
By "grok", I mean understand the design requirements, the author's intentions, and the implicit and explicit operations of the code.
Many of the languages surveyed have evolved through feature creep to the point that a piece of code may work in language version 123, but not 234.
Rant over.
|
|
|
|
|
Since the question asks for your least favorite language, I find it only natural that the languages chosen leave a lot to be desired.
|
|
|
|
|
It's painful to write in any assembly language, but everyone who has done it knows the sensation afterwards. I loved it, especially when I had to delve deep into the whole SSE instruction set - I had to optimize some functions for different generations of processors, from SSE2 to SSE4.2, passing through the infamous SSSE3 (yes, triple S).
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
Assembly language programming is fun, I enjoyed it for many decades.
It takes a specific mindset though: if you don't "get it", then you'll struggle and hate it. I think it's a load easier to go from "good assembler developer" to "good high-level developer" than it is to go the other way - which may be why so many people hate it.
I started with COBOL at Uni (which is "English with a small vocabulary") and FORTRAN (which was "Assembler with fewer lines to type"), moved on to Assembler, then up the Algol / Pascal / C / C++ / C# ladder, often in parallel.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Assembly rewards the very classic approach to programming: flow chart first, code later. If you learnt this - and I was lucky to have been taught it in high school - assembly is much less daunting.
Since it forces you to formulate a plan and pre-digest the problem before tackling it, it builds a mindset that becomes useful in any language.
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
I don't flowchart asm. I just code in it like anything else. It's basically platform-specific "C" without the syntactic sugar, if you understand how C works.**
I guess I should have learned it in school?
Honestly I have a worse time in Javascript, because asm doesn't encourage me to write bugs in it that only crop up at runtime.
** and yes my analogy is goofy. *hides*
To err is human. Fortune favors the monsters.
|
|
|
|
|
I was a student when (the original K&R) C came on the scene. We quite explicitly referred to it as a (mostly) machine independent assembly language. (The number of cases where the semantics was implementation defined was so large that it certainly wasn't fully machine independent.)
For several years, we saw C as a language for OS and driver implementation, not for application programming. For such purposes, we would use languages with suitable abstractions at a much higher level.
I still think applications should be built in higher abstraction languages. (And providing a language that allows you to build your own abstractions from C level and up is not the same thing. There is a difference between a steel mill and a wrench.)
|
|
|
|
|
I agree with you in general, but C is almost perfect for IoT development. C++ is usable too but to be feasible you must give up most of the niceties like the STL and exceptions.
You can't really afford higher level languages. Sure you can run Micropython on an ESP32, but the performance is what you'd expect.
To err is human. Fortune favors the monsters.
|
|
|
|
|
honey the codewitch wrote: You can't really afford higher level languages. I consider embedded programming to be a very close relative of driver programming, i.e. within the realm of C.
IoT embedded programming has other issues justifying C/assembly programming: battery life! To reduce power consumption, you want to minimize the RAM footprint, which is far easier in low-level programming. You will also minimize the time you listen to the radio or keep other IO lines active; that usually requires low-level code (or a memory- and power-consuming mapping, which you cannot afford). To conserve battery power, IoT devices frequently reduce the clock frequency to the minimum required to perform the IoT tasks, even if the CPU is capable of a lot higher performance. Low-level programming can allow you to complete tasks in time at a lower clock frequency. An IoT device has no need to increase its idle time, as long as it does its job!
For end user applications, 'performance' has been relegated to a sales argument (and nothing more) for at least ten, maybe even twenty, years. Tasks that were once super-heavy, such as video processing, are trivial on modern (general) CPUs. Who cares whether the CPU is 95% or 97% idle when decoding a 4K video? In the 1990s, we used to split large documents into separate files for each chapter because the word processor got too slow. Even ten years ago (then on a 2008-vintage CPU) I edited 500-page books as a single file in MS Word. As soon as the user-level waiting time for an operation falls below a certain threshold, further speedup has very limited value. For the very great majority of end user applications, we have been below that threshold for ages.
Modern smartphones have a physical appearance that points in the direction of 'embedded'. The processing power points differently. Displaying 4K video on a 5" screen at 120 Hz clearly shows that the biggest problem is how to waste enough CPU cycles to make the customer ditch that phone for a new and ever more powerful one. To see that your new phone is faster, you must use a benchmark program - it isn't visible in the ordinary user interface.
I maintain that for end user applications, including smartphone apps, we most certainly can afford programming in high-level languages. For this purpose, I do not consider C-class languages high level. But then: which higher-level languages are alive and kicking today? Not very many!
|
|
|
|
|
den2k88 wrote: Assembly rewards the very classic approach to programming: flow chart first, code later I agree wholeheartedly. With C++ I can generally just start writing; with assembly I must flow chart to keep my sanity. I do enjoy writing in assembly though.
"the debugger doesn't tell me anything because this code compiles just fine" - random QA comment
"Facebook is where you tell lies to your friends. Twitter is where you tell the truth to strangers." - chriselst
"I don't drink any more... then again, I don't drink any less." - Mike Mullikins uncle
|
|
|
|
|
In my study years, we flowcharted the solution - not the implementation. The solution is language independent. In other words: we flowcharted regardless of language.
I do not miss the flowcharting in itself, but I do miss the solution understanding that is independent of specific language constructs. You really should be able to (and not only be able to, but actually do it) describe the solution so clearly that you can give the same description to a Fortran code monkey, a Pascal and a C++ one, and an assembler guy, and they should come up with four functionally identical code solutions.
Maybe flowcharting is the best way to keep language specifics from creeping in - something that very easily happens with pseudocode.
The same goes for data structures: I really miss ER (Entity-Relationship) modelling, which was an excellent method for putting some structure into a mess of information, without making any premature assumptions about how it should be coded in a given coding language. Or, if you like assembler level data modelling as well: ASN.1 is similarly coding language independent. (That's the 'A' in 'Abstract Syntax Notation 1'!)
|
|
|
|
|
I often use pseudocode due to its one-dimensional nature; I often have trouble with the sheet width when jotting flowcharts. Nevertheless, flow charts often win. Many modeling tools managed to feature-creep the flow charts too, though; I normally use a very lean set of shapes that I learnt in high school.
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
The same feature creep happened to ER data modelling. I was even in a project where we added some new extensions to the established ER. If I were to do that project again, I would have fought against all extensions, insisting that we stick to the simple, basic mechanisms that are easy for everyone to understand.
The simplicity of the ER tools gave us a great success in another project: we modelled the complete information structure managed by the city administration. Even people who had never before used a computer (this was in the early 1980s) were able to give valuable, constructive feedback on the model. If we had introduced the ER extensions that we created for the other project, those non-computer people would have had a far smaller chance of getting involved in the model discussions.
|
|
|
|
|
Some of the delights of x86 assembly language were the small number of registers and the extreme non-orthogonality of its instruction set. I used to delight in writing highly optimized code for it.
The x64 instruction set is easier to code for in most respects, but some of the fun has gone out of it. OTOH, compilers have become better at optimization so the number of cases where hand-optimized assembly language is required is much smaller.
That's progress for you
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
I never programmed PDP-8 myself, but I know people who did, telling how they never allocated a constant in memory if there happened to be, say, an instruction within the same memory page having the same bit pattern as the constant they needed: Then they could reference that location, rather than wasting an entire word (12 bits) for a second copy of the same bit pattern ...
Well, I guess that was 'fun', sort of. I prefer to leave the fun to the compiler. If it finds the desired constant value bit pattern as some instruction code it has already generated, I'll let it have that fun - as long as it promises not to assume that the same reference is valid when the code changes to a different value.
I did play around with assembly programming for a few years. I got (and still have) the impression that most of those touting assembly as a way to speed up your code rarely compare the actual execution time of compiler-generated code with that of their hand-assembled version. With modern CPUs, reducing the number of instructions within a tight loop may have next to zero effect on execution time; reducing the number of iterations is far more significant - and can be done in a high-level language as well. I never had an assembly programmer tell me 'Look how much my assembly coding sped up the application at the end-user interface level!' - it is always 'Look at the speedup of this 17-instruction innermost loop!' And quite often, the assembler coder shows me how he has eliminated loops in a way that could have been done in the high-level language as well. Yet the unrolled (/modified algorithm) code is compared to the iterating HLL performance.
Imagine that you could set up five PCs with identical applications, two of them with given modules coded in assembler, two in C, and the fifth either variant. Set up an automatic process to enable either variant on each of the five PCs (keeping the 2 + ? + 2 distribution), and invite end users in to run the applications on all five machines and 'vote' on whether they think the application is pure high-level code or has essential modules coded in assembly. I am afraid that the verdict would be rather disappointing for assembly coding addicts.
Quite often, the speedup gained from assembly programming is not primarily due to inefficient code generated by compilers, but because assembly allows direct access to specialized hardware functions. If end users in the blind test described above were, with some statistical significance, able to identify the assembly coded alternatives, I am quite sure that a closer inspection would show that the assembler coders were using hardware not utilized by the high-level language code. A compiler ready to use the same specialized hardware can generate (for all practical purposes) equally efficient code. That is one of the good things about .net and JIT compilation at the target machine: It allows the JIT compiler to utilize any locally available hardware which is not necessarily available in other contexts.
Assembly has two essential purposes: to access hardware facilities not otherwise accessible from high-level languages, and (not unrelated to the first!) as an educational tool for seeing how the computer works at the machine code level - seeing what the compiler is aiming for.
For general code that does not access specific hardware functions, the compiler will be a lot better than you at generating optimal code. It has been that way for at least thirty years. Even the FORTRAN II developers had to spend great effort to understand how their compiler had discovered that a piece of code could be twisted around to execute faster, even though they had written the optimizing algorithms themselves. It is sort of like a chess championship: cheating is based on advice from a computer, not from a human grandmaster.
|
|
|
|
|
We are mostly in agreement.
I agree that on most modern CPUs, optimizing compilers can make good use of the processor features and produce code that is difficult for humans to optimize further. As you say, this implies that human time is better invested on algorithmic improvements.
Thirty to forty years ago, on x86 CPUs, it was both possible and sometimes necessary to rewrite inner loops in assembly language. Creative usage of the instruction set allowed some incredibly fast code to be written.
I did not say then and do not say now that it makes any sense to write anything other than innermost loops with extreme performance requirements in assembly language. I did say that writing such code was a challenge, and fun.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|