With all the interest in the “Internet of Things” currently being discussed in the programming world, it is worth reexamining our approaches to developing software. One thing is for sure: building fast, lean software for ever smaller devices requires some real rethinking. But wait a minute! Haven’t we been here before? Actually we have, but it is not what you may think.
Our age and experience are our advantage
It seems the programming world is more and more dominated by the young. But in a strange way, the “old timers” among programmers now have an advantage few newer programmers can come close to. What is that? We have more experience working with less powerful hardware. You see, with everything going mobile today, devices must get smaller, and the smaller the device, the less computing power it can carry. You just can’t pack as much power into a small device as you can into a full-sized computer. The “Internet of Things” means even smaller devices than we have now, so the trend will continue. So what is a programmer to do?
Programmers with a little gray hair may already have an idea of where I am going with this article. But for you younger people, let me give a little background.
When less was all we had!
My own experience may provide some of the background needed here. In the 1980s I started digging deeply into computer programming. The home computer was popular back then, but it was nothing like it is today. Most so-called home computers of the 1980s would be totally unusable for the average consumer today. They really could not do much, and anyone interested in writing software for them had to work hard to produce anything decent. Consider one of the most popular of them, the Commodore 64. It cost about as much as two cheap laptops you can buy today (around $600), but it had only a 1 MHz CPU and 64 kilobytes of RAM. True, it was not a small device, but it is the minimal amount of hardware power that I am talking about here. By today’s standards, a computer with nearly 1000 times the CPU power and over 8000 times the RAM of the Commodore 64 is considered minimal hardware. Even if devices designed for the “Internet of Things” had only a quarter of that power (a 256 MHz CPU and 128 MB of RAM), they would still have a CPU roughly 256 times more powerful and about 2000 times more RAM than a Commodore 64.
Yet design software for the Commodore 64 I did. I was using a real native code compiler back then (the Abacus BASIC compiler), not interpreted BASIC. I even used that compiler to write my own BASIC-subset compiler that approached assembly language speed. I had to understand the hardware I was working with far better than the average programmer today does: things like banking out ROM for RAM, building custom character sets, and accessing hardware sprites directly. Programmers had to learn every “trick in the book” to make a decent application.
When I started writing software for businesses later on, things were a little better, but the old CP/M computers I was working with were not much better than the Commodore 64. Yet, amazingly, programmers back then were able to write some very powerful consumer and business software.
The lessons we learned
Programming is not only a skill; it is an art of sorts. Like tradesmen of old (i.e. woodworkers), programmers back then developed all sorts of “tricks” to get more out of a computer. They learned what worked and what didn’t. They found ways to squeeze more out of a computer than its manufacturers ever envisioned. One good example was how some amazing programmers developed an actual GUI-based operating system for the Commodore 64, called GEOS. True, by today’s standards it was rudimentary, but it was similar to the early Apple Macintosh interface, and for the time it was an amazing feat on a Commodore 64.
Now, programmers today like to refer to the old coding styles with terms like “spaghetti code” and tout how much better today’s styles are. Actually, programmers back then understood the value of modular code better than most may think. You see, when you are working with a tiny amount of RAM (or even ROM, for burned-in code), you have a lot less to work with, so the amount of space used by code was critical. The only way to get the most out of one’s code was to write modular, reusable code in the form of flat APIs. Those APIs had to be designed to do as much as possible with the minimal amount of code. So if you think old-time programmers didn’t appreciate modular code design, you are wrong; they understood its importance more than you may think. Code reusability (or modular design) was one of the keys to effective programming back then. But there was one major difference between modular design back then and modular design today. And what was that?
Flat APIs versus Object Oriented APIs
Programmers back then worked with what we today would call a Flat API. Flat APIs are procedural in nature rather than object oriented, and they have a few advantages over their object oriented counterparts. But aren’t object oriented approaches superior to old-style flat APIs? Actually, you may be surprised (I told you that you may not like this): flat APIs not only still exist and are used by a good number of programmers, but such APIs allow programmers to write the smaller, faster and more reliable software we all desire, especially for the coming “Internet of Things”. Let’s consider this.
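Before getting into the advantages, here is a minimal sketch in plain C of what I mean by a flat API. The names (Counter_Create and friends) are hypothetical, purely for illustration: the caller gets an opaque handle and a handful of functions, which is exactly how the WIN32 API hands you HWNDs, HDCs and the like.

/* A hypothetical flat API for a simple counter, sketched in plain C.    */
/* The caller only sees an opaque handle plus a few functions -- no      */
/* classes, no inheritance, no hidden state beyond the handle itself.    */
#include <stdlib.h>

typedef struct Counter Counter;            /* opaque type: layout hidden from callers */

Counter *Counter_Create(int start);        /* returns NULL on failure */
void     Counter_Increment(Counter *c);
int      Counter_Value(const Counter *c);
void     Counter_Destroy(Counter *c);

/* the implementation, hidden behind the handle */
struct Counter { int value; };

Counter *Counter_Create(int start)
{
    Counter *c = malloc(sizeof *c);
    if (c != NULL) c->value = start;
    return c;
}

void Counter_Increment(Counter *c)   { c->value++; }
int  Counter_Value(const Counter *c) { return c->value; }
void Counter_Destroy(Counter *c)     { free(c); }

An object oriented library would wrap the very same functionality in a class, so the caller writes counter.Increment() instead of Counter_Increment(counter). The work being done is identical; the flat version simply makes the handle and the calls explicit.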
Advantages of the Flat API model
- Smaller
Flat APIs by their nature tend to be significantly smaller than their object oriented counterparts. I am convinced of this. Building classes (and objects) for everything adds a significant amount of overhead to an API; it is just the nature of it. The machine code generated from such compiled libraries carries extra overhead as well, so the end result is software that is larger and more complex. One may think it does not matter with the powerful computers of today, but what happens when devices get smaller and smaller and that power goes away? The end result is sluggish software. With the powerful computers we have today we should have lightning-fast software, but sadly we don’t. Why? Because of the extra overhead we add to it. Flat APIs can greatly reduce this overhead, which means smaller and faster software.
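The following is a rough sketch in C, not a benchmark: the hand-rolled “vtable” imitates the sort of machinery a C++ compiler typically generates for virtual method calls, while the flat call is just a direct function call the compiler can even inline.

#include <stdio.h>

/* flat-style: a direct call the compiler can see and inline */
static int add_direct(int a, int b) { return a + b; }

/* object-style dispatch, imitated by hand: every object carries a       */
/* pointer to a table of function pointers, and every call goes through  */
/* two memory loads before the actual function runs.                     */
typedef struct AdderVTable { int (*add)(int, int); } AdderVTable;
typedef struct Adder       { const AdderVTable *vtable; } Adder;

static const AdderVTable adder_vtable = { add_direct };

int main(void)
{
    Adder a = { &adder_vtable };

    printf("direct:   %d\n", add_direct(2, 3));     /* one plain call               */
    printf("indirect: %d\n", a.vtable->add(2, 3));  /* load vtable, load slot, call */
    return 0;
}

Every object also pays for the table pointer it carries around, which is part of the size overhead being described here.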
- Faster
Flat APIs use less code, which means less generated machine code, which means less for the CPU to execute, which means FASTER code! Of this I am also convinced. True, the goal of many libraries today is supposed to be easier programming, so isn’t the extra overhead worth it? The reality is that what many consider easy may not be so easy after all. For some, coding today has become simply a matter of selecting objects, classes, methods and properties from a drop-down list in an IntelliSense-based code editor. But have you ever wondered who wrote all those objects and classes you use? Likely a C++ programmer, not only adept at writing object oriented code, but quite possibly well versed in more procedural coding styles and flat APIs. And some of those flat APIs were likely written by pure C programmers rather than C++ programmers. Every programmer should watch the video of Herb Sutter’s talk entitled “Why C++?”.
- More Reliable
Now this one you may not like to hear, but please keep an open mind. Can flat APIs be more reliable than object oriented ones? Of course, reliability has a lot to do with the quality of the code, no matter how it is written; bad code, whatever the methodology, is still bad code. With that out of the way, how could flat API design be more reliable than object oriented design? I will discuss two reasons here. The first is simple: when there is less code, there is less that can go wrong. The less code being executed, the less there is to debug and the better the chance of reliable software, and flat APIs lend themselves to a less-code approach. The second reason is far more important. No programmer writes perfect code the first time. Human nature being what it is, we all sooner or later mess up and introduce bugs. Some bugs come from typing errors, some from logic errors, some from design errors (the concept itself is flawed). At some point we have to go back and trudge through our code looking for what went wrong. In the old DOS days this was not much of a problem, because following code flow was pretty straightforward. But today, with multi-threading, event-based execution and especially object oriented coding, tracking code flow can become a nightmare, to say the least. To quote an interesting article I found on Intel’s web site, “Now it is almost impossible to follow the execution flow”, and with this I agree. Flat APIs by their nature make code flow easier to track, especially when the flat API is well designed, modular, and does not nest too deeply (i.e. an API calling another API, which calls another API, and so on).
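Here is a minimal sketch in C of the kind of flow I mean. The device_* functions are hypothetical stand-ins for a flat API (with trivial stubs so the sketch runs); the point is that every step returns a status and the reader can follow the sequence straight down the page.

#include <stdio.h>

typedef enum { DEV_OK = 0, DEV_ERR_OPEN, DEV_ERR_READ } dev_status;

/* hypothetical flat API, stubbed out so the example compiles and runs */
static dev_status device_open(const char *name)            { (void)name; return DEV_OK; }
static dev_status device_read(unsigned char *buf, int len) { if (len > 0) buf[0] = 42; return DEV_OK; }
static void       device_close(void)                       { }

int main(void)
{
    unsigned char buf[64];

    if (device_open("sensor0") != DEV_OK) {        /* step 1: checked immediately */
        fprintf(stderr, "open failed\n");
        return 1;
    }
    if (device_read(buf, sizeof buf) != DEV_OK) {  /* step 2: checked immediately */
        fprintf(stderr, "read failed\n");
        device_close();
        return 1;
    }
    device_close();                                /* step 3: explicit cleanup */
    return 0;
}

There is no hidden dispatch and no deep call chain to unwind; when something goes wrong, the failing call is right there in the sequence.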
Does anybody use a Flat API anymore?
Actually, the answer is yes! One of the most powerful and most popular, with a long history behind it, is the Windows WIN32 API. While later versions of Windows have a good bit of COM in them, most of the WIN32 API is a flat API, accessible even from the most rudimentary of languages: assembler, pure C and so on. An entire application can still be written today in a purely procedural style against a flat API (WIN32). Why would anyone want to do that, seeing that we have so many more powerful programming languages today? Back to the Internet of Things. Devices are getting smaller and therefore less powerful, and we need to build smaller, faster and more reliable software for them. Windows Embedded can be shrunk down to a pretty small footprint today, and by using the WIN32 API developers can produce amazingly small, fast and reliable software, well suited to those smaller devices.
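To give a feel for what purely procedural WIN32 code looks like, here is a minimal sketch of a complete program written in plain C against the flat API: register a window class, create a window, and pump messages. Nothing here is a class or an object; every call is a plain function exported by the operating system.

#include <windows.h>

/* the window procedure: a plain callback function, the heart of any WIN32 app */
static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_DESTROY:
        PostQuitMessage(0);                 /* tell the message loop to exit */
        return 0;
    default:
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrev, LPSTR lpCmdLine, int nCmdShow)
{
    WNDCLASS wc = {0};
    HWND     hwnd;
    MSG      msg;

    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInstance;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = TEXT("TinyFlatApp");
    RegisterClass(&wc);

    hwnd = CreateWindow(TEXT("TinyFlatApp"), TEXT("Tiny WIN32 App"),
                        WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                        400, 300, NULL, NULL, hInstance, NULL);
    ShowWindow(hwnd, nCmdShow);
    UpdateWindow(hwnd);

    while (GetMessage(&msg, NULL, 0, 0) > 0)   /* the classic procedural message loop */
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return (int)msg.wParam;
}

A program like this, with no framework libraries behind it, typically compiles to a very small executable, which is exactly the kind of footprint being argued for here.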
I have been programming with the WIN32 API for the last decade and a half, and I am convinced of how powerful it is and how it allows one to build smaller, faster and more reliable software. As one example, check out the very powerful graphics library called GDImage. GDImage was written using the WIN32 API (and OpenGL 1.0/2.0, which is also a flat API). The developer even uses GDI+ (GDI Plus), but instead of the C++ class wrappers he uses the GDI+ flat API. The library was originally written in PowerBASIC and recently ported to C++ (while avoiding OOP). It is not only very sound (reliable), but what is most amazing is its performance (fast) and its size (only 315 kilobytes).
Now, while I don’t expect many to code in assembler today, here is an excellent example of combining assembler with the WIN32 flat API: a nice utility program called ToolBar Paint. It demonstrates how tiny (and fast) a WIN32 application can be and still be very useful and productive. Maybe there is something to learn from developers like these, especially if you want to develop software for the “Internet of Things”.
As a long-time WIN32 programmer myself, I can attest to how fast and how small WIN32 applications (and libraries) can be. Well-written WIN32 applications can also be very reliable.