Complexity Theory Isn’t Required to be a Programmer


Complexity is sometimes portrayed as a cornerstone of the theory of computation. What good would algorithm analysis be if we had no way to communicate our results? Theory has a role to play in our field, but I’m not sure it’s required on a typical day of programming. Perhaps I could even go so far as to say you could be a good programmer without understanding much of anything about complexity theory.

My purpose in this article is not to question the value of complexity theory. It is a significant part of the theory of computer science and I even enjoy doing a bit of analysis sometimes. Here, I’m just wondering out loud whether it actually plays a role in the day of a typical programmer.

Practical Matters

The typical problems I see on projects are far removed from the realm of theory. We’re looking for concrete answers to specific problems we’re having right now. I need to know how to save a record in a DB, send a JSON request to the server, rotate an element on the screen, etc. While I might enjoy analysing an algorithm, the opportunity usually isn’t there.

Our development processes are tuned to producing functioning, maintainable code. My primary goal when writing a feature is not its theoretical efficiency, but its functional correctness. This means writing some tests, choosing an appropriate API, and putting in the correct runtime checks. Perhaps I need to get it peer reviewed before I merge to the main branch. Then I create a changelog entry and mark the issue as complete, or ready-for-testing.

On my typical day, week, or even month, I don’t even get close to computational complexity.

A day comes when I need to populate a collection of objects and pick some elements from it. Finally, it’s something that reminds me of the complexity theory I learned. I need this function to be efficient. I set about looking for solutions when a few realities dawn on me:

  1. I could probably solve the problem by creating a plain list, writing a simple comparator, and sorting the list, as sketched just after this list. No analysis needed; the standard library functions are efficient enough.
  2. I’m not even sure it needs to be efficient. I heard back from product management that our most extreme customer will only ever have a few thousand items in this collection. If I used a brain-dead brute-force method, it’s possible nobody would ever notice.
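
Here is a minimal sketch of that first option, assuming the items are simple records with a score to sort by (the Item type and score field are illustrative, not from any real project):

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical record type; stands in for whatever the collection holds.
struct Item {
    int id;
    double score;
};

int main() {
    std::vector<Item> items = {{1, 0.4}, {2, 0.9}, {3, 0.1}, {4, 0.7}};

    // A plain comparator plus the standard sort: O(n log n) comparisons,
    // and almost certainly fast enough for a few thousand items.
    std::sort(items.begin(), items.end(),
              [](const Item& a, const Item& b) { return a.score > b.score; });

    // "Pick some elements": take the top two by score.
    for (std::size_t i = 0; i < 2 && i < items.size(); ++i)
        std::cout << items[i].id << " " << items[i].score << "\n";
}
```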

Minimal Understanding

That’s all fine, but what if we really do need to discuss algorithms? Say I’m giving a team presentation and I show that our O(n) solution is causing a problem. Bob gives me a funny look and asks, “What does O(n) mean?”

I could stare at him blankly, unsure why he doesn’t understand this stuff. I could dismiss him, or I could even be mean and laugh at him. Or, I could simply answer, “It’s a measure of how long this code takes to run. Here, n means the number of items in the container, and our algorithm does one comparison for each item in the container.” Bob answers, “Ah, like a for loop that runs from 1 to n.” Good. Bob has just grasped the basics of complexity theory.
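
Bob’s intuition translates directly into code. A minimal sketch of a linear scan, one comparison per element, which is exactly what O(n) is describing here:

```cpp
#include <iostream>
#include <vector>

// One comparison per element: n items, n comparisons, hence O(n) time.
bool contains(const std::vector<int>& items, int target) {
    for (int value : items) {   // Bob's "for loop that runs from 1 to n"
        if (value == target)
            return true;
    }
    return false;
}

int main() {
    std::vector<int> items = {4, 8, 15, 16, 23, 42};
    std::cout << std::boolalpha << contains(items, 15) << "\n";  // true
}
```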

Is that really all there is to it?

What is O(n)?

This is where common knowledge about complexity starts to waver significantly. Surely Bob doesn’t claim to understand the theory now, but how much do we ourselves really understand?

Consider what O(n) means: nothing. That’s right: if somebody tells me something is O(n), it truthfully tells me nothing on its own. I’d need to make all sorts of assumptions before it had a semblance of meaning. First off, I need to know whether this is time or space complexity. If we’re dealing with collections, people tend to mean time complexity, because most collections have a space complexity of O(n). I say most, since a common one, the radix tree, does not.

The second major assumption is that I’m measuring some significant operation. O(n) must be referring to something in particular about the algorithm. For collections, the convention seems to be the comparison operation, but is that really the best choice? In many practical situations, memory allocation is costlier than comparison. For disk-based algorithms, the number of serializations could be the key cost. In some situations, it may even be the cost of calculating a hash.
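
One way to be explicit about which operation is being counted is to instrument it. A small sketch that counts comparator calls during a standard sort; the counter is purely illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> data = {5, 3, 8, 1, 9, 2};
    std::size_t comparisons = 0;

    // The O(n log n) complexity guarantee on std::sort counts exactly
    // these comparator calls, not allocations, copies, or cache misses.
    std::sort(data.begin(), data.end(),
              [&comparisons](int a, int b) {
                  ++comparisons;
                  return a < b;
              });

    std::cout << "comparisons: " << comparisons << "\n";
}
```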

If we’re looking for fine details, this hand-wavy approach is not so good. In theoretical analysis, we can use the “random-access machine”, a specific model of a computer built for the purpose of analysis. How often do we state we’re using this machine while doing complexity analysis?

That aside, what does the value n in O(n) actually represent? In some cases, it’s genuinely a measure of the number of steps: the time. But quite commonly, big-O notation is actually measuring amortized time instead. If we’re dealing with hash maps, we get even more lenient, allowing O(1) to mean the average expected time given a good hashing function.
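
Amortized cost is easy to observe directly. A sketch that watches std::vector::push_back: most calls are cheap, an occasional call triggers a reallocation, and averaged over all n calls the cost works out to O(1):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    std::size_t reallocations = 0;
    std::size_t capacity = v.capacity();

    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);               // amortized O(1)
        if (v.capacity() != capacity) {
            capacity = v.capacity();  // this particular call paid for a reallocation
            ++reallocations;
        }
    }
    // Only a handful of the 1000 pushes were expensive.
    std::cout << "reallocations: " << reallocations << "\n";
}
```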

Measurements

Does big-O give us an idea of how an algorithm performs in practice? I wouldn’t allow somebody to replace an O(log n) algorithm with one of O(1) time complexity without first measuring it. Perhaps big-O isn’t even the right value here; often the non-simplified complexity is more revealing. Even with that, there are still a lot more details in practice.
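
As a rough illustration of the kind of measurement I mean, here is a sketch comparing lookups in an O(log n) std::map against an O(1)-average std::unordered_map. The size and keys are arbitrary, and a serious benchmark would need warm-up runs, repetition, and care around the optimizer:

```cpp
#include <chrono>
#include <iostream>
#include <map>
#include <unordered_map>

template <typename MapT>
long long time_lookups(const MapT& m, int n) {
    auto start = std::chrono::steady_clock::now();
    long long sum = 0;
    for (int i = 0; i < n; ++i)
        sum += m.at(i);   // keep the results live so the loop isn't optimized away
    auto stop = std::chrono::steady_clock::now();
    std::cout << "checksum " << sum << ": ";
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
}

int main() {
    const int n = 100000;
    std::map<int, int> tree;            // O(log n) lookups
    std::unordered_map<int, int> hash;  // O(1) average lookups
    for (int i = 0; i < n; ++i) { tree[i] = i; hash[i] = i; }

    std::cout << time_lookups(tree, n) << " us (map)\n";
    std::cout << time_lookups(hash, n) << " us (unordered_map)\n";
}
```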

How does the algorithm perform given the layers of caching in the CPU? I’m sure many of us have thought about this before, and even designed algorithms around it. I’m still not aware of how to properly integrate this with complexity theory. I know there is cache-aware analysis, but the topic doesn’t seem widely covered.
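
The classic demonstration: two loops with identical time complexity but very different cache behavior. A sketch that traverses the same matrix row-by-row and then column-by-column; both are O(n²), yet the sequential walk is typically several times faster:

```cpp
#include <chrono>
#include <iostream>
#include <vector>

int main() {
    const std::size_t n = 2000;
    std::vector<int> grid(n * n, 1);
    long long sum = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t row = 0; row < n; ++row)      // row-major: sequential,
        for (std::size_t col = 0; col < n; ++col)  // cache-friendly access
            sum += grid[row * n + col];
    auto t1 = std::chrono::steady_clock::now();
    for (std::size_t col = 0; col < n; ++col)      // column-major: strided,
        for (std::size_t row = 0; row < n; ++row)  // cache-hostile access
            sum += grid[row * n + col];
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::milliseconds;
    std::cout << "row-major:    " << std::chrono::duration_cast<ms>(t1 - t0).count() << " ms\n";
    std::cout << "column-major: " << std::chrono::duration_cast<ms>(t2 - t1).count() << " ms\n";
    std::cout << "checksum: " << sum << "\n";  // keep sum live
}
```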

How does the algorithm perform in concurrent computing? Clearly we’d prefer an algorithm that can scale over processors to one that can’t. We can even describe scaling as having its own complexity, like a linear gain over N processors. But again, I’m a bit uncertain of the precise way to do this. There is something called isoefficiency, but that topic also seems uncommon.
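
As a small illustration of the scaling question, here is a sketch that splits a sum across however many hardware threads are available; whether the gain is anywhere near linear is exactly what a measurement would tell you:

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 10'000'000;
    const unsigned k = std::max(1u, std::thread::hardware_concurrency());
    std::vector<int> data(n, 1);

    // Each thread sums its own slice; no shared mutable state until the join.
    std::vector<long long> partial(k, 0);
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < k; ++t) {
        workers.emplace_back([&, t] {
            std::size_t begin = n * t / k;
            std::size_t end = n * (t + 1) / k;
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& w : workers) w.join();

    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::cout << "sum over " << k << " threads: " << total << "\n";
}
```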

In practice, I know that size complexity of data stores can easily dominate time complexity; it’s really costly to access large amounts of memory. I know that sequential reading is faster than random access. I know that alignment is good, but false sharing is bad.
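
False sharing in particular is easy to provoke and to fix. A sketch of the fix: alignas(64) pads each counter onto its own cache line (assuming 64-byte lines, which is common but not universal); remove the alignas and the two threads typically slow each other down by fighting over a single line:

```cpp
#include <atomic>
#include <thread>

// Without the alignment, the two counters typically share one cache line,
// so each increment by one thread invalidates the other thread's cache.
// alignas(64) gives each counter its own line (assuming 64-byte lines).
struct Counters {
    alignas(64) std::atomic<long> a{0};
    alignas(64) std::atomic<long> b{0};
};

int main() {
    Counters c;
    std::thread t1([&] { for (int i = 0; i < 10'000'000; ++i) c.a++; });
    std::thread t2([&] { for (int i = 0; i < 10'000'000; ++i) c.b++; });
    t1.join();
    t2.join();
}
```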

Certainly, there is a way to put this all together in theory, but I don’t know those details. That hasn’t prevented me from analysing such situations. Simply by reading about CPU architecture and understanding threading, our colleague Bob could probably start doing the same type of analysis.

But to be honest, the situations where such details become vastly important are exceedingly rare in practical programming. It’s quite possible to be an effective programmer without understanding cache coherency.

Let’s Be Honest With Ourselves

Algorithm analysis is an important field. The people who do this on a daily basis bring significant influence and value to our community. Our standard libraries, VMs, and operating systems are continually improved by this field of research. It’d be foolish not to recognize its value.

It is, however, a specialty field. I don’t think one really needs complexity theory to be a programmer. I’m positive somebody could become a really good programmer without ever seeing more than the utmost basics. It’s far more important to understand the pitfalls of floating-point calculations.

Knowing the theory can of course make one a better programmer, just as knowing any number of details about computing will improve one’s abilities: cryptography, cache coherency, compiler design, functional programming, user experience, declarative programming, quality assurance, and fault tolerance, to name just a few. There are an immense number of aspects to modern software. There’s no way we can be expected to know all of them, and certainly not to any level of expertise.

Complexity theory doesn’t stand out from this list. You can be a good programmer without knowing it.

I’ve been fortunate to work on a wide variety of projects, from engineering to media, from finance to games. Follow me on Twitter to share in my love of programming. If you’re interested in something special, just let me know.
