Programming language comparisons usually focus on the brevity and expressiveness of the language. If the solution to a programming problem has fewer lines and is “easier to understand” or “clearer” in one language than another, that is an argument for using the former over the latter. This comparison is useful if one supposes that the art of programming is primarily concerned with writing programs. It isn’t, of course. It is mostly concerned with debugging programs.

If one supposes that the art of programming is mostly concerned with debugging programs, then it also makes sense that programs should be “easier to understand” and more concise – those qualities should make it easier to figure out what the program is doing and find bugs. However, as anybody who has tried to debug a Perl one-liner knows, brevity is not always a virtue.
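
To make that concrete – a small Python sketch, standing in for the Perl remark, with invented names – compare a computation written as a one-liner with the same computation written out so that its intermediate values can be inspected:

    # The terse version: total of the squares of the even numbers.
    data = [3, 4, 7, 10, 15]
    total = sum(x * x for x in data if x % 2 == 0)

    # The same computation, unpacked so that each intermediate value can be
    # printed or breakpointed while hunting for a bug.
    evens = [x for x in data if x % 2 == 0]   # [4, 10]
    squares = [x * x for x in evens]          # [16, 100]
    total = sum(squares)                      # 116

The one-liner is shorter, but when it produces the wrong answer there is nothing in the middle to look at.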

So, the question becomes: What makes a program harder to debug? Avoiding the things that make a program harder to debug may be as important as (or more important than) pursuing the things that make it “easier to understand”.

Debugging is the process of:
a. figuring out where the bug is, and
b. figuring out how to fix it.

We’ll focus on the first step, since that is what typically consumes the most time. The process begins by realizing that something is amiss – either because the program throws an exception (a clue that something is amiss at the spot where the exception is thrown) or because a value being displayed is not the desired value. In the latter case, we realize that the bug is not at the point of display, but rather at some earlier point where the value was calculated incorrectly. We need to find that point. The same may be true when an exception is thrown. The exception might be caused by a division by zero, say, while the actual bug lies at some earlier point where the zero was incorrectly calculated. So debugging proceeds from the point at which the error is noticed back to the earlier point at which it was introduced.
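
A minimal sketch of this situation (in Python, with invented names): the exception is raised on the last line of the function, but the bug lives a few statements earlier, where the zero was produced.

    def average_score(scores):
        # Bug: an over-eager filter throws away every entry...
        valid = [s for s in scores if s > 100]    # should have been s > 0
        count = len(valid)
        total = sum(valid)
        # ...but the error only surfaces here, as a ZeroDivisionError.
        return total / count

    average_score([87, 92, 78])

The traceback points at the division; the fix belongs to the filter two lines up.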

Making this process (that of tracing backwards to the source of the error) more tractable is the impetus for most software engineering principles and methodologies and paradigms and practices. Let us assume the simplest possible case. One has a program that begins, executes a bunch of statements, then ends. It blows up at some point. The process of hunting for the bug would involve looking at the statements starting at the point where it blew up, and then searching backwards through the program. If we use goto statements, then the error could potentially be anywhere in the program – we might have gotten nearly to the end and branched back. The existence of gotos makes it impossible to easily narrow down where the bug might be – hence, the invective levied at the use of goto statements. If instead of using gotos we use a construct like a while loop, then as we search backwards through the program, we find the while statement, which we can pair with the corresponding end of the while loop, thus bounding how far forward in the program we need to look. Hence, the insistence on indenting code so that the end of a while loop can be located given its start. (Modern programmers might rely on their code editor to point out the corresponding end of a control structure rather than on indentation, but that is a different discussion.)

The search for “where did this bug occur” involves starting at that point where one realizes that some data is wrong, and going back to where (previously) that value was calculated incorrectly. That is the spot which needs to be fixed. We want to partition the code base so that we don’t have to examine every line of code and check every value. We would like our programming paradigm to help us figure out what does or does not need to be examined. If we know that some code has not yet been executed at the point where we notice the problem, then clearly we don’t need to look for the bug there.

Much as gotos make it harder to figure out which code was executed before we got to “this spot”, global variables make it harder to figure out which code modified the value that we suspect is now incorrect. As with gotos, a global variable might have been modified anywhere – and we have to search the entire code base looking for candidate culprits. A local variable, by contrast, could only have been modified by code in that scope – we need only search a limited amount of code for possible problems.
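
A small sketch of the difference in search space (Python, invented names):

    # A global: any function anywhere in the code base may have modified it,
    # so finding out who set it to a bad value means reading everything.
    retry_limit = 3

    def disable_retries():
        global retry_limit
        retry_limit = 0              # one of arbitrarily many possible culprits

    def process(job):
        # A local: `attempts` can only be modified inside this function,
        # so the search for a bad value is bounded by this one scope.
        attempts = 0
        while attempts < retry_limit:
            attempts += 1
            # ... try the job ...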

So, we need to search for “which code did I execute to get here” – which might also be phrased as “which code set this value incorrectly”. We would like to be able to say “I don’t need to check this code because we didn’t run it” as well as “I don’t need to check this code because it couldn’t have set this value”. Hence, control structures and data scoping.

The final factor to consider is the separation between application code and library code. In any implementation that one is debugging, some of the code was written for the application, and some of the code (and some of the variables that get modified) belongs to libraries that are included. The first stratagem is to differentiate between your code and included code – under the assumption that the bug is probably in your code, so you should look there first – as well as the assumption that even if the bug is in their code, you may prefer to work around their bug by confining your changes to your code. You might not be able to modify their code – or you might not wish to. We’ll refer to this quality (the ability to worry about your code without worrying about their code) as “separation of concerns”. We evaluate various approaches to debugging on the basis of how effectively we can exploit separation of concerns, control structures, and data scoping to minimize the amount of code we need to inspect and understand in order to locate the source of a bug.

The process of isolating the location involves either working with a debugger that allows one to place breakpoints and stop the code at various spots, or inserting print statements at various spots to log the fact that one arrived at that spot, and perhaps the values of some variables of interest. Debuggers allow one to poke about at a variety of locations and variables without deciding ahead of time which ones matter, while print statements might require modifying and rebuilding the program repeatedly in order to capture information about all the relevant bits. Logging frameworks, and the inclusion of logging code in applications, exist mostly in anticipation of future debugging sessions.
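
For example, a minimal sketch using Python’s standard logging module – the kind of logging that gets written in anticipation of a future debugging session (the function and its parameters are invented):

    import logging

    logging.basicConfig(level=logging.DEBUG)
    log = logging.getLogger(__name__)

    def apply_discount(price, rate):
        # Record that we got here, and the values we were given, so that a
        # later debugging session can reconstruct the path to a bad result.
        log.debug("apply_discount called with price=%r rate=%r", price, rate)
        discounted = price * (1 - rate)
        log.debug("apply_discount returning %r", discounted)
        return discounted

    apply_discount(100.0, 0.15)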

Debugging an imperative program then involves examining the stack at the point at which an exception is thrown. If the exception is thrown in library code, we pop the stack back to our code, and examine the parameters being passed to the library function. If incorrect, we can treat the problem in the same way as an exception at that point. If correct, then the problem possibly lies in the library, and we can consider replacing the library call with something that works. At any point in the stack, we can examine the control structure we are in or pop the stack to the calling frame and examine the parameters being passed. We can navigate the code base in a fairly targeted way as long as we refrain from using global variables and goto statements.
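
As an illustration (Python, with an invented wrapper around the standard json library): the exception is raised inside library code, but popping the stack back to our frame shows that the parameter we passed was the real problem.

    import json

    def load_config(text):
        # Our code: if `text` is None, the TypeError is raised inside the
        # json library, but the argument we handed it is what is wrong.
        return json.loads(text)

    def main():
        config_text = None            # bug: the file was never actually read
        settings = load_config(config_text)

    main()

The innermost frame of the traceback is library code; popping up one frame shows load_config being handed None, and popping up again shows where that None came from.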

The advantage of functional programming is that it discourages goto statements and global variables even more aggressively than imperative programming does. Functional programming, however, often introduces other conventions that increase debugging difficulty. But that is a discussion for another time.

Let us consider debugging an object-oriented program. The first difficulty that object-orientation introduces is the notion of instance variables. From the perspective of debugging, an instance variable is not in the scope of any of the methods on the stack. An instance variable behaves (from the perspective of the methods on the stack) like a global variable – the difference being that one might restrict the search space to the code for the current class (and all its subclasses). This threatens to increase the search space, and suggests that avoiding instance variables in favor of passing arguments might be a better strategy where possible. This restriction is only possible if the language protects access to instance variables by means of a “protected” or “private” modifier which prevents global access. Even in languages which can protect instance variable access, if you make the mistake of defining public “setter” methods, then the instance variables with setter methods become effectively global variables: if the value of the instance variable is incorrect at some point, that incorrect value might have been set anywhere in your code. The original justification for “setter” methods was the realization that allowing anybody to modify instance variables made them indistinguishable from global variables – hence making debugging much more difficult. Therefore, preventing direct access to the instance variable would fix that problem. If one wanted to allow external methods to set the variable, the setter method would be constructed to guarantee that it could never be set to an invalid value – hence preserving the property of “I don’t have to search the complete body of source code to find out who might have messed this up”. If your setter method does not actively prevent the setting of incorrect values, you might as well just allow public access to the instance variable – you will have to check the entire code base for any methods which called the setter. So, to avoid making your object-oriented code harder to debug than imperative code, you need to avoid public instance variables, and also avoid defining setter methods (or at least setter methods that do not enforce valid values).
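
A sketch of the two kinds of setter (an invented class, in Python):

    class Account:
        def __init__(self, balance):
            self._balance = balance

        # A pass-through setter: anyone, anywhere, can store anything here,
        # so a bad balance could have been set from any point in the code
        # base. It is effectively a global variable with extra syntax.
        def set_balance(self, value):
            self._balance = value

        # A validating setter: callers are still unrestricted, but the
        # invariant "balance is a non-negative number" holds no matter who
        # called it, so a whole class of bad values can be ruled out without
        # reading the whole program.
        def set_balance_checked(self, value):
            if not isinstance(value, (int, float)) or value < 0:
                raise ValueError(f"invalid balance: {value!r}")
            self._balance = value

The first setter buys nothing over a public attribute; the second preserves the property that, wherever it was called from, the stored value is at least valid.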

But it gets worse.

Let’s consider a case where you are using an object-oriented language/paradigm – and consequently, an object-oriented framework. Let’s further assume you’ve subclassed one of the framework classes. In such a case, when you’re calling self.method on your class, it might be invoking a superclass method or one that you defined (there is no way of knowing without searching through the class definitions). In the worst case, you have to search the entire superclass hierarchy. Even worse, if you have overridden any superclass methods, then the superclass might be calling self.method and wind up calling a method in your subclass. If these things hold true, then there are several implications. Firstly, when execution halts and you examine the stack, the stack is no longer partitioned (as it was in the imperative case) into a bunch of your code followed (possibly) by a bunch of library code. Now the stack can interleave your code with framework code, then with some more of your code, then with some more framework code. The difficulty here is that if you go up the stack until you reach a stack frame containing your code, that might not be the place to fix the problem. You might need to crawl up the stack through another layer of library code to reach a different layer of your code. It becomes more complicated to figure out at which level of the stack to focus the fix. Additionally, any of those layers of library code might have modified instance variables which you then relied upon – so it is much harder to figure out where the erroneous values might have come from. In fact, it is so difficult that most object-oriented frameworks need to ship with their source code, because it is impossible to debug these multiple layers of entrance into and exit from library methods without access to that source. (Aside: when the object-oriented paradigm first became popular, the idea of Free Software had yet to be invented. I would argue that the rising popularity of object-oriented software forced the rising popularity of open source software, because whereas it was possible to write imperative code using libraries without access to their source code, it was [and is] not possible to debug object-oriented code without access to the source code. But I digress.) In fixing an object-oriented bug, you not only need to consider whether to change the way your code calls the library method or to write a new method to use instead of the library method, but also whether to override some other library method which might alter the behavior of the program in the desired way. The additional choice increases the complexity of deciding upon a solution.
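
Here is a minimal sketch of that interleaving (an invented framework class and subclass, in Python). The framework’s render() calls back into build_rows(), which we overrode, and our build_rows() calls back into the framework’s format_row(), where the exception finally surfaces:

    # "Their code": a hypothetical framework base class.
    class ReportView:
        def render(self):
            # Framework frame: calling self.build_rows() may land in a
            # subclass the framework authors have never seen.
            return "\n".join(self.build_rows())

        def build_rows(self):
            return []

        def format_row(self, *cells):
            # Framework frame: blows up if any cell is not a string.
            return " | ".join(cells)

    # "Our code": a subclass that overrides one framework hook.
    class SalesReport(ReportView):
        def __init__(self, sales):
            self.sales = sales        # instance variable shared across methods

        def build_rows(self):
            # Our frame: the bug (an int where a str is expected) is here,
            # but the TypeError is raised one frame further down, in the
            # framework's format_row().
            return [self.format_row(name, total)
                    for name, total in self.sales.items()]

    print(SalesReport({"widgets": 1200}).render())

The resulting stack reads, from the outside in: our code, their render(), our build_rows(), their format_row() – exactly the alternation that makes it hard to decide which layer to fix.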

Now, you might argue that the way to avoid these complexities of object-orientation is to restrict the use of some features.

We could sidestep the problems detailed above by:
a. never inheriting from a framework, and
b. never using instance variables.

As to the former, you also need to consider the possibility that you might take your very successful application and abstract some of the more generic sections into a new framework — in which case code which was formerly “your code” now gets promoted to “framework code”.

In that case, the advice above generalizes to:
a. never use inheritance, and
b. never use instance variables.

The technical term for a programming paradigm which uses neither inheritance nor instance variables is imperative (or functional) programming.

Another attribute of programs which is notorious for making them hard to debug is multithreading. The best way to illustrate the issue with debugging multithreaded programs is to describe the debugging scenario as consisting of multiple stacks. So, whereas a single stack neatly split between “my code” and “their code” is the easiest to debug, a stack with multiple “my code/their code” transitions is harder to debug, and multiple stacks are harder still. The worst, of course, would be multiple stacks with multiple “my code/their code” transitions.
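
As a minimal sketch (invented names) of why multiple stacks hurt: two threads share one counter, and when the final value is wrong, no single stack explains it – the culprit is an interleaving of frames on both.

    import threading

    counter = 0        # shared state, reachable from both stacks

    def worker(iterations):
        global counter
        for _ in range(iterations):
            # Read-modify-write is not atomic; updates from the two threads
            # can interleave, and some increments may be lost.
            counter += 1

    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)     # may be less than 200000, depending on scheduling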

I should also point out (before the astute reader does) that it is possible to create alternating stack ownership in imperative programming through the use of callbacks. This is true. However, the use of a callback is more easily detected by examining the stack, since the callbacks are passed in as parameters. And if you believe that debugging a program with multiple callbacks is more difficult than debugging an equivalent program without callbacks, then you are agreeing with my underlying premise. Those of you who have tried debugging a JavaScript program that had a race condition in callbacks from AJAX requests or unload events can perhaps weigh in.
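
A small sketch of that point (Python, invented names): the callback is handed in as an ordinary parameter, so when it blows up, the stack shows both the library frame that invoked it and the parameter that names it.

    def fetch(url, on_done):
        # "Their code": a hypothetical helper that invokes whatever callback
        # it was handed. The callback is visible as the parameter `on_done`.
        data = "response from " + url
        on_done(data)

    def handle_response(data):
        # "Our code": the callback. If it misbehaves, the traceback shows this
        # frame sitting directly above fetch(), whose arguments reveal how we
        # got here.
        raise ValueError("could not parse " + repr(data))

    fetch("https://example.com", handle_response)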

In summary, then, every concept that object-orientation brings to the table makes debugging harder. If you believe (as I do) that the majority of effort a programmer expends is devoted to finding and fixing bugs, then an imperative programming approach will be more efficient (in programmer time) than an object-oriented approach. And the same reasoning can be used to show that an asynchronous callback approach will be more efficient in programmer time than a multithreaded approach. But that is a discussion for another day.