Wednesday, December 19, 2007

The Nature of Simple

The phone rang; it was so long ago that I hardly remember it. It all comes back in that hazy historic flashback where you're never really sure if those were the original facts or just ones that got inserted later.

It was another support call. I was the co-op student back then; one of my duties was to handle the very low volume of incoming calls. For some software that might not be hard, but in this case, handling support for a symbolic algebra calculator was anything but easy.

This latest call was interesting. A Ph.D. candidate working on an economics degree was asking questions about simplifying complex equations. All in all, they were not hard questions to understand, not like the ones where I ended up struggling to relate some obscure branch of mathematics back to something that could be computed. Both ends of a question, the mathematics and the computer science, were often fraught with details and complexities of which I had no idea. An average question would send me back to the university searching for anyone who could at least inform me of the basics. In those days, we didn't have Wikipedia, and certainly no World Wide Web. The Internet existed, but mostly as universities connected together with newsgroups and FTP sites. If you wanted to find out about something, that meant hitting the library and scouring through books the old-fashioned way. So much easier today. But it does take some of the fun out of it.

The core of this specific set of economics-related questions had to do with simplifying several complex microeconomic models so that the results could be used in a textbook. The idea was to use the symbolic algebra calculator to arrange the formulas nicely, so they would be readable. The alternative was a huge amount of manual manipulation, which opened up the possibility of mistakes creeping in. Automating it was preferable.

Once the equations were inputted into the calculator, the user could apply a 'simplify' function. It was fairly easy. The problem that prompted the phone call, however, was that the simplified results were not the results that were expected.

In the software, there were lots of parameters that the user could control and some nifty ways to get into the depths to alter the overall behavior, but time has washed all of those details from my mind. What I do remember is that after struggling with it for an extended period of time, the grad student and I came to realize that what the computer intrinsically meant by 'simplifying' was not really the same as what any 'human' meant by it. We -- through some less-than-perfect process in our thinking -- prefer things to not be rigorously simplified. Instead, we like something that is close, but not exactly simple. Our rules are a little arbitrary; the algorithm in the software wasn't.

Some things hit you right away, while others take years of slowly grinding away in the background before you really understand their significance. This one was slow; very slow. Simplification, you see, is just a set of rules for reducing something, but there is more to it than that. You simplify with respect to one and only one variable. There is a reason for that.

SIMPLIFICATION MADE EASY

In any comparison of multi-dimensional things, we cannot easily pin one up against the other. Our standard way of dealing with this in two dimensions -- for example -- is to take the "magnitude" of the vector representing some point on a Cartesian graph. That is a fancy way of saying we take the two dimensions and, using some formula -- in this case the magnitude -- 'project' the results onto a single dimension; one variable. In two dimensions we like to use the length of the vector, but that choice is somewhat arbitrary. Core to this understanding is that we are projecting our multiple variables down to one single measure because that is the only way we can make an objective comparison between any two independent things. Only when we reach two discrete numbers, a and b, x and y, 34 and 23, can we actually have a fully objective way of saying that one is larger than the other.
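
A minimal sketch of that projection in C (the example is mine; the magnitude formula is just the standard Euclidean length, and any function that collapses the pair to one number would do):

    #include <math.h>
    #include <stdio.h>

    /* Project a 2D point down to a single measure: the length of
       its vector from the origin. The choice of formula is
       somewhat arbitrary; what matters is that two variables
       become one. */
    static double magnitude(double x, double y) {
        return sqrt(x * x + y * y);
    }

    int main(void) {
        double a = magnitude(3.0, 4.0);   /* 5.0 */
        double b = magnitude(6.0, 1.0);   /* roughly 6.08 */

        /* Only now, with two discrete numbers, is the comparison
           fully objective. */
        printf("%s\n", a < b ? "a is smaller" : "b is smaller");
        return 0;
    }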

Without a measure, you cannot be objective, so you really don't know if you've simplified something or made it worse. In any multi-variable system of equations, you might use something like linear programming to find some 'optimal' points in the solution space, but you cannot apply a fixed simplification, at least not one that is mathematically rigorous. Finding some number of local minima is not the same as finding a single exact minimum value.
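
To make the local-minima point concrete, here is a small, admittedly contrived C sketch: a naive downhill walk on a 'double-well' function. The function and the step size are invented purely for illustration; the point is only that where the search starts decides which dip it lands in:

    #include <stdio.h>

    /* A tilted double-well: a shallow dip near x = +1 and a
       deeper one near x = -1. Neither dip knows about the other. */
    static double f(double x) {
        return (x * x - 1) * (x * x - 1) + 0.3 * x;
    }

    /* Naive descent: keep stepping toward lower values until no
       neighboring step improves things. */
    static double descend(double x) {
        double step = 0.01;
        for (int i = 0; i < 10000; i++) {
            if (f(x + step) < f(x))
                x += step;
            else if (f(x - step) < f(x))
                x -= step;
            else
                break;   /* stuck in SOME minimum, maybe not THE minimum */
        }
        return x;
    }

    int main(void) {
        double a = descend(2.0);    /* lands near x = +0.96, f ~ +0.29 */
        double b = descend(-2.0);   /* lands near x = -1.04, f ~ -0.31 */
        printf("from  2.0: x = %.2f, f = %.2f\n", a, f(a));
        printf("from -2.0: x = %.2f, f = %.2f\n", b, f(b));
        return 0;
    }

Both answers are honest local minima; only one is the true minimum, and the procedure has no way of knowing which one it found.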

Simplification, then, is an operation that must always be done with 'respect' to a specific variable, even if that 'variable' is actually a 'projection' of multiple variables onto a single dimension. Everything that we simplify must be simplified with respect to a single value. For any multi-variable set, there must be a projection function.

Falling back outwards -- way away from mathematics -- we can then start to analyze things like the fact that I want to "simplify my life". On its own, the statement is meaningless because we do not know with regard to which aspect of my life I would like to make things easier. If I buy more appliances to reduce my working time -- say a food processor, for example -- I save time in chopping. But it comes at the expense of having to buy the machine, wash it afterward, and occasionally perform some type of maintenance to keep it in working order. It simplifies my life with respect to 'chopping time', but if you've ever tried to wash a food processor you may know that it takes nearly as long to get it clean as it does to actually chop something. For a 'single chop' it's not worth bothering with; only when there is a great deal of cutting does it really make a difference. If you used it every time you cooked, it would hardly be simpler.

If we look at other things, such as a computer program, underneath there are plenty of variables with respect to which we can simplify. Overall, the nature of our simplification depends on how we project some subset of these variables down onto a single measure. This leads to some interesting issues. For instance, the acronym KISS, meaning "keep it simple, stupid", is used frequently as dogma by programmers to justify some of their coding techniques. One finds in practice that because the number of variables and the projection function are often undefined, it becomes very easy to claim that some 'reduction' is simpler -- which it is with respect to a very limited subset of variables -- when in fact it is not simpler with respect to some other projection based on a larger number of variables. We do this all of the time without realizing it.
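
My favorite tiny illustration of this -- the example is mine, not some official KISS ruling -- is two C routines that both copy a string. Each is 'simpler' than the other, depending entirely on which variables you project onto:

    #include <stddef.h>

    /* 'Simple' with respect to line count: the classic one-line
       K&R copy. Terse, but the pointer tricks and the lack of any
       bounds check are hidden costs. */
    void copy_terse(char *dst, const char *src) {
        while ((*dst++ = *src++))
            ;
    }

    /* 'Simple' with respect to readability and safety: longer,
       but the intent and the bounds check are visible at a glance. */
    void copy_clear(char *dst, size_t dst_size, const char *src) {
        size_t i;
        if (dst_size == 0)
            return;
        for (i = 0; i + 1 < dst_size && src[i] != '\0'; i++)
            dst[i] = src[i];
        dst[i] = '\0';   /* always terminate, even when truncating */
    }

Projected onto 'lines of code', the first wins; projected onto 'readability plus safety', the second does. Neither claim means anything until the projection is named.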

FILES VS. FUNCTIONS

My favorite example of this comes from an older work experience on a VMS machine. Back in those days, we could place as many C functions as we wanted into individual files. One developer, while working, came up with the philosophy of sticking one, and only one, function into each file. This, he figured, was the 'simplest' way to handle the issue. A nice side effect was that you could list out all of the files in a directory, and because each file name was the same as its function name, this produced a catalog of all of the available functions. And so off he went.

At first, this worked well. In a small or even medium system, this type of 'simplified' approach can work, but it truly is simplified with respect to so few variables that we need to understand how quickly it goes wrong. At some point the tide turned, probably when the system had passed a hundred or so files, but by the time it got to 300, it was getting really ugly. On the older boxes, 300 things in a directory scrolled hideously across the screen; you needed to start and stop the terminal's paging in VMS just to see what was in the directory. That was a pain. Although many of the functions were related, the order in the 'dir' listing was not, so similar functions were hard to place together. All of this was originally for a common library, but as it got larger it became hard to actually know what was in it. It became less and less likely that the other coders were using the routines in common. Thus the 'mechanics' of accessing the source became a barrier that prevented people from utilizing it, rapidly diminishing the usefulness of the work already put into it.

Oddly, the common library really didn't have that much in it, not compared with any library that we often see today. A library of 300 functions that actually fit into about twelve packages is fairly small by most standards. But the 'simplification' of the file arrangement increased the complexity with respect to other variables, rendering the library useless because of its overall extreme complexity.

By now, most developers would easily suggest just collapsing the 300 files into 12, organized by related functionality. With each file containing between 10 and 30 functions -- and named for the general type of functions in it, such as date routines -- navigating the directory and paging through a few files until you find the right function is easy. 12 files vs. 300 is a huge difference.
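
The grouped arrangement would have looked something like this sketch (all of the names here are hypothetical; the original library is long gone):

    /* date.h -- one of the dozen grouped files. Every date routine
       lives together, so a single 'dir' entry covers the whole
       package, and related functions sit side by side. */

    int  date_is_leap(int year);
    int  date_days_in_month(int year, int month);
    long date_to_days(int year, int month, int day);
    int  date_compare(long days_a, long days_b);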

Clearly, the solution of combining the functions together into a smaller number of files is fairly obvious, but the original programmer refused to see it. He had "zoomed" himself into believing that one-function-per-file was the 'simplest' answer. He couldn't break out of that perspective. In fact, he stuck with that awkward arrangement right up to the end, even when he was clearly having doubts. He just couldn't bring himself to 'complicate' the code.

SIMPLE DRIVING

This post was driven by my seeing several similar examples lately. It is epidemic in software development that we believe we are simplifying something when we are actually not. The problem is not that humans prefer simplifications that are less rigorous than mathematics; it is that people tend towards simplifying with respect to far too few variables, and then willfully ignore the side effects of that choice.

I frequently come across documentation where someone is bragging about how simple their solution is when clearly it is not. The files vs. functions problem rears its ugly head all over Computer Science in many different forms. Ultimately, lots of people need a better understanding of what it actually means to simplify something. That way their code, their architectures, their documents, and all that they present will actually be better, not just simplified with respect to some artificially small set of variables, causing all of the other variables in the solution to become massively more complex.

When you go to simplify something, it is important to understand a) the variables and b) the projection function. If you don't, then you are simplifying essentially at random, and the expected results will be: random, of course. If you've spent time and effort to clean up your code, having that come crashing down on you because you didn't see the whole problem is rather painful. Something to be avoided.
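
One way to keep yourself honest is to actually write the projection down. A minimal C sketch, where the variables and the weights are invented purely for illustration and are not any kind of standard metric:

    #include <stdio.h>

    /* Name the variables that matter, then collapse them onto one
       measure with declared weights. The choice of variables and
       weights IS the projection -- make it explicit. */
    struct measures {
        double files;            /* files to navigate in the directory */
        double funcs_per_file;   /* paging needed within each file     */
        double misfiled;         /* routines that are hard to find     */
    };

    static double project(const struct measures *m) {
        return 0.5 * m->files + 0.3 * m->funcs_per_file + 0.2 * m->misfiled;
    }

    int main(void) {
        struct measures one_per_file = { 300.0,  1.0, 40.0 };
        struct measures grouped      = {  12.0, 25.0,  5.0 };

        /* Two discrete numbers; now 'simpler' is an objective
           claim -- but only with respect to THIS projection. */
        printf("one-per-file: %.1f\n", project(&one_per_file));   /* 158.3 */
        printf("grouped:      %.1f\n", project(&grouped));        /*  14.5 */
        return 0;
    }

Change the weights and 'simpler' can flip; the value of the exercise is that the flip is now visible instead of hidden in someone's head.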

SIMPLICITY AND COMPLEXITY

To simplify something is actually quite a complex operation. To do it with many, many variables takes a great deal of processing. Given a universe where the possible variables are essentially infinite, it actually becomes an incredibly challenging problem. However, many of us can do it easily in our minds; it is one of the mysteries of human intelligence. With that in mind, I saw this quote:

"Don’t EVER make the mistake that you can design something better than what you get from ruthless massively parallel trial-and-error with a feedback cycle. That's giving your intelligence much too much credit." -- Linus Torvalds.

To be fair, this quote was preceded by a discussion of how the human body is the most complex thing we know. The trial-and-error feedback being referenced is the millions of years that it took nature to gradually build up the mechanics of our bodies. We should note that the feedback cycle has left its mark in terms of artificial complexity, particularly in funny things like our tail bones. Our ancestors had tails, so it is not hard to understand why we have tail bones, but we don't have the tails anymore. Also, the parameters of the feedback loop itself were subject to huge variations, as many environmental and geological factors constantly changed over the millions of years. Those disruptions have left significant artificial complexity in the design. We are far from simple.

But it is not unlikely that as we gain more of an understanding of the underlying details, we will be able to visualize abstract models of the human body and all of its inner workings in our minds. After all, we do that for many things currently. We work with lots of things at a specific higher level of abstraction so that we are not impaired by the full complexity of their essence.

Eventually, one day we will reach a point where we understand enough of the biomechanics that a single human could draft a completely new design for the human body. For that design, I fully expect that they would be able to eliminate most of the unnecessary buildup of complexity that nature had previously installed. I give our ability to think abstractly a great deal of credit. But more importantly, I give any system of variable, changing adaptations, such as evolution, even more credit for its ability to create artificial complexity.

Applying this type of thinking to things like software development can be interesting. Many people believe that a huge number of software developers working together on a grand-scale project will -- through iteration, if done long enough -- arrive at a stage in their development that is superior to that of just one single guiding developer. Their underlying thinking is that some large-scale trial-and-error cycle could iterate down to a simplified form. If enough programmers work on it, long enough, then gradually over time they will simplify it into the ultimate solution; a solution that will be better (simpler) than one that an individual could accomplish on their own. Leave it to a large enough group of people for long enough and they will produce the perfect answer. This, I think, is incorrect.

We know that within some large group of people, each individual will be 'simplifying' their portion of the overall system with respect to a different set of variables and a different projection function. Individual simplifications will collide and often undo each other. The interaction of many simplifications with respect to different variables is a buildup of complexity at the overlapping intersections. In short, the 'system', this trial-and-error feedback loop, will develop a large degree of artificial complexity deriving directly from every attempt at simplifying it.

It gets worse, too. Unless the original was fairly close to the final simplified version, the group itself will drive the overall work randomly around the solution space. It is like stuffing hay into a burlap sack with lots of holes: the more you stuff into one side, the more falls out of the other holes. You will never win, and it will never get done. Such is the true nature of simplification. We've certainly seen this enough in practice; there are lots of examples out there in large corporations. In evolution, for instance, huge numbers of the earlier, less optimal models get left lying around. Nature isn't moving towards a simpler model, it's just creating lots of them that are very adaptive.

Oddly, one human -- one who can squeeze a large enough number of variables into their brain and process them -- is the only real way in which we can find something that is close to the simplest version. But just one person. I once had a boss who adamantly claimed that "nothing good ever came from a committee", and in many ways, because of the 'blendedness' of the various concepts of each of the members, his statement rang very true. However, it just might be more appropriate to say "nothing simple ever came from a committee". If we want it to be simple in our human space or the mathematical one, it needs the internal consistency of having sprung from one mind, preferably at one time. Once something gets too big for a single human, it needs to be abstracted at multiple levels in order to fit properly. Strangely, our knowledge itself gets built up into many layers of abstractions in a never-ending feedback cycle, but one of which only human intelligence can take advantage.

FINAL SUMMARY

OK, time to wrap it up. We talk a good talk about simplifying things, but fairly often we are actually unsure of what that really means underneath. It is easy to wear blinders or get one's head fixed on a specific subset simplification, but it is very hard to see it across the whole. In that way, the term 'simplify' itself is very dangerous. We use it carelessly and incorrectly when we do not affix it to a specific variable. We do not simplify things; we simplify them with respect to a specific measure. Once that is understood, you see why the road to complexity is paved with so many attempts to simplify problems; it is an important key to understanding how to build reliable computer software.

My final thought is that if the number of variables available for simplifying something is infinite -- you can use one of those Turing-inspired proofs about creating meta-variables to show this -- then from a universal perspective, you could never get a 100% projection together. This implies that ultimately you cannot absolutely simplify anything. There is no universal simplification. It just doesn't exist, possibly proving the statement "nothing is simple". Neat, huh?

7 comments:

  1. I think you've hit on something truly profound about "projection". I think that all cognition involves some form of "projection" from one level of thought into another. In short, the act of speaking is the projection of thought into language, and so is the act of programming... and so is the act of comparison... or computation. So, rather like Homer's cave wall watchers, we don't see real thought but the shadows of thought stuff.

    So our challenge in teams of developers is to find the cut points where we can slice away sets of "simple" and only stitch them together when absolutely necessary. A way to find and separate out the "simple" of one perspective from the "simple" of another. I have called this a "crystalline" structure or a "fractal" structure where complexity in the whole is hidden in the simplicity of the local.

    Your ideas of "projection" and "simplicity" have given form to some of these other ideas. Thanks.

  2. Hi Shawn,

    Thanks for your comment. Various branches of mathematics use the term projection; the most memorable to me comes from an appendix to a textbook on linear programming. An alternative to using the simplex method -- if memory serves me correctly -- was to essentially tack on another dimension to the existing ones, jump out of the problem, and use a projection from the higher space to find the optimal points in the original one. Whether or not I remember the math correctly, that idea of stepping out of your current plane to a higher one to navigate stuck with me as a cool concept. I 'borrowed' my use of the term projection from my (hazy) memory of that approach.

    I love your idea that speaking is a projection of thought onto language. It elegantly explains why we can often 'see' something in our heads but fail to bring it down to something that we can say or write.

    For teams, if everyone is running around with their own idea of simple, what I really want to do is 'encapsulate' each of them into their own space, in a way that preserves some type of overall consistency but allows them some freedom to create what they want. Some of my older posts dig into encapsulation. Slicing up the work to preserve that is a big unconquered problem in software development.

    I've certainly seen your fractal structure problem enough times in big code bases. Everything is similar but different, and the inconsistencies themselves are to blame for a large number of problems. But nobody ever wants to go back and fix them (the obvious, but entirely unloved, solution).


    Paul.

  3. Your comment about the program meaning something different by simplifying reminds me of eta-reduction in Haskell using (the magnificent) lambdabot (on #haskell on freenode).

    On several occasions I have passed lambdabot a Haskell function to eta-reduce, and instead of removing one argument from the end and making it implicit as I expect, it removes every argument and hands me back something I find incomprehensible. Simplified indeed!

  4. Hi Thomas,

    It is amazing how often we run into simplification problems without ever realizing it. It took me a long time to even try and put words to my understandings.

    While people have their own lesser definitions of simple, I'm pretty sure that even dogs, for example, have their own twisted version of logic. Dog-logic makes perfect sense if you're a canine, but for the rest of us the dog is just acting strangely :-)

    Paul.

  5. This comment has been removed by the author.

  6. Same goes for Optimization. Simplification is a kind of optimization, by the way.

  7. Hi Astrobe,

    Thanks for the comment. Yes, you can easily see simplification as an optimization to remove unwanted complexity. Normalization, too, is another well-known set of re-ordering rules. They are all significant, yet stable, transformations to (mostly) similar representations.

    Paul.

