Wednesday, November 21, 2007

Mind the Gap

Sometimes to get a clearer perspective on a problem we need to leave its proximity and venture a little deeper into the woods.

For this particular post, we need to go way out into the forest before we can come back and make sense of what we are seeing. With a little patience and time, I think you'll find that the rewards from exploring are well worth the stumbling around in the bushes and trees.

As an abstraction, pure mathematics is nothing short of absolute perfection. It exists in its own idealistic dimension with no interference from our world in any way.

As such, arithmetic, for example, is always consistent no matter where you are in the universe. It never changes, it is never wrong, it cannot be reduced to anything lower, and it cannot be refactored into anything simpler. Quite an accomplishment, and a model of perfection. The same is true of many other branches of pure mathematics.

As more of the real world gets involved in the equations -- such as in physics -- the details become messier, but with respect to what the equations describe, they are still close to perfect. The details cause complexity, but it is inherent in the problem domain, and thus acceptable.

Going out even further, to a mathematically based soft science such as economics, we see even more real-world detail, but also another type of complexity that is derived directly from humanity. I'd call it "errors", but that's not really fair. It is better described as the "not perfectness" that is driven by human personalities, history, interaction and thinking. It is working knowledge that is full of artificial complexity, where some reduction always exists that could simplify it. Unlike inherent complexity, it is not necessary.

So, we have this point of perfection that we often drift away from, both because of the complexity of the underlying details and because of our own introduced complexity. These points are all important.

There is nothing we can do about real world complexity. It is as complex as it needs to be in order to function. That complexity comes directly from the problem domain and is unassailable.

The complexity we have created ourselves, however, is more interesting. Underneath it lies some minimal real-world complexity, but because of how we got there, the complexity we are actually dealing with is significantly larger. This leads to a "gap" between the perfect solution and the one that we currently have. It is an important gap, one that bears more digging and understanding.

Looking at software, we see that Computer Science revolves around building tools to manipulate data. In both our understanding of the data and our understanding of the tools themselves we have significant room for introducing problems.

With a bit of practice, it is not hard to imagine the perfect tool manipulating the perfect data representation. We are always driven to complete tasks, so on a task-by-task basis we can close our eyes and picture the shortest path to getting the task finished. We can compare this perfect version of what we are trying to accomplish to the tools and data representations that we actually have, intuitively measuring the gap between the two. And it is a surprisingly large gap.

I am often very aware of just how big that gap is. I was lucky to work on some very complex but extremely well-written systems, so I have a good sense of how close we can really come to a perfect solution. I can guess at the size of the minimum gap, and while some gap must always exist, the minimum can be far smaller than most people realize. There is huge latitude for our software to improve, without actually changing any of the obvious functionality.

We don't need fancy new features, we just need things that work properly.

Over time, the trend that I have seen is for our software to get larger and more sophisticated. It is clearly improving, but at the same time the gap has also been steadily growing, wiping away many of the improvements.

Given our current level of technological sophistication, today's software should be far better than it actually is. We have the foundations now to do some amazing things, but we don't have the understanding of the process to implement them. We have a huge gap. Our software is much farther away from its potential than it was twenty years ago, a trend that is only getting worse with time.

The size and growth rate of the gap are important. They not only give us an indication of how well we have done with our science, but also of what our future directions will be. Given that the gap is growing faster than ever, my virus-laden PC, choked by bad software and malicious attempts at crime and profit, is a real embarrassment to the software industry. In the past we have been lucky and users' expectations have diminished, but we should really brace ourselves for a backlash. We can only make so many excuses for the poor behavior of our systems before people get wise.

For instance, a virus obviously isn't necessary to make a computer function properly. As such, it is clearly unnecessary complexity that comes from the wrong side of the gap. If things had been written better, creating a virus would be impossible. The gap widened to allow spyware and viruses to become reality; then it widened again to make anti-virus and anti-spyware software a necessity. Now we consume massive amounts of human effort on problems that shouldn't exist.

Computers are deterministic machines. Specifically, what I mean is that they behave in completely predictable ways, and with some digging, all of the visible actions of the machine should be explainable. It is surprising, then, to encounter experienced professionals who tell stories of their machines "just acting weird". If it were one or two people, we might write it off as a lack of training or understanding, but who amongst us has not experienced at least one recent unexplained behavior with their PC?

More to the point, for professionals who have been working with these machines for years, hasn't this been getting more and more common? All of these instabilities come from our side of the gap. Being deterministic, our computer systems could be built to be entirely explainable. They should just function. Weird behavior is just another aspect of the gap.
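To make that concrete, here is a tiny made-up sketch (in Python, purely illustrative, not taken from any real system). The little program below is completely deterministic, yet to anyone who doesn't know about its hidden state, it appears to "act weird":

    # A deterministic program that looks "weird" to anyone who does not
    # know about the hidden state it quietly depends on.

    _cache = {}   # hidden shared state, easy to forget about

    def lookup(key, source):
        # The first call for a key stores whatever 'source' said at that moment;
        # later calls ignore 'source' entirely and return the stale value.
        if key not in _cache:
            _cache[key] = source[key]
        return _cache[key]

    prices = {"widget": 10}
    print(lookup("widget", prices))   # prints 10, as expected

    prices["widget"] = 12             # the real data changes...
    print(lookup("widget", prices))   # ...but this still prints 10

    # Nothing random happened. The machine did exactly what it was told;
    # the surprise comes from complexity we added and then stopped tracking.

Multiply that little cache by a few million lines of code and you get a machine that is perfectly predictable in principle, but unexplainable in practice.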

The gap is big and getting bigger, but is it really important? When we build tools, we need our foundations to be stable so that we understand the effects of the tool. Building on a shaky foundation makes for a bad implementation. The gap at the base only gets wider as you build on top of it. Nothing you build can shrink it. If it is too wide, you can't even bridge it. The usefulness of a computer and the quality of our tools are dependent on the size of the gap.

So, standing back here in the forest, we see where we are currently with our latest and greatest technologies. Somewhere over the next hill, down into the valley and up the other side is the place where we could be, if we could find a way to minimize the existing gap. We can see it from here, but we can never get there until we admit that 'here' is not where we want to be. If we stay where we are, the things we build only get farther away from our potential. We amuse ourselves with clever bits of visual trickery, but progress doesn't mean more dancing baloney, it really means closing the gap. Something we should do as soon as possible.

2 comments:

  1. Another great, thought-provoking post. I have the same itch, the one that comes from realizing "'here' is not where we want to be". So, how could we get to that "other place"? Lead on! :-)

  2. I actually think we could get a lot closer if we only did two things: a) change our perspective of software and b) change our process, a bit.

    For the first change, most developers have a functionality-centric view of their work. I've always had a data-centric view, which I find simplifies both the design and the coding. It also helps to write things that inherently enforce consistency (there is a rough sketch of what I mean at the end of this comment). The computer is such a powerful tool that we should be able to utilize it to aid us in building better systems.

    The other change is with our processes. Relatively small differences would make huge improvements if they are in the right direction. I'm still playing around with a methodology, which I'll release in draft form one of these days, that takes a slightly different approach towards development. It follows very common practices, except for two points. For design, it tries to lay out the system as a very small set of 'blueprints', with just enough detail to ensure that what is built is what is expected. For testing, it tries to detach the testers from the developers and broaden their responsibilities, presuming that they are mostly underutilized.

    We all really need to re-examine our processes, since they are the most defective part of software development. Until we figure out 'how' to develop software, 'what' we develop is less important.
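    Here is the rough sketch of the data-centric idea I promised above (again, purely illustrative, not from any real system): the data representation itself enforces its own consistency, so none of the code that uses it ever has to re-check the same rules.

    # Data-centric sketch: the rules live with the data itself, so the
    # representation can never exist in an inconsistent state.

    class Account:
        def __init__(self, owner, balance):
            if not owner:
                raise ValueError("an account must have an owner")
            if balance < 0:
                raise ValueError("an account cannot start in the red")
            self.owner = owner
            self.balance = balance

        def withdraw(self, amount):
            # Callers cannot put the data into a bad state by mistake.
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount
            return self.balance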

