Tuesday, November 9, 2010

Reducing Test Effort

Last week a reader, Criador Profundo, asked a great question on my last post, Code Validation:

“Could you comment on how tests and similar practices fit in with the points you have raised?”

I meant to reply earlier, but every time I started to organize my thoughts on this issue I found myself headed down some closely related side issue. It's an easy area to get side-tracked in. Testing is the point in software development where all of the development effort comes together (or not), so pretty much everything you do, from design to coding to packaging, affects it in some way.

Testing is an endless time sink; you can never do enough of it, never get it perfected. I talked about tactics for making the most of limited testing resources in an older post, Testing for Battleships (and probably in a lot of the other posts as well). I’ll skip over those issues in this one.

The real question here is what can be done at the coding level to reduce the work of testing as much as possible. An ideal system always requires some testing before a release, but hopefully not a full battery of exhaustive tests that takes weeks or months. When testing eats up too many resources, progress rapidly slows down. It’s a vicious cycle that has derailed many projects originally headed in the right direction.

Ultimately, we’d like previously tested sections of the system to be skipped if a release isn’t going to affect them. A high-level ‘architecture’ is the only way to accomplish this. If the system is broken up into properly encapsulated, self-contained pieces, then changes to one piece won’t have an effect on the others. This is a desirable quality.

Software architecture is often misunderstood or maligned these days, but it is an absolute necessity for efficient iterative development. Architectures don’t happen by accident. They come from experience and deliberate long-term design. They often require care when extending the system beyond its initial specifications. You can build quickly without them, but you’ll rapidly hit a wall.

Encapsulation at the high level is the key behind an architecture, but it also extends all the way down, to every level of the code. Each part of the system shouldn’t expose its internal details. That’s not just a good coding practice, it’s also essential for knowing the scope of any changes or odd behaviors. Lack of encapsulation is spaghetti.
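As a rough sketch of what that means in practice (the class and its names are hypothetical), the storage and eviction policy below are private details; callers can only touch the two public methods, so the scope of any internal change is known in advance:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical example: internals stay private, so a change to the
    // eviction strategy can't ripple past this class.
    public final class RecentResults {
        private static final int CAPACITY = 100;

        // Internal detail: an access-ordered LinkedHashMap doing LRU eviction.
        private final Map<String, String> cache =
            new LinkedHashMap<String, String>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                    return size() > CAPACITY;
                }
            };

        public void remember(String key, String value) {
            cache.put(key, value);
        }

        public String lookup(String key) {
            return cache.get(key); // null if absent
        }
    }

Swap the map for something else entirely and only this one file needs re-testing; that’s the property we’re after.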

Redundancies -- of any kind: code, properties, overloaded variables, scripts, etc. -- are also huge problems. Maybe not right away, but as the code grows they quickly start to rust. Catching that rust pushes up the need for more testing, and growth slows to a crawl as it gets replaced by testing and patching. Not being redundant is both less work and less technical debt, but it’s easier said than done.

To avoid redundant code, once you have something that is battle-tested it makes little sense to start from scratch again. That’s why leveraging any and all existing code as much as possible is important. Not only that, but a generalized piece of code used for fifty screens is far less work to test than fifty independently coded screens. Every time something is shared it may become a little more complex, but any work invested in testing it is multiplied. Finding a bug in one screen is actually finding it in all fifty. That is a significant contribution.

The key to generalizing code and avoiding redundancies in a usable manner is abstraction. I’ve talked about this often, primarily because it is the strongest technique I know for keeping large-scale development moving forward. Done well, development doesn’t slow down as the project progresses; it actually speeds up. It’s far easier to re-use some existing code, if it is well-written, than it is to start again from scratch.
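To make the fifty-screens point concrete, here is a minimal, invented sketch (ScreenBuilder, Field, and render are hypothetical, not from any framework): each screen becomes data fed through one shared code path, so testing that path once covers them all:

    import java.util.Arrays;
    import java.util.List;

    // Hypothetical sketch: fifty screens become fifty lists of field specs,
    // all flowing through one shared, testable rendering path.
    public class ScreenBuilder {

        public static final class Field {
            final String label;
            final boolean required;
            Field(String label, boolean required) {
                this.label = label;
                this.required = required;
            }
        }

        // The single generalized code path every screen uses; a bug fixed
        // here is fixed for all fifty screens at once.
        public static String render(String title, List<Field> fields) {
            StringBuilder html = new StringBuilder("<h1>" + title + "</h1>\n");
            for (Field f : fields) {
                html.append("<label>").append(f.label)
                    .append(f.required ? " *" : "")
                    .append("</label><input name='")
                    .append(f.label.toLowerCase().replace(' ', '_'))
                    .append("'/>\n");
            }
            return html.toString();
        }

        public static void main(String[] args) {
            // Two different "screens", one tested code path.
            System.out.println(render("Customer", Arrays.asList(
                new Field("Name", true), new Field("Phone", false))));
            System.out.println(render("Invoice", Arrays.asList(
                new Field("Number", true), new Field("Amount", true))));
        }
    }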

Sadly, abstraction and code re-use are controversial issues in software development. Programmers don’t like having to look through existing code bases to find the right pieces to use, they fear sinking too much time into building up the mechanics, and it is just easier to splat out the same redundant code over and over again without thinking. Coding style clashes are another common problem. Still, for any large-scale, industrial-strength project it isn’t optional: it’s the only way to avoid exponential growth in the amount of work required as the development progresses. It may require more effort initially, but the payoff is huge, and it is necessary to keep the momentum of the development going. People often misuse the phrase “keep it simple, stupid” to mean writing very specific, but also very redundant, code. In fact, a single generalized solution used repeatedly is far less complex than a multitude of inconsistent implementations. It’s the overall complexity that matters, not the unit complexity.

Along with avoiding redundancies comes tightening down the scope of everything (variables, methods, config parameters, etc.) wherever possible. Technically this is part of encapsulation, but many programmers allow their code or data to be visible at a higher level than necessary, even when they’ve tried to encapsulate it. Some misdirected Object Oriented practices manage to just replace the dreaded global variable with a bunch of global objects. They’re both global, so they are both the same problem. Global anything means that you can’t gauge the impact of a change, which means re-testing everything, whether it is necessary or not. If you don’t know the impact of a change, you’ve got a lot more work to do.
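A small contrast, with invented names, of what that scope-tightening looks like:

    // The global version makes every change a re-test-everything event.
    class GlobalConfig {
        public static int pageSize = 50; // effectively a global variable
    }

    // Tightened scope: the value is passed in and visible only here, so the
    // impact of changing it is limited to callers you can actually find.
    class ReportWriter {
        private final int pageSize;

        ReportWriter(int pageSize) {
            this.pageSize = pageSize;
        }

        int pagesFor(int rowCount) {
            return (rowCount + pageSize - 1) / pageSize; // ceiling division
        }
    }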

Another issue that causes testing nightmares is state. Ultimately, the best code is stateless: it doesn’t reference anything that isn’t directly passed into it, and its behavior cannot change if the input doesn’t change. This matters because bugs get found accidentally as well as on purpose, but if the steps to reproduce a bug are too complex, it will likely be ignored (or assumed to have magically disappeared) by the testers. It’s not uncommon, for instance, to see well-tested Java programs still exhibiting rare but strange threading problems. If they don’t occur consistently, they either don’t get reported or they are summarily dismissed.
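A tiny, invented illustration: the stateful version’s answer depends on hidden call history, while the stateless version can be pinned down with a single input:

    class PriceCalc {
        private double runningDiscount = 0.0;

        // Stateful: a bug here may only appear after a particular sequence
        // of prior calls, which makes a reproducible test case elusive.
        double quoteStateful(double base) {
            runningDiscount += 0.01; // hidden state mutates on every call
            return base * (1.0 - runningDiscount);
        }

        // Stateless: same inputs, same output, every time.
        static double quote(double base, double discount) {
            return base * (1.0 - discount);
        }
    }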

There are plenty of small coding techniques as well. Consistency and self-discipline are great for reducing the impact of bugs and for extending the system. Proper use of functions (not too big, not too small, and not too pedantic) makes code easier to test, extend, and refactor. Making errors obvious in development, but hiding them in the final system, helps. Limiting comments to “why” and trying to avoid syntactic noise are important. Trying to be smart, instead of clever, helps as well. If it’s not obvious after a few months of not looking at it, then it’s not readable, and thus a potential problem.
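One concrete way to get “obvious in development, hidden in the final system” is Java’s assert statement, which only executes when the JVM is started with -ea (normally enabled in development and testing, off by default in production). A small sketch with invented names:

    class Ledger {
        private long balanceCents = 0;

        void deposit(long cents) {
            assert cents > 0 : "non-positive deposit: " + cents; // dev-only check
            balanceCents += cents;
        }

        long balance() {
            return balanceCents;
        }
    }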

Ultimately, once the work has been completed and tested a bit, it should be set aside and ignored until some extension is needed later. If you’re forced to constantly revisit the code, then you’re not building anymore. It’s also worth noting that if the code is any good, many, many people will look at it over the decades that it remains in service. The real indicator of elegance and quality is how long people continue to use the code. If it’s re-written every year, that says a lot about it (and the development shop). (Of course, it can also be so bad that nobody has the nerve to look at it, and it just gets dumped in by default.)

There is a big difference between application development and systems programming. The latter involves many complex technical algorithms, usually based on non-intuitive abstractions. It doesn’t take a lot of deep thinking to shift data back and forth between the GUI and the database, but it does to deal with resource management, caching, locking, multiple processes, threading, protocols, optimizations, parsing, large-scale sorting, etc. Mostly, these are well-explored issues, and there is a huge volume of available knowledge about how to do them well. Still, it is not uncommon to see programmers (of all levels of skill) go in blindly and attempt to wing it themselves. A good implementation is not that hard, but a bad one is an endless series of bugs that are unlikely to ever be resolved, and thus an endless round of testing that never stops. Programmers love to explore new territory, but getting stuck in one of these traps is usually fatal. I can’t even guess at the number of software disasters I’ve seen that came from people diving in without first doing some basic research. A good textbook on the right subject can save you from a major death march.

Unit testing is hugely popular these days, but the only testing that really counts in the end is done at the system level. Testing difficult components across a wide range of inputs can be faster at the unit level, but it doesn’t remove the need to verify that the integration with the other pieces is also functioning. Some unit testing for difficult pieces may be effective, but testing simple and obvious pieces at both the unit level and the system level is wasted effort, and it creates make-work when extending the code. Automated system testing is hugely effective, but strangely not very popular. I guess it is just easier to splat it out at the unit level, or to visually inspect the results.
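As a rough sketch of what an automated system-level check can look like (the URL and expected content here are entirely hypothetical), the idea is to drive the running system from the outside and fail loudly on any regression:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical smoke test: exercise the whole stack end-to-end and
    // compare against a known-good expectation.
    public class SmokeTest {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://localhost:8080/report?id=42");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            int status = conn.getResponseCode();
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) body.append(line);
            }
            if (status != 200 || !body.toString().contains("TOTAL")) {
                throw new AssertionError("system test failed, status " + status);
            }
            System.out.println("system smoke test passed");
        }
    }

A handful of these, run on every build, catches the integration breakage that no amount of unit testing will.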

From a user perspective, simple functionality that is easily explained is important for a usable tool, but it makes the testing simpler as well. If the test cases are hugely complicated and hard to complete properly, chances are the software isn’t too pleasant to use. The two are related. Code should always encapsulate the inherent difficulties, even if that means the code is somewhat more complicated. An overly simple internal algorithm that transfers the problems up to the users may seem elegant, but if it isn’t really solving the problem at hand, it isn’t really useful (and the users are definitely not going to be grateful).

There are probably a lot more issues that I’ve forgotten. Everything about software development comes down to what you are actually building, and since we’re inherently less than perfect, testing is the only real form of quality control that we can apply. Software development is really more about controlling complexity and difficult people (users, managers AND programmers) than it is about assembling instructions for a computer to follow (that’s usually the easy part). Testing is the point in the project where all of the theories, plans, ideas, and wishful thinking come crashing into reality. With practice this collision can be dampened, but it’s never going to be easy, and you can’t avoid it. It’s best to spend as much initial effort as possible to keep it from becoming the main source of failure.
