The most accurate and informative source of information about software development and enterprise IT in the entire universe

Friday, April 1, 2005

The Case For Dysfunctional Testing

You testers, QA specialists, performance jockeys, you poor ISO 9001 efficiency experts—you’ve got it wrong. All wrong.

All this talk about functional testing. It’s a load of hooey, and you know it.

What’s the point of functional testing? You don’t want to know if your code functions.

Your designers and architects are smart; they've listened to the end user and created some killer UML diagrams. The UI requirements are solid. Your programmers are top of the heap. Of course it functions. It's just plain stupid to waste time and money, and mess up your delivery schedule, with so-called functional testing.

What you want to perform is dysfunctional testing. You don’t want to know what works in your n-tier application. You want to know what doesn’t work. As that funny-looking yellow guy on The Simpsons would say, “D’oh!” And Homer ought to know, he’s a nuclear engineer. (He kinda looks like my brother-in-law Fred, only better looking.)

I know that this blinding insight may come as a surprise. But it’s not your fault. Nobody talks about dysfunctional testing. Googling for dysfunctional testing brought up 11 results. Eleven! By contrast, Googling the phrase functional testing led to 253,000 results. That’s just a crime.

It’s clear that everyone has their priorities wrong.

What do I mean by dysfunctional testing? Finding out where the code breaks. It’s just that simple. Really. Stop looking to see what works, and instead see what doesn’t work. Imagine that your million-line client/server system is 98 percent bug-free.

When you’re doing functional testing, you’re testing 100 percent of the application. That could take hours during the nightly test run. Stupid! If you just test the two percent that doesn’t work, you’ve cut the test time down to minutes.

The advantages of dysfunctional testing are all the more compelling as your organization gets higher on the quality scale. Say you're working your way up the Capability Maturity Model ladder, stuck between rungs 2 and 3, and your million lines of code is 99.8 percent functional. Imagine the bottom-line benefits of testing only the 0.2 percent of the code that's dysfunctional—that's only 2,000 lines of code.
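Don't take my word for it: the arithmetic checks out. Here's a quick back-of-the-envelope sketch in Python, using only the made-up figures from the paragraph above (no real data was harmed):

```python
# Dysfunctional-testing arithmetic, using the article's own hypothetical numbers.
total_lines = 1_000_000          # the million-line system
functional_fraction = 0.998      # 99.8 percent "functional"

# Only the dysfunctional remainder needs testing, obviously.
dysfunctional_lines = total_lines * (1 - functional_fraction)
print(int(round(dysfunctional_lines)))  # prints 2000
```

Two thousand lines. You could test that before lunch.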

How hard can it be to test 2,000 lines of code, remediate the dysfunctions, and then refactor like mad to improve the runtime performance? Not hard at all. You’ll be at Six Sigma in no time, guaranteed. Take that, W. Edwards Deming!

Before you nominate me for the Malcolm Baldrige National Quality Award, let’s talk about how to actually achieve dysfunctional testing within your enterprise test/QA organizations. Believe it or not, it’s not as simple as you might think.

A key component of dysfunctional testing is something that I call “root cause analysis” — that’s a term that I’ve just made up. What you need to do is find the root cause of your software defects. In most cases, it’s easy to determine the cause: bugs. Yes, that’s it. Sometimes it might be a design flaw, but I’d put serious money on it being bugs.

There are different ways you can get rid of bugs. One way is to use a debugger. "Yes, I just got de bugger," is something that I say a lot, ha ha! (You've got to have a good sense of humor if you're working on cleaning up 2,000 lines of code before your afternoon jog.) Another way is to do a code walkthrough. But that can take a long time, and if you're not good at dysfunctional testing, you might look at the wrong 2,000 lines.

That’s definitely not recommended by the President’s Council on Physical Fitness and Sports. A better approach is to employ a number of techniques, such as overload testing — that is, you whack the code so hard that it shatters. Forensic analysis will show you where it broke.
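For the whackers in the audience, overload testing is easy to sketch. Here's a toy version in Python; the `fragile` function and its breaking point are invented purely for illustration, so don't go filing a bug against it:

```python
def overload_test(fn, start=1, factor=10, limit=10**6):
    """Keep multiplying the input size until fn shatters, then report the wreckage."""
    size = start
    while size <= limit:
        try:
            fn(size)
        except Exception as exc:   # forensic analysis: where did it break, and why?
            return size, exc
        size *= factor
    return None, None              # disappointingly unbreakable

def fragile(n):
    # A stand-in for your million-line system; breaks at an invented threshold.
    if n >= 10_000:
        raise ValueError("boom")
    return list(range(n))

size, exc = overload_test(fragile)
print(size, exc)  # prints the input size at which it shattered, and the cause
```

Forensics complete: it shattered at 10,000. Remediate, refactor, jog.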

(Don’t confuse overload testing with namby-pamby load testing, where you gradually increase the transaction load on an application server while monitoring its performance on a curve. Like, who has time for that?)
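(For completeness, that namby-pamby gradual ramp looks something like this. It's a toy sketch in Python; the transaction function and the ramp steps are made up for illustration, since apparently some of you have time for this sort of thing.)

```python
import time

def handle_transaction():
    """Stand-in for one transaction against the app server (purely hypothetical)."""
    return sum(i * i for i in range(1000))

# Gradually increase the transaction load, recording throughput at each step.
results = []
for load in (10, 100, 1000):                 # invented ramp: transactions per step
    start = time.perf_counter()
    for _ in range(load):
        handle_transaction()
    elapsed = time.perf_counter() - start
    results.append((load, load / elapsed))   # (load, transactions per second)

for load, tps in results:
    print(f"{load:>5} txns -> {tps:,.0f} txns/sec")
```

Plot those numbers on a curve if you must. I'll be over here shattering things.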

Another approach is to adopt seVere Programming, my agile methodology that applies the dysfunctional testing metaphor at the daily work level. In VP, developers work in teams of three: one writes the code, one tries to break the code, and the third programmer smacks the first programmer with a ruler whenever a defect is found. It’s foolproof! (If the first developer runs away, this is called the Scram methodology.)

In conclusion: Functional testing is stupid. Dysfunctional testing is smart.

The Nobel Foundation knows where to find me.
