Test-driven development, or TDD, is a relatively new practice and one I had only played around with until recently.
The space program is always doing things for the first time. First time in orbit, first time to leave orbit, first time to the moon, first time on the moon - you get the picture. What is remarkable is their success rate when doing things for the first time. When we compile a program, what is the likelihood it will compile correctly the first time? Probably not an impressive statistic, but then again, nothing serious is really at stake. The space program, however, has a lot at stake - human lives for starters. So how do they do it? I do know they do a lot of testing. They run every conceivable test they can on the ground, in labs, in physical simulators, in computer simulators, you name it. They have processes with exact sequences of steps worked out, and they test those too. So when it comes time for the real thing, they have run just about every test imaginable and have essentially verified each piece of the system separately. They also do not take giant steps. Remember, it was Apollo 11 that first put men on the moon, and there were several missions before that: docking missions, go-to-the-moon-and-back missions, and so on. They did one thing at a time and made sure it was safe and sound.
I look at TDD as a methodology for taking baby steps with 100% certainty. Your system never gets into a state of confusion because you never let it get that way. When systems become unwieldy, it is because they are in a state of confusion to a greater or lesser degree. TDD is the stable datum that tells you your system is working - it is the green light, the go-ahead that it is safe to launch.
TDD does take discipline. This is not a negative; it just is what it is. What I like is that you make a test fail by default and then it is your job to make it pass. That makes sense. You are saying it is safer to make the test fail from the start, and then it is your job to prove it passes. That acknowledges that Murphy does exist and takes him into account.
So what are the steps to TDD?
- Write a test.
- Run the test. It fails to compile because the code you're trying to test doesn't even exist yet! (This is the same thing as failing.)
- Write a bare-bones stub to make the test compile.
- Run the test. It should fail. (If it doesn't, then the test wasn't very good.)
- Implement the code to make the test pass.
- Run the test. It should pass. (If it doesn't, back up one step and try again.)
- Start over with a new test! (A minimal sketch of one pass through this cycle follows below.)
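As a concrete illustration, here is a minimal sketch of one pass through that cycle in Java with JUnit 4. The `Greeter` class, its `greet` method, and the expected greeting are hypothetical names invented for this example, not taken from any particular project.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1: write the test first. Until Greeter exists this file will not
// even compile, which for our purposes is the same thing as failing.
public class GreeterTest {
    @Test
    public void greetsUserByName() {
        assertEquals("Hello, Ada!", new Greeter().greet("Ada"));
    }
}

// Step 2: a bare-bones stub so the test compiles. Running the test now
// should fail (the stub returns the wrong thing), proving the test can
// actually catch a mistake.
class Greeter {
    public String greet(String name) {
        return null;                          // red: not implemented yet
        // return "Hello, " + name + "!";     // green: the real implementation
    }
}
```

Once the assertion passes, you start the cycle over with the next test.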
One part of TDD that I have not fully accepted (yet) is that you write your tests before you really have a design. I think the idea here, which I do agree with, is to derive the test from the requirements - that is a good thing. For example, if we are writing a program to calculate the boiling point of water, we should have a test that checks for the correct answer - say, 300°F under the conditions the requirements specify. We do not need to know yet how this is implemented. As developers, we tend to test what we have already written, but we should be testing what the user is expecting to get or see. So in that case, a test in the absence of a design is acceptable. However, there is a point where we do need to get more granular: is my list returning the correct count of items? So I think there are both types of tests that need to be written, as the sketch below illustrates.
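To make that distinction concrete, here is a hedged sketch of both kinds of test in JUnit 4. The `BoilingPointCalculator` class, its `boilingPointF` method, and the placeholder input value are all hypothetical - the expected 300°F answer would come from the actual requirements - while the granular test exercises a plain `java.util.ArrayList`.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.List;

public class BothKindsOfTests {

    // Requirements-level test: written from what the user expects,
    // before we know anything about how the calculation is implemented.
    @Test
    public void boilingPointMatchesTheRequirement() {
        BoilingPointCalculator calc = new BoilingPointCalculator();
        double pressureFromRequirements = 0.0; // placeholder: use the spec's value
        assertEquals(300.0, calc.boilingPointF(pressureFromRequirements), 0.01);
    }

    // Granular, implementation-level test: does my list report the right count?
    @Test
    public void listReportsCorrectItemCount() {
        List<String> items = new ArrayList<>();
        items.add("one");
        items.add("two");
        assertEquals(2, items.size());
    }
}

// Hypothetical stub so this sketch compiles; in real TDD it would be
// written after the test and then filled in with the actual calculation.
class BoilingPointCalculator {
    double boilingPointF(double pressurePsi) {
        return 0.0; // not implemented yet - the test above should fail (red)
    }
}
```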
TDD involves taking small, baby steps - just like the space program. It involves testing everything separately and thoroughly - just like the space program. It is a thorough approach, and I'll be sharing my actual, real-life experiences with TDD over the next few months.