Iterative Design

Test-driven development teaches you not to do all your design up front. Your implementation is organic, it evolves as you write test cases and implement code. Refactoring functionality in your IDE helps a great deal. By taking small steps and rigorously refactoring out duplicate code and other code smells, you polish your implementation. Seldom does the end product look like what you might have designed up front.
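As a sketch of that small-step rhythm (the pricing example and all names here are invented for illustration, not from any real project): write a failing test, make it pass with the simplest code that works, then refactor so that each rule lives in exactly one place.

```python
import unittest

# Step 1: tests written before the implementation exists. They pin down
# the behavior we want, including the edge case.
class PriceTests(unittest.TestCase):
    def test_single_item(self):
        self.assertEqual(total_price([10.0]), 10.0)

    def test_bulk_discount(self):
        # Hypothetical business rule: ten or more items earn a 5% discount.
        self.assertAlmostEqual(total_price([10.0] * 10), 95.0)

# Step 2: the simplest implementation that makes both tests pass.
def total_price(prices):
    subtotal = sum(prices)
    # Step 3 (refactor): the discount rule is expressed once, so a later
    # change to the threshold or rate touches a single line, and the
    # tests above confirm nothing broke.
    return subtotal * 0.95 if len(prices) >= 10 else subtotal
```

The point is less the code than the loop: each pass through test, implement, refactor leaves the design slightly better than the up-front guess would have been.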
The opposite of this is the waterfall method of design: capture all your requirements, design the heck out of the proposed system, and then code it.
I’ve always found this latter methodology extremely suspect: it’s not very resilient when confronted by changes brought about by learning about the problem space as you go along, or by the business need evolving, and so on.
I’ve been involved in several projects where the users captured all their needs for the software as requirements. They used either Word or products like CaliberRM. Of course, some requirements were deemed no longer necessary halfway through the project, some were ambiguous in the extreme and resulted in a lot more work, some weren’t properly fleshed out, and others changed because a new approach to development was deemed superior.
Let’s face it, writing requirements is ruddy hard. Thinking about a problem in a virtual vacuum always is: sometimes you just need to see the different possibilities before you can make a decision. All of this experience in the school of hard knocks has taught me to be wary of projects where too much design was done up front.
Better software is written if there is just enough design at the start to provide an overall goal for the software. You should gather enough requirements to indicate whether to use a multi-tier implementation, whether you should be using web services, whether you need to support a web farm, or whether a simple Win32 app will suffice. These kinds of decisions are hard to change once a substantial amount of code has been written.
But once this goal has been agreed on, once the path seems fairly well indicated, you should concentrate on building well-designed classes that cooperate well and that are well supported by unit tests, and do it in a step-wise manner. This way you’ll be able to quickly and easily provide simple prototypes to the users. Faster and more regular prototypes elicit better feedback. Better feedback will help guide you towards better design. The code you write will be easier to adapt as you (and your users) learn more about the system.
Having worked for a library and tools vendor in the past, I can firmly say that this is a much better way to develop component library code. The exercise of writing a unit test, as if the library code already existed, is a great way to visualize how your library will be used. By writing different unit tests, you can experiment with different ways of calling the library without having reams of code creating inertia, or causing resistance to change. It also allows you to refine how the interface to your library should look.
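The test-as-design exercise might look like the following sketch (the `CsvReader` library is hypothetical, invented purely to show the technique): the test is written first, against code that doesn't exist yet, precisely to force decisions about the call shape.

```python
import unittest

# Written first, before CsvReader existed: composing this call forces the
# design questions into the open. Should the reader take a string or a
# file object? Should rows come back as lists or as dicts keyed by header?
class CsvReaderDesignTest(unittest.TestCase):
    def test_reads_rows_as_dicts(self):
        reader = CsvReader("name,age\nAda,36\n")
        rows = list(reader)
        self.assertEqual(rows, [{"name": "Ada", "age": "36"}])

# The implementation follows, shaped by the interface the test settled on.
class CsvReader:
    def __init__(self, text):
        lines = text.strip().splitlines()
        self._header = lines[0].split(",")
        self._rows = lines[1:]

    def __iter__(self):
        # Each data line becomes a dict keyed by the header row.
        for line in self._rows:
            yield dict(zip(self._header, line.split(",")))
```

If iterating dicts turns out to feel clumsy, only this one small test needs rewriting to try a different shape; there are no reams of client code creating inertia.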
There are several downsides to working this way, of course. One is that it requires a lot of user involvement. "I know that lots of things don’t work yet, but is this along the lines of what you wanted? What’s wrong with it? I can produce a different prototype tomorrow, could you test it and comment on it?"
Another downside is that it’s hard to estimate how long a particular feature will take, which in turn makes it hard to write a fixed-price contract, and so on.
Nevertheless, I’m of the firm belief that better software is produced by an iterative design methodology as well as iterative implementation through TDD.