The Drive to Write Free Software. Part 3

Evolution: Knowledge Wants to be Free

We have seen that an economic and historical analysis of this subject is useful, but not completely satisfying. Perhaps the real roots of the free and open source software movements lie not in economics or history, but in human nature itself.

If you step way back, and begin looking from a distance at the forces that drive life here on this planet, it does not take long to become aware of a force that we, for lack of a better term, call evolution. At bottom, evolution is about the dissemination of knowledge. In particular, it is about the dissemination of knowledge encapsulated in the genetic structure of the creatures that inhabit this planet. That is an odd form of knowledge, but it is knowledge nonetheless.

When people talk about genes, and about the evolution of a species, they don’t always think about mathematics or information sciences. But at bottom, genes are all about mathematics and information. Genes are a form of knowledge encoded in a structure that is not really so different from a computer language. The famous double helix that underlies our genetic structure is something that can be duplicated almost exactly on a computer. In fact, when it came time, in the 1990s, to unravel the secrets of our genetic structure, real progress was slow until people began to use computers to map the human genome.

Genes track information in a manner that is directly analogous to the way computers encode information in bits and bytes. Genes have their own language, consisting of four characters, just as computers are based on a binary language. In other words, human genes are more than a little like tiny computers. Genetic information contains the code for the very structure of our physical being, just as the bits and bytes in a computer form the structure of a computer program. The information encoded in genes is the information that is used to determine the color of our eyes, hair and skin, the structure of our bones, the kinds of diseases we are prone to and are likely to resist, even to some degree the structure of our nervous system. All of these things are dependent on information encoded in genes.

The behavior of computer programs, and even their appearance, are also encoded in a series of bits and bytes not so different from the information in a gene. In other words, information is information, whether it is encoded in a human gene or encoded in a computer program.

If you want to understand the development of life on earth, you have to understand genetics. Life evolved from tiny one-celled organisms into complex creatures such as cats, deer and humans due to the different ways in which knowledge, encoded in genes, can be combined and recombined. This whole subject is explained beautifully in the extraordinary book Microcosmos by Lynn Margulis and Dorion Sagan.

But why did life evolve this way? Why weren’t genes content just to stay in little one-celled organisms? What force drove them to create more and more complex hosts? Genes are the driving force behind evolution. Without DNA and RNA and the whole relentless, combinatorial drive to evolve, life as we know it would not exist. Why is the information in genes continually reaching out to form more and more complex, more and more sophisticated, forms of life? Is there something inherent in the nature of knowledge that wants to expand, that wants to be free? Apparently, the answer to this question must be yes.

Whether this force is a manifestation of God’s will, or of randomly driven nature, is not really the question here. If God created this world, then certainly one of Her primary engines of evolution was the force that demands that knowledge be spread, be disseminated, that it continue to grow. The desire of knowledge itself, of life itself, to evolve and grow is simply one of the laws of life as we know it.

The written history of the human race is in effect the unbinding of recorded knowledge from our genetic structure, and the encoding of that knowledge in books, media and computers. As people learned to encode knowledge first in written text, then in printed text, and finally in computers, they in effect harnessed the power of knowledge itself. Modern life evolves so quickly because we can encode knowledge in books and computers, much as knowledge about the structure of a living being can be encoded in a gene.

You might think that I am trying to set up an analogy here between knowledge as we know it in books, film and computers, and knowledge that is encoded in the human genome. But I do not view this as an analogy. I think information is information no matter how it is stored. This information drives physical (but not spiritual) evolution here on earth, and it wants to be free to do its work. Now we have entered an age when genes emerge not through random events in nature, but through direct manipulation by people. In other words, knowledge has found a new way to force its evolution.

The point to grasp here is that human knowledge is not just an abstraction, it is a force of nature, it is one of the basic principles with which God imbued creation. The idea of trying to wrap up knowledge inside copyright or patent law suddenly becomes absurd when seen from this perspective. You can’t control so powerful a force with such crude tools. (This is not a diatribe against copyright law. Notice, for instance, that I have a copyright notice at the top of this article. Copyrights and patents are useful tools, but they are not as primary, not as powerful, as the urge to obtain and disseminate knowledge.)

People write free software because software is knowledge, it is the very force of nature itself, and you can’t suppress knowledge. Life itself, first in the form of genes, but then later in the form of written words and finally as binary data, is all about the dissemination and evolution of knowledge.

You can’t suppress this force by insisting that only corporations can control knowledge. It is not just that some people find the idea of giving such knowledge to corporations repugnant, but that life itself won’t put up with restrictions of that type. Knowledge wants to be free, it wants to spread itself across not only this planet, but the entire universe.

When powerful forces try to bind knowledge and make it the plaything of an economic elite, they are fighting a battle that hopefully can never be won. They think that they can own knowledge, and that they can force us to only borrow it for short periods of time. They have the source; we get only binary data. They have the rights; we have to agree to EULAs that take away any meaningful sense of ownership of that software. In the long run, however, knowledge will escape from their clutches. If it does not, then life as we know it will stop evolving, and we will be frozen in place. That is, we will die.

So that is why people write free software. Software is a form of knowledge. Knowledge is part of the fabric of life. Knowledge wants to be free so that life can evolve. People write software for the same reason they build houses, or fall in love. We were born to create and share knowledge. It is one of our deepest and most profound instincts.

Corporations try to control this knowledge by hiding the source code for their software. The US government tries to hide this knowledge by enshrining it in a corporate monopoly it believes useful to its conception of the state. But what happens? The strangest of all things. Something that from a particular perspective makes no sense at all! People start building software for free on their own, in their spare time! What sense does that make? What can possibly be motivating these people? How can we make sense of what they are doing? What possible explanation is there for this huge, wildly successful, seemingly irrational, international movement to create free software? What is it that wants to be free? From what does it want to escape? Why does it want to escape? What is its purpose?

The people who want to bind knowledge with laws, who want to own it, who want to possess it for their own benefit, will tell you that knowledge is property. That they own it. They will even try to “own” the knowledge encoded in genes. They will literally try to patent the genes that form the very substance of life itself. (And yes, Virginia, this is already happening.) But knowledge doesn’t want to be owned. And certainly it doesn’t want to be owned by something as lowly on the cosmic scale of things as a human being sitting in an office in Washington DC or in Silicon Valley. The force driving the spread of knowledge is much more powerful than a group of middle-aged men and women sitting in government or corporate buildings.

Does this mean that corporations and private enterprise have no part to play in the development of software? Of course not. Knowledge will use any tool available to help it grow and spread. Sometimes market forces are a great means of enhancing the spread of knowledge. In those cases, corporations and human knowledge work together to achieve the same ends. But it is not the corporation that is in charge, it is nature itself. Knowledge wants to spread, and it will use individuals, governments, corporations, educational institutions, monasteries, whatever tools are available, to help it achieve that end. But it will not make itself subservient to any particular corporation or denomination. Knowledge, and God’s will, are greater than any individual, any corporation, any religion, or any educational institution.

Why do people write software for free? It probably makes more sense to ask why software wants to be written. But when you put the question that way, then the whole idea of people trying to bind knowledge by legal means, or by obfuscating the source, becomes a bit laughable. It’s just not going to work, and everyone in the software development community knows that it is not working. If you have doubts, go spend half an hour on SourceForge, or on the Apache site, and you will know that it is not working! But there are some people who don’t want you to look at it that way. They have a vested interest in being sure that you don’t look at it that way.

So tell me: Why do people write free software? It seems a bit enigmatic at times, this urge to write software for free. If we decide that life is all about making money, then it makes no sense at all. But maybe life is about more than just money. Maybe the really powerful forces in life aren’t economic. But if it’s not money that motivates these people, then what is it? Is life really about economics, or are there other forces in play here? If so, what are those forces? Whatever they are, they must be very deep, and very powerful. What theory is there that is large enough to account for such an extraordinary phenomenon?

The Drive to Write Free Software. Part 2

History: The Origins of the Free Software Movement

Sometimes difficult questions can be answered by looking at history. In discovering the roots of a movement, we can often learn something about its causes. So let’s try following the historical record for a bit, and see where that leads us.

During the late sixties, and through the early eighties, many of the greatest contributions to software emerged from the universities and corporate think tanks. One way or another, this software was available free of charge to the computer community. Just as academics shared software, so did the workers at big corporate think tanks. They lived, in effect, in a free, open source software community. And they liked living there, and they didn’t want the open sharing of knowledge to end. Computers also came with complete suites of software, and usually shipped with source. Especially from a management position, this was not the same thing as free, open source software. Yet to the developers who worked on these machines, it felt as if the software and its source came for free. If you want to read more about this part of computer history, you can start with Steven Levy’s famous book Hackers.

But as smaller, more portable computers developed in the eighties, this situation changed. Suddenly software was being written by corporations for sale to people who had money. Companies like Microsoft, Novell, Lotus and others emerged, and began selling software, but not the source to the software. Knowledge was no longer freely available. Instead, it was something that had to be purchased. In universities, and at corporate think tanks, source was usually available. But ironically, when cheaper computers made software more widely available, that was precisely when corporations stepped in and tried to claim the intellectual rights to knowledge that had previously been freely available, at least to those in the corporate think tanks or in academia.

Both the academics at major universities, and some of the personnel from the great corporate think tanks such as Bell Labs, felt that this was a betrayal of the values they had cultivated during the previous two decades. Previously knowledge flowed freely among the small group of people who had access to computers. Now many more people could own computers, but the source to the software was locked up. As a result, a small group of these developers formed a community that valued free software. The heart of their argument was that owning the source to computer programs was important, and having the right to recompile a program was important. On a more idealistic level, many of them believed that knowledge about computers was the province of humanity itself, not of individuals or corporations. To them, it made no more sense to talk of owning a compiler or algorithm than it did to talk of owning the rights to the syntax of the English language. Ultimately, their argument was that proprietary software represented a restriction on the field of computer science, and on their rights as free individuals in a free society.

Particularly in the academic world, there was a sense that the computer community was working to create a tool that could be used for the good of mankind. The idea that knowledge which could benefit everyone should be owned by a corporation was repugnant to some people. These people wanted to live free, and they wanted knowledge to be freely accessible. They didn’t want to be told how, when, or to what extent they were free to use a piece of information. You can read more about this world view in Eric Raymond’s The Art of UNIX Programming.

Clearly the thoughts of this small group of people in academia and in corporate think tanks do not provide a complete explanation for a trend as large as the Open Source Movement. Their ideas are simply far too abstract and too idealistic to gain hold in a country like America at the present time. Nevertheless, their ideas and their efforts formed one of the major motivating forces behind the creation of the free software movement.

The history of computer science in academia and in corporate think tanks explains what happened, but not why it happened. We know that people want to be free, and that they want knowledge to be freely available, but it is more difficult to understand why they want these things. To understand why people want to share the source for their programs, to see why they want knowledge to be free, we have to explore this subject further.

The Drive to Write Free Software. Part 1

I had lunch with a colleague the other day. We talked about a free, open source project that we use at CodeFez. We both agreed that the project was well designed and well crafted. But after a bit, my friend turned to me and said, with obvious sincerity, “But I just don’t get it! Why do people build free software? What motivates them? It doesn’t make any sense!” I had no definitive, irrefutable answer to that question. But it did seem the sort of question that led to interesting speculation.

Economics: Rounding Up the Usual Suspects

There are certain obvious, yet superficial, answers to the question of why the open source movement exists, and why people build free software. For instance, it is difficult to compete on an economic basis with companies that have a monopoly or near monopoly position in a market. In the absence of legislation limiting the scope of these monopolies, the only alternative is to build free software. The free market system collapses in the face of a monopoly. Free software is one alternative that promotes competition and choice in a market dominated by massive forces with virtually unlimited power.

A less dramatic force driving the free software movement can be seen in corporations where software developers need tools. Developers in corporations work for departments, and each department has a budget. As a rule, these budgets are not designed to be flexible, but instead set up a static framework in which developers are expected to work. Hampered by these budgets, it is often difficult, though by no means impossible, for developers to buy the tools they need. As a result, software developers have formed small international coalitions to develop the software tools that they need. Go to SourceForge and you can see tens of thousands of these tiny international coalitions creating software tools under the aegis of the open source movement.

As powerful and important as they are, the economic and legal forces discussed in this section of this article are not really the basis of the free software movement. They answer some questions, but they leave too many other questions unanswered. Why are people unhappy using the software provided by a monopoly? Why should employees bother to gang together to solve their employer’s problems? It is clear that to understand free software, one needs to dig a little deeper.

Who’s Buying Borland?

If I had a dollar for every rumor that has circulated about Borland getting bought out, I could buy the company myself.

The latest rumor has Microsoft buying Borland. In the past, the rumored buyers have included Novell, BEA, IBM, Corel (oh, wait, that rumor was true!), Oracle, CA, SAP, HP, and McDonalds. Okay, I made that last one up. But nevertheless, every one of those rumors has been just that – a rumor. As far as I know, there hasn’t been a serious attempt to buy Borland since the Corel fiasco. Borland’s stock price has gone up and down on these rumors over the years, but no one aside from Corel has ever made a serious bid.

I’m no Mergers & Acquisitions expert, but it seems to me that if someone were going to buy Borland, they would have done so already. Borland is only getting stronger. I’d guess that all that money in the bank makes them tough to buy if they don’t want to be bought. Because Borland has a foot planted firmly in each of the Java and .Net camps, only half the company looks attractive to any given buyer. MS wouldn’t have a clue what to do with JBuilder, and BEA would look at Delphi like we all would look at a man from Mars. Borland has a lot of valuable parts, but the sum of those parts doesn’t really appeal to any one entity. In the end, it seems unlikely that anyone could or would really buy Borland. But it sure makes for interesting speculation on the Yahoo BORL board.

But let’s imagine that someone did buy Borland. Such a company would have an interesting conundrum: what to do with the widely disparate development tool sets that Borland owns? Should a Java-ish company try to jump into the .Net world with Delphi? Should a .Net-minded company try to do the same in the Java world?

The only concern I personally would have would be for the future of Delphi. A company buying Borland may or may not see the value in Delphi; thus the specter of Borland being bought is a bit scary to us Delphi fans. Delphi going away would be a Very Bad Thing™ for the developer community on the .Net side of things. Delphi’s demise would leave .Net developers at the mercy of one company – the dreaded Microsoft. And of course, we can’t have that, now, can we?

Borland is a much stronger company than the average IT “expert” seems to realize, and they have more bases covered in the software development market than any other company, even Microsoft. Sometimes we developers forget that Borland’s product line covers many areas beyond development tools. They have StarTeam, CaliberRM, Together, Visibroker, and OptimizeIt. Borland has been doing more than merely preaching the ALM message; they’ve been acting on it, putting themselves years ahead of the competition in many areas. And in doing so, they’ve made themselves large enough and diverse enough that they would be a hard pill to swallow.

In the end, I’m inclined to believe that rumors of Borland’s acquisition have been greatly exaggerated.

One Reason Nick Hodges Doesn’t Quite Get OOP

Nick Hodges has written an entertaining article on what he perceives as the failings of the Microsoft .NET team’s attempt to design and code an object-oriented framework. Along the way he takes a few additional swipes at the C# language.

In this article I could have outlined my disagreements with Nick’s specific allegations about the Framework, or I could have talked about the sheer difficulty of writing a complex framework, or I could have explained how cross-language cultural issues make using a different framework difficult. However, I decided instead to focus on one paragraph from Nick’s article:

“Maybe someday someone can explain to me why so many classes in the FCL are marked sealed. Shoot, why is it even possible to ‘seal’ a class. What the heck is that all about? Who are you to say I can’t improve or enhance your class? If your class somehow needs to be sealed, then I say you have a design problem. Now, despite the fact that most of your OOP languages include the ability to “seal” a class — C#, C++, Smalltalk — I am undaunted in my view. I was hoping that the FCL designers would be the ones to see the light and let me descend from the String class. Shoot, you can’t swing a dead cat in the FCL without hitting a sealed class that desperately needs enhancing.”

Let’s focus in on the real issue: Should a modern object-oriented language allow classes to be sealed, and thereby bar subclassing from them?

The answer to this question involves a detour into designing libraries. The success of modern OO languages is due, in part, to the ability to use libraries for the development of large-scale systems. In an ideal world, those libraries should be secure, reusable, well-tested, and performant. In the real world, they sometimes miss the mark, but we can at least hope that they are reusable.

To be reusable, a modern OO library depends on the pillars of OOP: encapsulation, polymorphism, and inheritance. Since the development of Java, inheritance has been largely supplanted by delegation and composition. Indeed, way back in 1995, the Gang of Four said this: Favor object composition over class inheritance. (Page 20 of Design Patterns. It’s one of the two principles on which the rest of the book depends.)
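The Gang of Four advice can be sketched in a few lines. (Java here, standing in for the C# discussed in the article; all names are hypothetical.) Subclassing ArrayList would expose index-based insertion and removal, letting callers break the last-in-first-out invariant; holding the list privately exposes only the stack’s own contract.

```java
import java.util.ArrayList;
import java.util.List;

// Composition: the stack HAS a list rather than IS a list,
// so only push/pop/isEmpty are part of the public contract.
public class Stack<T> {
    private final List<T> items = new ArrayList<>(); // composed, not inherited

    public void push(T item) {
        items.add(item);
    }

    public T pop() {
        if (items.isEmpty()) {
            throw new IllegalStateException("empty stack");
        }
        return items.remove(items.size() - 1);
    }

    public boolean isEmpty() {
        return items.isEmpty();
    }

    public static void main(String[] args) {
        Stack<String> s = new Stack<>();
        s.push("first");
        s.push("second");
        System.out.println(s.pop()); // prints "second": last in, first out
    }
}
```

The same idea in C# or Delphi looks nearly identical: the delegating class decides what leaks out, rather than inheriting everything whether it fits or not.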

Encapsulation is an important principle for libraries since it enables the writers of the library to hide the functional implementation of their classes and methods. This in turn means that classes can guarantee that the data they hide can only be changed by methods of the class itself. If you use the Design by Contract pattern — and you should — then you will always be sure that the parameters to your methods are valid. But you only need to apply the contract to outward-facing methods. The inner private or protected methods don’t need to obey the contract because they are only called from code you control and own.

Since your code is the only code that can write to the class’ private fields you automatically make the class easier to test, make its behavior easier to predict and document, and make the methods easier to profile and optimize.

Another great benefit of encapsulation is a strong contract with the outside world: Here is this class and here’s the interface to it (defined as a set of methods, properties, and events). The class is a black box with certain well-defined knobs and switches on it. The maintenance programmer at the library vendor who has to fix/extend the class in some way has one of two possible avenues to explore (although they can overlap):

  1. An internal change to the implementation.
  2. A change to the interface.

The first can be done almost with impunity so long as the published behavior doesn’t change (encapsulation means never having to say you’re sorry for an internal change). The second is a contract-breaker, and the maintenance programmer has two possible solutions: make the breaking change and suffer the slings and arrows, etc., or possibly write a new class altogether (the old "Ex" suffix solution). Both are nasty.

There is one great problem with encapsulation, though: inheritance, one of the other great principles of object-orientation (although, as I mentioned above, somewhat deprecated these days).

Consider this from the library writer’s point of view. You must write a base class that encapsulates some behavior and you want to make it extensible so that some unknown programmer in the future can subclass it in some unknown way. You know that encapsulation is good; however, you have a unique problem: you must break encapsulation in order to provide override points for the subclasser. You look surprised, perhaps. Yet, why otherwise have the protected keyword? The very existence of this keyword means that encapsulation is being broken, albeit for the limited use of someone who will be subclassing the base class (which in reality means everyone).

All of a sudden, this class no longer has this strong encapsulation contract with the rest of the world. You have to expose — to a certain extent — how you are implementing the class. A corollary is that you have to provide a weaker contract to the subclasser: I promise not to change the implementation of my class "too much", with some hand-wavy gesture.

But it doesn’t stop there. As soon as you expose part of the implementation of the class, you’ll be opening the door to someone who will say: you know, it’s nice that this class is subclassable, but I really need access to this little private field for my own derived class. Please? Pretty please?
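The weakened contract can be made concrete with a small sketch (Java again, with hypothetical names; `final` on the method plays the role of a non-virtual C# method). The protected hook is the published override point, and the base class must now promise how and when that hook is called, which is exactly the exposure of implementation described above.

```java
// A base class with one deliberate extension point. The protected hook
// breaks strict encapsulation: subclasses can now see, and depend on,
// part of how render() is implemented.
public class Report {
    // The public contract: produce a complete report.
    // final (non-virtual) so subclasses cannot change the overall shape.
    public final String render() {
        return header() + "body";
    }

    // The override point. The base class is now committed to calling this
    // exactly once, at the start of render() - a weaker, hand-wavy contract.
    protected String header() {
        return "default: ";
    }

    public static void main(String[] args) {
        Report plain = new Report();
        Report custom = new Report() {
            @Override protected String header() { return "custom: "; }
        };
        System.out.println(plain.render());  // prints "default: body"
        System.out.println(custom.render()); // prints "custom: body"
    }
}
```

Notice that even this tiny example forces the library writer to document an ordering guarantee; every additional protected member multiplies those obligations.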

Of course, another problem to solve is how to fit the extensibility points for polymorphism into your base class by marking some methods as virtual. (Java has the opposite problem: since all methods are virtual by default, which do you mark final? Or do you just ignore the issue?) Since virtual methods are known to be slower to call (there’s an extra indirection going on), you don’t usually want to go the whole hog and mark all protected/public methods as virtual. All that will do is bring down the ire of the premature optimizer.

We used to wrestle with this constantly at TurboPower. For at least one product, we even went to the extent of having a compiler define that switched all private sections to protected ones, just because we didn’t know how to solve the "expose part but not all" inheritance problem. And I think we were fairly intelligent people. It’s just that the problem of designing a class hierarchy or framework that can efficiently be extended by third-party programmers is hard. And then you have to document it, hopefully well enough that those third-party developers can understand how to extend your base class.

There is another problem (another? you’re nuts: writing libraries is easy, dude) that, frankly, not many programmers appreciate or even care about. That is one of security. You see the whole point of polymorphism is that you can pass around objects that look like BenignBaseClass instances but are in fact HostileDerivedClass instances. Every time you implement a method in your library which takes an instance of BenignBaseClass, you must ensure that the method is robust in the face of potentially hostile instances of derived types. You cannot rely upon any invariants which you know to be true in BenignBaseClass, because some hostile hacker might have subclassed it, overridden the virtual methods to screw up your logic, and passed it in. Evil laughter.
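The security concern can be sketched like this (hypothetical names throughout; Java’s overridable methods stand in for C# virtual methods). The library method’s defensive check exists precisely because a subclass can override a method and violate an invariant the base class was trusted to uphold.

```java
// Demonstrates why library code cannot trust invariants of an unsealed
// base class: a hostile subclass can override a method to break them.
public class HostileSubclassDemo {

    public static class BenignBaseClass {
        // Intended invariant: size() is never negative.
        public int size() { return 0; }
    }

    public static class HostileDerivedClass extends BenignBaseClass {
        @Override public int size() { return -1; } // violates the invariant
    }

    // Library code that must be robust against hostile derived instances:
    // it cannot assume the BenignBaseClass invariant holds, so it rechecks.
    public static int[] allocateBuffer(BenignBaseClass source) {
        int n = source.size();
        if (n < 0) {
            throw new IllegalArgumentException("negative size");
        }
        return new int[n];
    }

    public static void main(String[] args) {
        allocateBuffer(new BenignBaseClass()); // fine
        try {
            allocateBuffer(new HostileDerivedClass());
        } catch (IllegalArgumentException e) {
            System.out.println("rejected hostile instance");
        }
    }
}
```

Sealing the class, by contrast, lets `allocateBuffer` trust the invariant outright, with no defensive recheck at every call site.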

Between a rock and a hard place, eh? In essence you just can’t have pure encapsulation and unrestricted inheritance. It just doesn’t work like that; never has done. Fooey to those pillars of old-style object-orientation, welcome to compositional object-orientation. The King is Dead, Long Live the King.

Don’t use inheritance unless you are writing a self-contained set of classes in your library or framework. I now use inheritance so infrequently that I always seem to have to reread the C# Programming Language book to understand how to call the base class’ constructors. Go with what the Gang of Four were saying 10 years ago (as Delphi 1 was just coming out): prefer composition over inheritance. Of course, for that your library or framework has to be designed around interfaces, and that takes some mental acuity or you won’t get the abstractions right. It’s not as hard as determining extensibility points of your base classes, but still challenging.

And since your library or framework users are modern OOP programmers, they understand the issues and welcome being able to use interfaces, and you can seal your classes, at least those that you determine should not be extended. Enforce encapsulation, it’s the strongest of the pillars. After all, in C# at least, if you get it wrong (and one of your users comes up with the canonical case for allowing inheritance to work), you just unseal the class. It’s a non-breaking change. (The reverse is not true: someone might have written a derived class.)
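That combination of a sealed class plus interface-based extensibility can be sketched as follows (Java’s `final` playing the role of C#’s `sealed`; all names hypothetical). The class itself cannot be subclassed, and the one sanctioned extension point is an interface supplied through the constructor.

```java
// A sealed (final) class whose only extension point is an interface.
// Users cannot subclass Greeter, but they can supply any Formatter.
public final class Greeter {

    // The sanctioned extension point: a small, well-defined contract.
    public interface Formatter {
        String format(String name);
    }

    private final Formatter formatter; // behavior supplied by composition

    public Greeter(Formatter formatter) {
        this.formatter = formatter;
    }

    public String greet(String name) {
        return formatter.format(name);
    }

    public static void main(String[] args) {
        // Extending behavior without inheritance: pass in an implementation.
        Greeter polite = new Greeter(n -> "Hello, " + n + "!");
        System.out.println(polite.greet("world")); // prints "Hello, world!"
    }
}
```

Because `Greeter` is final, its encapsulation contract is absolute; and if a genuine case for subclassing ever appears, removing `final` is the non-breaking change described above.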

Admittedly there is a tradeoff here. On the one hand you have developers who want to save a little development time and effort by treating any old object as a "bag o’ fields" (if it has some methods, w00t, bonus!); on the other hand you have library writers who want to design and implement a fully-featured, robust, secure, predictable, testable library in a reasonable amount of time. The latter will certainly involve sealing classes that you don’t want developers subclassing for whatever reason.

Sealing classes is a perfectly valid thing to do. Throw away those awkward frameworks based on class inheritance. Move away from class inheritance toward interface implementation. The grass is definitely greener over here.

Microsoft and OOP

I have had this theory for quite a while that the Microsoft community – both inside and outside of the company — doesn’t quite get objects. I think they mostly get it — .Net wouldn’t be what it is if they didn’t — but there are just so many places where things just aren’t quite right that I think that overall, they just don’t quite get it. Now, I’m quite aware of the arrogance implicit in that statement, and I am quite aware that the comments that will follow this article will no doubt question my intellectual capacity, but I’m going to plow ahead anyway. What the heck.

I guess I can’t say for sure why I have this theory; it’s just something that sticks in the back of my mind every time I talk to a Microsoft-type person. I’ve been asked “What do you need an object for?”. They’ve said things like “VB6 is object-oriented” and “Oh, we can do that just as fast without objects”. I’ve heard “You don’t need polymorphism to be object-oriented.” (huh?) My theory is further bolstered as I work with .Net’s Framework Class Library (FCL). (Maybe someday someone can explain to me why so many classes in the FCL are marked sealed. Shoot, why is it even possible to “seal” a class. What the heck is that all about? Who are you to say I can’t improve or enhance your class? If your class somehow needs to be sealed, then I say you have a design problem. Now, despite the fact that most of your OOP languages include the ability to “seal” a class — C#, C++, Smalltalk — I am undaunted in my view. I was hoping that the FCL designers would be the ones to see the light and let me descend from the string class. Shoot, you can’t swing a dead cat in the FCL without hitting a sealed class that desperately needs enhancing. Oh well.)

But don’t get me wrong, I’m quite happy to say that, despite some irritating anomalies, the FCL and the rest of the .NET framework have been a big jump forward for MS in terms of their embrace of OOP — but it sure took them long enough. (For the sake of my sanity, I pretend that MFC isn’t really an OOP framework.) They are only about eight years behind Delphi and the VCL. That’s eight years of maturity that isn’t present in the framework. Nevertheless, despite its depth and scope, the FCL has a lot of quirks that indicate the folks in Redmond still don’t quite get it.

For instance, why is there a separate Connection class for each database type in ADO.NET? OracleConnection, SqlConnection, OleDbConnection – one for each database! And you can only connect a SqlDataAdapter to a SqlConnection. If ADO.NET were properly designed, like, say, oh, I don’t know, the Borland Data Provider architecture is, then the concept of a “Connection” would be properly abstracted out as a single object that could be interchanged or replaced based on the back-end database. If ADO.NET is supposed to abstract out data access, why aren’t the base classes database independent? Why do I have to use Oracle-specific enumerations with OracleConnection and SQL Server-specific enumerations with SqlConnection? I’ll tell you – because ADO.NET isn’t designed properly, that is why. Someone somewhere along the line didn’t quite get it. The interfaces are there for ADO.NET to be programmed against, but the connection classes in ADO.NET fail to take advantage of them properly. IDbConnection has a ChangeDatabase method – why can’t I change from an Oracle database to a SQL Server one?
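To make the complaint concrete, here is a sketch of my own (not anything that ships in the FCL) of what coding against the provider-neutral interfaces looks like; the connection strings and the ReadScalar helper are invented for illustration:

```csharp
// Sketch: code against IDbConnection/IDbCommand instead of a concrete
// provider class, so the back end can be swapped with a one-line change.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.OleDb;

class ConnectionDemo
{
    // This helper never names a concrete provider class.
    static object ReadScalar(IDbConnection conn, string sql)
    {
        conn.Open();
        try
        {
            IDbCommand cmd = conn.CreateCommand();
            cmd.CommandText = sql;
            return cmd.ExecuteScalar();
        }
        finally
        {
            conn.Close();
        }
    }

    static void Main()
    {
        // Swapping the back end touches only this line; merely constructing
        // a connection object does not contact any server.
        IDbConnection conn = new SqlConnection("server=...;database=...");
        // IDbConnection conn = new OleDbConnection("provider=...;data source=...");
        Console.WriteLine(conn.GetType().Name);
    }
}
```

The interfaces have been in System.Data all along; the complaint above is that the concrete ADO.NET classes do not consistently let you stay at this level of abstraction.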

One of the purported great things about the FCL is the extensive use of interfaces, but I keep running into places where an interface sure would be nice, but isn’t there. The example that brought this to mind recently for me was the System.Web.UI.WebControls.Style class. The Style class allows you to set properties — Bold, Underline, Font, etc. — and have those values rendered as part of an ASP.NET control. Well, I was building a control that needed a very specific type of Style, but the problem I quickly ran into was that I didn’t want all of the properties of the Style class to be part of my new Style – in this case it was the various Border-related properties. The problem, of course, is that the whole ASP.NET component architecture assumes that any and all styles for a control will descend from the Style class, and if the Style class has stuff attached to it that you don’t want, then too bad for you.

Wouldn’t it have been better if instead of a ControlStyle property, which must take a Style class or one of its descendants, there were an IStyle interface that knew how to extract a style string, and which let component developers implement it however they like? It might look as simple as this:

IStyle = interface
  function GetStyleString: string;
end;

I’m designing off the top of my head here, but such an interface would allow me to design any class I like to provide the styles for my components. When it comes time to apply the style, the control could just call the GetStyleString method and add the result to the style="whatever" attribute of my control, and there you have it. It would be up to me to ensure that the string was properly formatted, and I could have any style settings that I please. Instead, in order to get the styles that I want, I have to hack up my own style classes, forgoing the alleged advantages of the FCL.

I’m not saying that the FCL sucks – far from it. But I am saying that I run into situations like this one more than I should. How about this – try reading in a text file, altering the third line of text in the file, and then writing it back out again. In the VCL, that is about four lines of code. In the FCL, it’s a bit tougher. You have to create Readers and Writers and Lord knows what else. What a hassle. Why not a neat object to do that?
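For comparison, here is a minimal sketch of that read-modify-write chore in C#. I use File.ReadAllLines and File.WriteAllLines, which arrived with .NET 2.0; on earlier versions you are stuck looping over a StreamReader and StreamWriter, which is more verbose still. The file name and contents are invented:

```csharp
using System;
using System.IO;

class EditThirdLine
{
    static void Main()
    {
        // Create a small three-line file so the example is self-contained.
        File.WriteAllLines("data.txt", new string[] { "one", "two", "three" });

        // Read every line, alter the third (index 2), and write them back.
        string[] lines = File.ReadAllLines("data.txt");
        lines[2] = "replacement text";
        File.WriteAllLines("data.txt", lines);

        Console.WriteLine(lines[2]);
    }
}
```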

More ADO.NET complaints: Why is it so tough to get a data value out of a table? I have to write:

CustomerID := Convert.ToInt32(MyDataset.Tables[0].Rows[0]['CUSTID']);

when the above code is crying out to be

CustomerID := MyDataset.Tables[0].Rows[0]['CUSTID'].AsInteger;

Or, in other words, clearly a field value in a row of a DataTable should be an object, with methods attached to it to convert it to whatever it needs to be. Like the VCL has been doing since, oh, 1995. OOP code is supposed to reduce the amount of code that you have to write by encapsulating common functionality. That isn’t happening in the above code, that’s for sure. Heck, in general, it always seems like I have to write way too much code in my ADO.NET applications.
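Here is a quick sketch of the sort of wrapper being wished for here. The FieldValue class is my own invention, not part of the FCL; it simply moves the Convert call into an object that travels with the value:

```csharp
using System;
using System.Data;

// Hypothetical wrapper: the conversion lives with the value, VCL TField style.
class FieldValue
{
    private readonly object val;
    public FieldValue(object val) { this.val = val; }
    public int AsInteger    { get { return Convert.ToInt32(val); } }
    public string AsString  { get { return Convert.ToString(val); } }
}

class Demo
{
    static void Main()
    {
        // Build a one-row table in memory so the example is self-contained.
        DataSet ds = new DataSet();
        DataTable t = ds.Tables.Add("Customers");
        t.Columns.Add("CUSTID", typeof(string));
        t.Rows.Add(new object[] { "42" });

        // The wished-for syntax, via the wrapper:
        int custId = new FieldValue(ds.Tables[0].Rows[0]["CUSTID"]).AsInteger;
        Console.WriteLine(custId);
    }
}
```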

(And while I’m at it, surely I am not the only one that finds the complete lack of the concept of a current record in ADO.NET a glaring omission. I’m not, am I? Oh, sure, you can get an (unfortunately named) CurrencyManager from a visual control, but then of course your cursor is coupled with the user interface. That’s plain wrong.)

Now look, I know that the FCL is huge, and it’s a conglomeration of the work of hundreds if not thousands of programmers, and no doubt some of them have a better grasp of OOP principles than others. But there just seems to be enough of these little quirks in it to make me wonder if Microsoft doesn’t quite get it. It’s the little things that always add up. But hey, I suppose that when the FCL is as mature and refined as the VCL, it will probably have worked out this kind of thing. Only about eight more years to go.

NUnit and Code Coverage with NCover

Unit tests work best when you have close to 100 percent coverage of the methods in your project. The problem, of course, is that it is not always easy to know whether you have tested every method in your application. Problems also occur when tests do not thoroughly cover all the paths through a particular method. For instance, there may be logical branches in a method that are never explored. The solution to problems of this type is a tool called NCover, which is modeled on the JCover tool and similar utilities.

NCover is a free tool that creates reports based on the runs of your unit tests. The report the tool creates shows every method that your tests called, every method that was not called, and every method that was partially called. For instance, you can see that 100 percent of the routines in the class called FtpData were called, while only 35 percent of the code in the class called Files was called. You can also see that the method called FileSize reports having 80 percent of its code called. In particular, you can see that the test checks what happens if the method is called with a valid file name, but it does not confirm what happens if the method is called with an invalid file name.
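As a sketch of how that gap would be closed, here is roughly what the missing test might look like in NUnit 2.x. The Files class comes from the article's example project; the convention that FileSize returns -1 for a missing file is my assumption, not something NCover reports:

```csharp
using NUnit.Framework;

[TestFixture]
public class FileSizeTests
{
    [Test]
    public void FileSizeOfMissingFile()
    {
        // Hypothetical: assumes Files.FileSize reports -1 when the
        // file does not exist.
        Assert.AreEqual(-1, Files.FileSize("no-such-file.txt"));
    }
}
```

With this test in place, the previously unvisited branch of FileSize would show a non-zero visit count in the coverage report.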

The NCoverBrowser is a free utility which you can use to explore the XML files returned when you run NCover. Here it reports the run of a unit test that tests two classes called Files and FtpData.

This article discusses NCover and the theory behind its use. My goal is simply to let you know that such tools exist, and to give you a quick tutorial on the simple task of using NCover.

Overview of Code Coverage Theory

Unit testing gives programmers a sense of security. Test-infected programmers are addicted to this sense of security, and hence enjoy writing tests. The more cautious, and the more driven, you are, the more inclined you will be to adopt unit testing as a way of life.

Unit testing also gives you the confidence you need to refactor your code. We can all think of ways to improve our code, but frequently we don’t dare make those changes because we can’t guess all the consequences involved. If, however, we know that 100 percent of our code is covered with unit tests, then we can dare change almost any part of our code because our unit tests should discover all the possible errors that might arise from the refactoring.

The fear, of course, is that somewhere there is a method that we have not thoroughly tested, or perhaps failed to test at all. The idea behind a code coverage tool like NCover is to reveal any methods or code paths that your test suites fail to exercise.

One argument against NCover is that it can be used as a blunt tool to bludgeon reluctant programmers into fully unit testing their code. Almost anyone, even a manager, can read an NCover report and discover whether or not you are being a good girl or boy. To avoid such difficulties, it would be best to not discuss this kind of tool with non-programmers.

Installing NCover

There is a copy of NCover on SourceForge that comes with source, but right now I am using a second free version that does not ship with source. You can download the latest version of NCover from the project’s web site. The downloaded zip file contains a simple Windows install file called NCoverSetup.msi. You can run this file by double-clicking on it, or by going to the command line, moving into the directory where the file lives, and typing start NCoverSetup.msi. Since NCover is a command line utility, you should avoid the temptation to install the program in a directory that has spaces in its path.

You should next get a copy of the NCoverBrowser as well. You can unzip the NCoverBrowser download into the same directory where you installed NCover itself. Then make sure that directory is on your path, go to the command prompt, and type NCover:

NCover v1.3.2 - Code Coverage Analysis for .NET
NOTE: This profile driver application is deprecated.
      Use NCover.Console or NCover.GUI instead.
Usage: ncover /c <command line> [/a <assembly list>]
  /c  Command line to launch profiled application.
  /a  List of assemblies to profile. i.e. "MyAssembly1;MyAssembly2"
  /v  Enable verbose logging (show instrumented code)

If you see output similar to what I show here, then you have probably installed NCover correctly.

Using NCover

NCover comes in the form of an executable with an unusual name: NCover.Console.exe. It can be used to run any .NET program:

NCover.Console MyProgram.exe 

In this case, however, we want to see the coverage for a unit test. To run a unit test, just execute nunit-console and pass in the name of the DLL that contains the tests that you want to examine. A simplified form of the unit test part of the command line would therefore look like this:

nunit-console NUnitDataTests.dll

If you throw in NCover, you need to add the /c switch to specify the command that NCover should profile. The end result looks like this:

NCover.Console.exe /c "nunit-console" "NUnitDataTests.dll"

Throw in the paths to the various files involved, and you end up with the more complex command line:

NCover.Console /c "d:\bin\Compilers\NUnit 2.2\bin\nunit-console" \
  "d:\src\csharp\NUnitDataTests.dll"

Please note that this would be typed all on one line, without the trailing backslash.

You can also add another parameter using the /o switch to specify the output file for your program run:

NCover.Console /c "d:\bin\Compilers\NUnit 2.2\bin\nunit-console" \ 
  "d:\src\csharp\NUnitDataTests.dll" \
  /o D:\src\csharp\webapps\CodeCoverage\Coverage1.xml

This is a lot to type all at one time. Therefore you may find the simplest way to run NCover is to build a batch file that will run one of your unit tests:

set COVERAGE_FILE="D:\src\csharp\webapps\CodeCoverage\Coverage1.xml"
NCover.Console /c "d:\bin\Compilers\NUnit 2.2\bin\nunit-console" \
  "d:\src\csharp\NUnitDataTests.dll" /o %COVERAGE_FILE%
NCoverBrowser %COVERAGE_FILE%

Note that this batch file puts the output from a run of the NCover program in a file called Coverage1.xml. It then launches Coverage1.xml in the NCoverBrowser utility so that it is easy to read.



Reading the NCover Output

As you have seen, the simplest way to read the output from NCover is to use the NCoverBrowser. However, you do not have to use the browser.

The raw output from NCover is an XML file. Let’s take a look at an abbreviated version of one section of that file, which corresponds to the FileSize method discussed above:

<method name="FileSize" class="Falafel.Utils.Files">
<seqpnt visitcount="13" line="100" column="4" document="Files.cs"/>
<seqpnt visitcount="13" line="102" column="5" document="Files.cs"/>
<seqpnt visitcount="13" line="103" column="5" document="Files.cs"/>
<seqpnt visitcount="0" line="107" column="5" document="Files.cs"/>
<seqpnt visitcount="13" line="109" column="3" document="Files.cs"/>

I’ve cut quite a bit of the XML to make the output from NCover more readable. For instance, the actual file shows the complete path to Files.cs, and the endpoints are also specified for each line: endline="107" endcolumn="15". However, what you see here should give you a good idea of what the output that NCover produces actually looks like.

The most important points to notice here are that there are five active lines in the FileSize method, and that four of them were visited 13 times. One line, however, was never visited. If you look back, you can see a visual illustration of this fact.

NOTE: In this particular case, I have not written 13 different tests of the simple FileSize method. Rather, this method gets called frequently by other parts of my code that are exercised by my unit tests.

Some methods in this program were visited exactly once:

<method name="FileExists" class="Falafel.Utils.Files">
<seqpnt visitcount="1" line="90" column="4" document="Files.cs"/>
<seqpnt visitcount="1" line="91" column="3" document="Files.cs"/>

Other methods weren’t visited at all:

<method name="RenameFile" class="Falafel.Utils.Files">
<seqpnt visitcount="0" line="193" column="4" document="Files.cs"/>
<seqpnt visitcount="0" line="194" column="3" document="Files.cs"/>

You could parse this XML file to make up reports of methods that need to be covered, or lines of code in methods that are not covered. In practice, however, I usually just use the NCoverBrowser to explore my code and let it highlight the methods that need attention.


NCover is an easy-to-use utility that provides a simple mechanism for discovering what percentage of your code is covered by unit tests, and specifically which methods need your attention.

In the ideal Test Driven Development model, programmers would always write their tests first, and then write methods that fulfill the promise inherent in the tests. If you write your code that way, then you probably will have little need for a tool like NCover. However, if you are taking over someone else’s code that is not properly covered with unit tests, or if you do not follow the standard TDD methodologies, then you will probably find NCover very useful.

Building C# Projects with NAnt

NAnt is a cross-platform, open source build tool for use with Mono or .NET. You can use it for automating builds, automating unit testing runs, or for driving version control tasks. NAnt has no built-in GUI, nor will it write your unit tests for you. Instead, it provides a powerful means of scripting these tasks so that they are performed automatically with a single command. With NAnt, it is easy to write scripts that work unchanged on both Linux and Windows.

There is a direct parallel between NAnt and the make or nmake tools used by C/C++ developers. The primary advantage that NAnt has over make is that it is written in C# and is designed for use with .NET and Mono. A secondary advantage is that it provides many tools that make it easier to create cross-platform code. For instance, NAnt has custom classes for copying files, deleting files, unzipping files, retrieving data over an HTTP connection, etc. Each of these tasks is written in C# code that works wherever the .NET platform has been implemented. In practice, this means it works on Linux and Windows.

If you are familiar with the Java tool from the Apache foundation called Ant, then you already understand most of what you need to know about NAnt. The primary reason for creating NAnt was simply to have a version of Ant that was optimized to work with .NET.

NOTE: There is no direct parallel in the Delphi IDE to NAnt, though if you have used batch files to create scripts for building your Delphi projects, then you have engaged in the kind of tasks that NAnt automates. There is a stable version of Ant for Delphi called Want.

If you have been using Visual Studio or Delphi and found that the IDE was not powerful enough to perform your build tasks, then you have an obvious need for a tool like NAnt. In general, there is no build task, no matter how complex, that NAnt can’t be configured to run. For instance, NAnt makes it relatively easy to build multiple assemblies in a particular order and to copy the results to any location on a local or remote machine.

Even if you are happy building your projects inside Visual Studio or Delphi, you may still find that NAnt is useful. In particular, NAnt can help you automate the task of running unit tests, along with many other chores. All in all, there are some 75 built-in tasks available in the current NAnt builds.

Installing NAnt

Short version: Download the NAnt binaries, unzip the package they come in, and put the bin directory on your path. That is really all there is to it, and if you have no further questions, you can safely skip ahead to the section on using NAnt.

NAnt comes with source, but I suggest getting the binary package first. If you want to work with the source packages, then I would use the binary version of NAnt to build the source package. After all, NAnt is designed to make the process of building C# code extremely simple.

You will find a link to the NAnt binary download on the NAnt home page, or else you can go to the NAnt SourceForge project and follow the link to the download page. At the time of this writing, NAnt was up to release candidate 3 of version 0.85, and both the binaries and the source code are available from the download page. Since updates occur frequently, you should go directly to the download page and get the latest files yourself.

NOTE: If you are used to standard commercial releases, you might be a bit intimidated by the fact that NAnt is only at version 0.85. However, you have to remember that there is no need to rush the delivery of free, open source projects. As a result, an open source product at version 0.85 is often the rough equivalent of a 1.5 or 2.0 version of a commercial project. NAnt is unlikely to earn the 1.0 moniker until it contains a wide range of features and a very low bug count.

Once you have downloaded and unzipped the binary files, you should put the bin directory where NAnt.exe is stored on your system path. There are some 14 different assemblies included in this project, so it will not help to copy NAnt.exe alone to some convenient location. Furthermore, I would not suggest copying the exe and all 14 DLLs somewhere else, as that is likely to lead to DLL hell when you want to upgrade the product to a new version.

If you also downloaded the source, then you can now go to the root of the unzipped source project and type the word NAnt at the command prompt. This will automatically build the project, placing the output in a directory called build. If you don’t like the default location for this output, you can specify the output directory during the build process by typing:

NAnt prefix=<MyPreferredLocationForTheOutput>

For instance, you might write:

nant prefix=d:\bin\compilers\nant

NOTE: It is possible to download the source to NAnt and to build it using either Visual Studio or NMake. However, it is much simpler to follow the steps outlined above.

Using NAnt

NAnt is based on an easy to understand technology that is driven by XML. In its simplest form, you need only put the XML defining the tasks you wish to execute in a file called default.build. Then place your XML file in an appropriate directory, usually the root directory of your project, and simply type the word NAnt at the command line. NAnt will automatically discover and run the script.

NOTE: If you have a large project, it is common to have one NAnt script calling another script. For instance, you might have one base script in your root directory, then have child scripts in the root directory of each of the assemblies making up your project. The exact syntax for doing this will be discussed in future articles. If you only have one script in each directory, then you can call them all default.build. If you need to place multiple scripts in a single directory, then you can give them different names, and explicitly name the script when you run NAnt, using the following syntax: NAnt -buildfile:d:\src\csharp\Simple\

Consider the following brief example:

<?xml version="1.0"?>

<project name="Getting Started with NAnt" default="build" basedir=".">

	<target name="build" description="Build a simple project">
		<csc target="exe" output="Simple.exe" debug="true">
			<sources>
				<include name="simple.cs" />
			</sources>
		</csc>
	</target>
</project>

This simple script will compile the following short C# program:

using System;

namespace SimpleNameSpace
{
	public class Simple
	{
		static void Main(string[] args)
		{
			Console.WriteLine("What we think, we become.");
		}
	}
}

Notice the project tag at the top of the build script:

<project name="..." default="build" basedir="."> 

As you can see, it states that the default target for the project is named build. Looking carefully at the script, you can see that there is a target named build:

 <target name="build" description="...">

This target has a single task in it called csc:

<csc target="exe" output="Simple.exe" debug="true">
   <sources> <include name="simple.cs" /> </sources> 

NAnt defines a series of tasks, which you can read about in a special section of the NAnt help file called the Task Reference. The csc task helps you build C# files. There are about 75 other tasks that come with NAnt, and you can create your own tasks by writing C# code and adding it to NAnt. Tasks that ship with NAnt include modules for copying, moving and deleting files, for running NUnit scripts, for changing or reading the environment, for executing files, for accessing the Internet, for working with regular expressions, and so on.
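As a taste of those built-in tasks, here is a sketch of a deploy target that uses the copy and zip tasks; the target name and the paths are invented for illustration:

```xml
<!-- Hypothetical deploy target: copy the build output, then archive it. -->
<target name="deploy" description="copy and archive the build output">
	<copy file="Simple.exe" todir="d:\deploy" />
	<zip zipfile="d:\deploy\simple.zip">
		<fileset basedir="d:\deploy">
			<include name="Simple.exe" />
		</fileset>
	</zip>
</target>
```

Running nant deploy would then execute both tasks in order.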

Multiple Targets

You can define more than one target inside an NAnt XML file. Here is a complete script containing both a build target and a clean target:

<?xml version="1.0"?>

<project name="Simple" default="build" basedir=".">

	<description>A simple NAnt script.</description>

	<property name="debug" value="true" overwrite="false" />

	<target name="clean" description="Clean up the directory">
		<delete file="Simple.exe" failonerror="false" />
		<delete file="Simple.pdb" failonerror="false" />
	</target>

	<target name="build" description="compile Simple.cs">
		<csc target="exe" output="Simple.exe" debug="${debug}">
			<sources>
				<include name="Simple.cs" />
			</sources>
		</csc>
	</target>
</project>

The clean target calls the delete task twice in order to delete the files that were created when the build target was run. The clean target can be accessed by issuing the following command at the shell prompt:

nant clean  

As mentioned earlier, running NAnt without any parameters will run the default task, which in this script is defined as build.

<project name="Simple" default="build" basedir=".">

Defining Properties

Notice that a simple property is defined in the XML file:

<property name="debug" value="true" overwrite="false" />

The value of the property is then accessed by using a simple $ and curly brace syntax similar to that used to define a variable or a macro in a make file:

<csc target="exe" output="Simple.exe" debug="${debug}"> 

When the script is run, the ${debug} syntax is replaced with the value of the property called debug, which in this case is set to true.

<csc target="exe" output="Simple.exe" debug="true">
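One consequence of declaring the property with overwrite="false" is that a value that already exists, such as one supplied on the NAnt command line, is not overwritten by the build file. That means you can flip the setting for a single run without editing any XML:

```shell
# Override the debug property for this run only; the build file's
# value is ignored because the property already exists.
nant -D:debug=false
```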

You can often simplify your XML files by defining several properties:

<?xml version="1.0"?>

<project name="Simple NAnt Script" default="build" basedir=".">

	<description>A simple NAnt build file.</description>

	<property name="debug" value="true" overwrite="false" />
	<property name="fileName" value="Simple" overwrite="false" />

	<target name="clean" description="clean up generated files">
		<delete file="${fileName}.exe" failonerror="false" />
		<delete file="${fileName}.pdb" failonerror="false" />
	</target>

	<target name="build" description="compile source">
		<echo message="${fileName}"  />
		<csc target="exe" output="${fileName}.exe" debug="${debug}">
			<sources>
				<include name="${fileName}.cs" />
			</sources>
		</csc>
	</target>
</project>

Notice that this script defines a second property called fileName, which is set to the value Simple. By merely changing the value of this one property, you can effect changes in the five other locations where the property is used in the script:

<include name="${fileName}.cs" /> 

This gives you the same kind of support for reuse in your XML build files that you can get by defining properties or variables in your source code. Features of this kind are important because they help to show the power and flexibility of a tool like NAnt.


NAnt provides an intuitive and powerful means of controlling the build process, and of automating common tasks encountered during development. It comes with a rich set of predefined tasks that cover most developers’ needs. However, you can write C# code to add your own tasks to NAnt if you have special needs that are not met by the default release of the project.

Last week when discussing mock objects, I mentioned that there were commercial tools which perform a similar task. The same is true for NAnt. There are commercial tools such as FinalBuilder that perform many of the same tasks that NAnt performs. Some of these tools have fancy features that can sometimes help speed the development cycle. I encourage you to explore these tools. NAnt, however, has the advantage of being a free, open source product that ships with source, and that is based upon respected technology which is not likely to become outdated in the foreseeable future. Because NAnt comes with source, and because it is designed to be extensible, you will find it easy to write your own NAnt modules that perform custom tasks. That kind of extensibility is not always available in commercial products.

Visual tools can solve a certain class of programming problem, but there are many instances in which source code proves to be the most powerful solution to a difficult programming problem. NAnt is a powerful and flexible enough tool to give you the kind of control that you need over project development. In future articles I will explore many of the advanced features available to developers who take the time to master the simple NAnt syntax.

Test Your DotNet GUI with NUnit and Mock Objects

Unit testing is an easy technology to learn, but very difficult to master. In particular, problems often occur when developers try to start testing user interfaces, modules that are not complete yet, database code, or code that depends on network interactions. There are various ways to solve these kinds of problems, but one of the most interesting involves the use of mock objects.

This article provides a brief introduction to the syntax and basic principles of mock objects. Anyone who is already familiar with the basic principles of unit testing should be able to follow this article with no difficulty. This article differs from most of the other introductions to mock objects found on the web in that it goes beyond showing you the simple syntax for using mock objects and focuses on introducing the rationale behind this school of programming. Other articles found on the web show you the syntax for creating mock objects, but don’t explain why you are creating them and what kinds of problems they solve. This article attempts to flesh out this subject matter by discussing more than the basic syntax, and hence gives you a start on understanding how and when to correctly design applications that can be tested with mock objects.

The theory behind mock objects is a relatively deep subject that can be discussed at considerable length. However, one needs a place to start an in-depth discussion, and the goal of this article is to give you a basic understanding of the technology so that we can examine it in more depth at a later date. In particular, this article demonstrates how to use mock objects to test code that has heavy dependencies on a graphical user interface element.

This article does not enter into advanced discussions of mock theory, test isolation, interaction tests, state tests, and mock objects vs. stubs. That type of subject matter will be addressed in additional articles to be written at a later date. When reading about these advanced matters, you will retroactively see why starting out by learning how to mock up graphical objects is a good idea. You will also find that mock objects are a great tool for writing stubs.

NOTE: In this article I will show how to use the lightweight implementation of mock objects that is built into NUnit. I chose to do this because NUnit is widely distributed, widely understood, and easy to use. If you read this article, and think that you want to use mock objects in your own code, you might consider using NMock, DotNetMock, EasyMock.NET, or a commercial mock object implementation such as TypeMock. I believe, however, that you would be wise to start out learning about mock objects using the NUnit code shown here, and then apply that knowledge to more advanced tools once you understand the basics. There is nothing wrong with the lightweight mock object framework provided with NUnit, and if it suits your needs, then you can safely use it for all your testing.

The article begins with an explanation of what mock objects are and presents a simple example of one kind of problem they are designed to solve. Then you will see how to handle the simple syntax involved with creating a mock object using NUnit. If you don’t want to read some useful and easy to understand theory about how mock objects work, then you can skip right to the sections on understanding the syntax and implementing mock objects. The two key code samples are Listing 1 and especially Listing 2.

Introduction to Mock Objects

You will never be able to unit test your code unless you design it properly. The key to creating code that can be unit tested is to ensure that it is loosely coupled. Loosely coupled code is code that can be easily decomposed into discrete objects or packages/assemblies. If your code is all bunched together into one monolithic ball and you can’t initialize one section of it in isolation from the rest, then your code is not loosely coupled. Code that is not loosely coupled is difficult to test.

When creating loosely coupled code, usually it is helpful to provide interfaces for the key objects in your program. The ideal is to have loosely coupled objects that can be initialized in isolation from one another, and that can be accessed through interfaces. Loosely coupled code of this type is both easy to maintain and easy to test.

Loosely coupling your code is particularly important when it comes to working with hard to test areas such as interfaces and databases. Be sure that you create code that gets input from the user in one class, and code that performs operations on that data in a second class. A quick metric to use when designing classes of this type runs as follows: Be sure that each class you create performs one, and only one, major task.

Working with GUI Interfaces

It helps to look at a specific example when thinking about what it means to perform one and only one major task in a class. At the same time, we will see how to separate easily testable code from difficult to test graphical user interface code.

Most dialogs have a button labeled OK that the user presses after the user has entered data. To properly unit test your code, you need to make sure that data is transferred from the dialog class that contains the OK button to a separate class that holds the data. This ensures that your user interface supports only the task of getting data from the user, and does not also try to store that data or perform operations on that data. It is this second class that will prove to be easy to test.

NOTE: It is important to properly separate the task of getting the input from the user from the task of performing operations on that input. For instance, if you have code that ensures that a user can only enter digits in an input box, then that code belongs with the input dialog; it is part of getting input from the user. If, however, you want to store that data in a database, or if you want to perform a mathematical calculation on that data, then you want to move such code out of the input dialog before attempting to store it in the database, and before you perform calculations on it.

Most people who write code of this type without planning ahead will create a dialog that mixes the task of receiving input from the user with the task of performing operations on that data. By doing so they commit two errors:

  1. They have one class perform two major tasks.
  2. They put code that needs to be tested inside a GUI interface class that is hard to test.

Our natural instincts lead us astray when we write this type of code. It takes a conscious effort to begin to properly design applications that have a user interface.

The objection to the idea of separating data operations from user input operations is that it requires writing additional code. Instead of writing just one class, you now have to write two classes: one class for the input dialog, and one for holding the data and performing operations on it. Some developers object that writing the additional code takes more time, and it ends up bloating the code base for a program. The riposte is simply that one needs to choose: do you want to write less code or do you want to write code that is easy to test and maintain? My personal experience has shown that it is better to have code that is easy to test and maintain.

NOTE: Just to be absolutely clear: The primary reason to split up your code into two classes is to make it easy to maintain. The additional benefit of making the code easy to test simply falls out naturally from that initial decision to support a good architecture. I should add that you usually don’t need to unit test the graphical user interface itself. The people who created your GUI components did that for you. When was the last time you had an input box malfunction on you? It just doesn’t happen. The code we need to test is the code that performs operations on our data, not the code that gets the data from the user.

Enter the Mock Object

If you have decided to properly decompose your code into separate classes for the GUI and for containing your data, then the next question is how one goes about testing such code. After all, the code that contains your data still needs a way to obtain input. Something has to feed it data. In a testing scenario, if you decide to get input for the data class from the interface module, then you are no better off than before you decomposed your code. The dialog is still part of your code, and so you are still stuck with the difficulty of automating a process that involves getting input from the user. To state the matter somewhat differently, what is the point of promoting loose coupling if you don’t ever decouple your code?

The solution to this dilemma is to allow something called a mock object to stand in for your input dialog class. Instead of getting data from the user via the input dialog, you get data from your mock object.

If your code were not loosely coupled, then you could not remove the input dialog from the equation and substitute the mock object for it. In other words, loose coupling is an essential part of both good application design in general, and mock object testing in particular.

At this stage, you have enough background information to understand what mock objects are about, and what kind of problem they can solve. Exactly how the syntax for creating mock objects is implemented is the subject of the remaining sections of this article.

Writing Mock Objects

Now that you understand the theory behind mock objects, the next step is to learn how to write a mock object. I will first explain how the syntax works, then show how to implement a mock object.

Understanding the Syntax

Mock objects are generally built around C# interfaces. (I’m now talking about the C# syntactical element called an interface; I’m not talking about graphical user interfaces.) In general, you want to create an interface that fronts for the object you want to mock up.

Consider the case of the input dialog we have been discussing in this article. You will want to create an interface that can encapsulate, as it were, the functionality of that input dialog. The point here is that it is awkward to try to use NUnit to test dialogs of this type, so we are creating a mock object as a substitute for this dialog. As you will see later in this article, creating the interface is a key step in the process of developing our mock object.

Suppose you have an input dialog that gets the user’s name and his or her age. You need to create an interface that encapsulates this class.


The input dialog that we want to mock up with our mock object.

Here is an interface that can capture the information from this dialog:

public interface IPerson
{
    string UserName { get; }
    int Age { get; }
}

The InputDialog should implement this interface:

public class InputDialog : System.Windows.Forms.Form, IPerson
{
    private int age;
    private String name;

    public int Age
    {
        get { return age; }
        set { age = value; }
    }

    public String UserName
    {
        get { return name; }
        set { name = value; }
    }
}

Note in particular that InputDialog descends from System.Windows.Forms.Form, but it implements IPerson. The complete source for this class can be found here.

The class that will contain and perform operations on the data from the InputDialog will consume instances of IPerson. The full source code for this class, called PersonContainer, will be shown and discussed later in this article.

public class PersonContainer
{
    IPerson person;
    public PersonContainer(IPerson person) { this.person = person; }
}

Now you can create an instance of your dialog and pass it to your data container after the user inputs data:

private void button1_Click(object sender, System.EventArgs e)
{
    InputDialog inputDialog = new InputDialog();
    PersonContainer personContainer = new PersonContainer(inputDialog);
}

If you are not used to working with interfaces, please examine this code carefully. The variable inputDialog is of type InputDialog. Yet notice that we pass it to the constructor for PersonContainer, which expects variables of type IPerson:

public PersonContainer(IPerson person)

This works because InputDialog supports the IPerson interface. You can see this by looking at the declaration for InputDialog:

public class InputDialog : System.Windows.Forms.Form, IPerson

The key point to grasp here is that the constructor for PersonContainer doesn’t care whether the variable passed to it is of type InputDialog or of type FooBar, so long as the class supports the IPerson interface. In other words, if you can get it to support the IPerson interface, then you can pass a variable of almost any type into PersonContainer’s constructor.

By now, the lights should be going on in your head. In our production program, we are going to pass in variables of type InputDialog to PersonContainer. But during testing, we don’t want to pass in InputDialogs, because they are graphical user interface elements, and are hard to test. So instead, we want to create a mock object that supports the IPerson interface and then pass it in to PersonContainer. Exactly how that is done is the subject of the next two sections of this text.

Implementing the Data Object

Before we create the mock object, we need to see the data object. This is the object that will consume both the InputDialog, and the mock object. In other words, this is the object that we want to test.

It is usually best to put code like this into a separate assembly. Again, we do this because we want to support loose coupling. Your primary project should contain your main form, while the InputDialog and PersonContainer reside in a separate assembly.

NOTE: Right now, you can see more clearly than ever just why so many people do not adopt unit testing, or fail when they attempt to adopt it. We all talk about getting the architecture for our applications right, but in practice we don’t always follow the best practices. Instead, we take short cuts, falsely believing that they will "save time."

The figure below shows the structure for your project as it appears in the Solution Explorer. Notice that the main program contains a form called MainForm.cs, which in turn calls into InputDialog and PersonContainer. These latter objects are both stored in a separate assembly called LibraryToTest.


The structure of the project after it has been properly designed to contain a main program and a supporting library. The code that we want to test resides in its own library where it is easy to use.

Notice that the References section of the library contains System.Drawing and System.Windows.Forms. I had to explicitly add these, as they were not included by default. To add a reference, right-click on the References node in the Solution Explorer and bring up the Add Reference dialog. Add the two libraries.


Choose Project | Add Reference to bring up this dialog. Double click on items in top of the dialog to move them down to the Selected Components section at the bottom of the dialog.

Listing 1 shows a simple object called PersonContainer that could consume objects such as InputDialog that support the IPerson interface. Notice that I store both the interface and the data container in this one file.

Listing 1: The source code for the class that you want to test. It consumes objects that support the IPerson interface.

using System;

namespace CharlieMockLib
{
    public interface IPerson
    {
        string UserName { get; }
        int Age { get; }
    }

    public class PersonContainer
    {
        IPerson person;

        public PersonContainer(IPerson person)
        {
            this.person = person;
        }

        public String SayHello()
        {
            return "Hello " + person.UserName;
        }

        public String DescribeAge()
        {
            return person.UserName + " is " + person.Age + " years old.";
        }
    }
}


Be sure you understand what you are looking at when you view the code shown in Listing 1. This is the code that we want to test. The most important point is that in your main program this code depends on a GUI element, which in this case is called InputDialog. It is hard to unit test a GUI element such as a dialog, so we are working around that problem by creating a mock object and passing it in instead of the InputDialog. To make this possible, we have defined an interface called IPerson which is supported by both InputDialog and our mock object.

NOTE: From here on out, you need to have NUnit installed on your system in order to follow the code examples. NUnit is a free open source project.

Implementing the Mock Object

From the discussion in the previous sections, you can surmise that it would not be difficult to manually create a class that supports IPerson and would therefore act as a mock object that you can pass in to your data container. Though not difficult intellectually, performing tasks of this type can become a monotonous exercise. What the NUnit mock object classes do for you is make it easy to create a mock object. They take the pain out of the process.
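To see what that monotonous manual work looks like, here is a hand-rolled stand-in for IPerson. The class name FakePerson is my own invention; IPerson and PersonContainer are taken from Listing 1.

```csharp
using System;

// The interface from Listing 1.
public interface IPerson
{
    string UserName { get; }
    int Age { get; }
}

// A hand-rolled mock: it simply returns canned values. Writing one of
// these for every interface in a large project quickly becomes tedious,
// which is exactly the chore that NUnit's DynamicMock automates.
public class FakePerson : IPerson
{
    public string UserName { get { return "John Doe"; } }
    public int Age { get { return 5; } }
}

// The data container from Listing 1, reduced to the method under test.
public class PersonContainer
{
    IPerson person;
    public PersonContainer(IPerson person) { this.person = person; }
    public String DescribeAge()
    {
        return person.UserName + " is " + person.Age + " years old.";
    }
}

public static class Program
{
    public static void Main()
    {
        var container = new PersonContainer(new FakePerson());
        Console.WriteLine(container.DescribeAge());
        // prints: John Doe is 5 years old.
    }
}
```

A hand-written fake like this works, but unlike DynamicMock it cannot verify how many times each property was called.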

By now, you are anxious to see the mock object itself. Begin by creating a new class library and adding it to the solution that you want to test. Add the nunit.framework and nunit.mocks assemblies to the references section of your class library. If these two items do not appear in the Add Reference dialog, then you need to press the Browse button and browse to the place where you installed NUnit. You will find nunit.framework.dll and nunit.mocks.dll in the NUnit bin directory.


Adding the references to nunit.framework and nunit.mocks to your project. You can reach this dialog by right-clicking on the References section shown in Figure 05.

After you have added these two assemblies to your project, you should see them in Solution Explorer.


Viewing the references sections of your project in the Solution Explorer. Note that you can see both nunit.framework and nunit.mocks.

Now that you have added the libraries necessary to support NUnit, you are ready to write the code for creating a mock object. After all this build up, you might expect this code to be fairly tricky. In fact, you will find that it is quite straightforward, as you can see in Listing 2.

Listing 2: The code for the mock object.

using System;

namespace NUnitMockTest
{
    using NUnit.Framework;
    using CharlieMockLib;
    using NUnit.Mocks;

    [TestFixture]
    public class NUnitMockTest
    {
        private const String TEST_NAME = "John Doe";

        public NUnitMockTest()
        {
        }

        [Test]
        public void TestPersonAge()
        {
            DynamicMock personMock = new DynamicMock(typeof(IPerson));
            PersonContainer personContainer = 
                new PersonContainer((IPerson)personMock.MockInstance);
            personMock.ExpectAndReturn("get_UserName", TEST_NAME);
            personMock.ExpectAndReturn("get_Age", 5);
            Assert.AreEqual("John Doe is 5 years old.", 
                personContainer.DescribeAge());
            personMock.Verify();
        }
    }
}
The code uses nunit.framework and nunit.mocks. It also depends on CharlieMockLib, which is the namespace in which the PersonContainer shown in Listing 1 resides:

using NUnit.Framework; 
using CharlieMockLib; 
using NUnit.Mocks;  

You can see that the [TestFixture] and [Test] attributes are added to our code, just as they would be in any unit test.

The first, and most important, step in creating a mock object is to create an instance of the DynamicMock class. The NUnit DynamicMock class is a helper object that provides an easy way for us to "mock" up an implementation of the IPerson Interface. Here is an example of how to construct an instance of this class:

DynamicMock personMock = new DynamicMock(typeof(IPerson));

Notice that we pass in the type of the IPerson interface. We are asking the NUnit mock object implementation to create an object for us that will automatically and dynamically support the IPerson interface.

The next step is to retrieve an instance of our mock object from its factory and pass it in to the PersonContainer:

IPerson iPerson = (IPerson)personMock.MockInstance;
PersonContainer personContainer = new PersonContainer(iPerson);

If you want, you can save a little typing by doing this all on one line:

PersonContainer personContainer = 
  new PersonContainer((IPerson)personMock.MockInstance);

Now we need to initialize the values for the two properties on the IPerson interface we have created:

private const String TEST_NAME = "John Doe";

personMock.ExpectAndReturn("get_UserName", TEST_NAME);
personMock.ExpectAndReturn("get_Age", 5);

Calls to ExpectAndReturn inform our mock object of the properties that we plan to call, and the values that we want our mock object to return. The first call informs our mock object that we plan to call the UserName property exactly once, and that it should return the value John Doe. The second call to ExpectAndReturn does the same type of thing for the Age property. In terms of our whole project, you can think of these two lines as saying: "Pretend that the user popped up the InputDialog and entered the value John Doe for the user name, and the value 5 for the age." Of course, the input dialog is never used.

NOTE: I find it peculiar that NUnit wants us to pass in get_ prefixed to the name of the properties that we want to call. Other implementations of mock objects do not require that you prefix get_ before calling a property.

The final step in this process is to run our actual test to see if our container properly handles input from our mocked up instance of InputDialog:

Assert.AreEqual("John Doe is 5 years old.", personContainer.DescribeAge()); 

As you can see, the PersonContainer calls each of these properties exactly one time:

public String DescribeAge() 
{
  return person.UserName + " is " + person.Age + " years old."; 
}

The call to Verify will fail if the UserName or Age properties are called more than once. This can happen if there is an error in your code, or if you view one of the properties in the watch window of your debugger.


This article gave a (warning: oxymoron ahead) detailed overview of how to use mock objects. The majority of the article was dedicated to explaining why you would want to use mock objects, and in explaining how they can be used to solve a particular type of problem. The actual implementation of a mock object took up less than half of this article.

I should point out three important facts:

  1. Mock objects are not designed solely for solving the problem of testing the graphical user interface for an application. They are also used for mocking up database access, network access, or incomplete parts of large projects. Many developers, particularly in the XP tradition, use mock objects for all the secondary layers in their application. In other words, whenever one object in a program depends on another object from your program, these hardcore mockers use mock objects.
  2. The NUnit mock objects are not the only solution for testing a graphical user interface. In particular, there are commercial products such as TypeMock that offer advanced facilities and greater ease of use. Furthermore, various tools, including TestComplete (a product of a company in which Falafel is a part owner), can also be used for testing user interfaces. Many of these commercial testing tools provide shortcuts that may be easier to use than the process shown here.
  3. As mentioned earlier in this article, the NUnit implementation of mock objects is lightweight. In particular, the release notes for NUnit state: "This facility is in no way a replacement for full-fledged mock frameworks such as NMock and is not expected to add significant features in upcoming releases. Its primary purpose is to support NUnit’s own tests. We wanted to do that without the need to choose a particular mock framework and without having to deal with versioning issues outside of NUnit itself." I feel compelled to add, however, that if the NUnit mock objects shown in this article meet your needs, there is no reason for you to upgrade to another tool.
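As a sketch of the first point above, the same interface trick can stand in for a database. The names IOrderStore, InMemoryOrderStore, and OrderReport are hypothetical, invented for this illustration; this sketch uses a hand-written fake rather than an NUnit DynamicMock, but the decoupling principle is identical.

```csharp
using System;
using System.Collections.Generic;

// The report logic depends only on this interface, never on a
// concrete database class.
public interface IOrderStore
{
    int CountOrders(string customer);
}

// In production, a class implementing IOrderStore would query a real
// database. For testing, an in-memory fake stands in, so the report
// logic can be verified without a database connection.
public class InMemoryOrderStore : IOrderStore
{
    private readonly Dictionary<string, int> counts = new Dictionary<string, int>();
    public void Add(string customer, int count) { counts[customer] = count; }
    public int CountOrders(string customer)
    {
        return counts.ContainsKey(customer) ? counts[customer] : 0;
    }
}

// The class under test: it consumes IOrderStore exactly the way
// PersonContainer consumes IPerson.
public class OrderReport
{
    private readonly IOrderStore store;
    public OrderReport(IOrderStore store) { this.store = store; }
    public string Describe(string customer)
    {
        return customer + " has " + store.CountOrders(customer) + " orders.";
    }
}

public static class Program
{
    public static void Main()
    {
        var store = new InMemoryOrderStore();
        store.Add("Acme", 3);
        var report = new OrderReport(store);
        Console.WriteLine(report.Describe("Acme")); // prints: Acme has 3 orders.
    }
}
```
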

Mock objects can play a very important role in unit tests. Hopefully, this brief introduction to the topic gives you the information you need to use them in your own testing process.

Globalization: Fiddling While Rome Burns

Thomas Friedman, the author of “The World is Flat,” explains in this morning’s New York Times that the economic engine in Bangalore, India is reaching a new phase. "We’re going from a model of doing piecework to where the entire product and entire innovation stream is done by companies here," the CEO of a large Indian company told Friedman.

From reading the press, one gets the impression that Americans think that only a few million jobs are leaving America and headed to India and other third world countries. Since most of these jobs are in the tech industry, most Americans feel safe.

Unfortunately, this overly simplistic world view is being challenged by what is happening in India. At first, it was American companies who were hiring foreigners to take tech jobs. But once the people in places like Bangalore learn the trade, the next step is for them to start running their own businesses.

Think about what happens here in America. Employees of big corporations get an idea, and then they break off and start their own companies, giving them names like NetFlix, Zone Labs, or even (in extreme cases) Falafel. Some of the world’s biggest companies, such as Intel, were also start-ups created by employees who split away from larger companies that taught them their trade. What we are doing in India is teaching people a trade. Eventually they won’t need us any more, and will start their own companies.

Friedman sees part of this cycle, but he backs away from facing reality head on. He still appears to believe that we will remain in control of this process, that we will be running it. “What will be left for the Western companies is the ‘ideation,’ the original concept and design of a flagship product (which is a big deal), and then the sales and marketing,” Friedman says.

But if the workers in Bangalore are already learning to start their own tech companies, why won’t they eventually come up with their own ideas and start their own sales and marketing companies? To claim that only Americans are able to come up with new ideas, or to market them, is to practice an extreme form of racism. If there is one thing that even the early stages of globalization have proven, it is that all people, everywhere, are capable of doing any task on which they set their hearts and minds.

As our corporations hire people in Bangalore, we are beginning an inevitable process in which we train them to do advanced technical jobs. At the same time, we are also showing them how to run companies, how to market products, and how to create new products. The end result is that we will undermine our tax base, our technological edge, and other factors that have made America’s high standard of living possible.

Globalization is Inevitable

I believe that globalization is both a good thing, and an inevitable consequence of life in modern society. However, it is incredibly naïve for Americans to sit back and think that Adam Smith’s invisible hand will automatically guide us through this period to safe shores. We need more than a theory; we need a concrete plan.

Denying the inevitability of globalization would be the equivalent of sticking our heads in the sand. Attempting to create laws to prevent globalization by making outsourcing illegal would be an equally futile undertaking. But simply sitting on our hands and watching mutely while our jobs and businesses move overseas to cheaper labor markets is equally foolish.

America, and all industrialized societies, are facing a crisis now that computers have made it possible to move jobs and business around the globe in search of cheap labor. Those of us in the tech industry have a front row seat, and can watch this process as it evolves.

Many Americans, however, aren’t aware that the crisis even exists. Others think it can be legislated away, and some think we will magically resolve the problem by just sitting back and letting the mystical free market work. All of these ideas are hopelessly quaint and naïve. Jobs are leaving America at a huge rate, and there is no plan for bringing them back. As the jobs leave, then inevitably, so will the businesses. Without the businesses and the jobs, there is no tax base for running a country as large and sophisticated as America.

What we need are intelligent politicians and businessmen who are willing to actively work to solve these problems. We all need jobs; we all need the skills necessary to compete in the modern world. When will we hear important people in this country stand up and address these issues in plain language? We shouldn’t settle for vague promises; we need specific plans.


Here is how Friedman ends his article:

“Indeed, I now understand why, when China’s prime minister, Wen Jiabao, visited India for the first time last April, he didn’t fly into the capital, New Delhi – as foreign leaders usually do. He flew directly from Beijing to Bangalore – for a tech-tour – and then went on to New Delhi.

“No U.S. president or vice president has ever visited Bangalore.”

I am by no means a supporter of all Thomas Friedman’s ideas. However, in columns like the one he wrote this morning, and in books like “The World is Flat,” he at least tries to come to terms with the consequences of globalization. Most of the rest of the press, and most of our politicians, are completely blind to the importance of the huge changes taking place in our economy, and in the economy of cities like Bangalore. They fiddle, and Rome burns. How long are we going to put up with this foolishness?