Building C# Projects with NAnt

NAnt is a cross-platform, open source build tool for use with Mono or .NET. You can use it for automating builds, automating unit-test runs, or for version control tasks. NAnt has no built-in GUI, nor will it write your unit tests for you. Instead, it provides a powerful means of scripting these tasks so that they are performed automatically with a single command. With NAnt, it is easy to write scripts that work unchanged on both Linux and Windows.

There is a direct parallel between NAnt and the make or nmake tools used by C/C++ developers. The primary advantage that NAnt has over make is that it is written in C# and is designed for use with .NET and Mono. A secondary advantage is that it provides many tools that make it easier to create cross-platform code. For instance, NAnt has custom classes for copying files, deleting files, unzipping files, retrieving data over an HTTP connection, and so on. Each of these tasks is written in C# code that works wherever the .NET platform has been implemented. In practice, this means it works on Linux and Windows.

If you are familiar with the Java tool from the Apache foundation called Ant, then you already understand most of what you need to know about NAnt. The primary reason for creating NAnt was simply to have a version of Ant that was optimized to work with .NET.

NOTE: There is no direct parallel in the Delphi IDE to NAnt, though if you have used batch files to create scripts for building your Delphi projects, then you have engaged in the kind of tasks that NAnt automates. There is a stable version of Ant for Delphi called Want.

If you have been using Visual Studio or Delphi and found that the IDE was not powerful enough to perform your build tasks, then you have an obvious need for a tool like NAnt. In general, there is no build task, no matter how complex, that NAnt can’t be configured to run. For instance, NAnt makes it relatively easy to build multiple assemblies in a particular order and to copy the results to any location on a local or remote machine.

Even if you are happy building your projects inside Visual Studio or Delphi, you may still find that NAnt is useful. In particular, NAnt can help you automate the task of running unit tests, and it can help you automate other tasks. All in all, there are some 75 built-in tasks available in the current NAnt builds.

Installing NAnt

Short version: Download the NAnt binaries, unzip the package they come in, and put the bin directory on your path. That is really all there is to it, and if you have no further questions, you can safely skip ahead to the section on using NAnt.

NAnt comes with source, but I suggest getting the binary package first. If you want to work with the source packages, then I would use the binary version of NAnt to build the source package. After all, NAnt is designed to make the process of building C# code extremely simple.

You will find a link to the NAnt binary download on the NAnt home page at http://nant.sourceforge.net/, or you can go to the NAnt SourceForge project and follow the link to the download page. At the time of this writing, NAnt was up to release candidate 3 of version 0.85. This means that you can download nant-0.85-rc3-bin.zip to get the binaries, or nant-0.85-rc3-src.zip to get the source code. I mention these file names primarily so you can see the naming scheme. Since updates occur frequently, you should go directly to the download page and get the latest files yourself.

NOTE: If you are used to standard commercial releases, you might be a bit intimidated by the fact that NAnt is only at version 0.85. However, you have to remember that there is no need to rush the delivery of free, open source projects. As a result, an open source product at version 0.85 is often the rough equivalent of a 1.5 or 2.0 version of a commercial product. NAnt is unlikely to earn the 1.0 moniker until it contains a wide range of features and a very low bug count.

Once you have downloaded and unzipped the binary files, you should add the bin directory where NAnt.exe is stored to your system path. There are some 14 different assemblies included in this project, so it will not help to copy NAnt.exe alone to some convenient location. Furthermore, I would not suggest copying the exe and all 14 DLLs somewhere, as that is likely to lead to DLL hell when you want to upgrade the product to a new version.

If you also downloaded the source, then you can now go to the root of the unzipped source project and type the word NAnt at the command prompt. This will automatically build the project, placing the output in a directory called build. If you don’t like the default location for this output, you can specify the output directory during the build process by typing:

NAnt prefix=<MyPreferredLocationForTheOutput>

For instance, you might write:

nant prefix=d:\bin\compilers\nant

NOTE: It is possible to download the source to NAnt and to build it using either Visual Studio or NMake. However, it is much simpler to follow the steps outlined above.

Using NAnt

NAnt is based on an easy-to-understand technology that is driven by XML. In its simplest form, you need only put the XML defining the tasks you wish to execute in a file called NAnt.build. Then place your XML file in an appropriate directory, usually the root directory of your project, and simply type the word NAnt at the command line. NAnt will automatically discover and run the script.

NOTE: If you have a large project, it is common to have one NAnt script calling another script. For instance, you might have one base script in your root directory, then have child scripts in the root directory of each of the assemblies making up your project. The exact syntax for doing this will be discussed in future articles. If you only have one script in each directory, then you can call them all NAnt.build. If you need to place multiple scripts in a single directory, then you can give them different names, and explicitly call the script when you run NAnt, using the following syntax: NAnt -buildfile:d:\src\csharp\Simple\MyFile.build.
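
As a small preview of that syntax, the built-in nant task lets a parent script invoke a child script. Here is a minimal sketch; the file and target names are invented for illustration:

<target name="build-all" description="Build the child assemblies">
	<nant buildfile="LibraryOne/NAnt.build" target="build" />
	<nant buildfile="LibraryTwo/NAnt.build" target="build" />
</target>

Each child script then builds its own assembly using targets like the ones shown below.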

Consider the following brief example:

<?xml version="1.0"?>

<project name="Getting Started with NAnt" default="build" basedir=".">

	<target name="build" description="Build a simple project">
		<csc target="exe" output="Simple.exe" debug="true">
			<sources>
				<include name="simple.cs" />
			</sources>
		</csc>
	</target>

</project>

This simple script will compile the following short C# program:

using System;

namespace SimpleNameSpace
{
	public class Simple
	{
		static void Main(string[] args)
		{
			Console.WriteLine("What we think, we become.");
		}
	}
}

Notice the project tag at the top of the build script:

<project name="..." default="build" basedir="."> 

As you can see, it states that the default target for the project is named build. Looking carefully at the script, you can see that there is a target named build:

 <target name="build" description="...">

This target has a single task in it called csc:

<csc target="exe" output="Simple.exe" debug="true">
   <sources> <include name="simple.cs" /> </sources>
</csc>  

NAnt defines a series of tasks, which you can read about in a special section of the NAnt help file called the Task Reference. The csc task helps you build C# files. There are about 75 other tasks that come with NAnt, and you can create your own tasks by writing C# code and adding it to NAnt. Tasks that ship with NAnt include modules for copying, moving, and deleting files, for running NUnit scripts, for changing or reading the environment, for executing files, for accessing the Internet, for working with regular expressions, and so on.
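
Since that extensibility is one of NAnt’s selling points, here is a minimal sketch of what a custom task can look like, assuming a reference to the NAnt.Core assembly; the hello task name and its greeting attribute are invented for illustration:

using NAnt.Core;
using NAnt.Core.Attributes;

// A custom task that can be called as <hello greeting="NAnt users" />
// once the assembly containing it has been loaded into a build.
[TaskName("hello")]
public class HelloTask : Task
{
    private string greeting = "world";

    // Maps the greeting="..." attribute in the build file to this property.
    [TaskAttribute("greeting")]
    public string Greeting
    {
        get { return greeting; }
        set { greeting = value; }
    }

    // Called by NAnt when the task is executed.
    protected override void ExecuteTask()
    {
        Log(Level.Info, "Hello " + greeting);
    }
}

Compile that into a class library, point a build file at it with the loadtasks task, and hello becomes available alongside the built-in tasks.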

Multiple Targets

You can define more than one target inside an NAnt XML file. Here is a complete script containing both a build target and a clean target:

<?xml version="1.0"?>

<project name="Simple" default="build" basedir=".">

	<description>A simple NAnt script.</description>

	<property name="debug" value="true" overwrite="false" />

	<target name="clean" description="Clean up the directory">
		<delete file="Simple.exe" failonerror="false" />
		<delete file="Simple.pdb" failonerror="false" />
	</target>

	<target name="build" description="compile Simple.cs">
		<csc target="exe" output="Simple.exe" debug="${debug}">
			<sources>
				<include name="Simple.cs" />
			</sources>
		</csc>
	</target>

</project>

The clean target calls the delete task twice in order to delete the files that were created when the build target was run. The clean target can be invoked by issuing the following command at the shell prompt:

nant clean  

As mentioned earlier, running NAnt without any parameters will run the default target, which in this script is defined as build.

<project name="Simple" default="build" basedir=".">

Defining Properties

Notice that a simple property is defined in the XML file:

<property name="debug" value="true" overwrite="false" />

The value of the property is then accessed by using a simple dollar-sign and curly-brace syntax similar to that used to reference a variable or a macro in a make file:

<csc target="exe" output="Simple.exe" debug="${debug}"> 

When the script is run, the ${debug} syntax is replaced with the value of the property called debug, which in this case is set to true.

<csc target="exe" output="Simple.exe" debug="true">

You can often simplify your XML files by defining several properties:

<?xml version="1.0"?>

<project name="Simple NAnt Script" default="build" basedir=".">

	<description>A simple NAnt build file.</description>

	<property name="debug" value="true" overwrite="false" />
	<property name="fileName" value="Simple" overwrite="false" />

	<target name="clean" description="clean up generated files">
		<delete file="${fileName}.exe" failonerror="false" />
		<delete file="${fileName}.pdb" failonerror="false" />
	</target>

	<target name="build" description="compile source">
		<echo message="${fileName}"  />
		<csc target="exe" output="${fileName}.exe" debug="${debug}">
			<sources>
				<include name="${fileName}.cs" />
			</sources>
		</csc>
	</target>

</project>

Notice that this script defines a second property called fileName, which is set to the value Simple. By merely changing the value of this one property, you can effect changes in the five other locations where the property is used in the script:

<include name="${fileName}.cs" /> 

This gives you the same kind of support for reuse in your XML build files that you can get by defining properties or variables in your source code. Features of this kind are important because they help to show the power and flexibility of a tool like NAnt.

Summary

NAnt provides an intuitive and powerful means of controlling the build process, and of automating common tasks encountered during development. It comes with a rich set of predefined tasks that cover most developers’ needs. However, you can write C# code to add your own tasks to NAnt if you have special needs that are not met by the default release of the project.

Last week when discussing mock objects, I mentioned that there were commercial tools which perform a similar task. The same is true of NAnt. There are commercial tools such as FinalBuilder that perform many of the same tasks that NAnt performs. Some of these tools have fancy features that can sometimes help speed the development cycle. I encourage you to explore these tools. NAnt, however, has the advantage of being a free, open source product that ships with source, and that is based upon respected technology which is not likely to become outdated in the foreseeable future. Because NAnt comes with source, and because it is designed to be extensible, you will find it easy to write your own NAnt modules that perform custom tasks. That kind of extensibility is not always available in commercial products.

Visual tools can solve a certain class of programming problem, but there are many instances in which source code proves to be the most powerful solution to a difficult programming problem. NAnt is a powerful and flexible enough tool to give you the kind of control that you need over project development. In future articles I will explore some of the many advanced features available to developers who take the time to master the simple NAnt syntax.

Test Your DotNet GUI with NUnit and Mock Objects

Unit testing is an easy technology to learn, but very difficult to master. In particular, problems often occur when developers try to start testing user interfaces, modules that are not complete yet, database code, or code that depends on network interactions. There are various ways to solve these kinds of problems, but one of the most interesting involves the use of mock objects.

This article provides a brief introduction to the syntax and basic principles of mock objects. Anyone who is already familiar with the basic principles of unit testing should be able to follow this article with no difficulty. This article differs from most other introductions to mock objects found on the web in that it goes beyond showing you the simple syntax for using mock objects and focuses on introducing the rationale behind this school of programming. Other articles show you the syntax for creating mock objects, but don’t explain why you are creating them and what kinds of problems they solve. This article attempts to flesh out the subject by discussing more than the basic syntax, and hence giving you a start on understanding how and when to design applications that can be tested with mock objects.

The theory behind mock objects is a relatively deep subject that can be discussed at considerable length. However, one needs a place to start an in-depth discussion, and the goal of this article is to give you a basic understanding of the technology so that we can examine it in more depth at a later date. In particular, this article demonstrates how to use mock objects to test code that has heavy dependencies on a graphical user interface element.

This article does not enter into advanced discussions of mock theory, test isolation, interaction tests, state tests, or mock objects vs. stubs. That type of subject matter will be addressed in additional articles to be written at a later date. When reading about those advanced matters, you will retroactively see why starting out by learning how to mock up graphical objects is a good idea. You will also find that mock objects are a great tool for writing stubs.

NOTE: In this article I will show how to use the lightweight implementation of mock objects that is built into NUnit. I chose to do this because NUnit is widely distributed, widely understood, and easy to use. If you read this article and decide that you want to use mock objects in your own code, you might consider using NMock, DotNetMock, EasyMock.NET, or a commercial mock object implementation such as TypeMock. I believe, however, that you would be wise to start out learning about mock objects using the NUnit code shown here, and then apply that knowledge to more advanced tools once you understand the basics. There is nothing wrong with the lightweight mock object framework provided with NUnit, and if it suits your needs, then you can safely use it for all your testing.

The article begins with an explanation of what mock objects are and presents a simple example of one kind of problem they are designed to solve. Then you will see how to handle the simple syntax involved in creating a mock object with NUnit. If you don’t want to read some useful and easy-to-understand theory about how mock objects work, then you can skip right to the sections on understanding the syntax and implementing mock objects. The two key code samples are Listing 1 and especially Listing 2.

Introduction to Mock Objects

You will never be able to unit test your code unless you design it properly. The key to creating code that can be unit tested is to ensure that you engage in loose coupling. Loosely coupled code is code that can be easily decomposed into discrete objects or packages/assemblies. If your code is all bunched together into one monolithic ball and you can’t initialize one section of it in isolation from the rest, then your code is not loosely coupled. Code that is not loosely coupled is difficult to test.

When creating loosely coupled code, usually it is helpful to provide interfaces for the key objects in your program. The ideal is to have loosely coupled objects that can be initialized in isolation from one another, and that can be accessed through interfaces. Loosely coupled code of this type is both easy to maintain and easy to test.

Loose coupling is particularly important when it comes to working with hard-to-test areas such as user interfaces and databases. Be sure that you create code that gets input from the user in one class, and code that performs operations on that data in a second class. A quick metric to use when designing classes of this type runs as follows: be sure that each class you create performs one, and only one, major task.

Working with GUI Interfaces

It helps to look at a specific example when thinking about what it means to perform one and only one major task in a class. At the same time, we will see how to separate easily testable code from difficult-to-test graphical user interface code.

Most dialogs have a button labeled OK that the user presses after entering data. To properly unit test your code, you need to make sure that data is transferred from the dialog class that contains the OK button to a separate class that holds the data. This ensures that your user interface performs only the task of getting data from the user, and does not also try to store that data or perform operations on it. It is this second class that will prove to be easy to test.

NOTE: It is important to properly separate the task of getting input from the user from the task of performing operations on that input. For instance, if you have code that ensures that a user can only enter digits in an input box, then that code belongs with the input dialog; it is part of getting input from the user. If, however, you want to store that data in a database, or perform a mathematical calculation on it, then you want to move such code out of the input dialog before storing it in the database or performing calculations on it.

Most people who write code of this type without planning ahead will create a dialog that mixes the task of receiving input from the user with the task of performing operations on that data. By doing so they commit two errors:

  1. They have one class perform two major tasks.
  2. They put code that needs to be tested inside a GUI class that is hard to test.

Our natural instincts lead us astray when we write this type of code. It takes a conscious effort to begin to properly design applications that have a user interface.

The objection to the idea of separating data operations from user input operations is that it requires writing additional code. Instead of writing just one class, you now have to write two: one class for the input dialog, and one for holding the data and performing operations on it. Some developers object that writing the additional code takes more time and ends up bloating the code base. The riposte is simply that one needs to choose: do you want to write less code, or do you want to write code that is easy to test and maintain? My personal experience has shown that it is better to have code that is easy to test and maintain.

NOTE: Just to be absolutely clear: the primary reason to split up your code into two classes is to make it easy to maintain. The additional benefit of making the code easy to test simply falls out naturally from that initial decision to support a good architecture. I should add that you usually don’t need to unit test the graphical user interface itself. The people who created your GUI components did that for you. When was the last time you had an input box malfunction on you? It just doesn’t happen. The code we need to test is the code that performs operations on our data, not the code that gets the data from the user.

Enter the Mock Object

If you have decided to properly decompose your code into separate classes for the GUI and for containing your data, then the next question is how one goes about testing such code. After all, the code that contains your data still needs a way to obtain input. Something has to feed it data. In a testing scenario, if you decide to get input for the data class from the interface module, then you are no better off than before you decomposed your code. The dialog is still part of your code, and so you are still stuck with the difficulty of automating a process that involves getting input from the user. To state the matter somewhat differently, what is the point of promoting loose coupling if you don’t ever decouple your code?

The solution to this dilemma is to allow something called a mock object to stand in for your input dialog class. Instead of getting data from the user via the input dialog, you get data from your mock object.

If your code were not loosely coupled, then you could not remove the input dialog from the equation and substitute the mock object for it. In other words, loose coupling is an essential part of both good application design in general, and mock object testing in particular.

At this stage, you have enough background information to understand what mock objects are about, and what kind of problem they can solve. Exactly how the syntax for creating mock objects is implemented is the subject of the remaining sections of this article.

Writing Mock Objects

Now that you understand the theory behind mock objects, the next step is to learn how to write a mock object. I will first explain how the syntax works, then show how to implement a mock object.

Understanding the Syntax

Mock objects are generally built around C# interfaces. (I’m now talking about the C# syntactical element called an interface; I’m not talking about graphical user interfaces.) In general, you want to create an interface which fronts for the object that you want to mock up.

Consider the case of the input dialog we have been discussing in this article. You will want to create an interface that can encapsulate, as it were, the functionality of that input dialog. The point here is that it is awkward to try to use NUnit to test dialogs of this type, so we are creating a mock object as a substitute for this dialog. As you will see later in this article, creating the interface is a key step in the process of developing our mock object.

Suppose you have an input dialog that gets the user’s name and his or her age. You need to create an interface that encapsulates this entire class.

 

The input dialog that we want to mock up with our mock object.

Here is an interface that can capture the information from this dialog:

public interface IPerson
{
	string UserName { get; }
	int Age { get; }
}

The InputDialog should implement this interface:

public class InputDialog : System.Windows.Forms.Form, IPerson
{
	private int age;
	private String name;

	public int Age
	{
		get { return age; }
		set { age = value; }
	}

	public String UserName
	{
		get { return name; }
		set { name = value; }
	}

Note in particular that InputDialog descends from System.Windows.Forms.Form, but it implements IPerson. The complete source for this class can be found here.

The class that will contain and perform operations on the data from the InputDialog will consume instances of IPerson. The full source code for this class, called PersonContainer, will be shown and discussed later in this article.

public class PersonContainer
{
	IPerson person;

	public PersonContainer(IPerson person)
	{
		this.person = person;
	}

Now you can create an instance of your dialog and pass it to your data container after the user inputs data:

private void button1_Click(object sender, System.EventArgs e)
{
	InputDialog inputDialog = new InputDialog();
	inputDialog.ShowDialog(this);
	PersonContainer personContainer =
		new PersonContainer(inputDialog);
}

If you are not used to working with interfaces, please examine this code carefully. The variable inputDialog is of type InputDialog. Yet notice that we pass it to the constructor for PersonContainer, which expects variables of type IPerson:

public PersonContainer(IPerson person)

This works because InputDialog supports the IPerson interface. You can see this by looking at the declaration for InputDialog:

public class InputDialog : System.Windows.Forms.Form, IPerson

The key point to grasp here is that the constructor for PersonContainer doesn’t care whether the variable passed to it is of type InputDialog or of type FooBar, so long as the class supports the IPerson interface. In other words, if you can get it to support the IPerson interface, then you can pass a variable of almost any type to PersonContainer’s constructor.

By now, the lights should be going on in your head. In our production program, we are going to pass in variables of type InputDialog to PersonContainer. But during testing, we don’t want to pass in InputDialogs, because they are graphical user interface elements, and are hard to test. So instead, we want to create a mock object that supports the IPerson interface and then pass it in to PersonContainer. Exactly how that is done is the subject of the next two sections of this text.

Implementing the Data Object

Before we create the mock object, we need to see the data object. This is the object that will consume both the InputDialog, and the mock object. In other words, this is the object that we want to test.

It is usually best to put code like this into a separate assembly. Again, we do this because we want to support loose coupling. You want your primary project to contain your main form, while the InputDialog and PersonContainer reside in a separate assembly.

NOTE: Right now, you can see more clearly than ever just why so many people do not adopt unit testing, or fail when they attempt to adopt it. We all talk about getting the architecture for our applications right, but in practice we don’t always follow the best practices. Instead, we take shortcuts, falsely believing that they will "save time."

Consider the structure of the project as it appears in the Solution Explorer. Notice that the main program contains a form called MainForm.cs, which in turn calls into InputDialog and PersonContainer. These latter objects are both stored in a separate assembly called LibraryToTest.

 

The structure of the project after it has been properly designed to contain a main program and a supporting library. The code that we want to test resides in its own library where it is easy to use.

Notice that the references section of the library contains System.Drawing and System.Windows.Forms. I had to explicitly add these, as they were not included by default. To add a reference, right-click on the References node in the Solution Explorer, bring up the Add References dialog, and add the two libraries.

 

Choose Project | Add Reference to bring up this dialog. Double click on items in top of the dialog to move them down to the Selected Components section at the bottom of the dialog.

Listing 1 shows a simple object called PersonContainer that could consume objects such as InputDialog that support the IPerson interface. Notice that I store both the interface and the data container in this one file.

Listing 1: The source code for the class that you want to test. It consumes objects that support the IPerson interface.

using System;

namespace CharlieMockLib
{
    public interface IPerson
    {
        string UserName { get; }
        int Age { get; }
    }

    public class PersonContainer
    {
        IPerson person;

        public PersonContainer(IPerson person)
        {
            this.person = person;
        }

        public String SayHello()
        {
            return "Hello " + person.UserName;
        }

        public String DescribeAge()
        {
            return person.UserName + " is " + person.Age + " years old.";
        }

    }
}

Be sure you understand what you are looking at when you view the code shown in Listing 1. This is the code that we want to test. The most important point is that, in your main program, it has a dependency on a GUI element, which in this case is called InputDialog. It is hard to unit test a GUI element such as a dialog, so we are working around that problem by creating a mock object and passing it in instead of the InputDialog. To make this possible, we have defined an interface called IPerson which is supported by both InputDialog and our mock object.

NOTE: From here on out, you need to have NUnit installed on your system in order to follow the code examples. NUnit is a free open source project.

Implementing the Mock Object

From the discussion in the previous sections, you can surmise that it would not be difficult to manually create a class that supports IPerson and would therefore act as a mock object that you can pass in to your data container. Though not difficult intellectually, performing tasks of this type can become a monotonous exercise. What the NUnit mock object classes do for you is make it easy to create a mock object. They take the pain out of the process.

By now, you are anxious to see the mock object itself. Begin by creating a new class library and adding it to the solution that you want to test. Add nunit.framework and nunit.mocks to the references section of your class library. If these two items do not appear in the Add Reference dialog, then you need to press the Browse button and browse to the place where you installed NUnit. You will find nunit.framework.dll and nunit.mocks.dll in the NUnit bin directory.

 

Adding the references to nunit.framework and nunit.mocks to your project. You can reach this dialog by right-clicking on the references section shown in Figure 05.

After you have added these two assemblies to your project, you should see them in Solution Explorer.

 

Viewing the references sections of your project in the Solution Explorer. Note that you can see both nunit.framework and nunit.mocks.

Now that you have added the libraries necessary to support NUnit, you are ready to write the code for creating a mock object. After all this build-up, you might expect this code to be fairly tricky. In fact, you will find that it is quite straightforward, as you can see in Listing 2.

Listing 2: The code for the mock object.

using System;

namespace MockObjectTest
{
    using System;

    namespace NUnitMockTest
    {
        using NUnit.Framework;
        using CharlieMockLib;
        using NUnit.Mocks;

        [TestFixture]
        public class NUnitMockTest
        {
            private const String TEST_NAME = "John Doe";

            public NUnitMockTest()
            {
            }

            [Test]
            public void TestPersonAge()
            {
                DynamicMock personMock = new DynamicMock(typeof(IPerson));
                PersonContainer personContainer =
                    new PersonContainer((IPerson)personMock.MockInstance);

                personMock.ExpectAndReturn("get_UserName", TEST_NAME);
                personMock.ExpectAndReturn("get_Age", 5);            

                Assert.AreEqual("John Doe is 5 years old.",
					personContainer.DescribeAge());
                personMock.Verify();
            }
        }
    }
}

The code uses nunit.framework and nunit.mocks. It also depends on CharlieMockLib, which is the namespace in which the PersonContainer shown in Listing 1 resides:

using NUnit.Framework;
using CharlieMockLib;
using NUnit.Mocks;  

You can see that the [TestFixture] and [Test] attributes are added to our code, just as they would be in any unit test.

The first, and most important, step in creating a mock object is to create an instance of the DynamicMock class. The NUnit DynamicMock class is a helper object that provides an easy way for us to "mock" up an implementation of the IPerson Interface. Here is an example of how to construct an instance of this class:

DynamicMock personMock = new DynamicMock(typeof(IPerson));

Notice that we pass in the type of the IPerson interface. We are asking the NUnit mock object implementation to create an object for us that will automatically and dynamically support the IPerson interface.

The next step is to retrieve an instance of our mock object from its factory and pass it in to the PersonContainer:

IPerson iPerson = (IPerson)personMock.MockInstance;
PersonContainer personContainer = new PersonContainer(iPerson);

If you want, you can save a little typing by doing this all on one line:

PersonContainer personContainer =
  new PersonContainer((IPerson)personMock.MockInstance);

Now we need to initialize the values for the two properties on the IPerson interface we have created:

private const String TEST_NAME = "John Doe";

personMock.ExpectAndReturn("get_UserName", TEST_NAME);
personMock.ExpectAndReturn("get_Age", 5);

Calls to ExpectAndReturn inform our mock object of the properties that we plan to call, and the values that we want our mock object to return. The first call informs our mock object that we plan to call the UserName property exactly once, and that we expect it to return the value John Doe. The second call to ExpectAndReturn does the same type of thing for the Age property. In terms of our whole project, you can think of these two lines as saying: "Pretend that the user popped up the InputDialog and entered the value John Doe for the user name, and the value 5 for the age." Of course, the input dialog is never used.

NOTE: I find it peculiar that NUnit wants us to pass in property names prefixed with get_. (Under the hood, the compiler turns a property getter into a method named get_PropertyName, and that is the name the mock intercepts.) Other implementations of mock objects do not require that you add the get_ prefix before calling a property.

The final step in this process is to run our actual test to see if our container properly handles input from our mocked-up instance of InputDialog:

Assert.AreEqual("John Doe is 5 years old.", personContainer.DescribeAge());
personMock.Verify();

As you can see, the PersonContainer calls each of these properties exactly one time:

public String DescribeAge()
{
  return person.UserName + " is " + person.Age + " years old.";
}

The call to Verify will fail if the expected calls to the UserName and Age properties were not made exactly as specified. Calling a property more times than expected will also cause a failure; this can happen if there is an error in your code, or even if you view one of the properties in the watch window of your debugger.

Summary

This article gave a (warning: oxymoron ahead) detailed overview of how to use mock objects. The majority of the article was dedicated to explaining why you would want to use mock objects, and in explaining how they can be used to solve a particular type of problem. The actual implementation of a mock object took up less than half of this article.

I should point out three important facts:

  1. Mock objects are not designed solely for solving the problem of testing the graphical user interface for an application. They are also used for mocking up database access, network access, or incomplete parts of large projects. Many developers, particularly in the XP tradition, use mock objects for all the secondary layers in their application. In other words, whenever one object in a program depends on another object from your program, then these hardcore mockers use mock objects.
  2. The NUnit mock objects are not the only solution for testing a graphical user interface. In particular, there are commercial products such as TypeMock that offer advanced facilities and greater ease of use. Furthermore, various tools, including TestComplete (a product from a company in which Falafel is a part owner), can also be used for testing user interfaces. Many of these commercial testing tools provide shortcuts that may be easier to use than the process shown here.
  3. As mentioned earlier in this article, the NUnit implementation of mock objects is lightweight. In particular, the release notes for NUnit state: "This facility is in no way a replacement for full-fledged mock frameworks such as NMock and is not expected to add significant features in upcoming releases. Its primary purpose is to support NUnit’s own tests. We wanted to do that without the need to choose a particular mock framework and without having to deal with versioning issues outside of NUnit itself." I feel compelled to add, however, that if the NUnit mock objects shown in this article meet your needs, there is no reason for you to upgrade to another tool.

Mock objects can play a very important role in unit tests. Hopefully this brief introduction to the topic gives you the information you need to use them in your own testing process.

Making Wrong Code Not Compile

The rest of the programming world is linking to Joel Spolsky’s latest post about the need for Hungarian notation. Joel makes a nice distinction between "Apps Hungarian" and "Systems Hungarian". The latter is the one we all know and hate, all lpszThis and dwThat. The former is more interesting in that it uses prefixes to describe the role of the data in the application and therefore what can be done to it and how it can be used.

But as far as I’m concerned, all this talk about Hungarian notation is just rubbish.

The essence of Joel’s argument is that you, the developer, become attuned to the prefixes and you notice when variables whose names use different prefixes are used inconsistently. You should read Joel’s post since I’ll be discussing his example; go ahead, read it now. I’ll wait.

Now this all sounds groovy baby, and indeed I imagine several developers have been swayed by Joel’s argument (and I know he can be very persuasive) and have suddenly decided to use "Apps Hungarian".

Well, I’m not swayed: I think it’s awful, a complete throwback to the 80s. Why? Because to me Joel’s argument is antithetical to modern object-oriented practices. In fact it just reeks of old-style C programming.

Consider again Joel’s example: given a string variable it’s hard to say whether its value is the original input from the user (that may contain spurious HTML tags) or the encoded value (where the angle brackets from the spurious HTML tags are converted to their character encodings). From this he proposes using prefixes for string variable names so that you can know whether the values are "safe" (i.e., encoded) or "unsafe" (i.e., raw, direct from the user).

Well to me a string is a string is a string. It’s just an array of characters, with no other structure or semantic meaning at all. That’s it. Period. It’s just, you know, a primitive type. If you want a string to have some other overlaid semantic meaning, such as safeness, then it is no longer a simple primitive string. It is a string with new behavior; it is a string with extra properties. Certain actions are allowed with this string, others are not.

And Joel’s argument is that we should implement this through a naming convention? Wow. To me, it sounds like a new type. A class. You know: something that encapsulates data, that enforces specific behavior on that data, that constrains what you can do with the data. Then the compiler can help you maintain type safety and behavior safety. Wow, using the compiler to ensure we don’t write bad code? What a concept.

So, off the top of my head (not saying this is how I’d really do it in a production application, your mileage may vary, etc.), I’d write a UserText class with a constructor that accepts the original string from the Request instance. There would be two methods, GetSafeText() and GetUnsafeText(), to return the two variants of the original string. There might be other methods as well: Store() and Load() to save and read the data from the database. Etc.
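
Here is a minimal sketch of that idea, assuming ASP.NET’s HttpUtility for the encoding; the details are invented for illustration rather than lifted from Joel’s post, and Store() and Load() are omitted:

using System;
using System.Web; // HttpUtility lives here; reference System.Web.dll

public class UserText
{
    private readonly string rawText;

    // Construct from the original, unencoded string, e.g. straight
    // from the Request instance.
    public UserText(string rawText)
    {
        this.rawText = rawText;
    }

    // The "safe" variant: angle brackets and friends are HTML-encoded.
    public string GetSafeText()
    {
        return HttpUtility.HtmlEncode(rawText);
    }

    // The "unsafe" variant: the raw text, exactly as the user typed it.
    public string GetUnsafeText()
    {
        return rawText;
    }
}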

Think I’m talking rubbish? Look at the Uri class in the .NET Framework. See what I’m getting at? Joel would have you prefix string variable names and have your eyeballs enforce URL type safety. The Framework designers didn’t take that naïve solution and instead gave us a class with certain behaviors, and in using this class the compiler forces us to use URLs properly. That’s just — shock, horror — so type-safe.
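
To see that principle in a small sketch: malformed data never even becomes a Uri, so no naming convention is needed to track which strings hold valid URLs.

using System;

class UriDemo
{
    static void Main()
    {
        // A well-formed URL is parsed into typed, queryable parts.
        Uri uri = new Uri("http://www.example.com/index.html");
        Console.WriteLine(uri.Host);   // www.example.com
        Console.WriteLine(uri.Scheme); // http

        // A malformed URL never becomes a Uri at all.
        try
        {
            Uri bad = new Uri("not a url");
        }
        catch (UriFormatException e)
        {
            Console.WriteLine(e.Message);
        }
    }
}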

In fact, I’d have to say that if you have "primitive data" that has other attributes or properties, or that is constrained in some way, then it should be an object, an instance of a class that you write to enforce the constraints, etc. Let the compiler take the heavy load of making sure you use the data properly, not some wacky naming convention.

Sounds like a plan to me.

Advice for the New Delphi Marketing Guy

An open letter to the new Delphi Marketing Guy:

I am glad to hear that there is a fresh face tasked with the difficult job of marketing Delphi. I’m glad because every time there is a new marketing person, it represents an opportunity to radically change the way Delphi is marketed. From reading your web site I must say that I am really encouraged. You appear to be far more technically savvy than your predecessors have been, and you clearly have a “Developer Relations” bent. That’s great. Your Zamples site is terrific. Here’s hoping you “sound” like a developer and not a marketeer!

One of the first things I am sure you will discover is that, right or wrong, many folks consider “Delphi Marketing” to be an oxymoron. You probably are making it one of your top priorities to change this state of affairs. In fact, if after twelve to eighteen months on the job, the only thing you feel you’ve accomplished is that the Delphi community no longer holds this attitude, I would say that you will have been a roaring success and will deserve a huge raise. Simply changing that one perception would be a huge step forward.

Now, I’m not a marketing guy. I admit it. I’ve never taken a marketing class, and I’ve never had a marketing job. But I do know what I like when I am marketed to and I have been hanging around the Delphi community for ten years. I’m in the business of selling Delphi and Delphi services, so I have seen a thing or two over the years. As a result, I do have some humbly-offered advice for you:

  1. Get a copy of The ClueTrain Manifesto. Buy it. Read it. Live it. Be it. In my view, the very first thing you need to do is to bring Delphi marketing into the 21st century by realizing that “Markets are Conversations”. The Internet has transformed the way marketing is done, and I must say I don’t think that in the past, the folks doing Delphi marketing have realized this. It seems that all Delphi marketing has been done in the classic “Sell Tide on the Soap Operas” mode, with Marketing 101 textbook techniques and horribly over-controlled “marketing campaigns.” That’s not the way it gets done anymore. Most of what follows here flows from the basic concepts in that book.

  2. When you get done with the ClueTrain, read everything Guy Kawasaki has written. Guy Kawasaki knows all about marketing technology in the technology age. One of Delphi’s greatest strengths is the community of developers who believe very passionately in Delphi as a tool, as a language and as a product. Guy knows how to harness these folks, and you’d do well to try to do the same.

  3. Walk the halls where the Delphi team works and read the Dilbert cartoons posted there. Scott Adams is a genius. I’m a firm believer that anybody can get the pulse of an organization and the ills that afflict it by reading the Dilbert cartoons posted on people’s office doors and in their cubicles. Wandering the halls and reading the Dilbert strips posted there will be one of the best ways for you to find out what the team thinks about the problems and issues with the product and the company.

  4. Post to your blog two or three times a week. The fact that one of the first things you did on the job is to set up a blog and invite a conversation is extremely encouraging. That is really cool. Now, the trick is to stick with it. Too many blogs at http://blogs.borland.com are pretty much dark. Post what you are doing. Post where you go, the conversations you have with other Borlanders, with customers, with the execs. If you are doing market research, post about it. You don’t have to post the results, just post what you are interested in, where you are looking for information. Ask your customers questions in your blog and then respond to their comments. Get other Delphi team members to blog more. Talk about your boat, your life, funny stuff that happens at work, whatever. But just keep posting.

  5. Don’t sound like a marketing guy. I think that much of what Borland is doing with the SDO strategy is really cool. However, a lot of it sounds like marketing, not like straight talk. I’ve read it carefully, and I’m not even sure I know what it means. However, the talk that Boz Elloy gave at Borcon, particularly the skit done by the Sales Engineers, was much better. It was clear, concise, and delightfully devoid of marketing-speak. I think that Boz’s talk was so effective because he realized he was talking to developers. There’s a reason that marketing guys are such ripe targets for Dilbert cartoons. If you sound like a marketing guy, people will tune you out. Normal, rational people can’t understand the language spoken by marketeers. “Process” and “paradigm” and “maximizing” and all that stuff needs to be banned. Converse, don’t “market”.

  6. Be an active newsgroup participant. Put on your asbestos suit and start posting in the newsgroups. Clearly label yourself as the Delphi marketing guy. Start out by being adamant that you won’t discuss the past, as that is gone forever. Insist that you only want to talk about the future. You’ll be flamed and berated. You will be inundated with tons of input, flames, comments, insight, advice, and even total nonsense from all of us arm-chair marketers. But these guys and gals that are hollering at you are the heart and soul of Delphi. You must have a thick skin and listen to them. Converse with them. Talk to them. Inform them. Get to know them. They are your soldiers, your eyes and ears in places you can never be. They love Delphi. They want to spread the good news of Delphi. Be there for them to help them do that.

  7. Join the fight for more money, resources and freedom for the Borland Developer Network. BDN is utterly essential for Borland and Delphi’s success, but I sometimes get the feeling that no one outside of Developer Relations realizes this. BDN is a huge, yet totally under-utilized marketing tool. Developers need resources, code, examples, articles, support and more. Having all of that in abundance on BDN makes every Delphi sale that much easier. The Developer Relations guys do heroic, MacGyver-like work in providing content on the site with a shoestring budget, masking tape, baling wire and some glue. They need more and better resources to get the job done. They need more freedom to publish content without the lawyers breathing down their necks. They need strong, clear support at the highest levels. You can help them get that, and get a great marketing tool in return.

  8. Go after disaffected Visual Basic programmers. You want a rich, ripe market for Delphi? A fecund field ready for harvest? Go after the rather large group of Visual Basic programmers who are quite unhappy about what Microsoft is doing with Visual Basic. Don’t know what is going on? Give this a read and get a feel for what is going on. Remind these folks that Borland has a twenty year legacy of not doing exactly what Microsoft is doing to them. These guys are ready for the plucking. Go for it.

It’s really, really hard for open letters like this one not to sound smug, and I’ve tried hard not to be smug, but I suspect that I’ve failed. Please forgive that. All of this is probably no more than the delusions of a chuckle-headed Delphi programmer, so maybe you should treat it that way. But maybe there are some good nuggets of truth in there that might work and make the words “Delphi Marketing” roll a little more smoothly off the tongue of the average Delphi developer.

Two Essay Collections for the Price of One

One of my business partners gave me a Barnes&Noble gift card for Christmas. Retailers love these gift cards. First, they get to hold onto the money that the gift card represents without the loss of product while the gift card holder leaves the gift card sitting on his desk for a few months as I have. Second, I think I read somewhere that something like fifty percent of all gift cards never even get redeemed, and of course that is pretty much free money. Gotta love free money.

Anyway, I did leave that gift card sitting on my desk for a few weeks while Barnes&Noble racked up interest on the cash it represented. Then, while I was cruising around the Internet as we all are wont to do, I ran across this essay about what I wish I’d known in high school, written by a guy named Paul Graham. It was marvelous, and I poked around the rest of his website and found a lot of good writing. I also found that he had recently published a collection of his essays in a book called Hackers & Painters. Well, that gift card was sitting there, and so after a quick trip to the local Barnes&Noble, I was the proud owner of a copy of Mr. Graham’s book. And then I read it. And now I’m going to write about it.

First, let me start by saying that Graham is clearly a super-smart guy and a very savvy businessman. He basically invented the concept of the online e-commerce site, and ended up selling his company to Yahoo for what I am sure was a sum that left him free to do whatever he pleases with his life, like, say, writing and publishing essays. He is the guy who invented the Bayesian filtering technique that most of us now use to help get rid of spam emails. He appears not only to be a brilliant programmer, but also an accomplished artist, and the title of his book (it is the title of one of the essays in the book as well) reflects his two disciplines. Graham writes beautifully and thinks clearly, and his essays are a pleasure to read, even when you don’t agree with him.

This collection of essays can really be divided into two parts. The first nine essays cover the general topic of “interesting thoughts about the technological age that we live in.” The first, “Why Nerds are Unpopular,” will resonate strongly with anyone who was “brainy” or interested in technology back in high school. Others, like “Hackers and Painters” and “Good Bad Attitude,” discuss the notion of “hackers” and the role they play in society and technology. For Graham, a hacker is someone who makes things (much like painters, he points out) and who can make a computer do what he wants. He devotes quite a bit of space to discussing who hackers are, what they do, and why they are the way they are.

In this first section, he also gives a refreshingly insightful view into economics, and why free societies are so productive and innovative. If you want to understand what wealth is, and how to create it, read “How to Make Wealth”. You won’t find a better discussion of the topic in any economics textbook. The refreshing part comes when Graham argues the obvious point that wealth only gets created when people are able to keep for themselves the fruits of their work. He points out that the true benefit of allowing people to become rich is not that they will offer you a job (as is commonly pointed out), but that the rich person is far more likely to sell you things like cars and tractors and computers – things that make us all better off. If people aren’t allowed to keep the riches that flow from hard work and innovative thinking, then you end up waiting in line for bread every day – a point he makes clear in “Mind the Gap”.

It’s all wonderful stuff in the first half. But then we get into the second half, and well, it, uhm, well, it bugged me. The second half can be summed up as the “LISP is the holy grail of programming languages, and if you aren’t doing LISP then you are an idiot” section. At least that is how it came off to me.

Let me say that I don’t know thing one about LISP, other than it uses a lot of parentheses. The only LISP code that I’ve ever seen is the code Graham puts in his book. I’m sure that LISP is a powerful and useful language with lots of neat features. But I’m also quite sure that LISP can’t and doesn’t do a lot of the things that need to be done by us programming geeks. I’ve managed to accomplish quite a bit over the years without LISP, as I am sure many of you have. This is a fact that appears to have escaped Graham.

Graham is so big on LISP because he believes that his use of LISP for building his e-commerce site was the thing that kept him ahead of his competition. LISP apparently allowed Graham and his partner to implement new features more quickly and to match the features of the competition almost before the competition shipped those announced new features. I have no doubt that this is true. However, I have to question whether Graham’s success was due to LISP, or to the fact that he and his partner were just, well, smarter and more diligent than the competition. It seems to me that a better explanation is that they would have been able to do what they did no matter what language they had chosen.

The last six essays in the book are all dedicated to programming languages, including discussions about what a programming language is, what good ones and bad ones are like, and what the “dream” programming language would be like. Not surprisingly, the dream language looks a lot like LISP, and a “good” programming language has a lot in common with – surprise! — LISP.

For instance, Graham argues that LISP is so cool because it has an elegant solution to a problem that he puts forward: “...write a function that generates accumulators – a function that takes a number n, and returns a function that takes another number i.” Well, when I read that, I confess the first thing that pops into my mind is “Write a function that returns a function? Why in the world would I even want to do that?” Now, there is probably a very good reason for wanting to do that if you write LISP, but if you don’t, why would you want to? Discussion follows about how languages other than LISP can’t do this elegantly, or can’t do it at all. And of course, at the end of all this, I’m thinking “who cares?” Graham has provided a useless problem that LISP apparently solves well, and then denigrates all other languages for not being LISP. I found the whole discussion irritating. I admit that I’m certainly missing something in the problem, but the fact that I’ve never needed to do what is proposed is quite telling to me.
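
For the curious, here is roughly what Graham’s accumulator challenge amounts to, sketched as a closure in modern C# (my translation, not Graham’s code):

using System;

class AccumulatorDemo
{
    // Takes a number n and returns a function that takes another
    // number i, adding each successive i to a running total.
    static Func<int, int> MakeAccumulator(int n)
    {
        int total = n;
        return i => total += i;
    }

    static void Main()
    {
        Func<int, int> acc = MakeAccumulator(10);
        Console.WriteLine(acc(5)); // 15
        Console.WriteLine(acc(5)); // 20
    }
}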

Of course, as a Delphi programmer, the fact that he wrote this sentence — “At the other end of the continuum are languages like Pascal, models of propriety that are good for teaching and not much else.” — completely undermines any argument he might make. Say what you will about Pascal, but it is good for a lot more than teaching, and the fact that Graham doesn’t know that certainly limits his ability to discuss the utility of the various programming languages out there.

Nevertheless, despite the LISP love-fest, Hackers & Painters is a stimulating and thought-provoking book. When discussing the economics and sociology of the computer age, this book is wonderful. When telling us all how great LISP is, it drags and gets preachy. Graham is a gifted writer, and I have subscribed to his RSS feed in order to keep up with his writings. (Any way to filter RSS entries for the word LISP?) I look forward to hearing more from Mr. Graham on the topics of economics and the world of geeks and hackers. I’ll probably skip the essays on the wonders of LISP.

A New Way to Sell Delphi

In my last article, I wrote about selling development services. And of course, this got me thinking. (What doesn’t get me thinking is probably what you are asking about right now.) Anyway, it got me thinking about how to sell shrink-wrapped software. (A disclaimer first: I’ve never been involved with the production and sale of a shrink-wrapped product, but I have bought a lot of shrink-wrapped software. Plus, I stayed in a Holiday Inn last night.) Selling software is an interesting endeavor. I think that the way it is sold needs to change, and that this change is being driven, like so many other things, by the way the Internet affects everything.

(Another aside – what do I mean by “shrink-wrapped software?” I mean any software that you market and sell to individual customers. It need not actually be delivered in a box; it can be sold purely over the Internet, for instance.)

Selling shrink-wrapped software is hard. It’s tough figuring out how to price your product. Deciding what license to use, how to collect the money, how to deliver the product, what to deliver in the product, how to market it, and whom to market it to are all difficult, challenging decisions. None of it is easy.

So I was going to make up some fictional company to show what thoughts I’ve come up with, but I’m not going to do that. I’m going to talk about an idea I have for Borland. Now, I’m no expert on this, but I do see an obscene amount of customer feedback on the newsgroups, and I am a customer myself, so I’m not totally pulling this out of .. uhm, thin air. Yeah, thin air.

In general, Borland sells their products as if they were cars. Every year or so they produce a new version, have a big announcement, and go on a marketing blitz to make people aware of the new version. Frankly, I think this is an outdated and outmoded way to sell software. I also think it isn’t really what customers of software development tools want.

As a general rule, I think that if you are in the technology business, and you have been doing something the same way that you did before the Internet came to the fore, then you need to rethink the way you are doing it. How Borland sells software is a good example. Borland is selling software the same way they did before the Internet changed things. They are selling Delphi the same way they did before ubiquitous newsgroups made communication between users easy and copious, before eCommerce became the norm, before blogs made putting out information to customers a piece of cake, and before the Cluetrain Manifesto discussed and made plain the need to change. Or, put another way, I think it is time that Borland rethinks how they sell software.

In addition, I think that Borland faces a unique business dilemma. Selling software development tools isn’t like selling other software. To a large degree, Borland’s customers rely on Borland products for their livelihoods. Whether it’s a consultant, a small development shop, or a large corporation, committing to a development tool is an important decision involving a lot of time and money. It’s fairly painless to change word processors. Changing development tools is a huge commitment. Therefore, Borland needs to sell their software in a way that recognizes this unique relationship that they have with their customers.

Now, let me be clear — I’m not trying to tell Borland what to do; I am merely making some suggestions, offering some food for thought, tossing out some cud for them to chew. I don’t have nearly enough information about Borland’s business to even begin to think that I could run Borland better than it is being run. I’ve frequently said that people shouldn’t try to claim that they know more or better than Borland unless they really do, and I’m not claiming that at all. That said, I do think I have some interesting ideas that they ought to consider. I am a customer, and I know what I want, and I see a lot of comments by customers and I think I know what they want.

Okay, here’s what I think: Selling software like cars is old-fashioned and needs to be changed. (Shoot, the way we sell cars in this country is nuts, but that’s another article…) Anyway, I think that the idea that versions of Delphi ought to be discrete, distinct events separated by time measured in years ought to end. Borland should consider the idea of selling Delphi only as a subscription – sell nothing but Software Assurance. Customers could purchase variable lengths of service, getting discounts for longer commitments. They could renew at any time. Prices could be adjusted to ensure revenue streams aren’t altered much by this change. Customers could even pay a larger fee for individual updates.

Then, the really big change: Borland should plan and release quarterly updates to the product. These updates should include bug fixes, incremental improvements to existing features, and new features. Quarterly releases could be a goal rather than a firm target. New versions could be released as builds and features stabilize. New features could be implemented one or two at a time. The frenzy of producing, marketing, and selling a single release would be replaced with the task of selling a product as a concept and a commitment.

This is a win/win scenario. Customers would love this. They’d be getting frequent updates with frequent fixes to problems. They’d be getting a steady stream of new features, reducing the learning curve for any individual release. Bugs would be quickly fixed. The company’s commitment to Delphi would be clear, and customers really like clear commitment to products that they buy.

Borland would love it because their revenue streams would be smoother and steadier. The pressure on the R&D team would be lowered, as they would no longer be trapped in the frenetic cycle of pushing for a big release. Smaller, incremental releases allow for more flexibility and a steadier, more deliberate release schedule. The push to finish any particular release in a specific quarter would go away, because there would be a steady revenue stream. Features and fixes could be allowed to “stew in the pot” for the right amount of time, because the pressure to release any particular feature at a specific point in time would be eliminated.

This change is needed because the Internet makes things move too fast. Long spans of time between releases of a product are not conducive to customer loyalty and satisfaction. The ease of distributing even large software packages has made the marketplace more demanding of frequent updates. Problems in software are made readily apparent to large swaths of a customer base because of the instantaneous communications possible on the web. Since news spreads quickly on the Internet, Borland needs to be able to respond just as quickly. Features get announced as vaporware, and Borland needs a vehicle to respond more quickly to such announcements. Everything is moving faster, and Borland needs to be able to move as fast as the folks in the left lane.

The time has come for a change in the way that shrink-wrapped software is sold. Making a commitment to a product and to customers by providing a steady, regular update to a tool is what customers desire. This is especially true for customers of development tools.

Gambas: A Fast Visual Basic for Linux

Gambas 1.0 has shipped. Gambas is a free, open source, Visual Basic-like development environment for Linux. It has a built in visual designer, a built in debugger, components, a properties window (object inspector), and code insight. You can currently access MySQL and PostgreSQL databases from Gambas programs.

Gambas is not source compatible with Visual Basic. Instead, it contains an improved, rearchitected version of the Basic language. There is definitely a good deal of C++ code in the source for the project, but the IDE itself was written in Gambas, just as Delphi was written in Delphi. This is undoubtedly one of the reasons that the product is so good. The developers used Gambas to build and debug Gambas, and therefore took the time to make sure it was clean and functional.

This article is a very preliminary review of the product. It is a first look, describing my immediate impressions upon installing and loading Gambas.

A Delphi-Like IDE

The Gambas IDE has free floating windows like the original Delphi, and unlike Visual Studio. There is a green run button just like in Delphi, and you can compile the project to a Linux executable in one step.

Performance: Lightning Fast

Gambas is fast and responsive. On my aging 1700 MHz Fedora Core 2 system, I would estimate that response time in the IDE is roughly equivalent to Delphi 1 or 2. There is a barely perceptible lag between the time I push the green run button and the time when the compiled program first appears, but it is well under a second. If I put a breakpoint on the first line of code in a button response method, I can sense that there is a lag before I hit the breakpoint, but it is too brief to measure: not quite instant, but close.

A color coded version of CodeInsight appears instantly when you need it. If I type in a variable, such as ListBox1, then type a period, the list of methods on the object appears as quickly as my machine can redraw. CodeInsight picks up on new methods that I add to my main class instantly, without me having to recompile the code. For instance, if I add a method called Foo to my main class, then Gambas sees it immediately when I type the period after the word me. (me plays the same role in Basic as the words this or self do in Java, C++ and Delphi. It is a way of referencing the current object.)


Event Model

Gambas has an event model very similar to that found in Delphi or VB 6. You can access the list of built in events for a component by right clicking on it. For instance, if you right click on a button, you can choose Events from the popup menu, and then select one of 16 events to automatically create the wrapper code for your event. Selecting the DblClick event creates the following code:

PUBLIC SUB Button1_DblClick()
END

Other events you can create on a button include: Click, Drag, DragMove, Drop, Enter, GotFocus, KeyPress, KeyRelease, Leave, LostFocus, Menu, MouseDown, MouseMove, MouseUp and MouseWheel.


Component Model

Gambas has a simple toolbox, containing about 25 components. You can double click the icon for any of these components in the ToolBox to make an instance of the component appear in the upper left hand corner of the currently selected form.

Gambas comes with the following built in components: Label, Image, TextLabel, ProgressBar, Button, CheckBox, RadioButton, ToggleButton, TextBox, ComboBox, TextArea, ListBox, ListView, TreeView, IconView, GridView, ColumnView, Frame, Panel, TabStrip, ScrollView, DrawingArea, Timer, GambasEditor, LCDNumber, Dial, SpinBox, ScrollBar, Slider, TableView, Splitter, Workspace.

I did not detect any context sensitive help, but pressing F1 brought up the help file in less than one second. I was then able to search on the name of my currently selected component to get very minimal, but complete, hyperlink style help. For instance, if I typed in the word Button, I got a list of the properties, methods and events on the button. If I clicked on the name of any of the events or methods, I was taken to a short description of that event or method. The declaration for the item was also listed in the help pane.

The components in Gambas appear to be based on the Qt library. Since Gambas ships with source and runs on Linux, there will be no need for a complex, Kylix-like license with Trolltech, and a simple recompile of Gambas itself will link you to the most recent Qt library.

You can create your own components in Gambas, but in this first version, you must write them in C or C++. The components must be developed in the Gambas source tree, and at least part of Gambas itself must be recompiled in order to integrate your component into the IDE. The object model for these components looked reasonable on first glance, but it is of course a major disappointment to find that one can’t create them in Basic. This was easily the most disappointing find in my first look at Gambas. However, the second release of Gambas is scheduled to support native components built in Basic.

Regular Expressions, Movies, and other Miscellaneous Features

A quick perusal of the help files showed that there are various other components and tools that ship with Gambas. I found tools for adding scripting, regular expressions, and multimedia movies to your applications. There were Internet components for creating sockets, working with serial ports, and querying DNS servers. A compression library was also built into the Gambas tool chest.

I have not used, or tried to use, any of these advanced components. In many cases, these advanced components, such as the movie tool, appear to be wrappers around QT components. I hope to come back and revisit this subject in future articles.

The product comes with various sample programs. Most of them compiled immediately after installation with no fuss or extra effort on my part. A few of them accessed components which I had not installed yet, but they popped up a clear explanation of the error, and the IDE handled the exception flawlessly. I accessed the examples from the File | Open Example menu choice.


Installation

Gambas comes with source. There are binary releases you can download for many of the major distributions. However, I just downloaded the source for the project in a tar ball. I then typed the following three lines of code to compile and install the project:

./configure
make
su -c "make install"

After completing these steps, I launched Gambas by typing the following command:

gambas

If you are familiar with Linux, you should have no trouble installing Gambas using the method I show here. I would say it took me about ten minutes to download and install Gambas. If you are new to Linux, or a very occasional Linux user, then you should look carefully at the Gambas download page and see if you can find a way to install Gambas using the binary installation tools.

Summary

I don’t think I ever started actually crying during the first few minutes in which I used Gambas. However, my eyes did sting a bit, and there was a funny churning sensation in my heart and stomach areas.

This is, of course, what we had all hoped for when Borland announced a Linux version of Delphi. Rather than indulging in yet another Kylix postmortem, I will simply say that at first glance this appears to be an extraordinary win for the open source movement in general, and the SourceForge community in particular.

This project was apparently developed primarily by a single individual, Benoît Minisini. There were others involved in the project, but Benoît was definitely the chief architect and creator of the majority of the source code for the tool. He has outlined an ambitious future for Gambas.

I do not know how well Gambas is going to hold up under careful scrutiny, nor how well the IDE holds up when large Gambas projects are loaded into it. I have read that Gambas has database connectivity, and that it is an important and central part of the product, but I have not yet tested that feature.

There are some shadows in this otherwise bright picture. Gambas uses Qt, so there is no cost to distribute free GUI-based applications written in Gambas, but if you want to sell a product written in Gambas, you must talk to Trolltech. Having to build components in C or C++ is definitely less than ideal, and I look forward to seeing the Basic component model promised in version 2.0. Some people might find the free floating, Delphi 7 style windows to be less than ideal. But considering that this is a free tool, one can’t help but consider these minor drawbacks.

Regular CodeFez readers will know that I am a big supporter of the open source movement. Yet frankly, this project caught me a little off guard. I simply was not expecting anything quite this promising. Despite all my enthusiasm for open source, I still tend to underestimate the power of the movement. It frequently exceeds my expectations. And that, of course, was the sensation that I used to get back in the mid-nineties, when Delphi was hot.

I know that finding acceptance for a Visual Basic-like IDE on Linux will be an uphill battle. However, for Windows developers who want to move to Linux, this could well be a major find. If it can live up to expectations, here at last is an environment that brings the promise of true visual development to Linux.

J2EE Strategies Part III

Input Validation

This is Part III in a four-part series on J2EE design.

Input validation is always required, even in simple web applications. The question is where to do it: on the client or on the server? Each has advantages. Client-side validation is nice because it provides instant feedback for the user. However, the downside to client validation is that it must be written in a scripting language, typically JavaScript. Scripting languages are good for very small scripts (hence the name), but don’t scale well to serious development. They don’t have good variable scoping, strong types, or a sufficient object model.

Server-side validation allows you to use the full facilities of the Java language, which is of course quite robust. However, to do server-side validation requires a round trip to the server, meaning that you lose the nice instant gratification of client-side validation. Which should you use?

Client-side Validation

Because client-side validation must rely on scripting languages (whose shortcomings are listed above), it should always be thought of as part of the view of the application, not the model. In other words, you should never embed any business logic in JavaScript. It is fine for simple validations like "Must be all characters" or "Must be in phone number format". However, it should never be used for logic such as "No customer can have a credit limit over $1000". This is a business rule because it has more to do with why you are writing the application than how you are writing it. These business rules are prone to change and therefore should be consolidated in model classes. They should never be scattered throughout the presentation layer of your application in JavaScript!

Server-side Validation

All business-level validations should be handled by model JavaBeans (or Enterprise JavaBeans) on the server. By placing the code in a central, logical location, you create a much more maintainable web application. Of course, this means that you frequently have a 2 tiered approach to validation: the client handles formatting and simple validations while the server code does the business level validation.

Forms and Validation with Struts

Again, Struts provides a nice illustration of how to design server-side validation in a flexible way. One of the common chores in web applications is the handling of forms. The JSP mechanism of automatically populating the fields of a Java bean with form post parameters goes a long way toward reducing the tedious code that would otherwise have to be written. Struts has built on top of this mechanism to provide an easy way to handle populating beans in wizard style interfaces and handling validation.

When creating a Struts form, you have the option of creating a Struts form bean. This is a regular Java bean with typical accessors and mutators for the fields of the bean (and very little other code) that subclasses the org.apache.struts.action.ActionForm base class. Note that these beans should not connect to a database to populate themselves or provide a lot of other infrastructure code. They should be very lightweight classes that basically encapsulate entity information. Other beans should handle persistence and other duties. In fact, it is typical to have one bean that acts as a collection of the entities represented by a Struts form bean (like the ScheduleBean used in the Action class discussed above).
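To make this concrete, here is a sketch of what such a form bean might look like, using the schedule.ScheduleItem class named in the configuration (the field names are inferred from Listings 10 and 11; treat the details as illustrative rather than definitive):

package schedule;

import org.apache.struts.action.ActionForm;

// A lightweight form bean: fields plus accessors and mutators, and
// deliberately little else. Persistence belongs in other beans.
public class ScheduleItem extends ActionForm {
    private int duration;
    private String eventType;
    private String start;
    private String text;

    public int getDuration() { return duration; }
    public void setDuration(int duration) { this.duration = duration; }
    public String getEventType() { return eventType; }
    public void setEventType(String eventType) { this.eventType = eventType; }
    public String getStart() { return start; }
    public void setStart(String start) { this.start = start; }
    public String getText() { return text; }
    public void setText(String text) { this.text = text; }
}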

One of the additional sections in the struts-config XML document allows you to declare a class as a form-bean. Look at the top of the sample struts-config document above. A form-bean declaration allows you to specify a name for the form bean and map that to a specific class (in this case, schedule.ScheduleItem). In the action-mapping section of the config file, you can automatically associate a form bean with an action. Notice the "add" action in the config file. The additional "name" parameter allows you to specify a form bean to use with that action (a sketch of both declarations appears after the list below). Once you have a form bean associated with an action, Struts will perform the following services for you before invoking the action method:

  • Check the user’s session for an instance of the bean under the name specified in the struts-config file. If one doesn’t yet exist, Struts creates one and adds it to the user’s session
  • For every request parameter that matches one of the setXXX methods of the bean, the appropriate set method will be called
  • The updated ActionForm bean is passed to the Action as a parameter
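For reference, the two declarations described above might look roughly like this in struts-config.xml (the Action class name and input page are my placeholders; the form-bean/name wiring is the point):

<form-beans>
  <form-bean name="scheduleItem"
             type="schedule.ScheduleItem"/>
</form-beans>

<action-mappings>
  <action path="/add"
          type="schedule.AddAction"
          name="scheduleItem"
          validate="true"
          input="/scheduleEntryView.jsp"/>
</action-mappings>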

These services are similar to the standard JSP behavior of handling request parameters that map to fields of a JavaBean. However, Struts performs more services for you. Struts also comes with a collection of custom JSP tags, split into four categories. One of the categories allows you to replace standard HTML tags for input elements with "smarter" Struts tags. If you use the Struts HTML tags, Struts will also automatically populate the input fields on the form from the ActionForm whenever the page is visited. This makes it really easy to handle wizard style interfaces. Notice that an ActionForm bean doesn’t have to correspond to a single page. More typically, it corresponds to a single set of user information. So, you can have an ActionForm that spans multiple pages. Using the Struts HTML tags, the input fields the user has already filled in will be automatically populated as the user moves back and forth between the pages of the wizard. For an example of a JSP that uses the custom tags, see the following listing.

Listing 10: An HTML form using custom Struts tags


<%@ taglib uri="/WEB-INF/struts-html.tld"
    prefix="html" %>
<%@ taglib uri="/WEB-INF/struts-bean.tld"
    prefix="bean" %>
<%@ taglib uri="/WEB-INF/struts.tld"
    prefix="struts" %>
<jsp:useBean id="scheduleItem" scope="request"
             class="schedule.ScheduleItem" />
<jsp:useBean id="scheduleBean" scope="page"
             class="schedule.ScheduleBean" />
<% pageContext.setAttribute("eventTypes",
 scheduleBean.getEventTypes()); %>
<HTML>
<HEAD>
<TITLE>
ScheduleEntryView
</TITLE>
</HEAD>
<BODY>
<H1>
Add Schedule Item
</H1>
<hr>
<html:errors/>
<html:form action="add.do">
<table border="0" width="30%" align="left">
  <tr>
    <th align="right">
      <struts:message key="prompt.duration"/>
    </th>
    <td align="left">
      <html:text property="duration"
                     size="16"/>
    </td>
  </tr>
  <tr>
    <th align="right">
      <struts:message key="prompt.eventType"/>
    </th>
    <td align="left">
      <html:select property="eventType">
        <html:options collection="eventTypes"
                 property="value"
                 labelProperty="label"/>
      </html:select>
    </td>
  </tr>
  <tr>
    <th align="right">
      <struts:message key="prompt.start"/>
    </th>
    <td align="left">
      <html:text property="start"
                         size="16"/>
    </td>
  </tr>
  <tr>
    <th align="right">
      <struts:message key="prompt.text"/>
    </th>
    <td align="left">
      <html:text property="text"
                         size="16"/>
    </td>
  </tr>
  <tr>
    <td align="right">
      <struts:submit>
        <bean:message key="button.submit"/>
      </struts:submit>
    </td>
    <td align="right">
      <html:reset>
        <bean:message key="button.reset"/>
      </html:reset>
    </td>
  </tr>
</table>
</html:form>
</BODY>
</HTML>

This listing shows some other features of Struts tags as well. One of the automatic features of Struts is form validation. The struts-config file allows you to flag an action associated with a form bean to enable validations. This assumes that you have added a validation method to your form bean class. A sample validation method is shown here.

Listing 11: The validate() method of a form bean.

public ActionErrors validate(
        ActionMapping actionMapping,
        HttpServletRequest request) {
    ActionErrors ae = new ActionErrors();
    if (duration < 0 || duration > 31) {
        ae.add("duration", new ActionError(
           "error.invalid.duration", "8"));
    }
    if (text == null || text.length() < 1) {
        ae.add("event text",
            new ActionError("error.no.text"));
    }
    return ae;
}

The validate() method is automatically called by the Struts controller after the form bean fields have been populated, but before any of the code in the Action is executed. If the validate() method returns either null or an empty ActionErrors collection, processing of the Action continues normally. If errors have been returned from validate(), Struts will automatically return the user to the input form, repopulate the fields from the form bean, and print out a list of the reported errors at the top of the page. For an example of this, see the figure below. This is a Struts form where the user has put in a negative duration and left the Text field blank. The errors listed at the top are automatically generated via the validate() method and the <html:errors/> tag at the top of the file (the position of this tag determines where the errors will appear on the page).

You will also notice that the error messages returned by the validate method aren’t the messages that appear on the page. The strings added in the ActionError constructor map to messages in a java.util.Properties file. This properties file is automatically referenced by Struts (it is one of the parameters of the ActionServlet), and allows for easy separation of the messages in the application from the code. This means that the text of a message can be easily changed without recompiling the application. This is also how Struts handles internationalization. Struts can be set up to look for a properties file that matches the locale encoding of the request to automatically provide text messages in the appropriate language. This is another service provided by the HTML tags. Notice in Listing 10 that the input prompts reference fields in the properties file.
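A fragment of such a properties file, using the keys from Listings 10 and 11, might look like this (the message text itself is mine):

prompt.duration=Duration
prompt.eventType=Event Type
prompt.start=Start
prompt.text=Text
button.submit=Submit
button.reset=Reset
error.invalid.duration=Duration must be between 0 and {0}
error.no.text=You must supply a text description for the event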

If You Must Do Client-side Validation for Business Rules…

There are some web applications that simply must provide the instant feedback that you can only get with JavaScript and client-side validation. In those cases, there is a solution. Because you must embed the business rules in JavaScript, you should create a method on your model bean that outputs the necessary JavaScript for the validation. This way, the model bean is still responsible for the validation; it simply delegates the delivery of it to the presentation layer. This still leaves the validation code itself in the business layer, where it can be easily changed.

Here is a simple example: a model class named Order contains the JavaScript validation for ensuring that a credit card field has a numeric value in it.

Listing 12: Model class that contains JavaScript to validate entry

public class Order implements Serializable {
    private static final String JS_CC_FORM_VALIDATION =
        "<script>" +
        "   function verify(e) {" +
        "       if (e.value == null || isNaN(parseInt(e.value))) {" +
        "           alert('Field must be numeric');" +
        "           e.focus();" +
        "           return false;" +
        "       }" +
        "   }" +
        "</script>";
    public String getFormValidationForCC() {
        return JS_CC_FORM_VALIDATION;
    }
    //... more methods follow
}

To embed this code in the view, you can use the typical JSP tags to generate the code into the view.

Listing 13: JSP code that pulls JavaScript from the model for validations

<%-- get JavaScript validation code from Order class --%>
<jsp:getProperty name="order" property="formValidationForCC" />
<form action="CheckOut" method="post"
      onSubmit="return verify(this.ccExp)">
 Credit Card # <input type="text" name="ccNum">
 Credit Card Type <select name="ccType">
  <option value="Visa">Visa</option>
  <option value="MC">MC</option>
  <option value="Amex">AMEX</option>
 </select>
 Credit Card Exp Date <input type="text" name="ccExp">
 <input type="submit" value="Check out">
</form>

As you can see, this allows you to keep the business rules of the application in the model class but still get the benefits of client-side validation.

Caching using the Flyweight Design Pattern

The Flyweight design pattern appears in the Gang of Four book, which is the seminal work on patterns in software development. The pattern uses sharing to support a large number of fine-grained object references. With the Flyweight strategy, you keep a pool of objects available and create references to the pool of objects for particular views. This pattern uses the idea of canonical objects. A canonical object is a single representative object that represents all other objects of that type. For example, if you have a particular product, it represents all products of that type. In an application, instead of creating a list of products for each user, you create one list of canonical products and each user has a list of references to that list.

A typical e-commerce application is designed to hold a list of products for each user. However, that design is a waste of memory. The products are the same for all users, and the characteristics of the products change infrequently. This figure shows the current architectural relationship between users and the list of products in the catalog.

Each user holds a list of products.

The memory required to keep a unique list for each user is wasted. Even though each user has his or her own view of the products, only one list of products exists. Each user can change the sort order and the catalog page of products he or she sees, but the fundamental characteristics of the product remain the same for each user.

A better design is to create a canonical list of products and hold references to that list for each user.

In this scenario, each user still has a reference to a particular set of products (to maintain paging and sorting), but the references point back to the canonical list of products. This main list is the only actual product object present in the application. It is stored in a central location, accessible by all the users of the application.
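A minimal sketch of such a canonical store, in Java (the class names are mine, and a real catalog would load products from the database):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The flyweight itself: immutable, so it can be shared safely by every user.
final class Product {
    private final String id;
    Product(String id) { this.id = id; }
    String getId() { return id; }
}

// Canonical product store: one shared, read-only Product per product id.
// User views hold references into this store instead of private copies.
final class ProductCatalog {
    private static final Map<String, Product> products =
            new ConcurrentHashMap<String, Product>();

    private ProductCatalog() {}

    static Product get(String productId) {
        Product p = products.get(productId);
        if (p == null) {
            // Placeholder for a real database lookup.
            products.putIfAbsent(productId, new Product(productId));
            p = products.get(productId);
        }
        return p;
    }
}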

Flyweight considerations

The effectiveness of the Flyweight pattern as a caching mechanism depends heavily on certain characteristics of the data you are caching:

  • The application uses a large number of objects.
  • Storage (memory) cost is high to replicate this large number for multiple users.
  • Either the objects are immutable or their state can be made external.
  • Relatively few shared objects may replace many groups of objects.

  • The application doesn’t depend on object identity. While users may think they are getting a unique object, they actually have a reference from the cache.

One of the key characteristics enabling this style of caching is the state information in the objects. In the previous example, the product objects are immutable as far as the user is concerned. If the user were allowed to make changes to the object, then this caching scenario wouldn’t work; it depends on the object stored in the cache being read-only. It is possible to store mutable objects using the Flyweight design pattern, but some of their state information must reside externally to the object.

It is possible to store the mutable information needed by the reference in a small class that is associated with the link between the Flyweight reference and the Flyweight object. A good example of this type of external state information in an ecommerce application is the preferred quantity for particular items. This is information particular to the user, so it should not be stored in the cache. However, there is a discrete chunk of it for each product. This preference (and others) would be stored in an association class, tied to the relationship between the reference and the product. When you use this option, the information must take very little memory in comparison to the Flyweight object itself. Otherwise, you don’t save any resources by using the Flyweight.
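One way to model that association class (again, the names are illustrative):

// External state: per-user data tied to a shared Product flyweight.
// It must stay far smaller than the Product it refers to, or the
// memory savings of the pattern disappear.
public class ProductPreference {
    private final Product product;   // reference to the canonical object
    private int preferredQuantity;

    public ProductPreference(Product product, int preferredQuantity) {
        this.product = product;
        this.preferredQuantity = preferredQuantity;
    }

    public Product getProduct() { return product; }
    public int getPreferredQuantity() { return preferredQuantity; }
    public void setPreferredQuantity(int quantity) { this.preferredQuantity = quantity; }
}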

The Flyweight design pattern is not recommended when the objects in the cache change rapidly or unexpectedly. It would not be a suitable caching strategy for the ecommerce application if the products changed several times a day. This solution works best when you have an immutable set of objects shared between most or all of your users. The memory savings are dramatic and become more pronounced the more concurrent users you have.

Thinking about dynamically-typed languages

Every now and then I browse the Delphi newsgroups, including the notorious b.p.d.non-technical. In visiting this one you either lurk and never respond, or you wade in wearing rubber boots to your thighs and a flame-proof jacket: the one thing you can say about lots of Delphi developers is that they’re fiercely loyal to their language. Valid arguments of any shape or form against Delphi are mercilessly trampled on without warning.

The same thing happens, to a lesser extent, with C# and VB developers. They have a constant battle about which language is better, on which language Microsoft should be spending more time, and, let’s not forget, in which language the .NET Framework should be developed.

However, from the sidelines I have been watching another development taking place: the rise of dynamically-typed languages, a development that I think renders some of these turf wars moot.

Like many readers, and like the majority of developers everywhere, I cut my teeth on statically-typed languages. In these languages (C++, Delphi, Java, C#), strong typing is the way you save yourself from shooting yourself (and innocent bystanders) in the foot. Many times during my career, I’ve passed untyped pointers around and forgotten what they were pointing to, with the consequent crashes and nasty debugging sessions deep into the night.

OOP seemed to help a lot here: it forced us to think about class hierarchies, about abstraction, about inheritance, and so on. Interfaces continued the process. Strong typing became more and more our friend, and the only problems seemed to stem from null pointers and referencing objects after they’d been disposed.

Strong typing brought us safety. The compiler made sure that if routine A could only accept an instance of Foo, you could only pass an instance of Foo to it. You were weaned off untyped pointers by the lure of safety-through-the-language. Once strong type-checking came in with the compiler and was enforced by the run-time, we had to really jump through hoops to break our code, at least as far as type bugs were concerned. Safe at any speed, right?

Oh, how we scoffed at bizarre VB code from the pre-dotNet era that enabled you to write routines that either returned an integer or a boolean value. Heh, those wacky VB-ers, eh?

But at the same time as all this strong typing infrastructure was coming into being there were a couple of other movements happening, one in a language-neutral dimension altogether, the other orthogonal to strongly typed languages.

The former was test-driven development (TDD), or the practice of writing unit tests at the same time as writing your code. This methodology helped us ensure that our code worked the way we intended. No implementation code without the tests to support it.

The latter was a new set of languages, Python, Perl, Ruby and the like. Originally conceived as languages for quick prototyping, for writing simple text-file analysis tools, and the like, they’ve now grown into languages for writing major applications. And they’re dynamically typed. Casts are a thing of the past, var blocks are so Victorian.

Recipe for disaster, right? Not if you use TDD and write unit tests. If you write your code using TDD, then I can guarantee that type-safety will be a non-issue for you. Your unit tests will impose another kind of safety on your code, run-safety for want of a better word. Your code works and you can prove it works by running the tests.

And you can reap the other main benefit of dynamically-typed languages: their flexibility. Writing code in these languages is just easier; you don’t have to explicitly declare variables, you don’t have to up- or downcast (or worry about the difference). Co- and contravariance hold no terrors for you.

Reading dynamically-typed code is also a lot easier. No casts for a kick-off. No extraneous keywords or helpful hints to the compiler to assure it that, yes, you really know what you’re doing. The intent of the code is revealed in a much clearer manner.

(Note: if you don’t believe me, try thinking about all that code you’ve written where you cast an object of some description into a type and then check that the object you get is null or not. In my recent ASP.NET project, I took the time to check on my usage of the Session object. I was amazed at the amount of casting code I’d been writing. Generics in .NET will help in certain cases of this: for example, removing the vague possibility of inserting a Bar object into a List at compile-time. But, being honest now, when was the last time you added a Bar object into an ArrayList or a TList that contained just Foos? Answer: never, right? Besides which I would hope that your unit tests would preclude this remote possibility anyway.)
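To illustrate the kind of ceremony this paragraph is talking about, here is a small Java example (mine, not the author’s):

import java.util.ArrayList;
import java.util.List;

class Foo {}
class Bar {}

public class CastingExample {
    public static void main(String[] args) {
        // Pre-generics style: the list holds Objects, so every read is a cast.
        List untyped = new ArrayList();
        untyped.add(new Foo());
        Foo f = (Foo) untyped.get(0); // fails at run time if a Bar sneaks in

        // With generics, the mixed-type case is rejected at compile time:
        List<Foo> typed = new ArrayList<Foo>();
        // typed.add(new Bar());      // does not compile
        typed.add(new Foo());

        System.out.println(f.getClass().getName() + " / " + typed.size());
    }
}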

Think of it this way: there’s always going to be a line in the sand where your problems transcend the language syntax and become program semantics instead. So, by accepting that you already dynamically check your application code anyway through the medium of your unit tests, why not defer type-checking to run time as well? Move the line in the sand so that run-time checking becomes more important and compile-time type checking less. You’ll find that your code is easier to write and a lot easier to read (and, remember, code is read more than it is written). Dynamically typed languages are the only players here.

Why You Should Choose Delphi

People are always asking “Why would I use Delphi when I can use C#? Why would I go with Borland’s tool when Visual Studio has clearly closed the gap?” I think that is totally the wrong question to ask. The real question is “Why should I be using C# when I could be using Delphi?” Here are some of the reasons why the latter is the real question to be asking:

    Delphi is easier to read. Delphi thankfully isn’t a C-language descendant, and so it doesn’t use the ugly, horrible-to-read C-like syntax of, say, C# or Java. It uses easy-to-read words like ‘begin’ and ‘end’ instead of hard-to-read symbols like ‘{‘ and ‘}’. It uses a readable, sensible syntax in ‘for’ loops. It requires that you declare variables before you use them, ensuring that the definition and type of variables are easy to determine. It forces you to declare classes and routines in the interface section of a unit, making for much easier analysis of already written code.

    You already know Delphi – if you are currently a Delphi developer, Delphi has you covered in .Net. There isn’t a single thing that can be done by other tools in .Net that you can’t do in Delphi.

    Delphi has better data access. The BDP (Borland Data Provider) provides a much cleaner and better designed interface to ADO.NET than the FCL does. The FCL doesn’t provide enough abstraction over ADO.NET to even provide a single set of components to access data with ADO.NET. Access to Oracle and SQL Server requires completely different sets of components and types, making it impossible to provide a single-source, common front end to data. The BDP, on the other hand, provides that single interface, and even provides components that allow you to mix data from differing databases into a single dataset. Other languages and tools don’t even pretend to provide this advantage.

    Delphi is cross-platform. Delphi code can be cross-platform between .Net and Win32. Code bases can, with a bit of work, even be used in those two platforms plus Linux. Delphi is the only .Net language that provides that level of compatibility between Win32 and .Net.

    Delphi can expose .Net functionality without COM/Interop – this is an unsung feature of Delphi. Want your Win32 applications to have access to encryption? Regular expressions? Anything else in the FCL? Delphi can provide it without COM.

    Delphi can link .Net code into your EXE – Delphi provides the ability to link code into a single EXE instead of having to worry about deploying all the right assemblies in all the right places.

    Delphi handles and names events the right way. Is there anything more confusing in the .Net framework than delegates and how they are used to handle events? No, there isn’t. Delphi handles this all the right way, hiding the silly, confusing way .Net does it. Instead of the confusing C# syntax, Delphi uses the model that has been proven for over ten years. Need to add an event to a component? Delphi does it clearly and simply – like it does everything.

    Delphi does IDisposable correctly. Only slightly less confusing than delegates is the concept of IDisposable. Do I call Dispose? Do I need to? How do I know? As long as you follow the time-tested method of calling Free on any object you create that doesn’t have an owner, you’ll be fine in .Net. If Dispose needs to be called, Delphi will call it. If not, it won’t. No more guesswork, and your code will even be compatible on the Win32 side of things.

    Delphi has been doing “Avalon” for ten years. The hot new thing over at MS is “Avalon”. Avalon is a system for storing properties and information about forms and objects in a text file separate from the silly InitializeComponents call in your code. Sound familiar? Yeah, thought so. (Side note: Partial classes crack me up. Only Microsoft could invent a “feature” solely for the purpose of making up for a gross shortcoming in their language.)

    Delphi has datamodules. Is there a bigger oversight in all of the FCL than the lack of a datamodule-like class? Sure, you can ‘simulate’ datamodules, but it’s a poor simulation at best. Put your database controls in a datamodule, add the datamodule’s unit to your uses clause, and the Object Inspector is smart enough to see the components in the datamodule when looking at another form or whatever. Datamodules let you decouple data access and other abstract concepts from the user interface of your application. In other words, datamodules rock, and other tools don’t have them.

    Delphi’s third-party market is way more mature than the .Net market in general. Sure, there are tons of components out there in the .Net market. But Delphi component builders have been at this for a decade, and have the years of experience needed to build excellent components the right way. Are you going to try to tell me that Ray Konopka and the DevExpress folks don’t have it all over the johnny-come-lately folks that have been building components for a year or two?

    ECO II – Delphi has a mature, existing, shipping model driven architecture that is written by people who truly understand object-oriented programming. Microsoft doesn’t. They have some thoughts and ideas outlined on a web-site somewhere and promises of functionality that they don’t even really understand yet. Delphi is light-years ahead in this area, and there is no reason to believe that they won’t stay that way.

    Borland’s ALM solutions are here now, not “in the vaporware pipeline”. Microsoft is touting Team System or whatever they are calling it. Sounds great and all, but of course Borland is selling that technology right now, not just talking about it.

And that is just scratching the surface.  You’ll probably add even more reasons right here in the comments. Personally, I can’t understand why anyone would ever choose C# or VB.NET over Delphi. 

How about you?