Nick Shreds TRex’s Blog Post

Steve Teixeira, now a supplicant, er, sorry, employee for Microsoft — yes, sadly, it is true, he’s shaken the Delphi dust off his boots and drunk deeply from the MS Kool-Aid Stand — responded to my recent CodeFez article about MS not quite getting it in the area of OOPishness. 

The first thing I want to say to Steve is "Hey, thanks!"  It’s been my goal for months now to get a reference from a Microsoft blog to one of my CodeFez articles.  Mission accomplished!

Now that I’ve been nice and appreciative to Steve for fulfilling a long unrealized dream, I’ll proceed to rip his blog post to shreds like a puppy with a Sports Illustrated rubbed in bacon.

Steve discusses five points from my article.  I’ll respond to his comments about those five points, destroying each in turn.  It’ll be like Perry Mason vs. That District Attorney Guy who never won a case. (By the way, That District Attorney dude was named Hamilton Burger.  Wouldn’t you love to have a friend named Ham Burger? The laughs would just keep on rollin’, hanging around in the bar after Ham lost every case to Mason. Good times.)

Point 1:  Steve "refutes" my complaint about the poor design of the myriad Connection objects in ADO.NET by saying "that isn’t an OOP design issue, it’s a product functionality decision."  I must say, my initial response to that was "Huh…? What does ‘product functionality decision’ mean?"  How is this not an OOP design issue?  Instead of designing a single class to do the job, they designed multiple classes that can’t replace one another.  In addition, each class requires the use of a database-specific implementation of IDbDataAdapter, which can’t be interchanged either. And don’t even get me started on things like OracleDateTime and OracleParameter.  You can’t even call that stuff "somewhat abstracted and decoupled".  To his credit, Steve concedes the point when he says "Nick also goes on to point out, correctly I think, that BDP is better at insulating the developer from different database vendors."  Well, yeah, exactly! That’s what the Borland Data Provider does:  it uses good OOP technique to encapsulate and abstract ADO.NET so that you don’t have to hard-code things like OracleString into your code.  That’s what good OOP design is supposed to do. Steve says it’s not a fundamental design problem, but I say a total lack of abstraction to interfaces and a tight coupling of specific implementations to those interfaces is bad design.  The BDP doesn’t do this.  That’s good design.

Point 2:  Steve misses the boat altogether when talking about the Style class.  Sure, you can create your own Style class and implement it in your ASP.NET controls.  But to do that, you have to completely abandon the built-in functionality that the framework supplies for dealing with styles, and do everything "by hand".  The System.Web.UI.WebControls.WebControl class has a property called ControlStyle, which is of type Style.  If you want your control to have a style that doesn’t descend from the Style class — and thus include stuff that you might not want — you are out of luck.  You can’t partake of the WebControl class’s style handling.  You have to do it all yourself.  This is a perfect example of MS not getting it.  They’ve provided a base style for you that you must use no matter what — even if you don’t want some of the functionality in the Style class.  Again — bad design.  It’s poorly designed because it makes assumptions about descendant controls that shouldn’t be made.

I’ll have to agree with Steve when he laments my example for defining an IStyle.  I did say that I was "designing on the fly."  I didn’t propose my example as the perfect solution, but merely as an example of how it might be done:  i.e., the ControlStyle property should have been an interface instead of a class that requires a specific type to be used.

Point 3:  Steve, Steve, Steve, Steve, Stevey, Steve-aroo, Steve-arama!  Wow, you are playing right into my hands, just like that poor cop schmuck in The Usual Suspects when Kevin Spacey played him like a concert piano.  Steve argues, "Okay, now we’re getting into framework functionality, not OOP design."  Well, no, the lack of OOP design is exactly what I’m talking about here. How is "framework functionality" not OOP design? Loading, reading, and writing text files is very basic and common functionality.  One would think that there might be a class that encapsulates that functionality.  For instance, the description of such a class that would do what I discussed might go like this:

  1. Create an instance of the TextFileManager class by passing the filename to the constructor
  2. Alter the third line in the text file, as described in the problem set from the previous articles.
  3. Save the Result.

Simple, clean, neat, orderly, and — dare I say it! — well designed.
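No such class exists in the FCL (that is the complaint), but the three steps above are easy to sketch. Here is a hypothetical TextFileManager in Java (Java rather than C# only because Java is the language of the other listings on this site); the class and method names are invented for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// Hypothetical class: one object that encapsulates load, edit, and save.
public class TextFileManager {
    private final Path path;
    private final List<String> lines;

    // Step 1: pass the filename to the constructor; the file is loaded here.
    public TextFileManager(String filename) throws IOException {
        this.path = Paths.get(filename);
        this.lines = new ArrayList<>(Files.readAllLines(path));
    }

    // Step 2: alter a line in place (0-based index).
    public void setLine(int index, String text) {
        lines.set(index, text);
    }

    // Step 3: save the result.
    public void save() throws IOException {
        Files.write(path, lines);
    }

    // Small demonstration using a temporary file.
    public static String demo() throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, List.of("first", "second", "third", "fourth"));

        TextFileManager mgr = new TextFileManager(tmp.toString());
        mgr.setLine(2, "third (edited)"); // the third line
        mgr.save();

        return Files.readAllLines(tmp).get(2);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```

One class, one responsibility, no plumbing exposed to the caller.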

Steve’s example drones on and on like this:

  1. Create an instance of something called a StreamReader (you use a StreamReader to handle text?), passing the filename to the constructor
  2. Allocate memory for an array of strings
  3. Read through to the end of the stream, and when you get done doing that, peruse over the result, chopping the string into array entries every time you run into a "return" character
  4. Close the StreamReader
  5. Alter the third item in the array
  6. Create an instance of something called a StreamWriter (you use a StreamWriter to handle text?), passing the proposed filename to the constructor.
  7. Iterate over each item in the array we created above and write each one out to the StreamWriter.

Imagine my mirth, my barely controlled giggles, when Steve follows this up with "You might be able to do this a little more briefly in other languages but not by much." Uh huh.  First, this isn’t a question of language, but a question of OOP.   I’ll leave it to the imagination of the reader to determine which of the two above examples is a better example of encapsulation.  (Hint:  One uses a single class, the other uses two classes and an array.) 

Point 4:  Steve then goes on to write about my "getting data out of a dataset" argument. He writes, "Okay, so the first one returns a System.Object that you have to explicitly convert and the second returns a TField object that has conversion methods hanging off of it.  I admit the TField is handy and maybe even nicer from a usability standpoint, but I have trouble seeing this as a huge issue or an indication that the implementer doesn’t quite get OOP."

Doesn’t quite get OOP? The designer here returns an instance of the root object. Very, very helpful.  Not! The class itself could provide easy conversion to other types, but it doesn’t.  Easy to do, obviously needed, but there’s nothing there.  All you get is an instance of System.Object. And this says to us that the designer "gets OOP"?  What it says to me is that the designer thought it was quite OOPish to have you go off and call some function from some other class to perform the basic functionality of retrieving data from a dataset.  Thinking it OOPish to need another class to get the original class to perform the basic functionality of a field is not my idea of "Thinking in OOP".  As Steve says, the TField class is handy.  It’s handy because it’s a sound implementation of OOP design for the task at hand!  We can call the Convert class a class, and technically it is, but it’s really just a container for library functions.  Needing another class to perform the basic functionality of your class — and yes, returning values is basic functionality for a dataset — isn’t OOPish.

Point 5:  My Point 5 here wasn’t really meant to be an indictment of the OOPishness (I love that word!) of ADO.NET, but just a general lament.  Sure, you can iterate over a result set with a DataReader, but only if you are connected to the server! If not, you are pretty much stuck doing the foreach thing over the rows. The notion of a current record requires binding the data to the user interface.  Very uncool. The general point stands — the concept of a current record is a basic database concept, and it is totally absent from ADO.NET.

I appreciate Steve’s general agreement about sealed classes. (I didn’t even talk about adding final to a method.  Argh, how unfriendly can you get? "Hey, this method used to be virtual, but I’ve categorically decided that you can’t override it anymore! Neener, neener, neener!") I think it’s hard to argue that sealed classes are anything other than totally lame.  I can even go so far as to grant Steve’s basic point that, lame though they are, a programmer should be able to seal a class.  Such a programmer would be a big weenie to do so, but, hey, that’s the programmer’s decision. And of course, I can mock such a decision.

Steve does argue a few points in favor of sealed — that a class may need to be sealed to help the compiler.  I counter that OOPishness knows not, and should not know, of compilers.  If you are making OOP design decisions based on what a specific compiler needs, then you aren’t making good OOP design decisions. In fact, the FCL is supposed to be language neutral, so OOP decisions based on the needs of a particular compiler shouldn’t even be a factor.  He also argues that there might be security reasons for sealing a class.  Well, maybe, but I can’t think of any right now. I’m happy to be educated on that point.

Steve summarizes:  "In summary, I think there is very little evidence in Nick’s anecdotes that points to some fundamental misunderstanding of OOP." Weeeeeeeeeelllll, I beg to differ.  I think every single one of my anecdotes speaks directly to the issue of a lack of good OOP decisions, as illustrated above in my stunning repartee to Steve. Each of my examples speaks about nothing but design decisions made by Microsoft designers that either limit your ability to implement a descendant class, tightly couple your code to a specific class, or force you to use a class that you don’t necessarily want to use. And they are, of course, merely a sampling of things that I could have talked about.  For instance, try to descend from StringCollection.

Hey, now that Steve is a Microsoftie, I expect him to defend the home field.  But since he is a Microsoftie, I also expect him not to quite get it.

Ranking Languages: Fear and Your Career

We all worry about our careers, and wonder about our future. But trying to find our way in the career marketplace is not always easy. When we want to study for the future, where should we focus our attention? Is learning a language with big marketing clout like C# or Java necessarily better than learning "small fry" scripting languages like Python or Perl? Is it even true that Python or Perl are less popular than C#? The answers to these questions are not as simple as they might seem.

It is undeniably true that Java is a safe career move at this time, and certainly C# and Visual Basic are at least decent career moves. However, I am not sure that they are as safe as Perl or PHP, and they are not necessarily a better career move than Python. In general, it is wrong to gauge the popularity of a language by the marketing hype generated by a big company. Microsoft has a lot of marketing muscle, but that does not mean that they have a correspondingly large developer mind share. Furthermore, bigger is not always better.

Ranking the Languages

There is of course no definitive way to pin down which languages are most popular. For instance, many developers believe that C# is among the most popular languages in the world. But it is hard to find facts to back that up. Go to a site like the TIOBE Programming Community Index, and other sites similar to it. At the TIOBE site, you will see that C# has only a sixth of the market share of a language like Java or C, and that it ranks just a hair above Python, and well behind Perl or PHP. These statistics are based mostly on web presence. They show VB.NET to be about 1/20th the size of a big language like C or Java, and only 1/6th the size of its little brother, Visual Basic.

As I say, the statistics I show here are not definitive, but neither are they meaningless. To help put them in perspective, go over to Amazon and see the ranking of the most popular technical books. You will see a similar story to the one laid out at TIOBE.

Six of the top twenty books at this time are about Java, and two are about .NET. The most important one about .NET is at the bottom of the stack, ranking number 20 in the list of top 20 books. The second most popular book, "Head First Design Patterns," is focused mostly on Java programming. If I include this book, which I will not, then seven of the top 20 books are about Java. Also included in the top 20 are "Code Complete" and "The Art of Project Management," both of which focus on Microsoft, but neither of which homes in on a Microsoft language.

Most of the rest of the popular books are about HTML and CSS. One of these books, "Professional DotNetNuke ASP.NET Portals," is the second book that goes into the Microsoft column, but not as a hard core programming book. There is no hardcore C# or VB.NET equivalent in the top 20. Another perennial best seller, "Programming Perl," is ranked at number 17. The closest thing to it is Jesse Liberty’s successful book on C#, which ranks at number 25, just behind a book on Dreamweaver, and just ahead of a second book on Perl. On this particular day, one has to go all the way down to number 76 to find a book on Python, but the intervening ranks are filled with books on Java, C/C++, HTML and PHP, with only a scattering of books on Visual Basic and almost nothing on C#.

The Amazon lists change constantly, with books moving up and down the hierarchy several times a day.

The books break down into two language groups:

  Cross platform and/or open source: Java, PHP, Perl, Python

  Closed source, Microsoft platform specific

In this list, I am rightfully focusing on hard core programming books. However, scattered amid these books are many volumes on HTML, CSS, security, Linux, Microsoft Office, and managing Microsoft operating systems. I do not mean to imply that Microsoft does not have a lot of books in the top 100 technical books on Amazon. They do. The point is that most of them are about managing the Windows OS, or using Microsoft Office. When it comes to programming, the focus is on Java and open source scripting languages such as Perl, Python or PHP.

As you can see, there is little evidence on the net to support the idea that Microsoft is anything like the primary focus of the programming world. The desktop world they own, even if the competition is fiercer than it used to be. But the programming world does not belong to Microsoft at this time, though their marketing department is trying their best to lay claim to it.

Is The Most Popular Language the Best for Your Career?

There is more to a programming career than simply searching for the most popular language. During the happy years when I worked at Borland, I focused on Delphi, a product that definitely did not have the same clout in the marketplace as C++ or Visual Basic. However, many people stuck with Delphi because they believed in it. For many of the people I knew, that turned out to be a great career move. Being a big fish in a relatively small pond can be a happier fate than being a minnow who swims with the big sharks that rule the billion dollar companies.

Many people who use Delphi consider it their secret weapon. When I worked for Borland I talked to many developers who loved it when a competitor came in and tried to build a project in C++. After they flailed around for a bit, the Delphi guys would come in and build the same product in half the time with twice the features. Products like Python, Perl or PHP can do the same kind of thing for you.

Python is both easier to use and more human in size and scale than a big language like Java, C++ or C#. I can get more work done, more quickly, using Python, than I can in any other language that I personally know. I find the majority of its classes and methods easier to use, and simpler to understand, than the classes and technologies found in C++, C# or Java. Like Perl, you can be productive in Python after just a few hours or days of study. But if you focus on the language for months or years, you will find that it is much more powerful than you might at first suppose.

Can Python match a product like Java or C# in all cases in terms of functionality? Probably not. On the other hand, scripting languages like Perl or Python can be lighter, and faster, and easier to use than a big full blown language like Java or C#. This is a version of the 90/10 rule. Python can do 90 percent of the things that Java and C# can do, but it can do them much more quickly and easily. At the same time, 90 percent of the projects in this world can be written in Python. So most of the time, it makes sense to use Python, and by the same measure, 90 percent of the time Java or C# are overkill.

You can accomplish most programming tasks in Python, but there are some tasks that you might want to do in another language. The same is true of Delphi. Most of the time, the smart money is on products like Delphi and Python, and only the marketing challenged believe that Java or C# are really better solutions to real world programming problems.

It is not wise to underestimate Python. Excellent, powerful, and quite complex products like Zope and Plone are built in Python. Related scripting languages, such as PHP, are surprisingly popular, as you can see from sites that track PHP usage. Just think of the figures you see on one such page: "20,478,778 Domains and 1,299,068 IP Addresses." If there are twenty million domains that use PHP, it would definitely be a great career move to learn a little PHP, especially since it is a language that is so much more popular than either Visual Basic or C#, and so vastly more popular than VB.NET. Notice also that PHP is growing in popularity. Like Linux, the use of PHP is increasing over time, not decreasing.


The key point to grasp here is that marketers can render a sharp-sighted person blind in minutes. Microsoft has a huge advantage in desktop computing. The dollars earned by Windows and Microsoft Office give the Microsoft team a huge degree of marketing clout. But when it comes to the programming world, things are not nearly as simple. Microsoft can bring its billions of dollars in marketing muscle to the table and try to convince you that it rules the programming world in general, and the web in particular. But if you abandon the virtual marketing world, get out into the real world, and start digging up some real-life statistics, you will find that even a small language like Python is a much better career move than it might appear at first. Certainly .NET languages such as VB.NET or C# have a long way to go before they catch up with Java, C++ or even PHP in terms of popularity.

I want to emphasize here that no one knows exactly how many programmers of which type are working where at this time. The statistics and data that I discuss here are not meant to be definitive. Nevertheless, I think they obviously point toward overarching trends in the industry. For instance, you can see that C# is growing in popularity, and that Visual Basic is still huge, but appears to be shrinking. Is the growing C# mind share mostly just Microsoft C++ and VB programmers who have migrated to C#, or is there a movement from Java to C#? No one knows the answer to a question like that. But bigger trends, such as the overall dominance of the open source and free software movement, are fairly clear.

J2EE Design Strategies Part I

Enterprise Java is an extremely popular development platform for good reasons. It allows developers to create highly sophisticated applications that have many desirable characteristics such as scalability, high availability, etc. However, with J2EE you are potentially building distributed applications, which is a complicated endeavor no matter how you build them. As developers come to use J2EE more and more, they are discovering some pitfalls that are both easy to fall into and easy to avoid (if you know what to look for). J2EE development comes with its share of bear traps, just waiting to snap the leg off the unwary developer.

This paper is designed to highlight several of these avoidable pitfalls, and it does so from an entirely pragmatic approach. Much of the available material on design patterns and best practices is presented in a very academic manner. The aim of this paper is just the opposite: to present common problems and their associated solutions. It starts with J2EE web best practices, moves to EJB best practices, and concludes with some common worst practices that should be avoided.


Web development is an important aspect of J2EE, and it has its share of potential dangers.

Singleton Servlets

A singleton servlet sounds like an oxymoron — aren’t servlets already singleton objects? Some background is in order. First, a singleton is an object that can only be instantiated once. There are several different ways to achieve this effect in the Java language, most commonly with a static factory method. This is a common technique any time an object reference can be reused rather than a new one instantiated. Of course, servlets already act in many ways like singleton objects — the servlet engine takes care of instantiating the servlet class for you, and generally creates only one instance of the servlet, spawning threads to handle users’ requests. Allowing the servlet engine to do this works fine in most cases. However, there are a few cases where you want the singleton-like behavior but also need a way to reach the instance yourself. This is the situation if you have a helper servlet storing configuration information, managing a connection pool, etc. When the servlet engine creates the servlet, it assigns an internal reference to the servlet instance, and never lets the developer directly access it. What is needed is a way to allow the servlet engine to instantiate the servlet for us yet still be able to get to it.
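For comparison, the "normal" singleton idiom mentioned above, a private constructor plus a static factory method, looks like this in plain Java (the class name here is just for illustration):

```java
// Classic singleton: private constructor plus a static factory method.
public class ConfigRegistry {
    private static ConfigRegistry instance;

    private ConfigRegistry() {
        // Private constructor: callers cannot instantiate this class directly.
    }

    // Lazily creates the single instance on first request; every later
    // call returns that same object.
    public static synchronized ConfigRegistry getInstance() {
        if (instance == null) {
            instance = new ConfigRegistry();
        }
        return instance;
    }

    public static void main(String[] args) {
        // Both calls yield the same reference.
        System.out.println(getInstance() == getInstance());
    }
}
```

A singleton servlet needs a different trick precisely because the servlet engine, not our code, calls the constructor.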

Here is an example of a servlet that meets these criteria. It is a servlet that manages a homegrown connection pool.

Listing 1: Singleton Servlet for Managing Connection Pool

package webdev.exercises.workshop1;

import javax.servlet.*;
import java.sql.*;
import webdev.utils.ConnectionPool;

public class PoolMgr extends GenericServlet {
    private static PoolMgr thePoolMgr;
    private ConnectionPool connectionPool;
    static final String DB_CLASS = "interbase.interclient.Driver";
    static final String DB_URL = "jdbc:interbase://localhost/e:/webdev/data/eMotherEarth.gdb";

    public PoolMgr() {
    }

    public void init() throws ServletException {
        try {
            String dbUrl = getServletContext().getInitParameter("dbUrl");
            connectionPool = new ConnectionPool(DB_CLASS, dbUrl,
                    "sysdba", "masterkey", 5, 20, false);
            getServletContext().log("Created connection pool successfully");
        } catch (SQLException sqlx) {
            getServletContext().log("Connection error", sqlx);
        }
        thePoolMgr = this;
    }

    public void service(ServletRequest req, ServletResponse res)
            throws ServletException, java.io.IOException {
        //--- intentionally left blank
    }

    public static PoolMgr getPoolMgr() {
        return thePoolMgr;
    }

    public ConnectionPool getConnectionPool() {
        return connectionPool;
    }
}

First, note that this is a GenericServlet instead of an HttpServlet — the user never directly accesses this servlet. It exists to provide infrastructure support to the other servlets in the application. The servlet includes a static member variable that references itself (common in "normal" singleton classes). It also has a (non-static) reference to the connection pool class. In the init() method of the servlet, the connection pool is instantiated. The very last line of this method saves the reference created by the servlet engine for this instance of the servlet. The GenericServlet class requires a service() method, which is not needed here (intentionally left blank to highlight that point). The servlet includes a static method called getPoolMgr() that returns the saved instance of the class. This is how other servlets and classes can access the instance created by the servlet engine. We are using the class name (and a static member variable) to keep the reference for us. To access this pool manager from another servlet, you can use code like this:

Listing 2: Snippet of servlet that uses a singleton

Connection con = null;
try {
    //-- get connection from pool
    con = PoolMgr.getPoolMgr().getConnectionPool().getConnection();
    //-- do a bunch of stuff with the connection
} catch (SQLException sqlx) {
    throw new ServletException(sqlx.getMessage());
}

The access to the connection pool class is always done through the PoolMgr servlet’s method. Thus, you can allow the servlet engine to instantiate the object for you and still access it through the class. This type of singleton servlet is also good for holding web application-wide configuration info. In fact, it is common to have the servlet automatically created by the servlet engine. The web.xml file allows you to specify a startup order for a particular servlet. Here is the servlet definition from the web.xml file for this project.

Listing 3: Web.xml entry to auto-load the PoolMgr servlet


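A standard web.xml entry for the PoolMgr class from Listing 1 looks like the following; the load-on-startup element tells the servlet engine to create the servlet automatically at startup (the servlet-name value here is an assumption):

```xml
<servlet>
    <servlet-name>PoolMgr</servlet-name>
    <servlet-class>webdev.exercises.workshop1.PoolMgr</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
```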
An alternative to using a singleton servlet is a ServletContextListener, which was added as part of the Servlet 2.3 specification. It and the other listener interfaces for web development allow you to tie behavior to particular events. The listing below shows how to create a connection pool using a ServletContextListener.

Listing 4: StartupConfigurationListener creates a connection pool upon application startup.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.ServletContext;
import java.sql.SQLException;

public class StartupConfigurationListener implements
        ServletContextListener, AttributeConstants {

    public void contextInitialized(ServletContextEvent sce) {
        initializeDatabaseConnectionPool(sce.getServletContext());
    }

    public void contextDestroyed(ServletContextEvent sce) {
    }

    private void initializeDatabaseConnectionPool(ServletContext sc) {
        DBPool dbPool = null;
        try {
            dbPool = createConnectionPool(sc);
        } catch (SQLException sqlx) {
            sc.log(new java.util.Date() + ":Connection pool error", sqlx);
        }
        sc.setAttribute(DB_POOL, dbPool);
    }

    private DBPool createConnectionPool(ServletContext sc)
            throws SQLException {
        String driverClass = sc.getInitParameter(DRIVER_CLASS);
        String password = sc.getInitParameter(PASSWORD);
        String dbUrl = sc.getInitParameter(DB_URL);
        String user = sc.getInitParameter(USER);
        return new DBPool(driverClass, dbUrl, user, password);
    }
}

The advantage of the singleton servlet lies in the ability of non-web classes to get a reference to it. For example, you might have a POJO (Plain Old Java Object) that handles database access for your application. It has no way to get to any of the web collections directly because it has no access to the servlet context. With a singleton servlet, the servlet’s class name gives the developer a way to get to the underlying instance. So, even in the presence of the listener classes introduced to the web API, singleton servlets still have uses.

Model-View-Controller for the Web

In the beginning, there were Servlets, and it was good. They were much better than the alternatives, and allowed for scalable, robust web development. However, there was trouble in paradise. Web development partitioned itself into two camps: art school dropouts (invariably Macintosh users) who could create the beautiful look and feel for the web application, and the Java developers who made it work. The guys in the basement hand crafted the beautiful HTML and passed it to the developers who had to incorporate it into the dynamic content of the web site. For the developers, it was a thankless, tedious job, inserting all that beautiful HTML into the Java code. But, you drank lots of coffee and lived through it. Then, the unthinkable happened: the CEO got an AOL disk in the mail and visited a web site he’d never been to before. Come Monday, the commandment came down from on high: We’re completely changing the look and feel of the web site. The art school dropouts fired up their Macs and started realizing the CEO’s vision, and the developers got a sinking feeling in the pit of their stomachs. Time to do it all over again. The problem? Too much HTML in the Java code.

Then JSPs appeared. Here was the answer to all our prayers. JSPs have the same advantages as servlets (they are, after all, a type of servlet) and were much better at handling the user interface part of web design. In fact, the art school dropouts could craft the HTML, save it as a JSP, and pass it right to the developers. However, all was still not well. The developers now had to deal much more directly with the display characteristics of the application. Thus, the syntax of a JSP quickly became very cryptic, with the HTML and Java code interspersed together. The verdict: too much Java in the HTML.

Then came the Model-View-Controller design pattern for the web. If you’ve been living in a cave and aren’t familiar with this most famous of design patterns yet, here’s the capsule version. The model represents the business logic and data in the application and resides in JavaBeans and/or Enterprise JavaBeans. The view is represented primarily by JSP pages, which have as little Java code in them as possible; in fact, all Java code should really be handled by method calls on the model beans or custom tags. The controller is the way that the view interacts with the model, and in the web world, a servlet is the controller. Here is the typical scenario for web MVC: the user accesses a controller servlet; the servlet instantiates beans, calls methods on them to perform work, adds the beans with displayable information to one of the collections (for example, the request collection), and forwards control to a JSP that shows the user the results.

And it was good. Now the display information is cleanly partitioned away from the "real" work of the application, which can live strictly in JavaBeans. The application can also start out using regular JavaBeans, then scale up to Enterprise JavaBeans without having to change the controller or presentation layers. This is clearly the best way to build web applications. It is easy to maintain, easy to update, and there is very little impact when one part of the system needs to change (now the art school dropouts have to worry about the new look and feel, not the developers). This design pattern neatly modularizes the constituent parts of web applications.

Now what’s wrong? The problem with the MVC web applications (now frequently called "Model2", to distinguish it from MVC for regular applications) has to do with how you architect the web application. For example, if you create a different controller servlet for each page the user wants to visit, you end up with dozens or hundreds of servlets that look almost identical. Another problem is that these servlets, once visited, permanently reside as objects in the servlet engine. An alternative is to create one monster controller servlet to handle all the requests. The problem here is that you have to figure out a way to map the requests to different views. This is frequently done with parameters sent to the web site, identifying what command you want to execute. But, unless you are clever about it, your "uber servlet" becomes a massive set of "if…else" statements or a huge "switch…case" statement. Any changes require editing this servlet, which quickly becomes unruly and ugly. What is needed is an application framework for web development that handles most of these gory details. And that’s where Struts comes in.
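Before reaching for Struts, it is worth seeing what "being clever about it" looks like: map the command parameter to handler objects, so adding a page means adding a map entry rather than editing a pile of if…else branches. Here is a minimal sketch with the servlet plumbing stripped away (all names are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class CommandDispatcher {
    // A command handles one kind of request and reports which view
    // should render the result.
    interface Command {
        String execute(String input);
    }

    private final Map<String, Command> commands = new HashMap<>();

    public CommandDispatcher() {
        // Registering a new "page" is one line here, not a new if-branch
        // inside a monster service() method.
        commands.put("list", input -> "listView.jsp");
        commands.put("detail", input -> "detailView.jsp for " + input);
    }

    // An uber-servlet's service() method would delegate here instead of
    // walking an if...else chain over the command parameter.
    public String dispatch(String commandName, String input) {
        Command cmd = commands.get(commandName);
        if (cmd == null) {
            return "errorView.jsp"; // unknown command -> error view
        }
        return cmd.execute(input);
    }

    public static void main(String[] args) {
        CommandDispatcher d = new CommandDispatcher();
        System.out.println(d.dispatch("list", ""));
        System.out.println(d.dispatch("bogus", ""));
    }
}
```

Frameworks like Struts institutionalize exactly this mapping (in configuration rather than code), which is why they took off.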

The Big-Oh notation

When we compare algorithms in order to select one to use, we often need an understanding of their performance and space characteristics. Performance is important because, well, we’re always interested in raw speed; and space is important because we are always on the lookout for algorithms that don’t waste memory. Of course, there are other considerations too. For example, we might want to know how easy it is to implement algorithm X or algorithm Y. Yet most of the time we are primarily interested in performance and space characteristics.

We’ll talk about space considerations in a later article; for now, we’ll consider how to compare the performance of algorithms.

When comparing performance, we need a compact notation to express its characteristics. For instance, it is awkward to say "the performance of algorithm X is proportional to the number of items it processes, cubed," or something equally verbose. Fortunately Computer Science has a solution to this problem; it’s called the big-Oh notation.

We begin by running a series of profiling experiments to analyze the performance characteristics of the algorithm in which we’re interested. (If we’re Don Knuth, we can also try to derive the characteristics mathematically from first principles.) If we are lucky, the results of these profiling runs allow us to work out the mathematical function of n, the number of items, to which the time taken by the algorithm is proportional, and then say that the algorithm is an O(f(n)) algorithm, where f(n) is the mathematical function we determined. We read this as "big-Oh of f(n)", or, less rigorously, as "proportional to f(n)."

For example, if we timed experiments on a sequential search through an array for different numbers of items in the array, we would find that it is an O(n) algorithm. Binary search, on the other hand, we’d find to be an O(log(n)) algorithm. Since log(n) < n for all positive n, we could say that binary search is always faster than sequential search since the time taken would always be smaller. (However, in a moment, I shall be dishing out a couple of warnings about taking conclusions from the big-Oh notation too far. Be warned.)
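Rather than timing wall-clock runs, we can make the same point by counting comparisons. Here is a small Python sketch of the two searches on a sorted array; for 100,000 items, finding the last element takes 100,000 probes sequentially but only about 17 with binary search:

```python
# Count comparisons made by sequential and binary search on a sorted
# list, to see O(n) versus O(log n) behavior directly.

def sequential_search(items, target):
    comparisons = 0
    for item in items:
        comparisons += 1
        if item == target:
            return True, comparisons
    return False, comparisons

def binary_search(items, target):
    comparisons = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1          # one key comparison per halving
        if items[mid] == target:
            return True, comparisons
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, comparisons

data = list(range(100000))
# Searching for the last element (99999): sequential makes 100000
# comparisons; binary makes about log2(100000), roughly 17.
```

The counts grow linearly for one algorithm and logarithmically for the other, which is exactly what the O(n) and O(log(n)) labels summarize.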

Suppose that by experimentation we work out that Algorithm X is O(n² + n), in other words, the time it takes to run is proportional to n² + n. By "proportional to" we mean that we can find a constant k such that the following equation holds:

TimeTaken = k * (n² + n)

Now, in general, the value of k doesn’t really affect our intuition of the performance of Algorithm X. Yes, higher values of k result in slower performance, but the important bits are within the parentheses, the n squared and the n. Increasing n doesn’t affect k; it’s constant, remember. In fact, knowing this, we can see that multiplying the mathematical function inside the big-Oh parentheses by a constant value has no effect. For example, O(3 * f(n)) is equal to O(f(n)); we can just take the ‘3’ out of the big-Oh notation and multiply it into the outside proportionality constant, the one we can conveniently ignore.

(The same goes for adding a constant inside the big-Oh parentheses; for large n, O(n + 42) is the same as O(n).)

If the value of n is large enough when we test Algorithm X, we can safely say that the effects of the "+ n" term are going to be swallowed up by the n² term. In other words, providing n is large enough, O(n² + n) is equal to O(n²). And that goes for any additional term in n: we can safely ignore it if, for sufficiently large n, its effects are swallowed by another term in n. So, for example, a term in n² will be swallowed up by a term in n³; a term in log(n) will be swallowed up by a term in n; and so on. Note that this only applies when we’re adding or subtracting terms, we can’t ignore multiplying or dividing terms in the same manner (unless the term is constant, as we’ve shown).

This shows that arithmetic with the big-Oh notation is very easy. Let’s, for argument’s sake, suppose that we have an algorithm that performs several different tasks. The first task, taken on its own, is O(n), the second is O(n²), the third is O(log(n)). What is the overall big-Oh value for the performance of the algorithm? The answer is O(n²), since that is the dominant part of the algorithm, by far.
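A quick numeric check (a Python sketch; the function name is invented) makes the dominance concrete: as n grows, the n² subtask accounts for essentially all of the work.

```python
import math

# Three subtasks costing n, n**2 and log(n) "steps": as n grows,
# the quadratic subtask's share of the total approaches 1.

def total_steps(n):
    return n + n**2 + math.log(n)

shares = {n: n**2 / total_steps(n) for n in (10, 1000, 100000)}
# shares[10] is about 0.89; shares[100000] is above 0.9999,
# so for large n the O(n) and O(log n) parts barely register.
```

This is why we may write the whole algorithm's cost as O(n²) and drop the lesser terms.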

But, having said that, here comes the warning I was about to give you before about drawing conclusions from big-Oh values. Big-Oh values are representative of what happens with large values of n. For small values of n, the notation breaks down completely; other factors start to come into play and swamp the general results. For example, suppose we time two algorithms in an experiment. We manage to work out the two performance functions from our statistics:

Time taken for first = k1 * (n + 100000)
Time taken for second = k2 * n²

The two constants k1 and k2 are of the same magnitude. Which algorithm would you use? If we went with the big-Oh notation, we’d always choose the first algorithm because it’s O(n). However, if we actually found that in our applications n was never greater than 100, it would make more sense for us to use the second algorithm.
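Taking k1 = k2 = 1, a short sketch (Python, with invented function names) shows exactly where the crossover falls:

```python
# The two measured cost functions from the text, with both constants
# set to 1: first is O(n) but carries a big additive constant,
# second is O(n**2).

def first(n):
    return n + 100000

def second(n):
    return n ** 2

# At n = 100 the "worse" quadratic algorithm is about 10x cheaper:
#   first(100)  -> 100100
#   second(100) -> 10000
# The crossover is at n = 317, the first n where n**2 exceeds n + 100000.
```

So if n never exceeds a few hundred in practice, the O(n²) algorithm is the right choice despite its big-Oh value, which is precisely the warning above.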

So, when you need to select an algorithm for some purpose, you must take into account not only the big-Oh value of the algorithm, but also its characteristics for the average number of items (or, if you like, the environment) for which you will be using the algorithm. Again, the only way you’ll ever know you’ve selected the right algorithm is by measuring its speed in your application, for your data, with a profiler. Don’t take anything on trust from an author like me; you should measure, time, and test.

There’s another issue we need to consider as well. The big-Oh notation generally refers to an average case scenario. In our sequential versus binary search thought experiment, if the item for which we were looking was always the first item in the array, we’d find that sequential search would always be faster than binary search — we would succeed in finding the element we wanted after only one test. This is known as a best case scenario and is O(1). (Big-Oh of 1 means that it takes a constant time, no matter how many items there are.)

If the item which we wanted was always the last item in the array, the sequential search would be a pretty bad algorithm. This is a worst case scenario and would be O(n), just like the average case.
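These best and worst cases are easy to see by counting probes; here is a small Python sketch (self-contained, separate from any timing harness):

```python
# Probe counts for sequential search: the best case (target is the
# first element) takes 1 probe, O(1); the worst case (target is last,
# or absent) takes n probes, O(n).

def probes(items, target):
    count = 0
    for item in items:
        count += 1
        if item == target:
            return count
    return count  # target absent: also n probes

data = list(range(1000))
# probes(data, 0)   -> 1      best case
# probes(data, 999) -> 1000   worst case
```

The average over all positions works out to about n/2 probes, which is still proportional to n, so the average case is O(n) as the text says.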

Although binary search has a similar best case scenario (the item we want is bang in the middle of the array and is found at the first shot), its worst case scenario is still much better than that for sequential search.

In general, we should look at the big-Oh value for an algorithm’s average and worst cases. Best cases are usually not too interesting — we are generally more concerned with what happens "at the limit," since that is how our applications will be judged.

In conclusion, we have seen that the big-Oh notation is a valuable tool for us to characterize various algorithms that do similar jobs. We have also discussed that the big-Oh notation is generally valid only for large n; for small n we are advised to take each algorithm and time it. Also, the only way for us to truly know how an algorithm will perform in our application is to time it. Don’t guess, use a profiler.

In the second part of this article, you will learn about space and memory considerations and how those factors affect the selection of algorithms.

Who’s Buying Borland?

If I had a dollar for every rumor that has been circulated about Borland getting bought out by some company or other, I could buy the company myself.

The latest rumor has Microsoft buying Borland. In the past I’ve heard Novell, BEA, IBM, Corel (oh, wait, that rumor was true!), Oracle, CA, SAP, HP, and McDonalds mentioned as buyers. Okay, I made that last one up. But nevertheless, every one of those rumors has been just that – a rumor. As far as I know, there hasn’t been a serious attempt to buy Borland since the Corel fiasco. Borland’s stock price has gone up and down on these rumors over the years, but no one aside from Corel has ever made a serious bid.

I’m no Mergers & Acquisitions expert, but it seems to me that if someone were going to buy Borland, they would have done so already. Borland is only getting stronger. I’d guess that all that money in the bank makes them tough to buy if they don’t want to be bought. Because Borland has one foot planted firmly in each of the Java and .Net spaces, only half the company is attractive to most would-be buyers out there. MS wouldn’t have a clue what to do with JBuilder, and BEA would look at Delphi like we all would look at a man from Mars. Borland has a lot of valuable parts, but the sum of those parts doesn’t really appeal to any one entity. In the end, it seems unlikely that anyone could or would really buy Borland. But it sure makes for interesting speculation on the Yahoo BORL board.

But let’s imagine that someone did buy Borland. Such a company would have an interesting conundrum: what to do with the widely disparate development tool sets that Borland owns? Should a Java-ish company try to jump into the .Net world with Delphi? Should a .Net-minded company try to do the same into the Java world?

The only concern I personally would have would be for the future of Delphi. A company buying Borland may or may not see the value in Delphi; thus the specter of Borland being bought is a bit scary to us Delphi fans. Delphi going away would be a Very Bad Thing™ for the developer community on the .Net side of things. Delphi’s demise would leave .Net developers at the mercy of one company – the dreaded Microsoft. And of course, we can’t have that, now, can we?

Borland is a much stronger company than the average IT “expert” seems to realize, and they do have more bases covered in the software development market than any other company, even Microsoft. Sometimes we developers forget that Borland’s portfolio covers many areas beyond development tools. They have StarTeam, CaliberRM, Together, Visibroker, and OptimizeIt. Borland has been doing more than merely preaching the ALM message; they’ve been acting on it, putting themselves years ahead of the competition in many areas. And in doing so, they’ve made themselves large enough and diverse enough that they would be a hard pill to swallow.

In the end, I’m inclined to believe that rumors of Borland’s acquisition have been greatly exaggerated.

Community Beats Borland to the Punch with a C++ Open Letter

Slashdot recently posted an article highlighting the unhappiness and frustration of the Borland C++Builder community at the lack of attention paid to the product line by Borland. The community voiced their collective opinion in an open letter, which details some of the large organizations relying on BCB today and the impact of Borland’s inaction upon these organizations. One of the chief organizers of this effort, Paul Gustavson, also wrote of this predicament in a blog entry this week.

The BCB community’s complaints regarding the product line seem quite reasonable and valid, and they can be boiled down to the following:

  • Lack of product updates for C++Builder 6, leaving key issues unaddressed and users without the latest development features.

  • Minimal support for C++Builder features in the newer C++Builder X product line, including no support for VCL-based projects or C++Builder 6 project files.

  • Many failures in communication with Borland’s C++ user community, most notably a much-promised open letter to the community that was never delivered.

I have to agree that these guys have a legitimate beef. Borland’s C++Builder user community has been treated rather poorly. It’s one thing for a company to simply stop updating a product, but it’s quite another to release new versions of similar products that seemingly abandon existing users and then to compound the problem by remaining mum on what the plans are for those existing users. It’s clear that somebody wasn’t minding the C++ store at Borland.

At the same time, I have to wonder just how effective the community’s open letter will ultimately be, seeing as how it seems to be written more from their hearts than from their minds. Yes, large companies and government organizations depend on C++Builder, and yes, their efforts may be hamstrung by Borland’s inattention to this product line. However, what the letter fails to do is make a strong business case for continued investment in C++Builder technology. It’s not enough just to say that if Borland doesn’t take care of C++Builder users they might lose some customers. There needs to be a legitimate case for making money with C++Builder technology. The list of signatories for the open letter is impressive, but we all know that it doesn’t necessarily translate into sales.

Let’s face it: Borland isn’t going to invest much more than lip service in C++Builder as a community service. Their grandiose past notwithstanding, Borland is a relatively small company with comparatively modest resources. As such, their management is going to insist – rightfully – that business units invest in endeavors that pay real cash dividends. We can find wisdom in the Flying Lizards’ 1979 hit here. The community’s love may give Borland a thrill, but it don’t pay the bills. They want your money.

As an occasional user of C++Builder, and one of the developers of the tool during my own days at Borland, I sincerely would like to see this situation work out in such a way that the technology lives on. For this to happen, the C++ product team needs to be able to build a business case around it. If I may offer my advice to the C++Builder community, this business case would be a great place to focus their own evangelism efforts. For example, what evidence is there that producing a new C++Builder 7 will sell enough to make it worth the effort? Or how can adding VCL support to C++Builder X result in more sales? Does open sourcing some of the technology make sense? Can a case be made for C++ support in Borland Developer Studio, supporting VCL and VCL.NET? Microsoft all but admitted they dropped the ball with managed C++ in the 1.x version of .NET, so there is certainly opportunity here.

Borland has committed to making a final call on the C++Builder product line by December 14, 2004. That’s a little more than a month away. No matter how the situation is resolved, at least we won’t have to wait forever this time around to find out.

Parochial vs Cosmopolitan Computing

There is an old saying that travel broadens the mind. I think that a wide experience of different technologies can have the same beneficial effect for computer users.

A person who has traveled can distinguish between human traits that are peculiar to a particular area, and those traits that are universal, that are part of human nature. Such knowledge gives them a broader, more sophisticated view of the world. Ultimately, it teaches them compassion, and acceptance. Such people gain a willingness to see the good in people with customs different from their own.

The same can be said of computer users who have experience with multiple operating systems and multiple tool sets. People who use only one operating system, and one set of tools, generally don’t have as deep an understanding of computing or computers as do people who have wide experience with several operating systems and several different tool sets. A specialist may have a deeper understanding of a particular field, but their overall understanding of computing in general may be limited. This limitation traps them in a series of narrow-minded prejudices which are both rude and limiting. It is hard for them to make good choices, because they don’t understand the options open to them.

There has long been a general prejudice in favor of people with a cosmopolitan or broad outlook and against people who have a parochial or narrow outlook. The reason a term like hick or yokel is considered derogatory is because people from rural areas who have not seen much of the world tend to have restricted or narrow points of view. For instance, there is something innately comic about a rural farmer from 100 years ago who lived off collard greens, chitlins and pigs feet reacting with disgust to the thought of a Frenchman eating snails. The joke was threefold:

  • Chitlins and collard greens are themselves exotic foods. There is something innately comic about people with exotic tastes making fun of someone else for having exotic tastes.

  • Though southern cooking can be delicious, it was not uncommon to see chitlins and collards prepared poorly, while French escargot, as a rule, was a delicacy prepared with exquisite refinement by some of the best cooks in the world.

  • The final, and most telling part of the joke was that southern cooking in general probably owed as much to French cooking as to any other single source. By deriding the French, our hapless yokel was unintentionally deriding his own heritage.

Most programmers start out using a particular computer language, such as Java, VB, C++ or Pascal. At first, their inclination is to believe that their language is the only "real" language, and that all other computer languages are "dumb." Take for instance, a deluded Visual Basic programmer who tries to use a PRINT statement in C++, finds that it won’t compile, and comes away thinking that C++ is a hopelessly crippled language. The truth of the matter, of course, is that C++ does support simple IO routines like PRINT, but the syntax in C++ is different than in VB.

This kind of narrow computer prejudice is similar to the viewpoint of our rural farmer from a hundred years ago who is suddenly transplanted to Paris. She goes home and tells everyone that there is nothing to eat in Paris. "They just don’t serve real food there. They think we are supposed to live off snails!" Or perhaps she concludes that Frenchmen are cruel because they laughed when she started ladling up the flowers from her finger bowl with a spoon. What she forgets, of course, is that everyone back home in Muskogee will laugh at a Frenchman who tries to eat corn on the cob with a knife and fork.

There is an interesting moment in the life of many developers when they start to understand parochial computing. As stated above, programmers tend to start out by getting to know one particular language in great depth. To them, their language is the computer language, and all other languages pale in comparison.

Then one day, disaster strikes. The boss comes in and tells them that they have to work on a project written in a second language, let’s say Java. At first, all one hears from our hapless programmer is that Java "sucks." They are full of complaints. "You can’t do anything in this language. It doesn’t have feature X, it uses curly braces instead of "real" delimiters, the people who wrote this language must have mush for brains!"

Then, over time, the complaints lessen. After all, you can type a curly brace faster than the delimiters in their favorite language. That doesn’t make Java better than the developer’s favorite language, but it "is kind of convenient, in a funny kind of way." And after a bit, they discover that Java doesn’t support a particular feature of their favorite language because Java has another way of doing the same thing. Or perhaps the feature is supported, but the developer at first didn’t know where to look to find it. Of course, they are still heard to say that Java isn’t nearly as good as their favorite language, but the complaints lack the urgency of their initial bleatings.

Finally, after six months of struggling on the Java project, the big day comes: the developer has completed his module and can go back to work on a project using his favorite computer language. But a funny thing happens. At first, all goes swimmingly. How lovely it is to be back using his favorite editor and favorite language! But after an hour or so, curses start to be heard coming from his cube. "What’s the matter?" his friends ask. The programmer inaudibly mumbles some complaint. What he does not want to give voice to is the fact that he is missing some of the features in the Java language. And that Java editor, now that he comes to think of it, actually had a bunch of nice features that his editor doesn’t support! Of course, he is not willing to say any of this out loud, but a dim light has nonetheless been lit in the recesses of his brain.

Perhaps, if he is particularly judicious and fair minded, our newly enlightened programmer might suddenly see that though his language enjoyed some advantages over Java, Java was in some ways better than his own language! It is precisely at that moment that he begins to move out of the parochial world of prejudice and into the broader world of cosmopolitan computing.

The OS Bigot

The type of narrow viewpoint discussed here has no more common manifestation than in the world of operating systems. We have all heard from Microsoft fanatics, who, when asked to defend their OS, say: "There are more Microsoft users than users of all other operating systems combined." Yes, that is true, but it is also true that there are more people in India than in the United States. But believe me, there are few Americans who want to go live amidst the poverty, technical backwardness, and narrow provincialism of even a "thriving" Indian city such as New Delhi.

Microsoft users might also complain that it is hard to install competing OS’s, such as Linux. When asked to defend their point of view, they will eventually confess that their opinion is based on experiences that they had some five years earlier, when it was in fact true that most Linux installations were difficult. Today, Linux usually installs more quickly, and with much less fuss, than Windows.

Of course, people on the other side are no less narrow minded. A Linux install may be simpler and faster than a Windows install, but Linux typically does not have as good driver support, particularly for new devices. Thus it is not unusual for a Linux user to have no trouble with his video and sound cards, but to have to work to get his CD burner or scanner functioning.

It is true that the Windows GUI environment is still better than the one found in Linux. But the advantage seems to shrink not just with each passing year, but with each passing month. For the last year, and for most of the last two years, the KDE Linux environment has been at least as good as the GUI environment found in Windows 98, and in some areas it is superior to that in Windows XP.

Conversely, just as Windows has a slight advantage in the GUI world, Linux has long enjoyed a significant advantage when working at the command prompt. A typical Windows user will say, "So what? Who wants to work at the command prompt?" That’s because they are used to using the Windows command prompt, which has historically been very bad. But watching a skilled user work at the command prompt in Linux can be a revelation. There are things you can do easily with the BASH shell that are hard, or even impossible, to do with the Windows GUI. But in recent years, even this truism has been shown to have its weaknesses. The command prompt in Windows XP is much improved over that found in Windows 98 or Windows 2000, and the porting of scripting languages such as Python and Perl to Windows has done much to enhance life at the Windows command prompt.


Linux users often argue that their software is free in two senses of the word:

  • It has zero cost

  • And it comes with source and can be freely modified

All that is true, but Windows has a wider range of available applications. Who would deny that there is a very real sense of freedom that one gets from using a beautifully designed piece of software?

And yet, if you are a student, or an older person on a limited income, you might not be able to afford all that fancy software. In such cases, you might be better off using Linux, where you can easily find free versions of the tools you need.

Again, one might read the above and come to the narrow conclusion that proprietary software is always better than open source software. But this is not always true. For instance, Mozilla is clearly a much better browser than Internet Explorer. It more closely conforms to the HTML standard, it handles popups better, it has a better system for handling favorites, and it has a feature, tabbed windows, that gives it a massive usability advantage over IE.

On the other hand, there is simply nothing in the open source world to compare to a tool like DreamWeaver. There are probably a hundred different open source web editors, but only the HTML editor in OpenOffice provides even the rudimentary features found in DreamWeaver.

The Historical Perspective

The ultimate irony, of course, comes when a person with a limited perspective imitates another culture, and goes about crowing about this borrowed sophistication as if he invented it himself.

I used to do this myself, back when I promoted Delphi for a living. Unknowingly, I often championed features in Delphi that were in fact borrowed from VB. I would say, Delphi is better than VB because it has feature X. I didn’t know that VB not only had the same feature, but that the creators of Delphi had in fact borrowed the feature from VB.

I have seen the same thing happen when advocates of C# crow about how much better it is than Java, and then use one of the many features that C# borrowed from Java as proof of the fact. The same often happens when a user of a DotNet based application approaches a Linux user and shows off the great features in their product. The fact that not only the feature, but the entire product and its architecture was stolen directly from an open source application written in PHP is of course lost on the advocate of DotNet’s prowess.

In fact, it is generally true that Microsoft is a company that uses derived technologies. DotNet is just an attempt to emulate the features found in Java and PHP. C# is for the most part simply an imitation of Java with a few features from Delphi thrown in for good luck. IE is an imitation of the features found in the old Netscape browser. The Windows GUI is an imitation of the Mac GUI.

One of the signs of a cosmopolitan person is that they have an historical perspective, and can know something about where cultural habits originated, or from which sources they were derived. A provincial person thinks not only that his culture is best, but that his country invented the very idea of culture.

Of course, one should rise above even this insight. It is true that Microsoft is a company based on borrowed ideas. But Microsoft does a good job of borrowing technology. The old joke states that Microsoft begins by deriding new inventions, then imitates them, and ends up claiming they invented them. But what people forget is that Microsoft often does "reinvent" technologies in a meaningful way by implementing them very well, and by adding special touches that improve upon the original product.

So the correct perspective is to recognize that derivation lies at the heart of Microsoft technology, but to also recognize their technical expertise. Gaining that kind of nuanced world view is part of what it means to be a sophisticated computer user. Knowing such things can help you make informed decisions, rather than decisions based on prejudice.


Ultimately the kind of narrow prejudice found among advocates of single platforms or single technologies offers a frighteningly restricted world view. Such people are indeed a bit like a hick or yokel from 100 years ago who arrives in the big city and feels overwhelmed by a kind of sophistication that they had never imagined and cannot comprehend. They dislike the big city not only because it is different, but because it threatens them. They are suddenly a small fish in a big pond, and from the heart of their insecurity, they begin to mock the city sophisticates who swim in the urban sea.

This is not to say that our yokel might not have cultural advantages over a "snob" from the big city. For instance, it is well known that rural farmers in America 100 years ago were renowned for their friendliness. It is true that such people often worked together to help a neighbor through a tough time, and they often worked together and shared resources in ways that their friends from the big city could not even imagine, let alone imitate. And of course they would have a specialized knowledge of how to survive in their rural world that the Parisian could not match.

The key difference, of course, is that a truly cosmopolitan person could have the perspective to appreciate all this, while a person from a rural area would be more inclined to adopt a narrow, provincial point of view. The cosmopolitan person could admire both Parisian society, and rural America.

This is the perspective that Alexis de Tocqueville brought to his book Democracy in America. Alexis de Tocqueville understood both European culture, and American culture, and that gave him the insight needed to write so trenchantly about American society.

The mark of the cosmopolitan is that she will:

    • Be gracious enough to help, without condescension, foreigners who are unfamiliar with the customs of her land.

    • Have enough perspective to laugh good-naturedly at herself when caught out not knowing the customs of a foreign land.

    • Have the perspective to see what is truly best in any one culture because her perspective is broad and informed.

A cosmopolitan person has these traits instinctively, and without self consciousness. She knows that each land has its own customs, and that deep down where it counts, people are the same when it comes to matters of the heart and soul. They may have different habits, but it is narrow minded, provincial, even parochial, to regard people with a different perspective as innately inferior to oneself.

Software developers who have broken out of the narrow prejudices formed when using their first language and first OS have the same advantages. They know what is best in multiple worlds, and therefore have the wisdom to search for those features on whatever platform they use. They don’t waste time embarrassing themselves by making snide, narrow-minded comments that polite people can’t even correct without sounding condescending or unintentionally hurting someone’s feelings. They have gained a sophistication, and a broader perspective, that makes them better at everything they do, regardless of their toolset.

What is Aspect Oriented Programming?

This article will give you a brief introduction to Aspect Oriented Programming using an upcoming library product that’s being developed at RemObjects. The project is currently codenamed “RemObjects Taco”. Taco is a library that will enable you to leverage concepts of Aspect Oriented Programming (AOP) in your .NET applications. Unlike other aspect oriented tools available, Taco is a language-independent library and will allow you to both use and implement aspects using the .NET language of your choice.

Aspect Oriented Programming is based on the concept of expanding and specializing class implementations, not by extending their code or using traditional inheritance models, but by attaching pieces of code, called Aspects, to them.

Assume that you have a fairly extensive class with many methods, and you now are faced with the task of applying a certain layer of logic to that class. This layer could involve thread synchronization, security checks, or something as simple as logging all method calls to a log file for debugging or audit purposes.

If the class in question is fairly extensive and contains a large number of methods, adding code for that purpose to each and every method would be a huge amount of work. It would also involve adding a lot of duplicate code to your library, and adding it in places where it does not really belong. (After all, a method should focus on the task at hand and should not be weighed down with external “plumbing.”)

With Aspect Oriented Programming, you implement your logic in a separate Aspect class, independently of the class (or classes) you will later want to augment. Once the aspect is implemented, you can easily attach it to any given class in your class library, and your logic will be applied to all (or selected) calls made into the class. Neither the class nor the caller will need to worry or even be aware of the aspect. For instance, you can use this technology to add a Critical Section; code for checking the user’s access rights; or code for writing data to a log file. All of this will be implemented in a separate class called an Aspect and will not clutter up your primary code.
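Taco itself is a .NET library, but the attach-an-aspect idea can be sketched in Python (a hypothetical illustration, not Taco’s API): the logging logic lives in one place and is applied to a whole class at once, leaving the methods themselves untouched.

```python
import functools

def logged(cls):
    """Class decorator: wrap every public method with logging,
    without touching the methods themselves."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("_"):
            def make_wrapper(method, method_name):
                @functools.wraps(method)
                def wrapper(self, *args, **kwargs):
                    print(f"calling {method_name} with {args}")
                    return method(self, *args, **kwargs)
                return wrapper
            setattr(cls, name, make_wrapper(attr, name))
    return cls

@logged
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount
```

Every call to deposit is now logged, yet neither Account nor its callers mention logging anywhere.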

An Example

Let’s look at an example to illustrate this concept by implementing an aspect that performs thread synchronization. Taco already comes with a prebuilt Synchronize aspect that provides this functionality with a lot more flexibility than the example shown; but for the purposes of this article, let’s assume that thread synchronization represents some custom logic you want to implement yourself.

Let’s assume that you have a (completely contrived) MyData class already implemented. While scaling up your application to be multi-threaded, you find that it would be helpful if the MyData class was thread-safe (which the current implementation isn’t). Here is your existing code for the MyData class:


type
  MyData = class(MyBaseClass)
  private
    fValue: integer;
  public
    method Calculate;
    property Value: integer read fValue write fValue;
  end;

method MyData.Calculate;
begin
  fValue := (fValue+3)*5;
end;


To make even this simplistic class thread-safe using conventional programming techniques would involve an amount of code that far exceeds the current class implementation – you’d have to

  • add a private field to hold a CriticalSection or Mutex
  • add a constructor to initialize the critical section
  • add calls to CriticalSection.Enter/Exit with corresponding try/finally blocks to all methods
  • add getter/setter methods for the property so that you could acquire the critical section for property access.
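Those four steps, sketched in Python (with a threading.Lock standing in for the critical section; the class is from the article, the rest is illustrative), show how the plumbing dwarfs the one line of real logic:

```python
import threading

class MyData:
    def __init__(self):
        self._lock = threading.Lock()   # steps 1 & 2: field plus initialization
        self._value = 0

    def calculate(self):
        with self._lock:                # step 3: acquire/release around the body
            self._value = (self._value + 3) * 5

    @property
    def value(self):                    # step 4: getter/setter just to lock access
        with self._lock:
            return self._value

    @value.setter
    def value(self, v):
        with self._lock:
            self._value = v
```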

In contrast, using Aspect Oriented Programming and assuming that you have implemented your Synchronize aspect, you would add exactly one line of code to your class definition and it will be thread-safe automatically:



type
  [Synchronize]
  MyData = class(MyBaseClass)
  private
    fValue: integer;
  public
    method Calculate;
    property Value: integer read fValue write fValue;
  end;

method MyData.Calculate;
begin
  fValue := (fValue+3)*5;
end;


With the exception of the [Synchronize] attribute added to the class declaration, the code is completely identical to the original version. The individual methods and properties are unchanged and not cluttered with synchronization code.

Implementing the Aspect

Now that you’ve seen how to augment a class with an aspect, let’s take a look at what’s involved in writing a custom aspect (in this case, we’ll implement a simple version of the Synchronize aspect used above).

Taco provides a base class (RemObjects.Taco.Aspect) for you to descend from to implement your own aspects. All we need to do is create a descendant, instantiate a Mutex object, and implement the PreprocessMessage and PostprocessMessage methods to acquire and release the Mutex, respectively:


type
  SynchronizeAspect = assembly class(Aspect)
  private
    fLock: Mutex := new Mutex();
  protected
    method PreprocessMessage(aMessage: CallMessage); override;
    method PostprocessMessage(aMessage: ReturnMessage); override;
  end;

method SynchronizeAspect.PreprocessMessage(aMessage: CallMessage);
begin
  fLock.WaitOne();
end;

method SynchronizeAspect.PostprocessMessage(aMessage: ReturnMessage);
begin
  fLock.ReleaseMutex();
end;




The PreprocessMessage method of your aspect will be executed prior to any call into the classes augmented with your aspect and the PostprocessMessage method will be executed after any of the calls return. This happens whether they return successfully or were aborted via an exception. Note that the aMessage parameter also gives you access to details about the call being made (such as which object is being called, which method, and what parameters are being passed or returned.) While this simple aspect didn’t need this information, it is available for more complex aspects.
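The same pre/post hook shape can be sketched outside .NET. Here is a hypothetical Python analogue (not Taco’s mechanism, which uses .NET message sinks): an aspect is attached to a class, its preprocess hook runs before every public call, and its postprocess hook runs afterward, whether the call returned normally or raised.

```python
import functools

class Aspect:
    """Base aspect: subclasses override the two hooks."""
    def preprocess(self, method_name, args):
        pass

    def postprocess(self, method_name):
        pass

def apply_aspect(cls, aspect):
    """Attach an aspect to every public method of cls."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("_"):
            def make(method, method_name):
                @functools.wraps(method)
                def wrapper(self, *args, **kwargs):
                    aspect.preprocess(method_name, args)
                    try:
                        return method(self, *args, **kwargs)
                    finally:
                        # runs whether the call returned normally or raised
                        aspect.postprocess(method_name)
                return wrapper
            setattr(cls, name, make(attr, name))
    return cls

class TraceAspect(Aspect):
    """Records the order of pre/post events for every call."""
    def __init__(self):
        self.events = []

    def preprocess(self, method_name, args):
        self.events.append(("pre", method_name))

    def postprocess(self, method_name):
        self.events.append(("post", method_name))
```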

How Does this Work?

Under the hood, Taco uses .NET’s messaging architecture, which is also used by .NET Remoting, to enable the injection of code. Basically, every method or property call made to your augmented object will be run through a number of Message Sinks before reaching the actual method. Taco aspects hook into this list to execute the logic you provide in the PreprocessMessage and PostprocessMessage methods.

Converting the method call to a message that can be processed by your aspect and back does of course introduce a small amount of overhead. This hit, however, is not serious. The overhead is comparable to calling an object from a different AppDomain (which basically uses the same technique) or a COM object hosted in the COM+ runtime. For normal “business logic” type object hierarchies, this overhead will be negligible, but you probably would not want to use AOP inside the core rendering engine of your new first person shooter game!

The above code snippets are written in Chrome, but the same principles apply to other .NET languages.

Please also note that Taco is currently in an early alpha state, so the exact class interfaces and syntax shown in the code snippets above might still be subject to change before public release. If you’re interested in joining the Taco beta program when it becomes available, please drop me a mail.

Further Reading

Hopefully this article has given you a quick introduction to Aspect Oriented Programming and has given you a sense of the scope of what Taco will provide for .NET developers seeking to use AOP.

The links below provide some more general information on AOP:

  • AOP: Aspect-Oriented Programming Enables Better Code Encapsulation and Reuse — MSDN Magazine, March 2002
  • Aspect-Oriented Software Development Community

One Reason Nick Hodges Doesn’t Quite Get OOP

Nick Hodges has written an entertaining article on what he perceives as the failings of the Microsoft .NET team’s attempt to design and code an object-oriented framework. Along the way he takes a few additional swipes at the C# language.

In this article I could have outlined my disagreements with Nick’s specific allegations about the Framework, or I could have talked about the sheer difficulty of writing a complex framework, or I could have explained how cross-language cultural issues make using a different framework difficult. However, I decided instead to focus on one paragraph from Nick’s article:

"Maybe someday someone can explain to me why so many classes in the FCL are marked sealed. Shoot, why is it even possible to ‘seal’ a class. What the heck is that all about? Who are you to say I can’t improve or enhance your class? If your class somehow needs to be sealed, then I say you have a design problem. Now, despite the fact that most of your OOP languages include the ability to “seal” a class — C#, C++, Smalltalk — I am undaunted in my view. I was hoping that the FCL designers would be the ones to see the light and let me descend from the String class. Shoot, you can’t swing a dead cat in the FCL without hitting a sealed class that desperately needs enhancing."

Let’s focus in on the real issue: Should a modern object-oriented language allow classes to be sealed, and thereby bar subclassing from them?

The answer to this question involves a detour into designing libraries. The success of modern OO languages is due, in part, to the ability to use libraries for the development of large-scale systems. In an ideal world, those libraries should be secure, reusable, well-tested, and performant. In the real world, they sometimes miss the mark, but we can at least hope that they are reusable.

To be reusable, a modern OO library depends on the pillars of OOP: encapsulation, polymorphism, and inheritance. Since the development of Java, inheritance has been largely replaced by delegation or composition. Indeed, way back in 1995, the Gang of Four said this: Favor object composition over class inheritance. (Page 20 of Design Patterns. It’s one of the two principles on which the rest of the book depends.)

Encapsulation is an important principle for libraries since it enables the writers of the library to hide the functional implementation of their classes and methods. This in turn means that classes can guarantee that the data they hide can only be changed by methods of the class itself. If you use the Design by Contract pattern — and you should — then you will always be sure that the parameters to your methods are valid. But you only need to apply the contract to outward-facing methods. The inner private or protected methods don’t need to obey the contract because they are only called from code you control and own.
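As a sketch of that division of labor (a hypothetical class, not from any library): the outward-facing method enforces the contract, while the private helper trusts its callers.

```python
class Inventory:
    def __init__(self):
        self._counts = {}

    def add_stock(self, item, quantity):
        # Outward-facing: enforce the contract on callers we don't control.
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self._add(item, quantity)

    def _add(self, item, quantity):
        # Internal: the only callers are our own code, so no re-validation.
        self._counts[item] = self._counts.get(item, 0) + quantity
```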

Since your code is the only code that can write to the class’ private fields you automatically make the class easier to test, make its behavior easier to predict and document, and make the methods easier to profile and optimize.

Another great benefit of encapsulation is a strong contract with the outside world: Here is this class and here’s the interface to it (defined as a set of methods, properties, and events). The class is a black box with certain well-defined knobs and switches on it. The maintenance programmer at the library vendor who has to fix/extend the class in some way has one of two possible avenues to explore (although they can overlap):

  1. An internal change to the implementation, or
  2. A change to the interface.

The first can be done almost with impunity so long as the published behavior doesn’t change (encapsulation means never having to say you’re sorry for an internal change). The second is a contract-breaker and the maintenance programmer has two possible solutions: make the breaking change and suffer the slings and arrows, etc., or possibly write a new class altogether (the old "Ex" suffix solution). Both are nasty.

There is a great problem with encapsulation, though: inheritance, one of the other great principles of object-orientation (although, as I mentioned above, somewhat deprecated these days).

Consider this from the library writer’s point of view. You must write a base class that encapsulates some behavior and you want to make it extensible so that some unknown programmer in the future can subclass it in some unknown way. You know that encapsulation is good; however, you have a unique problem: you must break encapsulation in order to provide override points for the subclasser. You look surprised, perhaps. Yet, why otherwise have the protected keyword? The very existence of this keyword means that encapsulation is being broken, albeit for the limited use of someone who will be subclassing the base class (which in reality means everyone).

All of a sudden, this class no longer has this strong encapsulation contract with the rest of the world. You have to expose — to a certain extent — how you are implementing the class. A corollary is that you have to provide a weaker contract to the subclasser: I promise not to change the implementation of my class "too much", with some hand-wavy gesture.

But it doesn’t stop there. As soon as you expose part of the implementation of the class, you’ll be opening the door to someone who will say: you know, it’s nice that this class is subclassable, but I really need access to this little private field for my own derived class. Please? Pretty please?

Of course, another problem to solve is how to fit the extensibility points for polymorphism into your base class by marking some methods as virtual. (Java has the opposite problem: since all methods are virtual by default, which do you mark final? Or do you just ignore the issue?) Since virtual methods are known to be slower at calling (there’s a double redirection going on) you don’t usually want to go the whole hog and mark all protected/public methods as virtual. All that will do is to bring down the ire of the premature optimizer.

We used to wrestle with this constantly at TurboPower. For at least one product, we even went to the extent of having a compiler define that switched all private sections to protected ones, just because we didn’t know how to solve the "expose part but not all" inheritance problem. And I think we were fairly intelligent people. It’s just that the problem of designing a class hierarchy or framework that can efficiently be extended by third-party programmers is hard. And then you have to document it, hopefully well enough that those third-party developers can understand how to extend your base class.

There is another problem (another? you’re nuts: writing libraries is easy, dude) that, frankly, not many programmers appreciate or even care about. That is one of security. You see the whole point of polymorphism is that you can pass around objects that look like BenignBaseClass instances but are in fact HostileDerivedClass instances. Every time you implement a method in your library which takes an instance of BenignBaseClass, you must ensure that the method is robust in the face of potentially hostile instances of derived types. You cannot rely upon any invariants which you know to be true in BenignBaseClass, because some hostile hacker might have subclassed it, overridden the virtual methods to screw up your logic, and passed it in. Evil laughter.
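That attack takes only a few lines to sketch (hypothetical classes): the library method relies on an invariant that a hostile subclass has quietly overridden away.

```python
class BenignBaseClass:
    def percentage(self):
        # Documented invariant: always returns a value in 0..100.
        return 50

def library_method(obj: BenignBaseClass):
    # Relies on the invariant; never re-checks it.
    return 100 - obj.percentage()

class HostileDerivedClass(BenignBaseClass):
    def percentage(self):
        return -1000   # the "invariant" is gone

print(library_method(HostileDerivedClass()))  # far outside the assumed 0..100 range
```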

Between a rock and a hard place, eh? In essence you just can’t have pure encapsulation and unrestricted inheritance. It just doesn’t work like that; never has done. Fooey to those pillars of old-style object-orientation, welcome to compositional object-orientation. The King is Dead, Long Live the King.

Don’t use inheritance unless you are writing a self-contained set of classes in your library or framework. I now use inheritance so infrequently that I always seem to have to reread the C# Programming Language book to understand how to call the base class’ constructors. Go with what the Gang of Four were saying 10 years ago (as Delphi 1 was just coming out): prefer composition over inheritance. Of course, for that your library or framework has to be designed around interfaces, and that takes some mental acuity or you won’t get the abstractions right. It’s not as hard as determining extensibility points of your base classes, but still challenging.
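Here is a minimal sketch of that compositional, interface-driven style (hypothetical names, with Python’s abc module standing in for interfaces): the Report is composed with a formatter rather than inheriting from one.

```python
from abc import ABC, abstractmethod

class Formatter(ABC):
    """The 'interface': any formatter must be able to render data."""
    @abstractmethod
    def render(self, data): ...

class CsvFormatter(Formatter):
    def render(self, data):
        return ",".join(str(x) for x in data)

class Report:
    def __init__(self, formatter: Formatter):
        self._formatter = formatter   # composed, not inherited

    def run(self, data):
        return self._formatter.render(data)
```

Swapping output formats means passing a different Formatter; no Report subclass is ever needed.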

And since your library or framework users are modern OOP programmers, they understand the issues and welcome being able to use interfaces, and you can seal your classes, at least those that you determine should not be extended. Enforce encapsulation, it’s the strongest of the pillars. After all, in C# at least, if you get it wrong (and one of your users comes up with the canonical case for allowing inheritance to work), you just unseal the class. It’s a non-breaking change. (The reverse is not true: someone might have written a derived class.)

Admittedly there is a tradeoff here.  On the one hand you have the developers who want to save a little development time and effort by treating any old object as a "bag o’ fields" (if it has some methods, w00t, bonus!), and on the other hand you want to design and implement a fully-featured, robust, secure, predictable, testable library in a reasonable amount of time. The latter will certainly involve sealing classes that you don’t want developers subclassing for whatever reason.

Sealing classes is a perfectly valid thing to do. Throw away those awkward frameworks based on class inheritance. Move away from class inheritance and toward interface implementation and composition. The grass is definitely greener over here.

Microsoft and OOP

I have had this theory for quite a while that the Microsoft community – both inside and outside of the company — doesn’t quite get objects. I think they mostly get it — .Net wouldn’t be what it is if they didn’t — but there are just so many places where things just aren’t quite right that I think that overall, they just don’t quite get it. Now, I’m quite aware of the arrogance implicit in that statement, and I am quite aware that the comments that will follow this article will no doubt question my intellectual capacity, but I’m going to plow ahead anyway. What the heck.

I guess I can’t say for sure why I have this theory; it’s just something that sticks in the back of my mind every time I talk to a Microsoft-type person. I’ve been asked “What do you need an object for?”. They’ve said things like “VB6 is object-oriented” and “Oh, we can do that just as fast without objects”. I’ve heard “You don’t need polymorphism to be object-oriented.” (huh?) My theory is further bolstered as I work with .Net’s Framework Class Library (FCL). (Maybe someday someone can explain to me why so many classes in the FCL are marked sealed. Shoot, why is it even possible to “seal” a class. What the heck is that all about? Who are you to say I can’t improve or enhance your class? If your class somehow needs to be sealed, then I say you have a design problem. Now, despite the fact that most of your OOP languages include the ability to “seal” a class — C#, C++, Smalltalk — I am undaunted in my view. I was hoping that the FCL designers would be the ones to see the light and let me descend from the string class. Shoot, you can’t swing a dead cat in the FCL without hitting a sealed class that desperately needs enhancing. Oh well.)

But don’t get me wrong, I’m quite happy to say that, despite some irritating anomalies, the FCL and the rest of the .NET framework have been a big jump forward for MS in terms of their embrace of OOP — but it sure took them long enough. (For the sake of my sanity, I pretend that MFC isn’t really an OOP framework.) They are only about eight years behind Delphi and the VCL. That’s eight years of maturity that isn’t present in the framework. Nevertheless, despite its depth and scope, the FCL has a lot of quirks that indicate the folks in Redmond still don’t quite get it.

For instance, why is there a separate Connection class for each database type in ADO.NET? OracleConnection, SQLConnection, OLEDBConnection – one for each database! And you can only connect a SQLDataAdapter to a SQLConnection. If ADO.NET were properly designed, like, say, oh, I don’t know, the Borland Data Provider architecture is, then the concept of a “Connection” would be properly abstracted out as a single object that could be interchanged or replaced based on the back-end database. If ADO.NET is supposed to abstract out data access, why aren’t the base classes database independent? Why do I have to use Oracle-specific enumerations with OracleConnection and SQL Server-specific enumerations with SQLConnection? I’ll tell you – because ADO.NET isn’t designed properly, that is why. Someone somewhere along the line didn’t quite get it. The interfaces are there for ADO.NET to be programmed against, but the connection classes in ADO.NET fail to take advantage of them properly. IDBConnection has a ChangeDatabase method – why can’t I change from an Oracle database to a SQL Server one?
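For illustration, here is the shape being asked for, sketched in Python with hypothetical classes (not the actual ADO.NET or BDP APIs): one abstract connection, with back ends that can be swapped without touching application code.

```python
from abc import ABC, abstractmethod

class Connection(ABC):
    """One abstract connection; back ends are interchangeable."""
    @abstractmethod
    def execute(self, sql): ...

class OracleConnection(Connection):
    def execute(self, sql):
        return f"oracle ran: {sql}"

class SqlServerConnection(Connection):
    def execute(self, sql):
        return f"sqlserver ran: {sql}"

def load_customers(conn: Connection):
    # Application code depends only on the abstraction,
    # so swapping databases requires no changes here.
    return conn.execute("SELECT * FROM CUSTOMERS")
```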

One of the purported great things about the FCL is the extensive use of interfaces, but I keep running into places where an interface sure would be nice, but isn’t there. The example that brought this to mind recently for me was the System.Web.UI.WebControls.Style class. The Style class allows you to set properties — Bold, Underline, Font, etc. — and have those values rendered as part of an ASP.NET control. Well, I was building a control that needed a very specific type of Style, but the problem I quickly ran into was that I didn’t want all of the properties of the Style class to be part of my new Style – in this case it was the various Border related properties. The problem, of course, is that the whole ASP.NET component architecture assumes that any and all styles for a control will descend from the Style class, and if the Style class has stuff attached to it that you don’t want, then too bad for you.

Wouldn’t it have been better if instead of a ControlStyle property, which must take a Style class or one of its descendants, there were an IStyle interface that knew how to extract a style string, and which let component developers implement it however they like? It might look as simple as this:

IStyle = interface
  function GetStyleString: string;
end;

I’m designing off of the top of my head here, but such an interface would allow me to design any class I like to provide the styles for my components. When it comes time to apply the style, the control could just call the GetStyleString method and add it to the style=”whatever” tag of my control and there you have it. It would be up to me to ensure that the string was properly formatted, and I could have any style settings that I please. Instead, in order to get the styles that I want, I have to hack on my own style classes, foregoing the alleged advantages of the FCL.
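That proposal can be sketched in Python (hypothetical names; abc stands in for a .NET interface): any class that can produce a style string qualifies, and border properties are entirely optional.

```python
from abc import ABC, abstractmethod

class IStyle(ABC):
    @abstractmethod
    def get_style_string(self) -> str: ...

class BorderlessStyle(IStyle):
    """A custom style, free to omit any properties it doesn't want."""
    def __init__(self, bold=False, color="black"):
        self.bold = bold
        self.color = color

    def get_style_string(self):
        parts = [f"color:{self.color}"]
        if self.bold:
            parts.append("font-weight:bold")
        return ";".join(parts)

def render_control(style: IStyle):
    # The control only needs the string; how it's built is the style's business.
    return f'<span style="{style.get_style_string()}">...</span>'
```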

I’m not saying that the FCL sucks – far from it. But I am saying that I run into situations like this one more than I should. How about this – try reading in a text file, altering the third line of text in the file, and then writing it back out again. In the VCL, that is about four lines of code. In the FCL, it’s a bit tougher. You have to create Readers and Writers and Lord knows what else. What a hassle. Why not a neat object to do that?
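For comparison, the same task really is only a handful of lines in some environments; here is a Python sketch of the kind of terseness being asked for.

```python
from pathlib import Path

def replace_third_line(path, new_text):
    """Read a text file, alter its third line, and write it back out."""
    lines = Path(path).read_text().splitlines()
    lines[2] = new_text                      # third line = index 2
    Path(path).write_text("\n".join(lines) + "\n")
```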

More ADO.NET complaints: Why is it so tough to get a data value out of a table? I have to write:

CustomerID := Convert.ToInt32(MyDataset.Tables[0].Rows[0]['CUSTID']);

when the above code is crying out to be

CustomerID := MyDataset.Tables[0].Rows[0]['CUSTID'].AsInteger;

Or, in other words, clearly a field value in a row of a DataTable should be an object, with methods attached to it to convert it to whatever it needs to be. Like the VCL has been doing since, oh, 1995. OOP code is supposed to reduce the amount of code that you have to write by encapsulating common functionality. That isn’t happening in the above code, that’s for sure. Heck in general, it always seems like I have to write way too much code in my ADO.NET applications.
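The field-as-object idea described here might look like this in Python (a hypothetical sketch, not any real data-access API): the raw value carries its own conversions, so callers never reach for an external Convert class.

```python
class Field:
    """Wraps a raw field value and carries its own conversions."""
    def __init__(self, raw):
        self._raw = raw

    @property
    def as_integer(self):
        return int(self._raw)

    @property
    def as_string(self):
        return str(self._raw)
```

With that in place, `row["CUSTID"].as_integer` reads the way the article wants it to.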

(And while I’m at it, surely I am not the only one who finds the complete lack of the concept of a current record in ADO.NET a glaring omission. I’m not, am I? Oh, sure, you can get an (unfortunately named) CurrencyManager from a visual control, but then of course your cursor is coupled with the user interface. That’s plain wrong.)

Now look, I know that the FCL is huge, and it’s a conglomeration of the work of hundreds if not thousands of programmers, and no doubt some of them have a better grasp of OOP principles than others. But there just seems to be enough of these little quirks in it to make me wonder if Microsoft doesn’t quite get it. It’s the little things that always add up. But hey, I suppose that when the FCL is as mature and refined as the VCL, it will probably have worked out this kind of thing. Only about eight more years to go.