More Developer Economics

My CodeFez colleague, Nick, just posted an interesting editorial on the economics of developer tools, particularly with regard to Borland. Nick points out that businesses are justified in maximizing profits versus, say, maximizing revenue or expanding market share. While this may be true (and I don’t claim to be any more of an economist than Nick), this business strategy brings with it a number of other market forces that cannot be ignored. Most notably, maximizing profits may significantly increase the volatility of the revenue stream. In addition, deliberately shrinking the market – particularly in developer tools – reduces the total market for the tool, and ultimately reduces demand.

Increased Revenue Volatility

Just for the sake of argument, let’s say my company has 100 customers. Let’s say it then turns out that my support costs are sky-high, so I decide to focus on maximizing profit by increasing the price of my product. In this way, I know fewer people will buy my product but I also know that revenue per unit will be higher and support costs will be lower. Again, just to work with round numbers, let’s say this shrinks my customer base to 10 very profitable clients. The problem this presents is now my revenue stream is much more volatile. With my original base of 100 customers, losing 1 customer means the loss of only 1% of my business. However, with the smaller customer base of 10 the cost of losing each customer is much greater: 10% of my revenue. Of course, these numbers are totally contrived, but the concept behind them is solid: the fewer customers you have, the less you can afford to lose them.

Shrinking Demand

A smaller market might be fine for some kinds of products. Fine wines or luxury cars, for example, are markets designed for the high margin/great service/few customers business model. However, a small market is not a good thing for developer tools. In developer tools, market share is king. While a small portion of the developer tools market will stick to their favorite tool, popularity be damned, the vast majority of the tools market will tend to follow the trends – toward those tools and technologies that organizations believe have the best longevity and that developers feel will keep them best employed. At the end of the day, this translates into continually decreasing demand for the smaller market tools – a sort of death spiral into sub-one-percent-market-share oblivion.

In short, I’m not as fond as my good friend, Nick, of the Borland Delphi pricing strategy. I enjoy Borland tools, but it’s clear that the growth of products like MS Visual Studio.NET and Eclipse is occurring at the expense of the higher-priced Borland offerings. I would prefer to see a strategy that more effectively balances growth with profitability.

Developers and Economics

I never cease to be amazed at how little the average developer knows about economics. I mean, I don’t claim to be an expert, but I have taken a college class or two and read up on the basics. Even just understanding the basics, though, gives one surprising insight into why things happen the way they do in the marketplace.

For instance, we are in the process of hiring a new developer at my company. We figured out what qualifications we were looking for and determined about how much salary we wanted to pay. It quickly became apparent, however, that we weren’t offering a high enough salary to attract the caliber of candidates we wanted. So what I did was to go on the newsgroups and start whining about not being able to find any good Delphi programmers. Okay, no, I didn’t really do that. What we did, of course, was to increase the salary that we were offering. It was a simple supply and demand issue: there wasn’t enough of a supply of good Delphi programmers at the price we wanted to pay, so the solution was to be willing to pay more – a no-brainer decision, really. Once we did that, we found plenty of qualified candidates. Simple economics.

One common area that I see developers totally misunderstand is that of Delphi pricing. One thing you learn in Economics 101 is that the vast majority of companies are “price searchers”. That is, they are constantly searching for a price that will maximize their profits. (Some companies, mainly producers of commodities, are “price takers”. That is, they take whatever price is offered. Farmers are a good example. A corn farmer can only sell his corn at market price. If he asks for more, the market will simply buy corn from another farmer that will take the offered price). Borland is definitely a price searcher. They can set their prices as they please, and will do so to maximize profit. Of course, the market will respond to any particular price by demanding a certain number of units at that price. Price searchers are constantly adjusting prices to maximize the amount of money they make.

Note that they don’t set price to maximize revenue, but rather profit. The cost of goods sold is a factor here as is the cost of simply having customers. Sometimes a company will actually price a product in order to limit the number of customers they have in order to maximize profits as sometimes having additional customers causes decreased profits. (That may be a bit counter-intuitive, but think of a product that has high production and support costs.) So for example, sometimes doubling your price can increase profits even though it drastically reduces sales. If doubling the price cuts the number of customers you have in half, but also cuts your production and support costs in half as well, your profit increases. (This is a very simple example, and it is actually hopelessly more complicated than that, but you hopefully get the basic idea).
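
To make that “doubling the price can increase profits” idea concrete, here is a tiny sketch with toy numbers of my own choosing (they are illustrative, not from any real Borland data): 100 customers at $500 with $400 of cost per customer produces the same revenue as 50 customers at $1000, but half the total cost, so profit triples.

```java
// Toy pricing arithmetic (numbers are invented for illustration only).
public class PricingSketch {
    // profit = customers * (price per unit - cost per customer)
    static double profit(int customers, double price, double costPerCustomer) {
        return customers * (price - costPerCustomer);
    }

    public static void main(String[] args) {
        double before = profit(100, 500.0, 400.0); // revenue 50,000; cost 40,000
        double after  = profit(50, 1000.0, 400.0); // revenue 50,000; cost 20,000
        System.out.println(before); // 10000.0
        System.out.println(after);  // 30000.0: half the customers, triple the profit
    }
}
```

Revenue is identical in both cases; only the cost side moves, which is exactly why a price searcher may deliberately shed customers.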

So Borland, having learned from years of experience and copious sales data, is quite aware of what effect various prices have on sales. No doubt by now they have a pretty good idea what price will maximize their profits and how price changes will affect sales.

Where it gets really interesting is pricing outside of the United States. Europe, for example, is a completely different market than the US. Taxes, the demand curve, and the number of potential customers are all different. Borland clearly believes that they need to – and can – charge more in Europe than in the US. The price difference is not related to the exchange rate between the Euro and the Dollar; it has everything to do with maximizing profits. Clearly Borland believes that a higher price in Europe – again, a completely different market – will mean higher profits. That’s why Delphi costs more in Europe. I suppose Europeans could view this as “price gouging”, but in reality, it’s just the market signaling to Borland that it will bear a higher price than will the American market. Simple economics.

Another economic blunder that developers frequently make is ignoring economies of scale. Borland is a big company that is publicly traded. Many Delphi developers work in small, private companies. Borland has legal obligations, overhead costs, and market demands that most little-shop developers don’t even know about, much less take into consideration. Borland’s main competition is one of the largest corporations in the world. Borland faces investors who expect a return. Borland has to deal with major entities in the media that can write things that can have profound effects on Borland’s business. All of this combines to make running Borland a complex and difficult task that most of us simply don’t comprehend.

So I love it when a developer posts in the newsgroups something like this: “Borland should just hire two college students to go through and fix all the Delphi bugs in Quality Central.” Well, that sounds great, but it clearly isn’t that simple. Fixing bugs in a big product like Delphi is no small, trivial task. Finding people with the talent and skill to do Delphi bug-fixing isn’t easy, and they certainly aren’t going to be cheap. The notion that some college interns can do it is quite naïve. The real economic blunder, though, is thinking that the cost of fixing all those bugs is merely the salary of a couple of developers. First, employees aren’t cheap, no matter who you hire. Human capital is by far the most costly – and valuable – part of doing business. Second, I don’t know what your development process is like, but bug fixing at Borland is more than a guy hacking out some code. Every fix has to be extensively tested for efficacy and correctness, and then the whole product has to be regression tested to ensure that any given fix doesn’t actually break something else. Fixes need to be incorporated into the shipping product and distributed to existing customers. The documentation needs to be updated. And who knows what else needs to be done? The point is this: the costs of things that many people think are small are in fact much larger than the average developer appears to realize.

The economics of running a business like Borland isn’t something about which I claim to be an expert. But I do know that I don’t know enough to be able to claim to know better than Borland. Something to consider before you fire off a post in the newsgroups that starts out “Borland ought to just….”

The Need for Bad Software

Unless you happen to be Don Knuth, you have probably written bad software. But believe it or not, bad software is good for you. 

One of the funniest blogs for developers is The Daily WTF. For normal people like my wife it is of course completely and utterly baffling, containing, as it does, words arranged in monospace fonts with punctuation where it shouldn’t be (sometimes with more punctuation than any text has a right to have), different lines indented in different ways, and if you’re lucky some words get to be printed in blue or red.

The Daily WTF’s premise is simple: developers enjoy a good laugh when we see badly written software. The laughter is usually accompanied by a fervent prayer muttered under our breath: "We hope by all that’s holy that we didn’t write it, and we pray that it isn’t part of the application we’re currently developing." The blog has a post a day that lampoons some source code, some interaction between a developer and other IT personnel, or some development-related happening.

Consider this saying: "You learn from your mistakes." Though a bit trite and clichéd, it does hold a grain of truth. Think back in time to a really awful bit of code you wrote. What happened when the inevitable bug report came in, and you investigated, and noted with horror what your younger self had written? You broke out in a sweat, your eyes dilated and you uttered the immortal acronym WTF, but in a decompressed form. Maybe you were able to cover it up, maybe it had to be written down in all its glory in the bug report’s commentary and then you had to go past people in the hallway who would fall silent and watch as you went by.

But I’ll bet you’ve never written code like that ever again.

Another example maxim: you only learn by doing. You know, you can read all the books and erudite exposition explaining how test-driven development helps you write better software, but you won’t internalize those lessons until you actually start writing tests. Until then it’s merely a nice thought experiment with no contact with reality. But how many times will you write bad software, or make the same coding mistake, or forget to check a fix in, before you start to wonder whether having a test suite could actually save your bacon? And guess what? Once you start with a unit test suite, you’ll be adding code to it all the time. Your code will get better. You will stop making the same mistakes (though you’ll make different ones, and probably be able to fix them faster). And most importantly, you’ll learn to rely on the tests that are run each time you check your code in.

You see, we do have a real need for bad software. It is only through bad software that we get better tools, better algorithms, better methodologies.

Think about it another way: it took several iterations of really bad software before Microsoft launched their Trustworthy Computing initiative. And it took them a while to see the value in designing the CLR. The CLR demonstrates that it is harder (but not impossible) to write insecure code with managed code and the .NET Framework than with the C run-time library and pointers galore. It took an analysis of Diebold’s bad voting machine software to make us realize that hanging chads were the good old days. It takes bad software to crash a Mars probe into the planet and thereby waste $250 million and nine months in one glorious thud. It takes bad software for us all to appreciate that it’s new software we like writing, but that the steady money comes from software maintenance.

Bad software is everywhere; we have to live with it. It permeates the programs we use every day, it infuses the components we use, it pervades the society we live in. It forces us, as developers, to strive to better ourselves and to write better code. (If it doesn’t have this effect on you, then maybe you should go do something else!)

It takes bad software to make us appreciate the good software. Best of all, it gives us a good laugh every day!

Microsoft .NET Framework Security

Ask ten people “what is information security?” and you’ll get ten answers, most of them probably correct. We have a tendency in this business to take this vast topic area and paint it with the single color of information security. Here, for example, are a handful of typical answers to the “what is information security?” question:

  • Authentication: The act of validating that a user or system is who they claim to be. This is accomplished most commonly using username and password, but can also be done using, for example, Smart Card or any number of biometric techniques (fingerprint, retinal scan, etc.).

  • Authorization: Once the user is authenticated, authorization determines what they are allowed to do on the system. For example, the user may have special administrator rights on the system or the user may be a member of a group with these rights.

  • Access control: The system that controls access to resources, ensuring that authenticated users are able to access only that functionality for which they are authorized.

  • Privacy: Ensuring that data or communications intended to be private remain private. This is often accomplished through cryptography and communication layers depending on cryptography, such as Secure Sockets Layer (SSL).

  • Integrity: After data is communicated or stored, the reader of the data must be able to have assurance that it has not been modified. Cryptographic hashes and signatures often play a role in this area.

  • Uniqueness: In a message-based system, care must be taken to ensure each message is unique and cannot be “replayed” to the detriment of the system. Often serial numbers or time codes are used to prevent this type of vulnerability.

  • Availability: Systems must remain available to authorized users at the times they are supposed to be available.

  • Non-repudiation: Ensuring that a party cannot later deny having performed an action related to some data.

  • Software vulnerabilities: Protecting software systems against compromise through the sending of specially formatted data via a network or console.

  • Rogue applications: Viruses, malware, and the like, which cause damage to a system when executed.

  • Infrastructure: Firewalls, routers, wireless access points, and other hardware that makes up the physical network infrastructure. Without sufficient infrastructure protection, no system can be declared safe.

  • Endpoint protection: Ensuring that workstations, laptops, hand held devices, and other network “endpoints” are hardened against vulnerabilities that might otherwise put the network or system as a whole at risk.

  • Auditing: Logging and cataloging of data so that problems or compromises can be analyzed in progress or postmortem, and corrected in the future.

  • Physical: The proverbial lock and key, preventing unauthorized individuals from physical proximity to a system or network.
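
As a small illustration of the integrity item above, here is a sketch of a digest comparison: hash the data at the sender and again at the receiver, and any modification in transit shows up as a mismatch. (This example is mine, not from the paper; it uses Java’s standard `MessageDigest` rather than the .NET APIs the paper covers, and real systems would also sign the hash so an attacker cannot simply recompute it.)

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class IntegrityCheck {
    // Compute a SHA-256 digest of the given bytes.
    static byte[] sha256(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always present
        }
    }

    public static void main(String[] args) {
        byte[] original = "transfer $100 to Alice".getBytes(StandardCharsets.UTF_8);
        byte[] received = "transfer $100 to Alice".getBytes(StandardCharsets.UTF_8);
        byte[] tampered = "transfer $900 to Alice".getBytes(StandardCharsets.UTF_8);

        // Matching digests: the data arrived unmodified.
        System.out.println(Arrays.equals(sha256(original), sha256(received))); // true
        // Any change to the data changes the digest.
        System.out.println(Arrays.equals(sha256(original), sha256(tampered))); // false
    }
}
```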

I obviously won’t be able to thoroughly cover all of these topics in this paper, but we will certainly touch on the greatest hits. More importantly, we’ll drill down in these important topics to a level of detail that enables you to understand how to implement such security for yourself in the .NET Framework.

J2EE Design Strategies Part I

Enterprise Java is an extremely popular development platform for good reasons. It allows developers to create highly sophisticated applications that have many desirable characteristics such as scalability, high availability, etc. However, with J2EE you are potentially building distributed applications, which is a complicated endeavor no matter how you go about it. As developers come to use J2EE more and more, they are discovering some pitfalls that are both easy to fall into and easy to avoid (if you know what to look for). J2EE development comes with its share of bear traps, just waiting to snap the leg off the unwary developer.

This paper is designed to highlight several of these avoidable pitfalls. It does so from an entirely pragmatic approach. Much of the available material on design patterns and best practices is presented in a very academic manner. The aim of this paper is just the opposite — present common problems and their associated solutions. It starts with J2EE Web best practices, moves to EJB best practices, and concludes with some common worst practices that should be avoided.


Web development is an important aspect of J2EE, and it has its share of potential dangers.

Singleton Servlets

A Singleton servlet sounds like an oxymoron — aren’t servlets already singleton objects? Some background is in order. First, a singleton is an object that can only be instantiated once. There are several different ways to achieve this effect in the Java language, most commonly with a static factory method. This is a common technique anytime an object reference can be reused rather than a new one instantiated. Of course, servlets already act in many ways like singleton objects — the servlet engine takes care of instantiating the servlet class for you, and generally only creates one instance of the servlet and spawns threads to handle users’ requests. Allowing the servlet engine to do this works fine in most cases. However, there are a few cases where you want the singleton-like behavior but also have to know what the instance is called. When the servlet engine creates the servlet, it assigns an internal reference to the servlet instance, and never lets the developer directly access it. This is a problem if you have a helper servlet storing configuration information, connection pool management, etc. What is needed is a way to allow the servlet engine to instantiate the servlet for us yet still be able to get to it.
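
The classic (non-servlet) form of the pattern described above – a class that can only be instantiated once, reached through a static factory method – can be sketched as follows. The class name here is illustrative, not from the paper:

```java
// Minimal classic singleton: private constructor plus a static factory method.
public class ConfigRegistry {
    private static ConfigRegistry instance;

    private ConfigRegistry() {
        // private: callers cannot construct this class directly
    }

    // Static factory method: always hands back the same instance.
    // synchronized keeps lazy creation safe under concurrent access.
    public static synchronized ConfigRegistry getInstance() {
        if (instance == null) {
            instance = new ConfigRegistry();
        }
        return instance;
    }
}
```

The singleton servlet below inverts this arrangement: the servlet engine, not a private constructor, controls instantiation, so the class instead captures the engine-created instance in a static field.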

Here is an example of a servlet that meets these criteria. It is a servlet that manages a homegrown connection pool.

Listing 1: Singleton Servlet for Managing Connection Pool

package webdev.exercises.workshop1;

import javax.servlet.*;
import java.sql.*;
import webdev.utils.ConnectionPool;

public class PoolMgr extends GenericServlet {
    private static PoolMgr thePoolMgr;
    private ConnectionPool connectionPool;
    static final String DB_CLASS = "interbase.interclient.Driver";
    static final String DB_URL =
        "jdbc:interbase://localhost/e:/webdev/data/eMotherEarth.gdb";

    public PoolMgr() {
    }

    public void init() throws ServletException {
        try {
            String dbUrl = getServletContext().getInitParameter("dbUrl");
            connectionPool = new ConnectionPool(DB_CLASS, dbUrl,
                    "sysdba", "masterkey", 5, 20, false);
            getServletContext().log("Created connection pool successfully");
        } catch (SQLException sqlx) {
            getServletContext().log("Connection error", sqlx);
        }
        thePoolMgr = this;
    }

    public void service(ServletRequest req, ServletResponse res)
            throws ServletException, java.io.IOException {
        //--- intentionally left blank
    }

    public static PoolMgr getPoolMgr() {
        return thePoolMgr;
    }

    public ConnectionPool getConnectionPool() {
        return connectionPool;
    }
}

First, note that this is a GenericServlet instead of an HttpServlet — the user never directly accesses this servlet. It exists to provide infrastructure support to the other servlets in the application. The servlet includes a static member variable that references itself (common in "normal" singleton classes). It also has a (non-static) reference to the connection pool class. In the init() method of the servlet, the connection pool is instantiated. The very last line of this method saves the reference to this instance of the servlet, as created by the servlet engine. The GenericServlet class includes a service() method, which is not needed here (intentionally left blank to highlight that point). The servlet includes a static method called getPoolMgr() that returns the saved instance of the class. This is how other servlets and classes can access the instance created by the servlet engine. We are using the class name (and a static member variable) to keep the reference for us. To access this pool manager from another servlet, you can use code like this:

Listing 2: Snippet of servlet that uses a singleton

Connection con = null;
try {
    //-- get connection from pool
    con = PoolMgr.getPoolMgr().getConnectionPool().getConnection();
    //-- do a bunch of stuff with the connection
} catch (SQLException sqlx) {
    throw new ServletException(sqlx.getMessage());
}

The access to the connection pool class is always done through the PoolMgr servlet’s method. Thus, you can allow the servlet engine to instantiate the object for you and still access it through the class. This type of singleton servlet is also good for holding web application-wide configuration info. In fact, it is common to have the servlet automatically created by the servlet engine. The web.xml file allows you to specify a startup order for a particular servlet. Here is the servlet definition from the web.xml file for this project.

Listing 3: Web.xml entry to auto-load the PoolMgr servlet


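
A minimal entry of this kind might look like the following sketch. The servlet name and class match Listing 1, but the exact element values (notably the load-on-startup order) are illustrative assumptions, not taken from the original project:

```xml
<servlet>
  <servlet-name>PoolMgr</servlet-name>
  <servlet-class>webdev.exercises.workshop1.PoolMgr</servlet-class>
  <!-- A non-negative value tells the engine to instantiate
       this servlet at application startup, in the given order -->
  <load-on-startup>1</load-on-startup>
</servlet>
```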
An alternative to using a Singleton Servlet is to use a ServletContextListener, which was added as part of the servlet 2.3 specification. This listener, along with the other listener event handlers added for web development, allows you to tie behavior to particular events. The listing below shows how to create a connection pool using a ServletContextListener.

Listing 4: StartupConfigurationListener creates a connection pool upon application startup.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.ServletContext;
import java.sql.SQLException;

public class StartupConfigurationListener implements
        ServletContextListener, AttributeConstants {

    public void contextInitialized(ServletContextEvent sce) {
        initializeDatabaseConnectionPool(sce.getServletContext());
    }

    public void contextDestroyed(ServletContextEvent sce) {
    }

    private void initializeDatabaseConnectionPool(ServletContext sc) {
        DBPool dbPool = null;
        try {
            dbPool = createConnectionPool(sc);
        } catch (SQLException sqlx) {
            sc.log(new java.util.Date() + ":Connection pool error", sqlx);
        }
        sc.setAttribute(DB_POOL, dbPool);
    }

    private DBPool createConnectionPool(ServletContext sc)
            throws SQLException {
        String driverClass = sc.getInitParameter(DRIVER_CLASS);
        String password = sc.getInitParameter(PASSWORD);
        String dbUrl = sc.getInitParameter(DB_URL);
        String user = sc.getInitParameter(USER);
        return new DBPool(driverClass, dbUrl, user, password);
    }
}

The advantage of the singleton servlet lies in the ability of non-web classes to get a reference to it. For example, you might have a POJO (Plain Old Java Object) that handles database access for your application. It has no way to get to any of the web collections directly because it has no access to the servlet context. With a singleton servlet, the class name of the servlet gives the developer a way to get to the underlying instance. So, even in the presence of the listener classes introduced to the web API, singleton servlets still have their uses.

Model-View-Controller for the Web

In the beginning, there were Servlets, and it was good. They were much better than the alternatives, and allowed for scalable, robust web development. However, there was trouble in paradise. Web development partitioned itself into two camps: art school dropouts (invariably Macintosh users) who could create the beautiful look and feel for the web application, and the Java developers who made it work. The guys in the basement hand crafted the beautiful HTML and passed it to the developers who had to incorporate it into the dynamic content of the web site. For the developers, it was a thankless, tedious job, inserting all that beautiful HTML into the Java code. But, you drank lots of coffee and lived through it. Then, the unthinkable happened: the CEO got an AOL disk in the mail and visited a web site he’d never been to before. Come Monday, the commandment came down from on high: We’re completely changing the look and feel of the web site. The art school dropouts fired up their Macs and started realizing the CEO’s vision, and the developers got a sinking feeling in the pit of their stomachs. Time to do it all over again. The problem? Too much HTML in the Java code.

Then JSPs appeared. Here was the answer to all our prayers. JSPs have the same advantages as servlets (they are, after all, a type of servlet) and were much better at handling the user interface part of web design. In fact, the art school dropouts could craft the HTML, save it as a JSP, and pass it right to the developers. However, all was still not well. The developers now had to deal much more directly with the display characteristics of the application. Thus, the syntax of the JSP quickly became very cryptic, with the HTML and Java code interspersed together. The verdict: too much Java in the HTML.

Then came the Model-View-Controller design pattern for the web. If you’ve been living in a cave and aren’t familiar with this most famous of design patterns yet, here’s the capsule version. The model represents the business logic and data in the application and resides in JavaBeans and/or Enterprise JavaBeans. The view is represented primarily by JSP pages, which have as little Java code in them as possible. In fact, all Java code should really be handled by method calls on the model beans or custom tags. The controller is the way that the view interacts with the model. In the web world, a servlet is the controller. Here is the typical scenario for web MVC. The user accesses a controller servlet. The servlet instantiates beans, calls methods on them to perform work, adds the beans with displayable information to one of the collections (for example, the request collection), and forwards the beans to a JSP that shows the user the results.

And it was good. Now, the display information is cleanly partitioned away from the "real" work of the application, which can be strictly in JavaBeans. The application could also start using regular JavaBeans, then scale up to use Enterprise JavaBeans without having to change the controller or presentation layers. This is clearly the best way to build web applications. It is easy to maintain, easy to update, and there is very little impact when one part of the system needs to change (now, the art school dropouts have to worry about the new look and feel, not the developers). This design pattern neatly modularizes the constituent parts of web applications.

Now what’s wrong? The problem with the MVC web applications (now frequently called "Model2", to distinguish it from MVC for regular applications) has to do with how you architect the web application. For example, if you create a different controller servlet for each page the user wants to visit, you end up with dozens or hundreds of servlets that look almost identical. Another problem is that these servlets, once visited, permanently reside as objects in the servlet engine. An alternative is to create one monster controller servlet to handle all the requests. The problem here is that you have to figure out a way to map the requests to different views. This is frequently done with parameters sent to the web site, identifying what command you want to execute. But, unless you are clever about it, your "uber servlet" becomes a massive set of "if…else" statements or a huge "switch…case" statement. Any changes require editing this servlet, which quickly becomes unruly and ugly. What is needed is an application framework for web development that handles most of these gory details. And that’s where Struts comes in.
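
One common way out of the giant "if…else" dispatcher, formalized by frameworks like Struts, is to map command names to handler objects. Here is a minimal plain-Java sketch of the idea; the Command interface, the command names, and the view names are all illustrative inventions of mine, not the Struts API:

```java
import java.util.HashMap;
import java.util.Map;

// Each request "action" maps to a small handler object instead of
// another branch in a monster if...else chain.
interface Command {
    String execute();  // a real controller would take request/response arguments
}

public class FrontController {
    private final Map<String, Command> commands = new HashMap<>();

    public FrontController() {
        // Registering a new action is one line here, not an edit
        // to a sprawling conditional.
        commands.put("list", () -> "listView.jsp");
        commands.put("edit", () -> "editView.jsp");
    }

    // Look up the handler by name; unknown actions fall through
    // to an error view rather than an unhandled branch.
    public String dispatch(String action) {
        Command cmd = commands.get(action);
        return (cmd == null) ? "error.jsp" : cmd.execute();
    }

    public static void main(String[] args) {
        FrontController fc = new FrontController();
        System.out.println(fc.dispatch("list")); // listView.jsp
        System.out.println(fc.dispatch("oops")); // error.jsp
    }
}
```

Struts takes this further by moving the action-to-handler mapping out of code entirely and into configuration.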

Why the word "Fez"?

Inquiring minds want to know: who put the Fez in CodeFez? To begin, you should know that I was born in Egypt in 1971 with Greek, French and Egyptian ancestry.  That is probably why the “Falafels“ and the “Fezes“ keep turning up wherever I go!

The best way for you to understand the name CodeFez is for me to present a history lesson on where the word “Fez“ comes from. Most of the facts below come straight from

During the reign of Turkey’s Sultan Mahmud Khan II (1808-39), European codes of dress gradually replaced the traditional robes worn by members of the Ottoman court. The change in costume was soon emulated by the public and senior civil servants, followed by the members of the ruling intelligentsia and the emancipated classes throughout the Turkish Empire. As European dress gradually gained appeal, top hats and bowlers with their great brims, and the French beret, never stood a chance. They did not conform with the customs and religions of the east. In their stead the Sultan issued a firman (royal decree) that the checheya headgear in its modified form would become part of the formal attire irrespective of his subjects’ religious sects or milets.

The checheya had many names and shapes. In Istanbul it was called ‘fez’ or ‘phecy’ while the modern Egyptian version was called ‘tarboosh’ (pronounced tar-boosh, with the accent on the second syllable.) The word tarboosh derives from the Persian words ‘sar’ meaning head and ‘poosh’ meaning cover. It was a brimless, cone-shaped, flat-topped hat made of felt. Originating in Fez, Morocco, the earliest variety was in the form of a bonnet with a long turban wound around it which could be white, red or black. When it was adopted in Istanbul the bonnet was modified. At first it was rounded, then, some time later, lengthened and subsequently shortened. At some point the turban was eliminated and a deep crimson became the standard color of the checheya.

Towards the latter part of his reign, Egypt’s Mohammed Ali incorporated the Greek version of the fez as part of the military uniform. All officers, regardless of rank or nationality, were required to wear it. Soldiers were issued two fezzes a year with their uniforms. This meant the Viceroy had to import almost 500,000 fezzes annually to satisfy his growing army’s needs. This is probably why he issued a boyourli (viceregal firman) to his chief of district Mohammed Aga, residing in the town of Fawah, to immediately commence plans for the local manufacture of tarbooshes.

The first Egyptian made tarboosh appeared on the market in 1825. By 1837 Egypt produced 720 per day in the province of Gharbieh. But when the state economy floundered during Khedive Ismail’s reign, the tarboosh had to be imported. European manufacturers quickly multiplied their production lines. The demand for the fez increased when it became part of the uniform for the Bosniak regiments in the Austro-Hungarian army (up until 1918). It is therefore not surprising that Austria was the biggest tarboosh producer in Europe.

Until it went bankrupt in the 1970s, the tarboosh manufacturing firm of Habig und Söhne operated out of two adjacent buildings at Frankenbergasse No. 9 in Vienna's 4th district, only 10 minutes from St. Stephan's Cathedral. After World War I and the subsequent fall of the Ottoman Empire, Egypt became Habig's single most important customer. Yet in order to survive, Habig found itself obliged to produce top hats as well, and these later became the company's main source of revenue. Vestiges of the tarboosh business were still in evidence when the buildings were renovated five years ago: molds used to press the fez were found in the basement.

It was during the reign of King Fouad that a 'piaster' fund-raising scheme was launched and the proceeds invested in a tarboosh factory. By 1943 there were 14 tarboosh retailers in Cairo, four in Alexandria and one each in Tanta and Simbelawain. By then the tarboosh had once again been modified to suit the fashion. While in Viceroy Ibrahim's time it had been à la grecque, as evidenced by his statue in Opera Square, it became à la turque during Khedive Ismail's: the change made the fez much longer, almost covering the ears. During the reigns of Sultan Hussein and King Fouad, it was changed to its final shape and size, well above the ears, as seen on the minted coins of that period.

Just like the flag, the tarboosh was a national emblem. It was de rigueur at the Egyptian court, in the civil service, the army and the police. Some officers wore it at a dashing slant, silken tassel flying in the breeze. Crown Prince Mohammed Ali Tewfik wore it in a manner defying gravity, looking as if it were about to topple over at any moment. Others wore it stiffly, like a column, sweating profusely beneath it. A 1937 English editorial described its shape as aesthetically and functionally inferior to European hats: "The practical advantage that the hat has over the tarboosh is that the tarboosh offers very little defense against the sun, its long chimney-pot length makes it a convenient victim of any random gust of wind, and in time of rain it has to be mollycoddled and swathed in its owner's handkerchief in case it should come to harm."

Tarbooshes were also worn by Egyptian diplomats abroad. This requirement almost caused the breakdown of relations between Egypt and Turkey when, on the occasion of the 9th anniversary of the proclamation of the Turkish republic (October 29, 1932), Abdel Malek Hamza Bey, Egypt’s envoy to Ankara, appeared at the Ankara Palace Hotel wearing his tarboosh.

Present at the hotel was Turkey's strongman, Ghazi Mustafa Kemal. (In 1934, when the Surname Law was adopted in Turkey, the national parliament gave him the name "Ataturk", meaning father of the Turks.) In his quest for a Yeni Turan, or 'new society', Mustafa Kemal decreed several laws aimed at changing the norms and traditions of his country. In October 1923, for instance, the capital was moved from Istanbul to Ankara. In November 1928, he proclaimed that the Arabic script be replaced with the Latin alphabet, a measure successfully adopted despite momentous challenges from the more traditional elements of society. Also by decree, the temenah form of greeting (touching one's forehead, lips and heart with the tips of one's fingers), once the symbol of imperial obeisance, was no more; it was replaced with a handshake. Other drastic measures meant to bring Turkey in step with western culture included the outlawing of the bewitching yashmak and the eradication of the ferraji mantle, the bournous and the gandourah. European dress came to displace the sherwals, the shalwahs and the baggy jodhpurs. Fashionable European hats replaced turbans, and the veil was discarded. Mustafa Kemal was virtually offering women their freedom.

Yet what was considered the most radical reform of all was the 1925 parliamentary decree abolishing the fez. Henceforth, the Ghazi forbade its appearance anywhere within his new secular state. To Mustafa Kemal Ataturk, the fez and the veil were signs of backwardness, inferiority and fanaticism. Whereas in Turkey's not so distant past anyone sporting a hat other than a fez was considered a Giaour (a stranger, or one belonging to another faith and mode of life), western hats had now become the rage. To encourage his people to turn away from the fez and adopt the western hat, the Ghazi would appear in public wearing different European headgear. With the power of law, the situation had been irrevocably reversed.

So it must have been quite appalling for the supreme leader, at his Republic Day celebrations, to espy a red tarboosh bobbing about the crowded banquet hall. Unknown to him, it belonged to Abdel Malek Hamza Bey, Egypt's diplomatic envoy to Turkey. Aiming to please their leader, Turkish protocol officials accosted the unsuspecting tarboosh wearer and advised him to remove it for fear of provoking the Ghazi's wrath. Following a brief exchange with his Turkish hosts, Abdel Malek Hamza Bey refused to comply, arguing that his tarboosh formed an integral part of his national attire. Having made his point, he took his leave of the celebrations.

Once the reason for Abdel Malek Hamza's precipitous departure was made public, the press in both countries had a field day. Not to be outdone, the Daily Herald in London ran hair-raising columns on the subject, detailing how the Egyptians felt insulted and how the Turks had countered that since the Egyptian minister had received a personal apology from the Ghazi, no offense had been done. And hadn't the Turkish foreign minister, Tewfic Rushdi Bey, declared that the Ankara Government did not consider it necessary to tender an apology on the grounds that Egypt's dignity had not been touched at all, and that the Egyptian Government must consider the incident closed!

With restraint practiced on both sides, and despite the Daily Herald’s malefic efforts, the diplomatic incident passed and relations between Egypt and Turkey were maintained for another twenty years. Almost as long as the survival of the tarboosh itself.

In 1952, just like the pashas and beys who proudly wore them, tarbooshes became history in Egypt when the new republican government abolished the official headwear. In an unrelated incident, relations with Turkey were broken off.

As this century comes to a close, tarbooshes and top hats have become relics of the past, found only at masquerades, fancy dress balls and in one or two wayward freemason lodges. Wherever else you go, you will find multicolored baseball caps instead.

Finally, here at CodeFez, we take our name very seriously, just as the tarboosh and fez were once taken seriously in Egypt. The fez is a symbol of respect, honor and professionalism.

Visual Development with Mono, GTK # and Glade. Part I

This multi-part article describes how to use Glade, a development tool that aids in creating a GTK+ based visual interface for Mono applications. Using Glade is a bit like using the visual designer in Visual Studio or Delphi. One big difference, however, is that in Visual Studio you use Windows.Forms to create a .NET GUI interface, while in Glade you use GTK+ to create a visual interface.

Though Glade has a nice visual designer, it is not a full featured development environment like Delphi or Visual Studio. It does, however, provide tools that allow you to drag, drop, arrange and configure visual elements such as buttons, edit controls and list boxes.

All of the tools discussed in this article, including Gnome, Glade, Mono, GTK+ and the GDK are open source projects. This means that they are distributed with source. More importantly, both Mono and GTK+ are cross platform technologies. This means that they run equally well on Linux or Windows.

This article begins by explaining the technologies used in Glade development. Once this background material is out of the way, the second part of the article outlines the simple steps necessary to create applications with Glade and Mono.

Mono and Gnome

Mono is an open source implementation of the Microsoft .NET framework for Linux. It is based on open standards such as the C# standard and the Common Language Infrastructure (CLI) defined by the ECMA group.

The C# language was created at Microsoft by Anders Hejlsberg, Scott Wiltamuth, and Peter Golde. The standard was submitted to the ECMA group by Microsoft, Hewlett Packard and Intel.

C# is designed to be simple and safe to use. It supports cornerstones of modern language design such as strong type checking, array bounds checking, detection of uninitialized variables, and garbage collection. Though not intended to be as fast as C or C++, it is designed to reflect much of the syntax found in those languages. This was done to make it easy to attract new developers who are familiar with those languages. C# was also designed to support distributed computing.

Since the Mono group is interested in the Linux operating system, they were attracted by the fact that C# was also designed to promote cross platform development. This is, in fact, one of the goals of C# specification, as laid down in the official ECMA documents.

The Mono project has complete implementations of the ECMA specifications for C# and the CLI. It also supports web and database development; that is, it implements ADO.NET and ASP.NET.

Mono has had more trouble, however, implementing a .NET based GUI front end for desktop applications. In Microsoft’s implementation of .NET, the GUI front end is handled by a toolkit called Windows.Forms. The Mono team has had trouble implementing Windows.Forms on Linux. As a result, an alternative to Windows.Forms, called GTK#, has emerged. GTK# is similar to Windows.Forms, runs on both Linux and Windows, and has in Glade a visual development environment similar to Delphi. If you have patience, you will find that the next few sections of this article will, in a methodical way, explain what GTK# is, and how it is structured.


The Mono project is headed by Miguel de Icaza, who is credited as the chief architect of Gnome. Gnome is a complex toolkit that consists of several disparate parts, including an implementation of CORBA and a component toolkit called Bonobo which is similar to Microsoft COM. However, Gnome is known most widely as a desktop environment similar to KDE or Windows. The goal of the Gnome desktop environment is to put an easy to use GUI front end on Unix. The Gnome desktop includes tools like a file manager, a panel for managing applications and windows, and various other components too numerous to mention.

Gnome, however, is more than just a desktop environment. It is also a development environment, a suite of APIs that a developer can use to access and control many features of a computer. In particular, it gives the developer control over the X Window System.

The Gnome APIs are not limited to GUI development. You can also use Gnome to develop command line applications. However, for the purposes of this article, Gnome is important because it provides a GUI development API similar to that provided by Windows.Forms.

Because the Gnome APIs are based on a desktop environment, GUIs developed with them have a consistent and integrated look and feel. A Gnome based GUI can run on top of the raw X Window System, on top of Microsoft Windows, or on KDE, but it has the most consistent and unified look and feel when it runs on top of the Gnome desktop.

Glib, GDK and GTK+: Important Libraries

Gnome is based on several external libraries. In particular, it relies on two libraries called Glib and GTK+. Before you can fully understand GTK#, it is best that you read at least this brief introduction to GTK+ and Glib.

Glib consists of a number of utility functions, many of them solutions to portability problems. The developers of Glib were interested in cross platform development, and in particular they wanted a consistent API on which to build cross platform applications. A problem arises in cross platform development when a fundamental routine is needed on two different platforms but is only available natively on one of them. Glib was created in large part to solve such problems: any routine that was needed on multiple platforms was implemented as part of Glib. Developers could then install the Glib library on a platform and call the Glib routines to get the functionality they needed.

It is interesting to note that in many cases problems of this kind were resolved by adding a routine to a particular language. For instance, when a routine for formatting strings was needed in the C language, sprintf was developed. When C was ported to different platforms, the sprintf routine came along as a matter of course. But the creators of Glib took a different approach. Instead of embedding a particular routine in a language like C, they embedded it in the Glib library itself. From the beginning, Glib was designed to run on multiple platforms and to support multiple languages. So rather than building a routine into any particular language, the routine was built into Glib, and then mappings to Glib were created for a wide range of languages, including the Mono implementation of C#.

It is interesting to note that GTK was not originally created for use in Gnome. Instead, GTK was designed to be one of the toolkits used by a sophisticated drawing program called the Gimp. In fact, the letters GTK stand for the Gimp Tool Kit. Another important library, the GDK, is the Gimp Drawing Kit. Both GTK and GDK depend on Glib. Gnome, in its turn, relies on GTK+, GDK and Glib. In other words, the architects of Gnome chose to rely on the same graphics libraries as the Gimp. This has proved to be a wise decision, for GTK has been a good base on which to build.

GTK+ includes a widget library for creating controls such as buttons and list boxes. Though these controls were originally meant only to be part of the Gimp, they have now become part of Gnome, and serve as the widget set for the whole Gnome library. GTK+ also includes a type system and a set of routines for implementing events, which in GTK+ are called signals. As you can see, GTK+ is a sophisticated library that plays a big role in Gnome development.

Because GTK+ calls into the GDK and not into the X Window System itself, it has been possible to rewrite the GDK to hook into windowing systems on other platforms such as Windows. As a result, GTK+ and GDK run on multiple platforms, including Windows. This means you can write a single GTK+ based application and have it run unchanged on Windows and Linux.

Mono, GTK+, GTK# and Gnome

GTK# is a C# wrapper around GTK+. It therefore provides a cross platform way for Mono based applications to access a powerful, well thought out visual library of widgets and drawing routines. Mono developers using C# can call the routines in GTK# to manually create a visual interface for an application. In terms of complexity, such an operation is similar to creating a window in the raw Qt API. That is to say, it is simpler than writing a raw Win32 API application by hand, but more difficult than using a tool like Delphi or Visual Studio to draw the interface for an application.

Glade was designed to bring visual development to GTK+ and wrapper languages such as GTK#. Using Glade, you can create an XML file that defines a visual interface for an application. Using GTK#, you can automatically load that interface and display it to the user with a single, easy to use call.
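As a sketch of what such a file contains, here is roughly what Glade might generate for a window holding a single button. The widget ids, the window title, and the signal handler name below are illustrative assumptions, not taken from a real project:

```xml
<?xml version="1.0" standalone="no"?>
<!DOCTYPE glade-interface SYSTEM "glade-2.0.dtd">
<glade-interface>
  <!-- A top-level window containing one button -->
  <widget class="GtkWindow" id="MainWindow">
    <property name="title">Hello from Mono</property>
    <property name="visible">True</property>
    <child>
      <widget class="GtkButton" id="HelloButton">
        <property name="label">Say Hello</property>
        <property name="visible">True</property>
        <!-- clicked events will be routed to a method named OnHelloClicked -->
        <signal name="clicked" handler="OnHelloClicked"/>
      </widget>
    </child>
  </widget>
</glade-interface>
```

In GTK#, loading this interface is then something along the lines of `Glade.XML gxml = new Glade.XML("hello.glade", "MainWindow", null);` followed by `gxml.Autoconnect(this);` to wire the named signal handlers to your methods. Treat the exact class names and signatures here as assumptions; the step-by-step details belong to Part 2 of this article.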

An example of an application developed with Glade and the GTK# library is shown in Figure 1. Applications such as this one can be quickly assembled using Glade and a few lines of GTK# code. The process is explained in detail in Part 2 of this article.

Figure 1: A simple Mono application written using the GTK library.


In this article you have learned that GTK# is a wrapper around a visual library called GTK+. This library forms the basis for the desktop environment called Gnome. Because Gnome is a complete and sophisticated tool, it demonstrates that one can use GTK+ to create sophisticated visual applications. By creating a mapping from C# into GTK+, the Mono team has found a good way for its developers to create sophisticated graphical user interfaces for their applications. Such applications are built on standards based, open source code that can run on either Windows or Linux.

The Big-Oh notation

When we compare algorithms in order to select one to use, we often need an understanding of their performance and space characteristics. Performance is important because, well, we’re always interested in raw speed; and space is important because we are always on the lookout for algorithms that don’t waste memory. Of course, there are other considerations too. For example, we might want to know how easy it is to implement algorithm X or algorithm Y. Yet most of the time we are primarily interested in performance and space characteristics.

We’ll talk about space considerations in a later article; for now, we’ll consider how to compare the performance of algorithms.

When comparing performance we need a compact notation to express its characteristics. For instance, it is awkward to say "the performance of algorithm X is proportional to the number of items it processes, cubed," or something equally verbose. Fortunately computer science has a solution to this problem: it's called the big-Oh notation.

We begin by running a series of profiling experiments to analyze the performance characteristics of the algorithm in which we’re interested. (If we’re Don Knuth, we can also try to derive the characteristics mathematically from first principles.) If we are lucky, the results of these profiling runs allow us to work out the mathematical function of n, the number of items, to which the time taken by the algorithm is proportional, and then say that the algorithm is an O(f(n)) algorithm, where f(n) is the mathematical function we determined. We read this as "big-Oh of f(n)", or, less rigorously, as "proportional to f(n)."

For example, if we timed experiments on a sequential search through an array for different numbers of items in the array, we would find that it is an O(n) algorithm. Binary search, on the other hand, we'd find to be an O(log(n)) algorithm. Since log(n) < n for all positive n, we might conclude that binary search is always faster than sequential search, since the time taken would always be smaller. (However, in a moment, I shall be dishing out a couple of warnings about taking conclusions from the big-Oh notation too far. Be warned.)
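To make the thought experiment concrete, here is a small sketch in Python (the array size and target value are made up for illustration) that counts how many comparisons each search performs on the same sorted array:

```python
def sequential_search(items, target):
    """Scan left to right; comparisons grow linearly with len(items)."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

def binary_search(items, target):
    """Halve the search space each step; comparisons grow with log2(len(items))."""
    comparisons = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if items[mid] == target:
            return mid, comparisons
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

data = list(range(100_000))
_, seq = sequential_search(data, 99_999)   # worst case for sequential search
_, bin_ = binary_search(data, 99_999)
print(seq, bin_)                           # prints 100000 and 17
```

On a 100,000-element array, the worst case costs sequential search 100,000 comparisons while binary search needs only 17 — the log(n) behavior in action.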

Suppose that by experimentation we work out that Algorithm X is O(n² + n), in other words, the time it takes to run is proportional to n² + n. By "proportional to" we mean that we can find a constant k such that the following equation holds:

TimeTaken = k * (n² + n)

Now, in general, the value of k doesn’t really affect our intuition of the performance of Algorithm X. Yes, higher values of k result in slower performance, but the important bits are within the parentheses, the n squared and the n. Increasing n doesn’t affect k; it’s constant, remember. In fact, knowing this, we can see that multiplying the mathematical function inside the big-Oh parentheses by a constant value has no effect. For example, O(3 * f(n)) is equal to O(f(n)); we can just take the ‘3’ out of the big-Oh notation and multiply it into the outside proportionality constant, the one we can conveniently ignore.

(The same goes for adding a constant inside the big-Oh parentheses; for large n, O(n + 42) is the same as O(n).)

If the value of n is large enough when we test Algorithm X, we can safely say that the effects of the "+ n" term are going to be swallowed up by the n² term. In other words, providing n is large enough, O(n² + n) is equal to O(n²). And that goes for any additional term in n: we can safely ignore it if, for sufficiently large n, its effects are swallowed by another term in n. So, for example, a term in n² will be swallowed up by a term in n³; a term in log(n) will be swallowed up by a term in n; and so on. Note that this only applies when we're adding or subtracting terms; we can't ignore multiplying or dividing terms in the same manner (unless the term is constant, as we've shown).
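The swallowing effect is easy to check numerically; a quick sketch (the sample values of n are arbitrary):

```python
# Share of n**2 + n contributed by the lower-order "+ n" term.
# As n grows the share vanishes, which is why O(n**2 + n) is just O(n**2).
for n in (10, 1_000, 100_000):
    share = n / (n**2 + n)
    print(f"n={n}: linear term contributes {share:.2%} of the total")
```

At n = 10 the linear term still contributes about 9% of the total, but by n = 100,000 its contribution is a thousandth of a percent — effectively noise.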

This shows that arithmetic with the big-Oh notation is very easy. Let's, for argument's sake, suppose that we have an algorithm that performs several different tasks. The first task, taken on its own, is O(n), the second is O(n²), the third is O(log(n)). What is the overall big-Oh value for the performance of the algorithm? The answer is O(n²), since that is the dominant part of the algorithm, by far.

But, having said that, here comes the warning I was about to give you before about drawing conclusions from big-Oh values. Big-Oh values are representative of what happens with large values of n. For small values of n, the notation breaks down completely; other factors start to come into play and swamp the general results. For example, suppose we time two algorithms in an experiment. We manage to work out the two performance functions from our statistics:

Time taken for first = k1 * (n + 100000)
Time taken for second = k2 * n²

The two constants k1 and k2 are of the same magnitude. Which algorithm would you use? If we went with the big-Oh notation, we’d always choose the first algorithm because it’s O(n). However, if we actually found that in our applications n was never greater than 100, it would make more sense for us to use the second algorithm.
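Plugging numbers into the two timing functions makes the trade-off concrete; a sketch assuming, for simplicity, that k1 = k2 = 1:

```python
def time_first(n):
    return n + 100_000        # the O(n) algorithm

def time_second(n):
    return n ** 2             # the O(n**2) algorithm

# Below the crossover point (around n = 317, where n + 100000 = n**2)
# the O(n**2) algorithm actually wins; above it, the O(n) one does.
for n in (100, 317, 10_000):
    faster = "first" if time_first(n) < time_second(n) else "second"
    print(f"n={n}: the {faster} algorithm is faster")
```

If your application never sees more than a few hundred items, the "worse" O(n²) algorithm is the right choice here — exactly the point the text is making.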

So, when you need to select an algorithm for some purpose, you must take into account not only the big-Oh value of the algorithm, but also its characteristics for the average number of items (or, if you like, the environment) for which you will be using it. Again, the only way you'll ever know you've selected the right algorithm is by measuring its speed in your application, for your data, with a profiler. Don't take anything on trust from an author like me; measure, time, and test.

There’s another issue we need to consider as well. The big-Oh notation generally refers to an average case scenario. In our sequential versus binary search thought experiment, if the item for which we were looking was always the first item in the array, we’d find that sequential search would always be faster than binary search — we would succeed in finding the element we wanted after only one test. This is known as a best case scenario and is O(1). (Big-Oh of 1 means that it takes a constant time, no matter how many items there are.)

If the item which we wanted was always the last item in the array, the sequential search would be a pretty bad algorithm. This is a worst case scenario and would be O(n), just like the average case.

Although binary search has a similar best case scenario (the item we want is bang in the middle of the array and is found at the first shot), its worst case scenario is still much better than that for sequential search.

In general, we should look at the big-Oh value for an algorithm’s average and worst cases. Best cases are usually not too interesting — we are generally more concerned with what happens "at the limit," since that is how our applications will be judged.

In conclusion, we have seen that the big-Oh notation is a valuable tool for characterizing various algorithms that do similar jobs. We have also discussed that the big-Oh notation is generally valid only for large n; for small n we are advised to take each algorithm and time it. Indeed, the only way to truly know how an algorithm will perform in your application is to time it. Don't guess; use a profiler.

In the second part of this article, you will learn about space and memory considerations and how those factors affect the selection of algorithms.

The Drive to Write Free Software. Part 3

Evolution: Knowledge Wants to be Free

We have seen that an economic and historical analysis of this subject is useful, but not completely satisfying. Perhaps the real roots of the free and open source software movements lie not in economics or history, but in human nature itself.

If you step way back, and begin looking from a distance at the forces that drive life here on this planet, it does not take long to become aware of a force that we, for lack of a better term, call evolution. At bottom, evolution is about the dissemination of knowledge. In particular, it is about the dissemination of knowledge encapsulated in the genetic structure of the creatures that inhabit this planet. That is an odd form of knowledge, but it is knowledge nonetheless.

When people talk about genes, and about the evolution of a species, they don't always think about mathematics or information sciences. But at bottom, genes are all about mathematics and information. Genes are a form of knowledge encoded in a structure that is not really so different from a computer language. The famous double helix that underlies our genetic structure is something that can be duplicated almost exactly on a computer. In fact, when it came time, in the 1990s, to unravel the secrets of our genetic structure, real progress was slow until people began to use computers to map the human genome.

Genes track information in a manner directly analogous to the way computers encode information in bits and bytes. Genes have their own language, consisting of four characters, just as computers are based on a binary language. In other words, human genes are more than a little like tiny computers. Genetic information contains the code for the very structure of our physical being, just as the bits and bytes in a computer form the structure of a computer program. The information encoded in genes determines the color of our eyes, hair and skin, the structure of our bones, the kinds of diseases we are prone to and are likely to resist, even to some degree the structure of our nervous system. All of these things depend on information encoded in genes.

The behavior of computer programs, and even their appearance, are also encoded in a series of bits and bytes not so different from the information in a gene. In other words, information is information, whether it is encoded in a human gene or encoded in a computer program.

If you want to understand the development of life on earth, you have to understand genetics. Life evolved from tiny one-celled animals into complex creatures such as cats, deer and humans through the different ways in which knowledge, encoded in genes, can be combined and recombined. This whole subject is explained beautifully in the extraordinary book Microcosmos by Lynn Margulis and Dorion Sagan.

But why did life evolve this way? Why weren't genes content just to stay in little one-celled animals? What force drove them to create more and more complex hosts? Genes are the driving force behind evolution. Without DNA and RNA and the whole relentless, combinatorial drive to evolve, life as we know it would not exist. Why is the information in genes continually reaching out to form more and more complex, more and more sophisticated, forms of life? Is there something inherent in the nature of knowledge that wants to expand, that wants to be free? Apparently, the answer to this question must be yes.

Whether this force is a manifestation of God’s will, or of randomly driven nature, is not really the question here. If God created this world, then certainly one of Her primary engines of evolution was the force that demands that knowledge be spread, be disseminated, that it continue to grow. The desire of knowledge itself, of life itself, to evolve and grow is simply one of the laws of life as we know it.

The written history of the human race is in effect the unbinding of recorded knowledge from our genetic structure, and the encoding of that knowledge in books, media and computers. As people learned to encode knowledge first in written text, then in printed text, and finally in computers, they in effect harnessed the power of knowledge itself. Modern life evolves so quickly because we can encode knowledge in books and computers, much as knowledge about the structure of a living being can be encoded in a gene.

You might think that I am trying to set up an analogy here between knowledge as we know it in books, film and computers, and knowledge that is encoded in the human genome. But I do not view this as an analogy. I think information is information no matter how it is stored. This information drives physical (but not spiritual) evolution here on earth, and it wants to be free to do its work. Now we have entered an age when genes emerge not through random events in nature, but through direct manipulation by people. In other words, knowledge has found a new way to force its evolution.

The point to grasp here is that human knowledge is not just an abstraction, it is a force of nature, it is one of the basic principles with which God imbued creation. The idea of trying to wrap up knowledge inside copyright or patent law suddenly becomes absurd when seen from this perspective. You can’t control so powerful a force with such crude tools. (This is not a diatribe against copyright law. Notice, for instance, that I have a copyright notice at the top of this article. Copyrights and patents are useful tools, but they are not as primary, not as powerful, as the urge to obtain and disseminate knowledge.)

People write free software because software is knowledge, it is the very force of nature itself, and you can’t suppress knowledge. Life itself, first in the form of genes, but then later in the form of written words and finally as binary data, is all about the dissemination and evolution of knowledge.

You can’t suppress this force by insisting that only corporations can control knowledge. It is not just that some people find the idea of giving such knowledge to corporations repugnant, but that life itself won’t put up with restrictions of that type. Knowledge wants to be free, it wants to spread itself across not only this planet, but the entire universe.

When powerful forces try to bind knowledge and make it the plaything of an economic elite, they are fighting a battle that hopefully can never be won. They think that they can own knowledge, and that they can force us to only borrow it for short periods of time. They have the source, we get only binary data; they have the rights, we have to agree to EULAs that take away any meaningful sense of ownership of that software. In the long run, however, knowledge will escape from their clutches. If it does not, then life as we know it will stop evolving, and we will be frozen in place. That is, we will die.

So that is why people write free software. Software is a form of knowledge. Knowledge is part of the fabric of life. Knowledge wants to be free so that life can evolve. People write software for the same reason they build houses, or fall in love. We were born to create and share knowledge. It is one of our deepest and most profound instincts.

Corporations try to control this knowledge by hiding the source code for their software. The US government tries to hide this knowledge by enshrining it in a corporate monopoly they believed useful to their conception of the state. But what happens? The strangest of all things. Something that from a particular perspective makes no sense at all! People start building software for free on their own, in their spare time! What sense does that make? What can possibly be motivating these people? How can we make sense of what they are doing? What possible explanation is there for this huge, wildly successful, seemingly irrational, international movement to create free software? What is it that wants to be free? From what does it want to escape? Why does it want to escape? What is its purpose?

The people who want to bind knowledge with laws, who want to own it, who want to possess it for their own benefit, will tell you that knowledge is property. That they own it. They will even try to “own” the knowledge encoded in genes. They will literally try to patent the genes that form the very substance of life itself. (And yes, Virginia, this is already happening.) But knowledge doesn’t want to be owned. And certainly it doesn’t want to be owned by something as lowly on the cosmic scale of things as a human being sitting in an office in Washington DC or in Silicon Valley. The force driving the spread of knowledge is much more powerful than a group of middle-aged men and women sitting in government or corporate buildings.

Does this mean that corporations and private enterprise have no part to play in the development of software? Of course not. Knowledge will use any tool available to help it grow and spread. Sometimes market forces are a great means of enhancing the spread of knowledge. In those cases, corporations and human knowledge work together to achieve the same ends. But it is not the corporation that is in charge, it is nature itself. Knowledge wants to spread, and it will use individuals, governments, corporations, educational institutions, monasteries, whatever tools are available, to help it achieve that end. But it will not make itself subservient to any particular corporation or denomination. Knowledge, and God’s will, are greater than any individual, any corporation, any religion, or any educational institution.

Why do people write software for free? It probably makes more sense to ask why software wants to be written. But when you put the question that way, then the whole idea of people trying to bind knowledge by legal means, or by obfuscating the source, becomes a bit laughable. It’s just not going to work, and everyone in the software development community knows that it is not working. If you have doubts, go spend half an hour on SourceForge or the Apache site, and you will know that it is not working! But there are some people who don’t want you to look at it that way. They have a vested interest in being sure that you don’t look at it that way.

So tell me: Why do people write free software? It seems a bit enigmatic at times, this urge to write software for free. If we decide that life is all about making money, then it makes no sense at all. But maybe life is about more than just money. Maybe the really powerful forces in life aren’t economic. But if it’s not money that motivates these people, then what is it? Is life really about economics, or are there other forces in play here? If so, what are those forces? Whatever they are, they must be very deep, and very powerful. What theory is there that is large enough to account for such an extraordinary phenomenon?

The Drive to Write Free Software, Part 2

History: The Origins of the Free Software Movement

Sometimes difficult questions can be answered by looking at history. In discovering the roots of a movement, we can often learn something about its causes. So let’s try following the historical record for a bit, and see where that leads us.

During the late sixties, and through the early eighties, many of the greatest contributions to software emerged from the universities and corporate think tanks. One way or another, this software was available free of charge to the computing community. Just as academics shared software, so did the workers at the big corporate think tanks. They lived, in effect, in a free, open source software community. They liked living there, and they didn’t want the open sharing of knowledge to end. Computers in that era also came with complete suites of software, and usually shipped with source. From a management perspective, this was not the same thing as free, open source software. Yet to the developers who worked on these machines, it felt as if the software and its source came for free. If you want to read more about this part of computer history, you can start with Steven Levy’s famous book, Hackers.

But as smaller, more portable computers emerged in the eighties, this situation changed. Suddenly software was being written by corporations for sale to anyone with money. Companies like Microsoft, Novell, Lotus and others emerged and began selling software, but not the source to that software. Knowledge was no longer freely available. Instead, it was something that had to be purchased. In universities and at corporate think tanks, source had usually been available. Ironically, just when cheaper computers made software more widely available, corporations stepped in and tried to claim the intellectual rights to knowledge that had previously been freely available, at least to those in the corporate think tanks or in academia.

Both the academics at major universities and some of the personnel from the great corporate think tanks, such as Bell Labs, felt that this was a betrayal of the values they had cultivated during the previous two decades. Previously, knowledge had flowed freely among the small group of people who had access to computers. Now many more people could own computers, but the source to the software was locked up. As a result, a small group of these developers formed a community that valued free software. The heart of their argument was that owning the source to computer programs was important, and that having the right to recompile a program was important. On a more idealistic level, many of them believed that knowledge about computers was the province of humanity itself, not of individuals or corporations. To them, it made no more sense to talk of owning a compiler or an algorithm than it did to talk of owning the rights to the syntax of the English language. Ultimately, their argument was that proprietary software represented a restriction on the field of computer science, and on their rights as free individuals in a free society.

Particularly in the academic world, there was a sense that the computer community was working to create a tool that could be used for the good of mankind. The idea that knowledge which could benefit everyone should be owned by a corporation was repugnant to some people. These people wanted to live free, and they wanted knowledge to be freely accessible. They didn’t want to be told how, when, or to what extent they were free to use a piece of information. You can read more about this world view in Eric Raymond’s The Art of UNIX Programming.

Clearly the thoughts of this small group of people in academia and in corporate think tanks do not provide a complete explanation for a trend as large as the Open Source Movement. Their ideas are simply far too abstract and too idealistic to take hold in a country like America at the present time. Nevertheless, their ideas and their efforts formed one of the major motivating forces behind the creation of the free software movement.

The history of computer science in academia and in corporate think tanks explains what happened, but not why it happened. We know that people want to be free, and that they want knowledge to be freely available, but it is more difficult to understand why they want these things. To understand why people want to share the source for their programs, to see why they want knowledge to be free, we have to explore this subject further.