The War of the Virtual Bills

As programmers, it is our karma to witness constant change. The wheel of life keeps turning, and we turn with it. Each creaky, joyful revolution represents a new phase, a new song to sing.

This revolution of the wheel is called the War of the Virtual Worlds. One spoke points toward Bill the Great and his .NET virtual world, and another points to Bill the Joyful and his cross-platform Java virtual world.

As programmers we sweat over minute details in computer syntax, while these two behemoths face off, Goliath vs Goliath, one the mirror image of the other. Choose Java, or choose .NET: it hardly matters. In either case, you are living in a virtual world. It’s one virtual machine against the other, and the devil take the hindmost.

Despite the hype of their respective supporters, it is hard to believe that either will triumph decisively in the end. Here in America, many believe Microsoft rules the roost. But cross the sea to either the east or west, or travel south or north, and the further one gets from home, the more dully shines the Microsoft coin, and the more golden and glittering shine opposing technologies such as Java or Linux. The two opposing camps are both too huge, too set in their ways, too committed to their respective technologies, to ever give in. This is the age of the virtual Bills, and they will clash over and over until the wheel turns and each is taken by their aging, withered hands and marched off into the sunset.

Pick One from Column A, One from Column B

Some people have very strong opinions about which virtual machine is better: The one from Sun, or the one from Microsoft. But personally, I’m having an increasingly hard time telling the two apart. And so, I surmise, is the rest of the world.

This year Microsoft’s Tech Ed had 14,000 attendees, while JavaOne had 15,000: a virtual dead heat. In a lead article from the June 27 edition of InfoWorld, we learn that: "Java is the de facto platform for enterprise-scale applications, owing to its proven scalability and the extraordinary range of services it provides."

In the eyes of your average Microsoft marketing victim, however, Java is little more than the cybernetic equivalent of Latin: a dead language used by a few bald-headed, pot-bellied professors laboring away in virtual anonymity at small two-year colleges lost on the plains of the American Midwest. In the words of the Microsoft marketing wizards: "The Microsoft [.NET] strategy helps businesses realize the promise of information anytime, anywhere, on any device." With a promise like that, who needs anything else but a copy of Windows with the .NET virtual machine?

The Gospel According to Bills I and II

Driven by the desire to compete with Microsoft Windows, both Sun and IBM bit the bullet and worked together to champion Java and the cause of platform independence. The same Java code runs smoothly on Windows, Linux, Solaris, HP Unix, and a host of other platforms and small devices. The primary message of Java is that it fosters independence: Java applications run smoothly anytime, anywhere. Or at least that is how the song goes.

In the Windows world, the .NET CLR sometimes appears to have the opposite goal. It aims not at platform independence, but at Microsoft world domination: "One VM to rule them all, and in the darkness bind them!" In the Brave New World envisioned by Microsoft, one need never think again. Microsoft has the answers, and the marketing team that can explain the details to you if you are dull enough to need them. The future is planned out for you: one shining steel rail reaching from Singapore to Washington, one seamless platform, based on the law of Redmond. In this land, there is plenty of choice: one can choose between Visual Basic, C++ or C# — so long as you don’t mind finding that all the different languages use nearly identical APIs and end up compiling down to more or less identical byte code.

NOTE: Of course, Microsoft has standardized parts of the .NET platform, just as parts of the Java platform are open sourced. The question we have to wrestle with is whether we are writing to the open parts of these platforms, or whether we are locked in to an individual company’s plan for "world dominance." The Mono project represents the bridge from the Microsoft world to the larger world of cross-platform programming, and hence it is always worth careful study if you are interested in .NET technology.

The More They Stay the Same

With two such opposite strategies, one would think that there must be a huge difference between the Java and .NET platforms. But of course, there is no significant difference between them. The average first-year programmer with a semester or two of programming behind her would be hard pressed to guess whether a randomly selected swatch of code was written in C# or Java. Syntactically, the two languages are all but identical, and their APIs frequently vary only in the most minor details.
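
Here is a small illustration of my own, not from either vendor’s marketing: the following class is accepted unchanged by both the Java and C# compilers; only the surrounding file boilerplate differs.

class Account
{
    private int balance;

    public Account(int balance)
    {
        this.balance = balance;
    }

    // deposit money into the account
    public void credit(int amount)
    {
        balance = balance + amount;
    }

    public int getBalance()
    {
        return balance;
    }
}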

As I explained in a previous article, the Java VM and the Microsoft VM (aka the CLR) are so nearly identical as to be virtually indistinguishable. They both have the same type of byte code scheme, the same type of JIT compilation engine, the same memory management scheme, the same syntax, the same interface-based compositional architecture, and the same simple types.

NOTE: Occasionally we hear rumors that the .NET VM will become part of the Windows OS, but Microsoft has no plans to do this in the next revision of their operating system. As a result, any such development is likely to occur many years in the future. By then, the whole computing landscape will no doubt have evolved into something very different from what we see today.

Listening to the opposing marketing teams trying to claim technical superiority makes me feel like I’m back at college trying to parse a passage from Kant’s Critique of Pure Reason. Get beneath the bombastic hyperbole, and the VM marketer’s logic is sliced so thinly, the wheels are grinding so finely, that the wispy differences between the two positions will blow out of your hands in a five-knot breeze.

Coping with the Virtual Twins

One way to deal with the twin virtual juggernauts is to stick to open APIs and cross-platform strategies. Some technologies, such as unit testing and Ant build files, look virtually the same on either platform. Other technologies, such as HTML and Web Services, are designed to free you from having to choose a particular platform. It takes little work to make a .NET web service talk to a Java web service. Web Services are designed to work across platforms, and across APIs.

Anyone who has worked in the industry for a while knows that technologies come and go. Seeing this panoply of change, I try not to become the victim of a marketing machine. When possible, I design my code to work with other technologies. Choosing accepted standards and open APIs is not a commitment to a particular platform; it is a commitment to the future of programming and to your own career. Every time you buy into an API controlled by a single company, you are sucking the lifeblood from your own body. To choose a closed API is to help a chuckling bean counter create a world where there are victors and victims, and to work against a more spiritual, decent and fair world where there is security and stability.

In the Belly of the Virtual Snake

Ten years ago hardly anyone would have guessed that our future as programmers would be built on top of virtual machines. But now, morning, noon and night, the talk is of virtual machines, and little else. It’s always .NET this and Java that. In some social circles, it is less risky to sing the praises of George Bush or Bill Clinton than it is to praise the wrong virtual machine.

I’ve always been a dreamer, and these days, my favorite dream is of a huge, virtual snake coming along and swallowing up both of these behemoths. I can picture its huge belly bulging with the meal on which it has gorged.

Like Java and .NET, Python is built on top of a virtual machine. But the Python virtual machine has no billion-dollar marketing machine singing its praises. As a result, it is a lot less well known, but the air it breathes is clean, and the rivers and lakes from which it drinks are clear and unpolluted.

While Java and C# are simply opposite sides of the same coin, Python has its own unique syntax. Created after Java and after Perl, it absorbed the best features of both languages, and then learned many new tricks. And it is constantly under revision, as you can learn from studying the new decorator syntax available in version 2.4. Its creator, Guido van Rossum, is both more handsome and more creative than either of the Bills. And besides, he has a better name.

Of course, in today’s world, programmers and managers alike follow the pied piper of marketing, which leads them off to lemming land where the cliffs are high and the sky is always gray and cold. Humming his tune as they march toward the cliff, few look to the right or left and see that there is more than one virtual machine from which to choose.

Summary

Here at Falafel, land of good food, where the Egyptians dance, and the Nile flows blue between the sands, there are those who worship at the altar of Bill the Great, God of Microsoft. His virtual machine floats like a giant blue sun on the horizon of our consciousness. But there are others who look above them and see a different sky.

And certainly the behemoth virtual machines have done much for programmers. They each provide freedom from the cold metallic hardware that lies beneath us. In these virtual worlds we can work with logical machines that operate at least a bit more like a human brain and a bit less like a cold, rigid mechanism made of silicon and metal.

The flexibility found in a virtual machine has proved to be a boon to everyone. Unfortunately, a marketing team can help mold these virtual machines into whatever shape they like. For my part, I try to write not to a virtual machine, but instead to accepted standards and open APIs.

If you want freedom, and a breath of fresh air, you might also try to set aside a little time each month in which to play with a virtual machine that is not powered by a billion dollar marketing team. I suggest Python, but some prefer Perl, and others prefer Ruby. In any case, you owe it to yourself to see what life is like in the free world. The war of the virtual Bills rages unabated, but there are other worlds where the grass is still green, the air fresh, and the land peaceful.

Whatever we do, the wheel will keep turning. Listen to the creaking and groaning, hear the rhythms the wheel makes as it turns. This is the song we dance to each day as we write code.

Start Your DirectX Engines

The DirectX 9 SDK now ships with the DirectX Sample Application Framework. With the sample framework, optimized DirectX initialization and management code is delivered free at the click of a button. I’m enthusiastic about the sample framework because developers will be able to begin work on DirectX games and simulations without re-engineering support code. But before we tour the sample framework, let’s take a look at the pain we can expect to avoid.

The Bad Old Days

In the past, many of the available example programs set up only the most basic DirectX 3D graphics requirements. Using Delphi and the wrapper units available from Project JEDI, for example, the following unmanaged code executes basic DirectX setup functions:

// wrapper unit from Project JEDI, available from Borland Code Central
uses
  D3D9;

var
  Direct3D9: IDirect3D9;
  Direct3DDevice9: IDirect3DDevice9;

procedure TfrmMain.FormCreate(Sender: TObject);
var
  DisplayMode: TD3DDisplayMode;
  D3DPresent_Parameters: TD3DPresent_Parameters;
begin
  // create the root Direct3D object
  Direct3D9 := Direct3DCreate9(D3D_SDK_VERSION);
  Direct3D9.GetAdapterDisplayMode(D3DADAPTER_DEFAULT, DisplayMode);
  // zero the presentation parameters, then fill in only what we need
  FillChar(D3DPresent_Parameters, SizeOf(TD3DPresent_Parameters), 0);
  with D3DPresent_Parameters do
  begin
    Windowed := True;
    hDeviceWindow := Handle;
    SwapEffect := D3DSWAPEFFECT_DISCARD;
    BackBufferFormat := DisplayMode.Format;
  end;
  // create the rendering device with software vertex processing
  Direct3D9.CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
    Handle, D3DCREATE_SOFTWARE_VERTEXPROCESSING, D3DPresent_Parameters,
    Direct3DDevice9);
end;

procedure TfrmMain.FormPaint(Sender: TObject);
begin
  // clear the back buffer to blue
  Direct3DDevice9.Clear(0, nil, D3DCLEAR_TARGET,
    D3DCOLOR_XRGB(0, 0, 255), 0, 0);
  Direct3DDevice9.BeginScene;
  // assemble your scene here on the back buffer
  Direct3DDevice9.EndScene;
  // swap your back buffer to the visible screen
  Direct3DDevice9.Present(nil, nil, 0, nil);
end;

procedure TfrmMain.FormResize(Sender: TObject);
begin
  FormPaint(Self);
end;

What do you get with this example? A blue screen, which of course Microsoft used to supply automatically at nominal cost. What doesn’t this do? Besides offering no other visual content, the example does not:

  • Handle device changes.
  • Have an optimal message loop.
  • Support text controls, edit boxes, drop-downs, or any other basic UI elements that work in DirectX.
  • Support switching between full-screen and windowed modes.
  • Contain exception handling.

Fortunately you get all of the above for free using the DirectX Sample Application Framework that ships with the DirectX 9 SDK.

The DirectX Sample Application Framework

Your first step will be to install the latest DirectX 9 SDK from the msdn.microsoft.com DirectX download area. The download is about 220 MB. After installing the SDK, run the DirectX Sample Browser from the SDK menu. Once you open the Sample Browser, you will probably spend the next couple of hours exploring the very cool examples. When you get back, we can finish the article.


The DirectX Sample Browser

To create a new C# sample application, first find the entry labeled “EmptyProject”. To locate the entry quickly, select the “Managed” and “Samples” checkboxes on the left. Then click the “Install Project” link to bring up the Install dialog. Note: You may also have a Visual Studio plug-in for DirectX projects, but it works only with some versions of Visual Studio.


Installing the Empty Project

In the Install dialog, give your project a name and change the location if you’re not happy with the default Visual Studio Projects location. When you click “Install”, the project is created for you. Note that your prebuilt project includes a hefty amount of infrastructure code. Also notice that references to the DirectX assemblies are added automatically to support 3D graphics and other graphics utility functions.


Checking out the new DirectX project

In the “Common” folder, you’ll find a number of support classes:

  • The Framework class is defined in dxmut.cs. This class welds familiar Windows mechanics to the DirectX display details, including an implementation of a custom message loop. Graphics applications are typically organized around a “rendering loop”, the section of code that continually repaints the screen. A custom message loop avoids the use of less optimal paint or idle events for DirectX rendering.
  • Your application automatically implements the IFrameworkCallback and IDeviceCreation interfaces. IFrameworkCallback defines OnFrameRender to animate the scene and OnFrameMove for pre-rendering calculations. Both interfaces are called by the Framework class and are defined in dxmutdata.cs.
  • Central to DirectX is the notion of direct access to the best capabilities of your hardware. The Enumerate class defined in Dxmutenum.cs discovers what graphics cards are available and the capabilities of each.
  • Before the advent of the Sample Framework, if you wanted user input controls that worked in a DirectX environment, you had to write your own. DxmutGui.cs defines a working set of edit boxes, radio buttons, standard buttons, drop-downs, sliders, and dialogs that are structured similarly to the familiar Windows controls, but are actually rendered along with your 3D scene.


New Sample Application Running

Summary

All told, the DirectX Sample Application Framework delivers more than 16,000 lines of robust support code. This free functionality translates to code you don’t need to re-engineer and write just to get started with game and simulation programming.

Nick Shreds TRex’s Blog Post

Steve Teixeira, now a supplicant, er, sorry, employee of Microsoft — yes, sadly, it is true: he’s shaken the Delphi dust off his boots and drunk deeply from the MS Kool-Aid stand — responded to my recent CodeFez article about MS not quite getting it in the area of OOPishness.

The first thing I want to say to Steve is "Hey, thanks!"  It’s been my goal for months now to get a reference from a Microsoft blog to one of my CodeFez articles.  Mission accomplished!

Now that I’ve been nice and appreciative to Steve for fulfilling a long unrealized dream, I’ll proceed to rip his blog post to shreds like a puppy with a Sports Illustrated rubbed in bacon.

Steve discusses five points from my article.  I’ll respond to his comments about those five points, destroying each in turn.  It’ll be like Perry Mason vs. That District Attorney Guy who never won a case. (By the way, That District Attorney dude was named Hamilton Burger.  Wouldn’t you love to have a friend named Ham Burger? The laughs would just keep on rollin’, hanging around in the bar after Ham lost every case to Mason. Good times.)

Point 1: Steve "refutes" my complaint about the poor design of the myriad of Connection objects in ADO.NET by saying "that isn’t an OOP design issue it’s a product functionality decision." I must say, my initial response to that was "Huh…? What does ‘product functionality decision’ mean?" How is this not an OOP design issue? Instead of designing a single class to do the job, they design multiple classes that can’t replace one another. In addition, each class requires the use of a database-specific implementation of IDbDataAdapter, which can’t be interchanged either. And don’t even get me started on things like OracleDateTime and OracleParameter. You can’t even call that stuff "somewhat abstracted and decoupled". To his credit, Steve concedes that point when he says "Nick also goes on to point out, correctly I think, that BDP is better at insulating the developer from different database vendors." Well, yeah, exactly! That’s what the Borland Data Provider does: it uses good OOP technique to encapsulate and abstract ADO.NET so that you don’t have to hard-code things like OracleString into your code. That’s what good OOP design is supposed to do. Steve says it’s not a fundamental design problem, but I say a total lack of abstraction to interfaces and a tight coupling of a specific implementation to the interfaces is bad design. The BDP doesn’t do this. That’s good design.

Point 2: Steve misses the boat altogether when talking about the Style class. Sure, you can create your own Style class and implement it in your ASP.NET controls. But to do that, you have to completely abandon the built-in functionality that the framework supplies for dealing with Styles, and do everything "by hand". The System.Web.UI.WebControls.WebControl class has a property called ControlStyle which is of type Style. If you want your control to have a style that doesn’t descend from the Style class — and thus include stuff that you might not want — you are out of luck. You can’t partake of the WebControl class’s style handling. You have to do it all yourself. This is a perfect example of MS not getting it. They’ve provided a base style for you that you must use no matter what — even if you don’t want some of that functionality in the Style class. Again — bad design. It’s poorly designed because it makes assumptions about descendent controls that shouldn’t be made.

I’ll have to agree with Steve when he laments my example for defining an IStyle. I did say that I was "designing on the fly." I didn’t propose my example as the perfect solution, but merely as an example of how it might be done: i.e., the ControlStyle property should have been an interface instead of a class that requires a certain type to be used.

Point 3: Steve, Steve, Steve, Steve, Stevey, Steve-aroo, Steve-arama! Wow, you are playing right into my hands, just like that poor cop schmuck in The Usual Suspects when Kevin Spacey played him like a concert piano. Steve argues, "Okay, now we’re getting into framework functionality, not OOP design." Well, no, the lack of OOP design is exactly what I’m talking about here. How is framework functionality not OOP design? Loading, reading, and writing text files is a very basic and common piece of functionality. One would think that there might be a class that encapsulates that functionality. For instance, the description of such a class that would be used to do what I discussed might go like this:

  1. Create an instance of the TextFileManager class by passing the filename to the constructor
  2. Alter the third line in the text file, as described in the original problem set from the previous articles.
  3. Save the result.

Simple, clean, neat, orderly, and — dare I say it! — well designed.
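
To make the comparison concrete, here is a minimal sketch of what such a class might look like. The class exists in neither framework (the name and methods are my invention), and since the point is about OOP rather than any particular language, I’ll sketch it in Java:

import java.io.*;
import java.util.*;

// Hypothetical TextFileManager: a single class that encapsulates
// loading, editing, and saving a text file.
public class TextFileManager
{
    private final String fileName;
    private final List<String> lines = new ArrayList<String>();

    // Load the whole file into memory, line by line.
    public TextFileManager(String fileName) throws IOException
    {
        this.fileName = fileName;
        BufferedReader reader = new BufferedReader(new FileReader(fileName));
        try
        {
            String line;
            while ((line = reader.readLine()) != null)
            {
                lines.add(line);
            }
        }
        finally
        {
            reader.close();
        }
    }

    // Replace a single line, counting from 1 the way a user would.
    public void setLine(int lineNumber, String text)
    {
        lines.set(lineNumber - 1, text);
    }

    // Write the modified contents back out to the file.
    public void save() throws IOException
    {
        PrintWriter writer = new PrintWriter(new FileWriter(fileName));
        try
        {
            for (String line : lines)
            {
                writer.println(line);
            }
        }
        finally
        {
            writer.close();
        }
    }
}

With a class like that, the three steps above really are three lines of code: construct the object, call setLine, call save.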

Steve’s example drones on and on like this:

  1. Create an instance of something called a StreamReader (you use a StreamReader to handle text?), passing the filename to the constructor
  2. Allocate memory for an array of strings
  3. Read through to the end of the stream, and when you get done doing that, peruse over the result, chopping the string into array entries every time you run into a "return" character
  4. Close the StreamReader
  5. Alter the third item in the array
  6. Create an instance of something called a StreamWriter (you use a StreamWriter to handle text?), passing the proposed filename to the constructor.
  7. Iterate over each item in the array we created above and write each one out to the StreamWriter.

Imagine my mirth, my barely controlled giggles, when Steve follows this up with "You might be able to do this a little more briefly in other languages but not by much." Uh huh.  First, this isn’t a question of language, but a question of OOP.   I’ll leave it to the imagination of the reader to determine which of the two above examples is a better example of encapsulation.  (Hint:  One uses a single class, the other uses two classes and an array.) 

Point 4:  Steve then goes on to write about my "getting data out of a dataset" argument. He writes, "Okay, so the first one returns a System.Object that you have to explicitly convert and the second returns a TField object that has conversion methods hanging off of it.  I admit the TField is handy and maybe even nicer from a usability standpoint, but I have trouble seeing this as a huge issue or an indication that the implementer doesn’t quite get OOP."

Doesn’t quite get OOP? The designer here returns an instance of the root object. Very, very helpful. Not! The class itself could provide easy conversion to other types, but it doesn’t. Easy to do, obviously needed, but there’s nothing there. All you get is an instance of System.Object. And this says to us that the designer "gets OOP"? What it says to me is that the designer thought that it was quite OOPish to have you go off and call some other function from some other class to perform the basic functionality of retrieving data from a dataset. Thinking it OOPish to need to call another class to get the original class to perform the basic functionality of the field is not my idea of "Thinking in OOP". As Steve says, the TField class is handy. It’s handy because it’s a sound implementation of OOP design to perform the task at hand! We can call the Convert class a class, and technically it is, but it’s really just a container for library functions. Needing another class to perform the basic functionality of your class — and yes, returning values is a basic functionality of a dataset — isn’t OOPish.

Point 5: My Point 5 here wasn’t really meant to be an indictment of the OOPishness (I love that word!) of ADO.NET, but just a general lament. Sure, you can iterate over a result set with a DataReader — if you are connected to the server! If not, you are pretty much stuck doing the foreach thing over the rows. The notion of a current record requires binding the data to the interface. Very uncool. The general point stands — the concept of a current record is a basic database concept, and it is totally absent from ADO.NET.

I appreciate Steve’s general agreement about sealed classes. (I didn’t even talk about adding final to a method. Argh, how unfriendly can you get? "Hey, this method used to be virtual, but I’ve categorically decided that you can’t descend from it anymore! Neener, neener, neener!") I think it hard to argue that sealed classes are anything other than totally lame. I can even go so far as to grant Steve’s basic point that, lame though they are, a programmer should be able to seal a class. Such a programmer would be a big weenie to do so, but, hey, that’s the programmer’s decision. And of course, I can mock such a decision.

Steve does argue a few points in favor of sealed — that a class may need to be sealed to help the compiler. I counter that OOPishness knows not, and should not know, of compilers. If you are making OOP design decisions based on what a specific compiler needs, then you aren’t making good OOP design decisions. In fact, the FCL is supposed to be language neutral, so OOP decisions based on compilers aren’t supposed to even be a factor. He also argues that there might be security reasons for sealing a class. Well, maybe, but I can’t think of any right now. I’m happy to be educated on that point.

Steve summarizes: "In summary, I think there is very little evidence in Nick’s anecdotes that points to some fundamental misunderstanding of OOP." Weeeeeeeeeelllll, I beg to differ. I think every single one of my anecdotes speaks directly to the issue of a lack of good OOP decisions, as illustrated above in my stunning repartee to Steve. Each of my examples speaks about nothing but design decisions made by Microsoft designers that either limit your ability to implement a descendent class, tightly couple your code to a specific class, or force you to use a class that you don’t necessarily want to use. And they are, of course, merely a sampling of things that I could have talked about. For instance, try to descend from StringCollection.

Hey, now that Steve is a Microsoftie, I expect him to defend the home field.  But since he is a Microsoftie, I also expect him not to quite get it.

More Developer Economics

My CodeFez colleague, Nick, just posted an interesting editorial on the economics of developer tools, particularly with regard to Borland. Nick points out that businesses are justified in maximizing profits versus, say, maximizing revenue or expanding market share. While this may be true (and I don’t claim to be any more of an economist than Nick), this business strategy brings with it a number of other market forces that cannot be ignored. Most notably, maximizing profits may significantly increase the volatility of the revenue stream. In addition, deliberately shrinking the market, particularly in developer tools, reduces the total market for the tool – and ultimately reduces demand.

Increased Revenue Volatility

Just for the sake of argument, let’s say my company has 100 customers. Let’s say it then turns out that my support costs are sky-high, so I decide to focus on maximizing profit by increasing the price of my product. In this way, I know fewer people will buy my product, but I also know that revenue per unit will be higher and support costs will be lower. Again, just to work with round numbers, let’s say this shrinks my customer base to 10 very profitable clients. The problem this presents is that my revenue stream is now much more volatile. With my original base of 100 customers, losing one customer means the loss of only 1% of my business. However, with the smaller customer base of 10, the cost of losing each customer is much greater: 10% of my revenue. Of course, these numbers are totally contrived, but the concept behind them is solid: the fewer customers you have, the less you can afford to lose any one of them.

Shrinking Demand

A smaller market might be fine for some kinds of products. Fine wines or luxury cars, for example, are markets designed for the high margin/great service/few customers business model. However, a small market is not a good thing for developer tools. In developer tools, market share is king. While a small portion of the developer tools market will stick to their favorite tool, popularity be damned, the vast majority of the tools market will tend to follow the trends, those tools and technologies that organizations feel have the best longevity and developers feel will keep them best employed. At the end of the day, this translates into continually decreasing demand for the smaller market tools – a sort of death spiral into sub-one-percent-market-share oblivion.

In short, I’m not as fond as my good friend, Nick, of the Borland Delphi pricing strategy. I enjoy Borland tools, but it’s clear that the growth of products like MS Visual Studio.NET and Eclipse is occurring at the expense of the higher-priced Borland offerings. I would prefer to see a strategy that more effectively balances growth with profitability.

Developers and Economics

I never cease to be amazed at how little the average developer knows about economics. I mean, I don’t claim to be an expert, but I have taken a college class or two and read up on the basics. Even just understanding the basics, though, gives one surprising insight into why things happen the way they do in the marketplace.

For instance, we are in the process of hiring a new developer at my company. We figured out what qualifications we were looking for and determined about how much salary we wanted to pay. It quickly became apparent, however, that we weren’t offering a high enough salary to attract the caliber of candidates we wanted. So what I did was to go on the newsgroups and start whining about not being able to find any good Delphi programmers. Okay, no, I didn’t really do that. What we did, of course, was to increase the salary that we were offering. Simple supply and demand issue: there wasn’t enough of a supply of good Delphi programmers at the price we wanted to pay, so the solution was to be willing to pay more – a no-brainer decision, really. Once we did that, we found that we were able to find plenty of valuable candidates. Simple economics.

One common area that I see developers totally misunderstand is that of Delphi pricing. One thing you learn in Economics 101 is that the vast majority of companies are “price searchers”. That is, they are constantly searching for a price that will maximize their profits. (Some companies, mainly producers of commodities, are “price takers”. That is, they take whatever price is offered. Farmers are a good example. A corn farmer can only sell his corn at market price. If he asks for more, the market will simply buy corn from another farmer that will take the offered price). Borland is definitely a price searcher. They can set their prices as they please, and will do so to maximize profit. Of course, the market will respond to any particular price by demanding a certain number of units at that price. Price searchers are constantly adjusting prices to maximize the amount of money they make.

Note that they don’t set price to maximize revenue, but rather profit. The cost of goods sold is a factor here, as is the cost of simply having customers. Sometimes a company will actually price a product in order to limit the number of customers they have in order to maximize profits, as sometimes having additional customers causes decreased profits. (That may be a bit counter-intuitive, but think of a product that has high production and support costs.) So, for example, sometimes doubling your price can increase profits even though it drastically reduces sales. If doubling the price cuts the number of customers you have in half, but also cuts your production and support costs in half as well, your profit increases. To put contrived numbers on it: sell 100 units at $100 each with $80 of production and support costs per unit, and you earn $2,000; sell 50 units at $200 each with the same $80 cost per unit, and you earn $6,000 – triple the profit from half the customers. (This is a very simple example, and reality is hopelessly more complicated than that, but you hopefully get the basic idea.)

So Borland, having learned from years of experience and copious sales data, is quite aware of what effect various prices have on sales. No doubt by now they have a pretty good idea what price will maximize their profits and how price changes will affect sales.

Where it gets really interesting is pricing outside of the United States. Europe, for example, is a completely different market than the US. Taxes, the demand curve, and the number of potential customers are all different. Borland clearly believes that they need to – and can – charge more in Europe than in the US. The price difference is not related to the exchange rate between the Euro and the Dollar; it has everything to do with maximizing profits. Clearly Borland believes that a higher price in Europe – again, a completely different market – will mean higher profits. That’s why Delphi costs more in Europe. I suppose Europeans could view this as “price gouging”, but in reality, it’s just the market signaling to Borland that it will bear a higher price than will the American market. Simple economics.

Another economic blunder that developers frequently make is ignoring economies of scale. Borland is a big company that is publicly traded. Many Delphi developers work in small, private companies. Borland has legal obligations, overhead costs, and market demands that most little-shop developers don’t even know about, much less take into consideration. Borland’s main competition is one of the largest corporations in the world. Borland faces investors who expect a return. Borland has to deal with major entities in the media that can write things that can have profound effects on Borland’s business. All of this combines to make running Borland a complex and difficult task that most of us simply don’t comprehend.

So I love it when a developer posts in the newsgroups something like this: “Borland should just hire two college students to go through and fix all the Delphi bugs in Quality Central.” Well, that sounds great, but it clearly isn’t that simple. First, fixing bugs in a big product like Delphi is no small, trivial task. Finding people with the talent and skill to do Delphi bug-fixing isn’t easy. And they certainly aren’t going to be cheap. The notion that some college interns can do it is quite naïve. The economic blunder comes, though, in thinking that the cost of fixing all those bugs is merely the salary of a couple of developers. First, employees aren’t cheap, no matter who you hire. Human capital is by far the most costly – and valuable – part of doing business. Secondly, I don’t know what your development process is like, but bug fixing at Borland is more than a guy hacking out some code. Every fix has to be extensively tested for efficacy and correctness, and then the whole product has to be regression tested to ensure that any given fix doesn’t actually break something else. Fixes need to be incorporated into the shipping product and distributed to existing customers. The documentation needs to be updated. And who knows what else needs to be done? The point is this: the costs of things that many people think are small are in fact much larger than the average developer appears to realize.

The economics of running a business like Borland isn’t something about which I claim to be an expert. But I do know that I don’t know enough to be able to claim to know better than Borland. Something to consider before you fire off a post in the newsgroups that starts out “Borland ought to just….”

The Need for Bad Software

Unless you happen to be Don Knuth, you have probably written bad software. But believe it or not, bad software is good for you. 

One of the funniest blogs for developers is The Daily WTF. For normal people like my wife it is of course completely and utterly baffling, containing, as it does, words arranged in monospace fonts with punctuation where it shouldn’t be (sometimes with more punctuation than any text has a right to have), different lines indented in different ways, and, if you’re lucky, some words get to be printed in blue or red.

The Daily WTF’s premise is simple: developers enjoy a good laugh when we see badly written software. The laughter is usually accompanied by a fervent prayer muttered under our breath: "We hope by all that’s holy that we didn’t write it, and we pray that it isn’t part of the application we’re currently developing." The blog has a post a day that lampoons some source code, some interaction between a developer and other IT personnel, or some development-related happening.

Consider this saying: "You learn from your mistakes." Though a bit trite and clichéd, it does hold a grain of truth. Think back in time to a really awful bit of code you wrote. What happened when the inevitable bug report came in, and you investigated, and noted with horror what your younger self had written? You broke out in a sweat, your eyes dilated and you uttered the immortal acronym WTF, but in a decompressed form. Maybe you were able to cover it up, maybe it had to be written down in all its glory in the bug report’s commentary and then you had to go past people in the hallway who would fall silent and watch as you went by.

But I’ll bet you’ve never written code like that ever again.

Another example maxim: you only learn by doing. You know, you can read all the books and erudite exposition explaining how test-driven development helps you write better software, but you won’t internalize those lessons until you actually start writing tests. Until then it’s merely a nice thought experiment with no contact with reality. But how many times will you write bad software, or make the same coding mistake, or forget to check a fix in, before you start to wonder whether having a test suite could actually save your bacon? And guess what? Once you start with a unit test suite, you’ll be adding code to it all the time. Your code will get better. You will stop making the same mistakes (but make different ones, and probably be able to fix them faster). And most importantly, you’ll learn to rely on the tests that are run each time you check your code in.
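
If you’ve never taken that first step, it costs almost nothing to start. Here is a minimal JUnit-style test; the OrderCalculator class and its tax rate are invented purely for illustration:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test.
class OrderCalculator
{
    private final double taxRate;

    public OrderCalculator(double taxRate)
    {
        this.taxRate = taxRate;
    }

    public double total(double subtotal)
    {
        return subtotal * (1 + taxRate);
    }
}

// A first, minimal unit test: one behavior, one assertion.
public class OrderCalculatorTest
{
    @Test
    public void totalIncludesSalesTax()
    {
        OrderCalculator calc = new OrderCalculator(0.08); // 8% tax
        // 100.00 subtotal + 8.00 tax = 108.00
        assertEquals(108.00, calc.total(100.00), 0.001);
    }
}

Write one of these for the next bug you fix, check it in, and you have the seed of a suite that will catch that mistake forever after.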

You see, we do have a real need for bad software. It is only through bad software that we get better tools, better algorithms, better methodologies.

Think about it another way: it took several iterations of really bad software before Microsoft launched their Trustworthy Computing initiative. And it took them a while to see the value in designing the CLR. The CLR demonstrates that it is harder (but not impossible) to write insecure code with managed code and the .NET Framework than with the C run-time library and pointers galore. It took an analysis of Diebold’s bad voting machine software to make us realize that hanging chads were the good old days. It takes bad software to crash a Mars probe into the planet and thereby waste $250 million and nine months in one glorious thud. It takes bad software for us all to appreciate that it’s new software we like writing, but that the steady money comes from software maintenance.

Bad software is everywhere; we have to live with it. It permeates the programs we use every day, it infuses the components we use, it pervades the society we live in. It forces us, as developers, to strive to better ourselves and to write better code. (If it doesn’t have this effect on you, then maybe you should go do something else!)

It takes bad software to make us appreciate the good software. Best of all, it gives us a good laugh every day!

Microsoft .NET Framework Security

Ask ten people “what is information security?” and you’ll get ten answers, most of them probably correct. We have a tendency in this business to take this vast topic area and paint it with the single color of information security. Here, for example, are a handful of typical answers to the “what is information security?” question:

  • Authentication: The act of validating that a user or system is who they claim to be. This is accomplished most commonly using username and password, but can also be done using, for example, Smart Card or any number of biometric techniques (fingerprint, retinal scan, etc.).

  • Authorization: Once the user is authenticated, authorization determines what they are allowed to do on the system. For example, the user may have special administrator rights on the system or the user may be a member of a group with these rights.

  • Access control: The system that controls access to resources, ensuring that authenticated users are able to access only that functionality for which they are authorized.

  • Privacy: Ensuring that data or communications intended to be private remain private. This is often accomplished through cryptography and communication layers that depend on cryptography, such as Secure Sockets Layer (SSL).

  • Integrity: After data is communicated or stored, the reader of the data must be able to have assurance that it has not been modified. Cryptographic hashes and signatures often play a role in this area (see the sketch following this list).

  • Uniqueness: In a message-based system, care must be taken to ensure each message is unique and cannot be “replayed” to the detriment of the system. Often serial numbers or time codes are used to prevent this type of vulnerability.

  • Availability: Systems must remain available to authorized users at the times they are supposed to be available.

  • Non-repudiation: Preventing a party from later denying having performed an action related to some data.

  • Software vulnerabilities: Protecting software systems against compromise through the sending of specially formatted data via a network or console.

  • Rogue applications: Viruses, malware, and the like, which cause damage to a system when executed.

  • Infrastructure: Firewalls, routers, wireless access points, and other hardware that makes up the physical network infrastructure. Without sufficient infrastructure protection, no system can be declared safe.

  • Endpoint protection: Ensuring that workstations, laptops, hand held devices, and other network “endpoints” are hardened against vulnerabilities that might otherwise put the network or system as a whole at risk.

  • Auditing: Logging and cataloging of data so that problems or compromises can be analyzed in progress or postmortem so that they may be corrected in the future.

  • Physical: The proverbial lock and key, preventing unauthorized individuals from physical proximity to a system or network.
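
As a small taste of the integrity item above, here is a sketch of the difference between a plain hash and a keyed hash. The message and key are invented for illustration, and I’ve written it in Java for brevity; the .NET Framework exposes the same ideas through the System.Security.Cryptography namespace:

import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class IntegrityDemo
{
    public static void main(String[] args) throws Exception
    {
        byte[] message = "Transfer $100 to account 42".getBytes("UTF-8");

        // A plain cryptographic hash detects accidental corruption,
        // but an attacker can simply recompute it after tampering.
        MessageDigest sha = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha.digest(message);

        // A keyed hash (HMAC) can be produced only by holders of the
        // secret key, so deliberate tampering becomes detectable.
        SecretKeySpec key =
            new SecretKeySpec("invented-secret".getBytes("UTF-8"), "HmacSHA1");
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(key);
        byte[] tag = mac.doFinal(message);

        System.out.println("digest: " + digest.length
            + " bytes, hmac: " + tag.length + " bytes");
    }
}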

I obviously won’t be able to thoroughly cover all of these topics in this paper, but we will certainly touch on the greatest hits. More importantly, we’ll drill down in these important topics to a level of detail that enables you to understand how to implement such security for yourself in the .NET Framework.

J2EE Design Strategies Part I

Enterprise Java is an extremely popular development platform for good reasons. It allows developers to create highly sophisticated applications that have many desirable characteristics, such as scalability and high availability. However, with J2EE you are potentially building distributed applications, which is a complicated endeavor no matter how it is built. As developers come to use J2EE more and more, they are discovering some pitfalls that are both easy to fall into and easy to avoid (if you know what to look for). J2EE development comes with its share of bear traps, just waiting to snap the leg off the unwary developer.

This paper is designed to highlight several of these avoidable pitfalls. It does so from an entirely pragmatic approach. Much of the available material on design patterns and best practices is presented in a very academic manner. The aim of this paper is just the opposite — to present common problems and their associated solutions. It starts with J2EE Web best practices, moves to EJB best practices, and concludes with some common worst practices that should be avoided.

Web

Web development is an important aspect of J2EE, and it has its share of potential dangers.

Singleton Servlets

A Singleton servlet sounds like an oxymoron — aren’t servlets already singleton objects? Some background is in order. First, a singleton is an object that can only be instantiated once. There are several different ways to achieve this effect in the Java language, most commonly with a static factory method. This is a common technique anytime an object reference can be reused rather than a new one instantiated. Of course, servlets already act in many ways like singleton objects — the servlet engine takes care of instantiating the servlet class for you, and generally creates only one instance of the servlet, spawning threads to handle users’ requests. Allowing the servlet engine to do this works fine in most cases. However, there are a few cases where you want the singleton-like behavior but also need a way to get at the instance yourself. When the servlet engine creates the servlet, it assigns an internal reference to the servlet instance and never lets the developer directly access it. This is a problem if you have a helper servlet storing configuration information, managing a connection pool, etc. What is needed is a way to allow the servlet engine to instantiate the servlet for us yet still be able to get to it.

Here is an example of a servlet that meets these criteria. It is a servlet that manages a homegrown connection pool.

Listing 1: Singleton Servlet for Managing Connection Pool

package webdev.exercises.workshop1;

import javax.servlet.*;
import java.sql.*;
import webdev.utils.ConnectionPool;

public class PoolMgr extends GenericServlet
{
    // reference to the single instance created by the servlet engine
    private static PoolMgr thePoolMgr;
    private ConnectionPool connectionPool;
    static final String DB_CLASS = "interbase.interclient.Driver";
    static final String DB_URL =
        "jdbc:interbase://localhost/e:/webdev/data/eMotherEarth.gdb";

    public PoolMgr()
    {
    }

    public void init() throws ServletException
    {
        try
        {
            String dbUrl = getServletContext().getInitParameter("dbUrl");
            connectionPool = new ConnectionPool(DB_CLASS, dbUrl,
                "sysdba", "masterkey", 5, 20, false);
            getServletContext().log("Created connection pool successfully");
        }
        catch (SQLException sqlx)
        {
            getServletContext().log("Connection error", sqlx);
        }
        // save the engine-created instance for later static access
        thePoolMgr = this;
    }

    public void service(ServletRequest req, ServletResponse res)
        throws javax.servlet.ServletException, java.io.IOException
    {
        //--- intentionally left blank
    }

    public static PoolMgr getPoolMgr()
    {
        return thePoolMgr;
    }

    public ConnectionPool getConnectionPool()
    {
        return connectionPool;
    }
}

First, note that this is a GenericServlet instead of an HttpServlet — the user never directly accesses this servlet. It exists to provide infrastructure support to the other servlets in the application. The servlet includes a static member variable that references itself (common in "normal" singleton classes). It also has a (non-static) reference to the connection pool class. In the init() method of the servlet, the connection pool is instantiated. The very last line of this method saves the reference created by the servlet engine for this instance of the servlet. The GenericServlet class includes a service() method, which is not needed here (intentionally left blank to highlight that point). The servlet includes a static method called getPoolMgr() that returns the saved instance of the class. This is how other servlets and classes can access the instance created by the servlet engine. We are using the class name (and a static member variable) to keep the reference for us. To access this pool manager from another servlet, you can use code like this:

Listing 2: Snippet of servlet that uses a singleton

Connection con = null;
try
{
    //-- get connection from pool
    con = PoolMgr.getPoolMgr().getConnectionPool().getConnection();
    //-- do a bunch of stuff with the connection
}
catch (SQLException sqlx)
{
    throw new ServletException(sqlx.getMessage());
}
finally
{
    PoolMgr.getPoolMgr().getConnectionPool().free(con);
}

The access to the connection pool class is always done through the PoolMgr servlet’s method. Thus, you can allow the servlet engine to instantiate the object for you and still access it through the class. This type of singleton servlet is also good for holding web application-wide configuration info. In fact, it is common to have the servlet automatically created by the servlet engine. The web.xml file allows you to specify a startup order for a particular servlet. Here is the servlet definition from the web.xml file for this project.

Listing 3: Web.xml entry to auto-load the PoolMgr servlet

<servlet>
    <servlet-name>PoolMgr</servlet-name>
    <servlet-class>
        webdev.exercises.workshop1.PoolMgr
    </servlet-class>
    <load-on-startup>99</load-on-startup>
</servlet>

An alternative to using a Singleton Servlet is to use a ServletContextListener, which was added as part of the servlet 2.3 specification. It and the other listener event handlers for web development allow you to tie behavior to particular events. The listing below shows how to create a connection pool using a ServletContextListener.

Listing 4: StartupConfigurationListener creates a connection pool upon application startup.

package com.nealford.art.facade.emotherearth.util;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.ServletContext;
import org.apache.commons.pool.impl.GenericKeyedObjectPool;
import java.sql.SQLException;
public class StartupConfigurationListener implements
        ServletContextListener, AttributeConstants 
{
    public void contextInitialized(ServletContextEvent sce) 
    {
        initializeDatabaseConnectionPool(sce.getServletContext());
        BoundaryFacade.getInstance().initializeBoundaryPool(sce.getServletContext());
    }
 
    public void contextDestroyed(ServletContextEvent sce) 
    {
    }
    private void initializeDatabaseConnectionPool(ServletContext sc) 
    {
        DBPool dbPool = null;
        try 
        {
            dbPool = createConnectionPool(sc);
        } 
        catch (SQLException sqlx) 
        {
            sc.log(new java.util.Date() + ":Connection pool error", sqlx);
        }
        sc.setAttribute(DB_POOL, dbPool);
    }
    private DBPool createConnectionPool(ServletContext sc)
            throws SQLException 
    {
        String driverClass = sc.getInitParameter(DRIVER_CLASS);
        String password = sc.getInitParameter(PASSWORD);
        String dbUrl = sc.getInitParameter(DB_URL);
        String user = sc.getInitParameter(USER);
        DBPool dbPool = null;
        dbPool = new DBPool(driverClass, dbUrl, user, password);
        return dbPool;
    }
}

The advantage of the SingletonServlet lies in the ability of non-web classes to obtain a reference to it. For example, you might have a POJO (Plain Old Java Object) that handles database access for your application. It has no way to get to any of the web collections directly because it has no access to the servlet context. With a SingletonServlet, the class name of the servlet allows the developer to get to the underlying instance. So, even in the presence of the listener classes introduced to the web API, Singleton Servlets still have uses.

Model-View-Controller for the Web

In the beginning, there were Servlets, and it was good. They were much better than the alternatives, and allowed for scalable, robust web development. However, there was trouble in paradise. Web development partitioned itself into two camps: art school dropouts (invariably Macintosh users) who could create the beautiful look and feel for the web application, and the Java developers who made it work. The guys in the basement hand crafted the beautiful HTML and passed it to the developers who had to incorporate it into the dynamic content of the web site. For the developers, it was a thankless, tedious job, inserting all that beautiful HTML into the Java code. But, you drank lots of coffee and lived through it. Then, the unthinkable happened: the CEO got an AOL disk in the mail and visited a web site he’d never been to before. Come Monday, the commandment came down from on high: We’re completely changing the look and feel of the web site. The art school dropouts fired up their Macs and started realizing the CEO’s vision, and the developers got a sinking feeling in the pit of their stomachs. Time to do it all over again. The problem? Too much HTML in the Java code.

Then JSPs appeared. Here was the answer to all our prayers. JSPs have the same advantages as servlets (they are, after all, a type of servlet) and were much better at handling the user interface part of web design. In fact, the art school dropouts could craft the HTML, save it as a JSP, and pass it right to the developers. However, all was still not well. The developers now had to deal much more directly with the display characteristics of the application. Thus, the syntax of the JSP quickly becomes very cryptic, with the HTML and Java code interspersed together. The verdict: too much Java in the HTML.

Then came the Model-View-Controller design pattern for the web. If you’ve been living in a cave and aren’t familiar with this most famous of design patterns yet, here’s the capsulated version. The model represents the business logic and data in the application and resides in JavaBeans and/or Enterprise JavaBeans. The view is represented primarily by JSP pages, which have as little Java code in them as possible. In fact, all Java code should really be handled by method calls on the model beans or custom tags. The controller is the way that the view interacts with the model. In the web world, a servlet is the controller. Here is the typical scenario for web MVC. The user accesses a controller servlet. The servlet instantiates beans, calls methods on them to perform work, adds the beans with displayable information to one of the collections (for example, the request collection), and forwards the beans to a JSP that shows the user the results.
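
Here is that scenario reduced to a minimal sketch. The model bean, its data, and the JSP name are all invented for illustration:

Listing 5: A minimal controller servlet forwarding to a JSP view

import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.*;

// Hypothetical model bean; the real work (database access, etc.) goes here.
class ProductCatalog
{
    public List findFeaturedProducts()
    {
        return Arrays.asList(new String[] {"garden spade", "watering can"});
    }
}

// The controller: talks to the model, then forwards to the view.
public class CatalogController extends HttpServlet
{
    public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
    {
        // 1. Ask the model to do the real work.
        ProductCatalog catalog = new ProductCatalog();
        List products = catalog.findFeaturedProducts();

        // 2. Put the displayable results where the view can find them.
        request.setAttribute("products", products);

        // 3. Forward to the JSP, which contains no business logic.
        RequestDispatcher view = request.getRequestDispatcher("/catalog.jsp");
        view.forward(request, response);
    }
}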

And it was good. Now, the display information is cleanly partitioned away from the "real" work of the application, which can live strictly in JavaBeans. The application could also start using regular JavaBeans, then scale up to use Enterprise JavaBeans without having to change the controller or presentation layers. This is clearly the best way to build web applications. It is easy to maintain, easy to update, and there is very little impact when one part of the system needs to change (now, the art school dropouts have to worry about the new look and feel, not the developers). This design pattern neatly modularizes the constituent parts of web applications.

Now what’s wrong? The problem with the MVC web applications (now frequently called "Model2", to distinguish it from MVC for regular applications) has to do with how you architect the web application. For example, if you create a different controller servlet for each page the user wants to visit, you end up with dozens or hundreds of servlets that look almost identical. Another problem is that these servlets, once visited, permanently reside as objects in the servlet engine. An alternative is to create one monster controller servlet to handle all the requests. The problem here is that you have to figure out a way to map the requests to different views. This is frequently done with parameters sent to the web site, identifying what command you want to execute. But, unless you are clever about it, your "uber servlet" becomes a massive set of "if…else" statements or a huge "switch…case" statement. Any changes require editing this servlet, which quickly becomes unruly and ugly. What is needed is an application framework for web development that handles most of these gory details. And that’s where Struts comes in.
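
Before reaching for a framework, it is worth seeing the shape of the fix. Here is a sketch of the command-map approach; the Command interface and the command names are my own invention for illustration:

Listing 6: A front controller that dispatches through a command map

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import javax.servlet.ServletException;
import javax.servlet.http.*;

// A hypothetical command interface: each request type gets its own class.
interface Command
{
    // Does the work and returns the path of the view to display.
    String execute(HttpServletRequest request) throws ServletException;
}

// One concrete command; a real application would have many of these.
class ListProductsCommand implements Command
{
    public String execute(HttpServletRequest request)
    {
        // ... call the model here ...
        return "/products.jsp";
    }
}

// The front controller looks commands up by name, so adding behavior
// means adding a map entry and a class instead of another else-if branch.
public class FrontController extends HttpServlet
{
    private final Map commands = new HashMap();

    public void init()
    {
        commands.put("listProducts", new ListProductsCommand());
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
    {
        Command command = (Command) commands.get(request.getParameter("cmd"));
        if (command == null)
        {
            response.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        String view = command.execute(request);
        request.getRequestDispatcher(view).forward(request, response);
    }
}

Struts generalizes exactly this idea: its controller servlet reads the mapping from a configuration file instead of hard-coding it.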

Why the word "Fez"?

Inquiring minds want to know: who put the Fez in CodeFez? To begin, you should know that I was born in Egypt in 1971, with Greek, French and Egyptian ancestry. That is probably why the “Falafels“ and the “Fezes“ keep turning up wherever I go!

The best way for you to understand the name CodeFez is for me to present a history lesson on where the word “Fez“ comes from. Most of the facts below come straight from Tarboosh.com.

DURING the reign of Turkey’s Sultan Mahmud Khan II (1808-39), European codes of dress gradually replaced the traditional robes worn by members of the Ottoman court. The change in costume was soon emulated by the public and senior civil servants, followed by the members of the ruling intelligentsia and the emancipated classes throughout the Turkish Empire. As European dress gradually gained appeal, top hats and bowlers with their great brims, and the French beret, never stood a chance. They did not conform with the customs and religions of the east. In their stead the Sultan issued a firman (royal decree) declaring that the checheya headgear, in its modified form, would become part of the formal attire, irrespective of his subjects’ religious sects or millets.

The checheya had many names and shapes. In Istanbul it was called ‘fez’ or ‘phecy’ while the modern Egyptian version was called ‘tarboosh’ (pronounced tar-boosh, with the accent on the second syllable.) The word tarboosh derives from the Persian words ‘sar’ meaning head and ‘poosh’ meaning cover. It was a brimless, cone-shaped, flat-topped hat made of felt. Originating in Fez, Morocco, the earliest variety was in the form of a bonnet with a long turban wound around it which could be white, red or black. When it was adopted in Istanbul the bonnet was modified. At first it was rounded, then, some time later, lengthened and subsequently shortened. At some point the turban was eliminated and a deep crimson became the standard color of the checheya.

Towards the latter part of his reign, Egypt’s Mohammed Ali incorporated the Greek version of the fez into the military uniform. All officers, regardless of rank or nationality, were required to wear it, and soldiers were issued two fezzes a year with their uniforms. This meant the Viceroy had to import almost 500,000 fezzes annually to satisfy his growing army’s needs, which is probably why he issued a boyourli (viceregal firman) to his chief of district Mohammed Aga, residing in the town of Fawah, to immediately commence plans for the local manufacture of tarbooshes.

The first Egyptian-made tarboosh appeared on the market in 1825, and by 1837 Egypt was producing 720 per day in the province of Gharbieh. But when the state economy floundered during Khedive Ismail’s reign, the tarboosh had to be imported, and European manufacturers quickly multiplied their production lines. Demand for the fez increased when it became part of the uniform for the Bosniak regiments of the Austro-Hungarian army (up until 1918). It is therefore not surprising that Austria was the biggest tarboosh producer in Europe.

Until it went bankrupt in the 1970s, the tarboosh manufacturing firm of Habig und Sohne operated out of two adjacent buildings at Frankenbergasse No. 9 in Vienna’s 4th district, only 10 minutes from St. Stephan’s Cathedral. After World War I and the subsequent fall of the Ottoman Empire, Egypt became Habig’s single most important customer. Yet in order to survive, Habig found itself obliged to produce top hats as well, and these later became the company’s main source of revenue. The vestiges of the tarboosh business were still in evidence when the buildings were renovated five years ago: molds used to press the fez were found in the building’s basement.

It was during the reign of King Fouad that a ‘piaster’ fund-raising scheme was launched and the proceeds invested in a tarboosh factory. By 1943 there were 14 tarboosh retailers in Cairo, four in Alexandria, and one each in Tanta and Simbelawain. By then the tarboosh had once again been modified to suit the fashion. While in Viceroy Ibrahim’s time it had been à la grecque, as evidenced by his statue in Opera Square, it became à la turque during Khedive Ismail’s: much longer, and almost covering the ears. During the reigns of Sultan Hussein and King Fouad it was changed to its final shape and size, well above the ears, as seen on the minted coins of that period.

Just like the flag, the tarboosh was a national emblem. It was de rigueur at the Egyptian court, in the civil service, the army and the police. Some officers wore it with a dashing slant, silken tassel flying in the breeze. Crown Prince Mohammed Ali Tewfik wore it in a manner defying gravity, looking like it was about to topple over at any moment. Others wore it as though it were a column, sweating profusely beneath it. Its shape was described in a 1937 English editorial as esthetically and functionally inferior to European hats: "The practical advantage that the hat has over the tarboosh is that the tarboosh offers very little defense against the sun, its long chimney-pot length makes it a convenient victim of any random gust of wind, and in time of rain it has to be mollycoddled and swathed in its owner’s handkerchief in case it should come to harm."

Tarbooshes were also worn by Egyptian diplomats abroad. This requirement almost caused the breakdown of relations between Egypt and Turkey when, on the occasion of the 9th anniversary of the proclamation of the Turkish republic (October 29, 1932), Abdel Malek Hamza Bey, Egypt’s envoy to Ankara, appeared at the Ankara Palace Hotel wearing his tarboosh.

Present at the hotel was Turkey’s strongman, Ghazi Mustafa Kemal. (In 1934, when the Surname Law was adopted in Turkey, the national parliament gave him the name "Ataturk", meaning father of the Turks.) In his quest for a Yeni Turan, a ‘new society’, Mustafa Kemal decreed several laws aimed at changing the norms and traditions of his country. In October 1923 he moved the capital from Istanbul to Ankara. In November 1928 he decreed that the Arabic script be replaced by the Latin alphabet, a measure successfully adopted despite momentous challenges from the more traditional elements of society. Also by decree, the temenah form of greeting (touching one’s forehead, lips and heart with the tips of the fingers), once the symbol of imperial obeisance, was no more; it would be replaced with a handshake. Other drastic measures meant to bring Turkey in step with western culture included outlawing the bewitching yashmak and eradicating the ferraji mantle, the bournous and the gandourah. European styles of dress came to displace the sherwals, the shalwahs and the baggy jodhpurs, and fashionable European hats replaced the turban and the veil. Mustafa Kemal was virtually offering women their freedom.

Yet what was considered the most radical reform was the 1925 parliamentary decree abolishing the fez. Henceforth, the Ghazi forbade its appearance anywhere within his new secular state. To Mustafa Kemal Ataturk, the fez and the veil were signs of backwardness, inferiority and fanaticism. Whereas in Turkey’s not so distant past anyone sporting a hat other than a fez was considered a Giaour (a stranger, or one belonging to another faith and mode of life), western hats had now become the rage. To encourage his people to turn away from the fez and adopt the western hat, the Ghazi would appear in public wearing different European headgear. With the power of law, the situation had been irrevocably reversed.

So it must have been quite appalling for the supreme leader, at his Republic Day celebrations, to espy a red tarboosh bobbing about the crowded banquet hall. Unknown to him, it belonged to Abdel Malek Hamza Bey, Egypt’s diplomatic envoy to Turkey. Aiming to please their leader, Turkish protocol officials accosted the unsuspecting tarboosh wearer and advised him to remove it for fear of provoking the Ghazi’s wrath. Following a brief exchange with his Turkish hosts, Abdel Malek Hamza Bey refused to comply, arguing that his tarboosh formed an integral part of his national attire. Having made his point, he took leave of the celebrations.

Once the reason for Abdel Malek Hamza’s precipitous departure was made public, the press in both countries had a field day. Not to be outdone, the Daily Herald in London ran hair-raising columns on the subject, detailing how the Egyptians felt insulted and how the Turks had countered that, since the Egyptian minister had received a personal apology from the Ghazi, no offense had been done. And hadn’t the Turkish foreign minister, Tewfic Rushdi Bey, declared that the Ankara Government did not consider it necessary to tender an apology, on the grounds that Egypt’s dignity had not been touched at all, and that the Egyptian Government must consider the incident closed!

With restraint practiced on both sides, and despite the Daily Herald’s malefic efforts, the diplomatic incident passed and relations between Egypt and Turkey were maintained for another twenty years. Almost as long as the survival of the tarboosh itself.

In 1952, just like the pashas and beys who proudly wore them, tarbooshes became history in Egypt when the new republican government abolished the official headwear. In an unrelated incident, relations with Turkey were broken off.

As this century comes to a close, tarbooshes and top hats have become relics of the past, to be found only at masquerades, fancy dress balls, and in one or two wayward freemason lodges. Everywhere else you go, you will find multicolored baseball caps instead.

Finally, here at CodeFez, we take our name very seriously, just as the tarboosh and fez were once taken seriously in Egypt. The fez is a symbol of respect, honor and professionalism.

Visual Development with Mono, GTK# and Glade. Part I

This multi-part article describes how to use Glade, a development tool that aids in creating a GTK+ based visual interface for Mono applications. Using Glade is a bit like using the visual designer in Visual Studio or Delphi. One big difference, however, is that in Visual Studio you use Windows.Forms to create a .NET GUI interface, while in Glade you use GTK+ to create a visual interface.

Though Glade has a nice visual designer, it is not a full featured development environment like Delphi or Visual Studio. It does, however, provide tools that allow you to drag, drop, arrange and configure visual elements such as buttons, edit controls and list boxes.

All of the tools discussed in this article, including Gnome, Glade, Mono, GTK+ and the GDK, are open source projects, which means they are distributed with source. More importantly, both Mono and GTK+ are cross platform technologies: they run equally well on Linux and on Windows.

This article begins by explaining the technologies used in Glade development. Once this background material is out of the way, the second part of the article outlines the simple steps necessary to create applications with Glade and Mono.

Mono and Gnome

Mono is the Linux implementation of the Microsoft .NET framework. It is based on open standards: the C# language and the Common Language Infrastructure (CLI), both defined by the ECMA group.

The C# language was created at Microsoft by Anders Hejlsberg, Scott Wiltamuth, and Peter Golde. The standard was submitted to the ECMA group by Microsoft, Hewlett Packard and Intel.

C# is designed to be simple and safe to use. It supports cornerstones of modern language design such as strong type checking, array bounds checking, detection of uninitialized variables, and garbage collection. Though not intended to be as fast as C or C++, it is designed to reflect much of the syntax found in those languages. This was done to make it easy to attract new developers who are familiar with those languages. C# was also designed to support distributed computing.
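A short fragment makes those safety claims tangible. This is a minimal sketch in plain, standard C#; nothing Mono-specific is assumed:

    using System;

    class SafetyDemo
    {
        static void Main()
        {
            int[] values = new int[3];

            // string s = values;  // rejected at compile time (strong typing)

            try
            {
                values[5] = 42;    // array bounds are checked at run time...
            }
            catch (IndexOutOfRangeException e)
            {
                // ...so this throws an exception instead of silently
                // corrupting memory, as the equivalent C code might.
                Console.WriteLine("Caught: " + e.Message);
            }

            // No free() or delete: the array is reclaimed by the
            // garbage collector once it is no longer reachable.
        }
    }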

Since the Mono group is interested in the Linux operating system, they were attracted by the fact that C# was also designed to promote cross platform development. This is, in fact, one of the goals of the C# specification, as laid down in the official ECMA documents.

The Mono project has complete implementations of the ECMA specifications for C# and the CLI. It also supports web and database development; that is, it implements ADO.NET and ASP.NET.

Mono has had more trouble, however, implementing a .NET based GUI front end for desktop applications. In Microsoft’s implementation of .NET, the GUI front end is handled by a toolkit called Windows.Forms, which the Mono team has had difficulty implementing on Linux. As a result, an alternative to Windows.Forms, called GTK#, has emerged. GTK# is similar to Windows.Forms, runs on both Linux and Windows, and has in Glade a visual development environment similar to Delphi’s. If you have patience, you will find that the next few sections of this article explain, in a methodical way, what GTK# is and how it is structured.

Gnome

The Mono project is headed by Miguel de Icaza, whose background includes credit for being the chief architect behind Gnome. Gnome is a complex toolkit consisting of several disparate parts, including an implementation of CORBA and a component architecture called Bonobo which is similar to Microsoft COM. However, Gnome is known most widely as a desktop environment similar to KDE or Windows. The goal of the Gnome desktop environment is to put an easy to use GUI front end on Unix. The Gnome desktop includes tools like a file manager, a panel for managing applications and windows, and various other components too numerous to mention.

Gnome, however, is more than just a desktop environment. It is also a development environment, a suite of API’s that a developer can use to access and control many features of a computer. In particular, it gives the developer control over the X Window System.

The Gnome API’s are not limited to GUI development. You can also use Gnome to develop command line applications. However, for the purposes of this article, Gnome is important because it provides a GUI development API similar to that provided by Windows.Forms.

Because the Gnome API’s are based on a desktop environment, GUI’s developed with it have a consistent and integrated look and feel. A Gnome based GUI can be run on top of the raw X Window System, on top of Microsoft Windows, or on KDE, but it has the most consistent and unified look and feel when it runs on top of the Gnome desktop.

Glib, GDK and GTK+: Important Libraries

Gnome is based on several external libraries. In particular, it relies on two libraries called Glib and GTK+. Before you can fully understand GTK#, it is best that you read at least this brief introduction to GTK+ and Glib.

Glib consists of a number of utility functions, many of them solutions to portability problems. The developers of Glib were interested in cross platform development and, in particular, wanted a consistent API on which to build cross platform applications. A classic problem in cross platform development arises when a fundamental routine is needed on two different platforms but is natively available on only one of them. Glib was created in large part to solve such problems: any routine that was needed on multiple platforms was implemented as part of Glib. Developers could then install the Glib library on a platform and call into the existing Glib routines to get the functionality they needed.

It is interesting to note that, in many cases, problems of this kind have been resolved by adding a routine to a particular language’s standard library. For instance, when C needed a routine for formatted string output, sprintf was developed, and when C was ported to different platforms the sprintf routine came along as a matter of course. The creators of Glib took a different approach. Instead of embedding a particular routine in a single language like C, they embedded it in the Glib library itself. From the beginning, Glib was designed to run on multiple platforms and to support multiple languages. So rather than building a routine into any particular language, the routine was built into Glib, and mappings to Glib were then created for a wide range of languages, including the Mono implementation of C#.
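To make that concrete, here is a small sketch of the mapping at work. It assumes the glib-sharp binding that ships with the GTK# stack is installed, and compiles with something like mcs -pkg:glib-sharp timer.cs (the exact package name varies by version and distribution). The C# call GLib.Timeout.Add maps onto Glib’s portable timeout routine, rather than onto anything in the C language or the operating system:

    using System;

    class GlibTimerDemo
    {
        static void Main()
        {
            GLib.MainLoop loop = new GLib.MainLoop();

            // Glib's portable timeout routine, reached from C#
            // through the glib-sharp mapping.
            GLib.Timeout.Add(1000, delegate {
                Console.WriteLine("one second elapsed");
                loop.Quit();
                return false; // false = do not reschedule the timer
            });

            loop.Run();
        }
    }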

It is also worth noting that GTK was not originally created for use in Gnome. Instead, GTK was designed to be one of the toolkits used by a sophisticated drawing program called the Gimp. In fact, the letters GTK stand for the Gimp Tool Kit. Another important library, the GDK, is the Gimp Drawing Kit. Both GTK and GDK depend on Glib, and Gnome, in its turn, relies on GTK+, GDK and Glib. In other words, the architects of Gnome chose to rely on the same graphics libraries as the Gimp. This has proved to be a wise decision, for GTK has been a good base on which to build.

GTK+ includes a widget library for creating controls such as buttons and list boxes. Though these controls were originally meant only to be part of the Gimp, they have now become part of Gnome, and serve as the widget set for the whole Gnome library. GTK+ also includes a type system and a set of routines for implementing events; in GTK+, events are called signals. As you can see, GTK+ is a sophisticated library that plays a big role in Gnome development.

Because GTK+ calls into the GDK, and not into the X Window System itself, it has been possible to rewrite the GDK to hook into the windowing systems on other platforms, such as Windows. As a result, GTK+ and GDK run on multiple platforms. This means you can write a single GTK+ based application and have it run unchanged on Windows and Linux.

Mono, GTK+, GTK# and Gnome

GTK# is a C# wrapper around GTK+. It therefore provides a cross platform way for Mono based applications to access a powerful, and well thought out, visual library of widgets and drawing routines. Mono developers using C# can call the routines in GTK# to create a visual interface for an application by hand. In terms of complexity, such an operation is similar to creating a window in the raw Qt API: simpler than writing a raw Win32 API application by hand, but more difficult than using a tool like Delphi or Visual Studio to draw the interface for an application.
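As a rough illustration, here is about the smallest complete GTK# program one can write by hand. It is a sketch, assuming the GTK# assemblies are installed (compile with something like mcs -pkg:gtk-sharp hello.cs). Note how the GTK+ signals mentioned earlier surface in GTK# as ordinary C# events:

    using Gtk;

    class HelloGtkSharp
    {
        static void Main()
        {
            Application.Init();

            // Every widget is created and arranged by hand in code.
            Window window = new Window("Hello from GTK#");
            window.SetDefaultSize(200, 100);

            Button button = new Button("Click me");

            // GTK+'s "clicked" signal appears as a C# event.
            button.Clicked += delegate {
                System.Console.WriteLine("clicked signal received");
            };

            // The "delete-event" signal fires when the window closes.
            window.DeleteEvent += delegate {
                Application.Quit();
            };

            window.Add(button);
            window.ShowAll();
            Application.Run();
        }
    }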

Glade was designed to bring visual development to GTK+ and to wrappers such as GTK#. Using Glade, you create an XML file that defines the visual interface for an application. Using GTK#, you can then load that interface and display it to the user with a single, easy to use call.
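Here is a sketch of what that single call looks like with the glade-sharp binding. The file name gui.glade and the window name window1 are placeholders for whatever you draw in the designer; Part II walks through the real process step by step:

    using System;
    using Gtk;
    using Glade;

    class GladeDemo
    {
        // Autoconnect fills this field from the widget of the
        // same name in the Glade file.
        [Widget] Gtk.Window window1;

        static void Main()
        {
            new GladeDemo();
        }

        GladeDemo()
        {
            Application.Init();

            // Load the interface drawn in the Glade designer.
            // "gui.glade" and "window1" are placeholder names.
            Glade.XML gxml = new Glade.XML("gui.glade", "window1", null);

            // Wire up the [Widget] fields and any declared handlers.
            gxml.Autoconnect(this);

            window1.ShowAll();
            Application.Run();
        }
    }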

An example of an application developed with Glade and the GTK# library is shown in Figure 1. Applications such as this one can be assembled quickly using Glade and a very few lines of GTK# code. An exact explanation of the process will be given in Part II of this article.

Figure 1: A simple Mono application written using the GTK# library.

Summary

In this article you have learned that GTK# is a wrapper around a visual library called GTK+, the library that forms the basis for the Gnome desktop environment. Because Gnome is a complete and sophisticated environment, it demonstrates that GTK+ can be used to create sophisticated visual applications. By creating a mapping from C# to GTK+, the Mono team has found a good way for developers to create sophisticated graphical user interfaces for their applications. Such applications are built on standards based, open source code that can run on either Windows or Linux.