Making Wrong Code Not Compile

The rest of the programming world is linking to Joel Spolsky’s latest post about the need for Hungarian notation. Joel makes a nice distinction between "Apps Hungarian" and "Systems Hungarian". The latter is the one we all know and hate, all lpszThis and dwThat. The former is more interesting in that it uses prefixes to describe the role of the data in the application and therefore what can be done to it and how it can be used.

But as far as I’m concerned, all this talk about Hungarian notation is just rubbish.

The essence of Joel’s argument is that you, the developer, become attuned to the prefixes and you notice when variables whose names use different prefixes are used inconsistently. You should read Joel’s post since I’ll be discussing his example; go ahead, read it now. I’ll wait.

Now this all sounds groovy baby, and indeed I imagine several developers have been swayed by Joel’s argument (and I know he can be very persuasive) and have suddenly decided to use "Apps Hungarian".

Well, I’m not swayed: I think it’s awful, a complete throwback to the 80s. Why? Because to me Joel’s argument is antithetical to modern object-oriented practices. In fact it just reeks of old-style C programming.

Consider again Joel’s example: given a string variable it’s hard to say whether its value is the original input from the user (that may contain spurious HTML tags) or the encoded value (where the angle brackets from the spurious HTML tags are converted to their character encodings). From this he proposes using prefixes for string variable names so that you can know whether the values are "safe" (i.e., encoded) or "unsafe" (i.e., raw, direct from the user).

Well to me a string is a string is a string. It’s just an array of characters, with no other structure or semantic meaning at all. That’s it. Period. It’s just, you know, a primitive type. If you want a string to have some other overlaid semantic meaning, such as safeness, then it is no longer a simple primitive string. It is a string with new behavior; it is a string with extra properties. Certain actions are allowed with this string, others are not.

And Joel’s argument is that we should implement this through a naming convention? Wow. To me, it sounds like a new type. A class. You know: something that encapsulates data, that enforces specific behavior on that data, that constrains what you can do with the data. Then the compiler can help you maintain type safety and behavior safety. Wow, using the compiler to ensure we don’t write bad code? What a concept.

So, off the top of my head, not saying this is how I’d really do it in a production application, that your mileage may vary, etc, I’d write a UserText class with a constructor that accepted the original string from the Request instance. There would be two methods, GetSafeText() and GetUnsafeText() to return the two variants of the original string. There might be other methods as well: Store() and Load() to save and read the data from the database. Etc.
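Just to make that concrete, here is a minimal sketch of the kind of class I have in mind. To be clear, this is hypothetical: the class name, the HtmlEncode call, and the overall shape are my own invention for illustration, not a production design.

using System;
using System.Web;   // for HttpUtility.HtmlEncode

// Hypothetical wrapper for text that arrives raw from the user.
// The point is that "safeness" lives in the type, not in a variable name prefix.
public class UserText
{
    private readonly string rawText;

    // Construct it straight from the request, e.g. new UserText(Request["comment"]).
    public UserText(string rawTextFromRequest)
    {
        if (rawTextFromRequest == null)
            throw new ArgumentNullException("rawTextFromRequest");
        rawText = rawTextFromRequest;
    }

    // The encoded form, safe to write into an HTML page.
    public string GetSafeText()
    {
        return HttpUtility.HtmlEncode(rawText);
    }

    // The original form, for the few places that genuinely need it.
    public string GetUnsafeText()
    {
        return rawText;
    }
}

With something like this in place, any method that needs encoded text can demand a UserText (or call GetSafeText()), and handing it a bare string simply won't compile. No prefixes, no eyeballs required.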

Think I’m talking rubbish? Look at the Uri class in the .NET Framework. See what I’m getting at? Joel would have you prefix string variable names and have your eyeballs enforce URL type safety. The Framework designers didn’t take that naïve approach and instead gave us a class with certain behaviors, and in using this class the compiler forces us to use URLs properly. That’s just — shock, horror — so type-safe.

In fact, I’d have to say that if you have "primitive data" that has other attributes or properties, or that is constrained in some way, then it should be an object, an instance of a class that you write to enforce the constraints, etc. Let the compiler take the heavy load of making sure you use the data properly, not some wacky naming convention.

Sounds like a plan to me.

Office 2003 XML: Gift or Concession?

Every once in a while a technical story has ramifications that tell us a great deal about our society. Take for instance the current negotiations between the state of Massachusetts and Microsoft over Office 2003 documents saved in XML format. This story gives us some hints about how good laws can be used to force a private company to support open standards and thereby benefit the general public.

The Microsoft Office XML formats initially seemed like a huge gift from Microsoft to the people of the world. Rather than having our documents locked up in a proprietary Microsoft format, suddenly we had a chance to share them freely in an XML format that could be read by many programs on many different platforms. However, a struggle between the state of Massachusetts and Microsoft reveals much about the way governments, laws and corporations interact in 21st century America.

This is a complicated story which is neither one-sided nor easy to understand. As a result, I’m going to take you on a tour of the subject so that we can explore together the issues in this interesting and informative morality tale. In the end, you should have a better appreciation for one case in which ordinary citizens have at least a chance at winning back rights taken from them by a big corporation.

The European Union, Microsoft, and Proprietary Information

There is no greater symbol of Microsoft hegemony than the proprietary Microsoft DOC and XLS file formats. At this time, most businesses, governments and private citizens have willingly locked up their word processing and spreadsheet documents inside a proprietary format completely owned and controlled by a single company. The current state of affairs is perhaps just a bit Orwellian: Big Brother is not just watching you; it controls the format in which all your personal, financial and government documents are stored!

The news that Microsoft is willingly giving up this enormous power by allowing us to save documents in an easily exchangeable XML format is hardly an everyday occurrence. It is so surprising, in fact, that many people doubted that Microsoft would do such a thing just out of the goodness of their hearts.

The cynical noted that just prior to Microsoft’s announcement about opening up their format there was a string of stories about countries, mostly in Europe, who were going to insist that all government documents be stored in open formats. At least one state in the US, Massachusetts, also passed a law affirming that its government documents must be kept in open formats. The cynical said that Microsoft could either give up its position in these governments, or else open up its format. The claim, then, was that the decision was not driven by common sense or altruism, but by necessity.

The position of the cynics was given additional weight by the recent decision by Microsoft to comply with a particular demand voiced by the State of Massachusetts. Microsoft’s willingness to change their license to conform with the desires of Massachusetts hints that Microsoft did in fact open up their formats primarily in order to meet the demands of governments and states who demanded open formats.

For those of us who believe that we should have the right to save our documents in open, easily exchangeable formats, this whole episode is proof of the importance of political action, and of the power that people and governments have to act as a force for good in this world.

Understanding the Massachusetts Position on Open Standards

Massachusetts has a policy stating that government documents must be saved in an unrestricted open format. From my point of view, this is a sensible law that ensures that the public will have free access to a wide range of information. This law mirrors many similar laws created by members of the European Union.

Because of this law about open standards, Massachusetts had initially restricted the use of Microsoft Office because even its new open XML format was in some ways restricted and proprietary. In particular, it was evidently not legal to open Microsoft Office 2003 XML documents with a tool not made by Microsoft or licensed by Microsoft.

In a concession to Massachusetts over this issue, Microsoft changed their license to allow other software tools to open Microsoft Office documents saved in XML. In regard to this decision, Massachusetts employee Linda Hammel issued the following statement:

"Yes. [Microsoft] added a provision to the license stating that users could use ANY software (that would include GPL licensed open source desktop software) to read government records created using the MS XML reference schema."

Microsoft says the same thing in one of their public documents:

"We are acknowledging that end users who merely open and read government documents that are saved as Office XML files within software programs will not violate the license."

The point here is subtle. Microsoft freely gave away their license to use the XML formats that they had defined. Furthermore, they allowed anyone who was in compliance with the free license to build software that could read these XML files. For Massachusetts, the sticking point was that the license was still written by Microsoft, was difficult to interpret, and was required. Therefore the standard was not truly open.

What Microsoft did in response to Massachusetts’ complaint was to make an exception that specified that government documents did not need to be opened with software that was in compliance with the free Microsoft license. The license is still needed by you and me, and we would be breaking the law if we opened one of these documents with a tool that was not licensed by Microsoft. But government documents are no longer bound by that restriction.

Implications

It might be worthwhile to take a moment to contemplate this issue. Suppose you were a citizen of Massachusetts who came across a government document saved from Microsoft Office as XML. Before Microsoft changed their license, it was not legal for you to access that document in any way, unless you had a Microsoft-sanctioned tool with which to read the document. In other words, your rights as a citizen to freedom of the press, and to public government documents, were restricted by a private company that wanted to increase their profits.

Some might think it strange that Microsoft would be able to restrict access to a document just because of the format in which it was saved. But in fact, this is something we encounter all the time. It is, for instance, illegal to break the copy protection on DVDs, or on audio files that are protected with Digital Rights Management software. Another example would be the downloading of copyrighted MP3 files over the Internet: you might be able to obtain files this way, but it is illegal.

Despite the familiarity of laws of this kind, it is still difficult for the average citizen to know their rights, and to know when they are in the wrong. For instance, who would guess that it is illegal to open a Microsoft document saved in an XML format with a non-sanctioned tool? Frankly, I had read about this issue before, and heard about Microsoft’s new "open standard" for XML documents. From what I read in the press, it never would have occurred to me that it was illegal to read documents in this "open format" with a tool not licensed by Microsoft. The very idea that it would be illegal to open a public XML document seems somewhat incredible to me. I’m sure that had I not read about this issue I would have unthinkingly opened such a document in Visual Slick Edit, emacs, or some other editor.

Massachusetts and the Law

The one thing that is clear about this case is that Massachusetts is absolutely right to insist that all documents be written in open formats that can be read by any tool created for that purpose. Computers are supposed to enhance our access to documents, not restrict it. In the past, if you were given a public government document in hard copy, the idea that you were not allowed to open it up and read it would have sounded absurd. But before Microsoft modified their license, there was a way to make it illegal to read a public government document that you had in your possession.

The point here is that Microsoft never would have modified their license had they not been under pressure to do so from Massachusetts. Furthermore, it is unlikely that Microsoft would ever have opened up their proprietary formats had they not been under pressure from Massachusetts and governments in Europe to do so.

Though I personally avoid proprietary formats as much as possible, I understand that private individuals and businesses should have the right to use such formats if they so desire. Of course, such businesses should understand that any proprietary document format could one day be abandoned by a private company, and could become partially or completely unreadable.

Unlike private businesses, I believe governments should have nothing to do with proprietary document formats. No public document should ever be written in a proprietary format, nor should any classified document that will one day be made public. Since most classified government documents should eventually be made public, there really is very little reason for any tool that creates proprietary formats to be used in a government office of any kind.

It seems a clear ethical violation for a government to save any document in a format that can only be read or edited by a product from one particular company. Clearly public documents should be freely readable, and editable, in a wide range of tools. Freedom of information, and particularly a citizen’s access to government information, is a more important principle than the ability of any one company to maximize profits. This is a classic illustration of how free societies sometimes need to create laws in order to preserve the basic rights and freedoms of their citizens.

Such matters are to me exceedingly clear. As a result, I find it amazing that laws such as those passed in Massachusetts have not been adopted by all government agencies, and most particularly by the US government in Washington. When one thinks of the silly laws that are passed in Washington nearly every day, it is amazing to contemplate how a no-brainer of this type has yet to even become a serious piece of legislation in the US Senate or Congress.

Office XML Caveats

Before closing, I want to share a few facts that I picked up while researching this article. In particular, I dwell here on issues that I have with the existing Microsoft Office XML policy.

The official names for the formats in the Microsoft Office 2003 suite are WordprocessingML and SpreadsheetML. The main advantage of this technology is that XML provides a format for documents that need to be exchanged for business purposes. An XML document is structured in such a way that a program can open such a document, extract information from it, and process the data it finds in the document.
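As a rough illustration of what that means in practice, here is a small C# sketch that pulls the visible text out of a document saved as XML from Word 2003. The namespace URI is the one WordprocessingML used at the time, as best I recall; treat the details as an assumption rather than a specification.

using System;
using System.Xml;

class WordMLTextDump
{
    static void Main(string[] args)
    {
        // Load a document that was saved as "XML Document" from Word 2003.
        XmlDocument doc = new XmlDocument();
        doc.Load(args[0]);

        // WordprocessingML keeps the visible text in <w:t> elements.
        XmlNamespaceManager ns = new XmlNamespaceManager(doc.NameTable);
        ns.AddNamespace("w", "http://schemas.microsoft.com/office/word/2003/wordml");

        foreach (XmlNode textNode in doc.SelectNodes("//w:t", ns))
        {
            Console.Write(textNode.InnerText);
        }
        Console.WriteLine();
    }
}

Any tool on any platform with an XML parser could do the same, which is exactly why the licensing question matters so much.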

Microsoft only allows you to save to this format in 2 of the 6 available versions of Microsoft Office 2003. In other words, unless you own Office Enterprise or Professional, you can’t save in this format. For all intents and purposes, this fact alone invalidates the whole process of trying to use this format as a standard for document exchange. If you can’t exchange these documents with other Office users, then exactly what good will they be?

It is also strange to have a standard being set by one proprietary vendor. The point of this endeavor is to create a way for people to exchange documents freely between vendors and operating systems. Such a goal is unlikely to be achieved if one vendor completely controls the standard. Yes it is possible that Microsoft could manage this format in a fair and productive manner, but there are no safeguards to help ensure the quality or fairness of the product if it is completely controlled by one vendor.

There is also some argument as to whether the Office 2003 XML standard actually captures the complete set of formatting in a document. This article on internetnews.com reports a Microsoft spokesman as saying: "If you save something in raw XML format, you may lose some of the really rich formatting like graphics. That’s inherent in the way XML works." Which is true enough from one point of view. However, OpenOffice has no problem using their XML format to capture information on how graphics are stored in an OpenOffice document. If OpenOffice can do it, why can’t Microsoft?

Summary

Technology is normally detached from the everyday political battles that make up common public discourse. However, when governments begin using restricted proprietary formats to create public documents, then the arcane world of technical discourse suddenly becomes a public matter.

America was intended by its founders to be an open society, and our government was initially designed to be a public entity completely visible in all its particulars to all citizens. Attacks on this openness are constant, and often all too successful. The use of proprietary formats in government is one such case in which public access to government data is being unduly restricted. It is clear that all governments should adopt open standards for public documents of any kind.

The good news here is that Massachusetts and members of the European Union were able to force Microsoft to change their license to free up the process of reading government documents created with Microsoft or Microsoft licensed tools. This shows that it is still possible to fight back against large corporations that restrict our freedom. People too often think the battle is hopeless, and that there is nothing that can be done. Even worse, some know that it can be done, but argue that it shouldn’t be done on the absurd basis that all laws that restrict anyone in any way are necessarily wrong. Such people are really advocating that we all bow to the will of any powerful corporation or institution that comes along. Without laws restricting the rights of corporations and institutions, we would have little or no freedom.

This case shows that there is something that can be done to prevent the loss of rights and freedoms that were initially guaranteed by the constitution, laws and rights created by the founders of our country. Clearly, we should all write our Senators and Congressmen and demand that they pass similar laws promoting open standards.

Advice for the New Delphi Marketing Guy

An open letter to the new Delphi Marketing Guy:

I am glad to hear that there is a fresh face tasked with the difficult job of marketing Delphi. I’m glad because every time there is a new marketing person, it represents an opportunity to radically change the way Delphi is marketed. From reading your web site I must say that I am really encouraged. You appear to be far more technically savvy than your predecessors have been, and you clearly have a “Developer Relations” bent. That’s great. Your Zamples site is terrific. Here’s hoping you “sound” like a developer and not a marketeer!

One of the first things I am sure you will discover is that, right or wrong, many folks consider “Delphi Marketing” to be an oxymoron. You probably are making it one of your top priorities to change this state of affairs. In fact, if after twelve to eighteen months on the job, the only thing you feel you’ve accomplished is that the Delphi community no longer holds this attitude, I would say that you will have been a roaring success and will deserve a huge raise. Simply changing that one perception would be a huge step forward.

Now, I’m not a marketing guy. I admit it. I’ve never taken a marketing class, and I’ve never had a marketing job. But I do know what I like when I am marketed to and I have been hanging around the Delphi community for ten years. I’m in the business of selling Delphi and Delphi services, so I have seen a thing or two over the years. As a result, I do have some humbly-offered advice for you:

  1. Get a copy of The ClueTrain Manifesto. Buy it. Read it. Live it. Be it. In my view, the very first thing you need to do is to bring Delphi marketing into the 21st century by realizing that “Markets are Conversations”. The Internet has transformed the way marketing is done, and I must say I don’t think that in the past, the folks doing Delphi marketing have realized this. It seems that all Delphi marketing has been done in the classic “Sell Tide on the Soap Operas” mode, with Marketing 101 textbook techniques and horribly over-controlled “marketing campaigns.” That’s not the way it gets done anymore. Most of what follows here flows from the basic concepts in that book.

  2. When you get done with the ClueTrain, read everything Guy Kawasaki has written. Guy Kawasaki knows all about marketing technology in the technology age. One of Delphi’s greatest strengths is the community of developers who believe very passionately in Delphi as a tool, as a language and as a product. Guy knows how to harness these folks, and you’d do well to try to do the same.

  3. Walk the halls where the Delphi team works and read the Dilbert cartoons posted there. Scott Adams is a genius. I’m a firm believer that anybody can get the pulse of an organization and the ills that afflict it by reading the Dilbert cartoons posted on people’s office doors and in their cubicles. Wandering the halls and reading the Dilbert strips posted there will be one of the best ways for you to find out what the team thinks about the problems and issues with the product and the company.

  4. Post to your blog two or three times a week. The fact that one of the first things you did on the job is to set up a blog and invite a conversation is extremely encouraging. That is really cool. Now, the trick is to stick with it. Too many blogs at http://blogs.borland.com are pretty much dark. Post what you are doing. Post where you go, the conversations you have with other Borlanders, with customers, with the execs. If you are doing market research, post about it. You don’t have to post the results, just post what you are interested in, where you are looking for information. Ask your customers questions in your blog and then respond to their comments. Get other Delphi team members to blog more. Talk about your boat, your life, funny stuff that happens at work, whatever. But just keep posting.

  5. Don’t sound like a marketing guy. I think that much of what Borland is doing with the SDO strategy is really cool. However, a lot of it sounds like marketing, not like straight talk. I’ve read it carefully, and I’m not even sure I know what it means. However, the talk that Boz Elloy gave at Borcon, particularly the skit done by the Sales Engineers, was much better. It was clear, concise, and delightfully devoid of marketing-speak. I think that Boz’s talk was so effective because he realized he was talking to developers. There’s a reason that marketing guys are such ripe targets for Dilbert cartoons. If you sound like a marketing guy, people will tune you out. Normal, rational people can’t understand the language spoken by marketeers. “Process” and “paradigm” and “maximizing” and all that stuff needs to be banned. Converse, don’t “market”.

  6. Be an active newsgroup participant. Put on your asbestos suit and start posting in the newsgroups. Clearly label yourself as the Delphi marketing guy. Start out by being adamant that you won’t discuss the past, as that is gone forever. Insist that you only want to talk about the future. You’ll be flamed and berated. You will be inundated with tons of input, flames, comments, insight, advice, and even total nonsense from all of us arm-chair marketers. But these guys and gals that are hollering at you are the heart and soul of Delphi. You must have a thick skin and listen to them. Converse with them. Talk to them. Inform them. Get to know them. They are your soldiers, your eyes and ears in places you can never be. They love Delphi. They want to spread the good news of Delphi. Be there for them to help them do that.

  7. Join the fight for more money, resources and freedom for the Borland Developer Network. BDN is utterly essential for Borland and Delphi’s success, but I sometimes get the feeling that no one outside of Developer Relations realizes this. BDN is a huge, yet totally under-utilized marketing tool. Developers need resources, code, examples, articles, support and more. Having all of that in abundance on BDN makes every Delphi sale that much easier. The Developer Relations guys do heroic, MacGyver-like work in providing content on the site with a shoestring budget, masking tape, baling wire and some glue. They need more and better resources to get the job done. They need more freedom to publish content without the lawyers breathing down their necks. They need strong, clear support at the highest levels. You can help them get that, and get a great marketing tool in return.

  8. Go after disaffected Visual Basic programmers. You want a rich, ripe market for Delphi? A fecund field ready for harvest? Go after the rather large group of Visual Basic programmers who are quite unhappy about what Microsoft is doing with Visual Basic. Don’t know what is going on? Give this a read and get a feel for what is going on. Remind these folks that Borland has a twenty year legacy of not doing exactly what Microsoft is doing to them. These guys are ready for the plucking. Go for it.

It’s really, really hard for open letters like this one not to sound smug, and I’ve tried hard not to be smug, but I suspect that I’ve failed. Please forgive that. All of this is probably no more than the delusions of a chuckle-headed Delphi programmer, so maybe you should treat it that way. But maybe there are some good nuggets of truth in there that might work and make the words “Delphi Marketing” roll a little more smoothly off the tongue of the average Delphi developer.

Open Source vs. Microsoft graphics technology

I got a surprise when I compared OpenGL and DirectX to see which was more popular. I was about to take a thwack at the open-vs.-closed-source hornet’s nest to see who I could irritate when I rediscovered that intuition and information are two entirely different critters. My thesis was that Microsoft’s graphics API “DirectX” was both superior and more popular than the open source graphics product called “OpenGL.” I believed that MS’s ability to drive the market, ally with device manufacturers and fund new development would lead inevitably to better technology and more popularity for DirectX API. But how do you prove that one technology is superior or more popular than another?

One way to measure the relative health of a technology is to see who’s willing to pay you for your particular brand of foolishness. Here are some extremely unofficial results from popular job listing sites that surprised me:

Site                    DirectX   OpenGL
www.dice.com                 62       55
www.monster.com              71       72
www.careerbuilder.com         9       24
www.craigslist.com            6       13
www.truecareers.com           0        5
www.softwarejobs.com         15       17
Total                       163      186

I expected to see Microsoft’s technology pulling down many more job offers. But as you can see, it is only on Dice that there are more job listings for DirectX than for OpenGL; and even there, the margin is not very large. On all the other sites, OpenGL has more listings, and the total number of listings for OpenGL is greater than for DirectX. Statistics like this are not definitive proof of either popularity or technical superiority, but they do offer a simple heuristic that suggests something of the true state of affairs in the real world. Certainly information like this is more valuable than merely reading press releases on a vendor’s web site.

Consider the following questions: Of the jobs above, which are better paying? Which jobs were just including technology buzzwords? What does this imply about the future quality of either API? What the job listings table above tells me is that one technology is not clearly superior to the other.

Just to muddy the water, Google shows 3,120,000 DirectX hits to 2,530,000 for OpenGL. Once again, it is hard to draw definitive conclusions from this statistic. Which is more important to you: the number of references on the web, or the number of job offers? It is hard to say for sure.

The open vs. closed source question reminds me of the old days arguing whether Delphi was better than VB (well duh) or C++. With .NET, language dialect is decoupled from compiler functionality and this becomes a moot point; much like the kids in the movie "Stand by Me" debating whether Mighty Mouse could beat up Superman. In the end we fall back to more sensible questions like "what is the best tool for the job you need done?" In future articles I will compare DirectX with OpenGL and discuss how best to use each.

Design, what design?

Yesterday, I was discussing the use (and abuse) of design documents. From personal experience (not necessarily the best statistical indicator, I’ll admit), I know that design documents are flawed in a couple of main ways: first, the design, although detailed, may just not work when implemented (read: too slow, too much of a memory hog, too brittle) or may be too difficult (read: too costly) to complete, and second, the design may leave bits out, things that were just not thought of at the time and that came up as part of the implementation.

For the “doesn’t work” case, sometimes the developer may work out a different design through experimentation that does function correctly and satisfies the ultimate requirement. What happens about the design document then? Generally, it’s just left as is.

For the “bits left out” scenario, the developer may just implement the missing functionality or classes or methods as he’s writing the code. The design document is again adrift from its implementation.

The agile way is to use the requirements as the basis for the design/implementation, and to implement the requirements one by one. No design document is written, per se. Of necessity, this does mean that later requirements will cause earlier requirement implementations to be "wrong" or "not broad or deep enough" and to force them to be refactored. This is the sticking point for most developers: why waste time writing a whole bunch of code to satisfy requirement A when requirement X dictates that A should have been implemented in another way? Surely, the argument goes, you need to understand everything about the requirements space beforehand?

Well, no, you don’t. You just have to have confidence that you are able to refactor your code. You have to have confidence in your unit tests and that they will save you from refactoring mistakes. You have to relinquish the proprietary feeling you have towards your older code and thereby be confident enough to change it at will. This applies, by the way, not only to code but also to the database schema, to XML formats, and so on.
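Here is a small, made-up example of the kind of safety net I mean. The class and the rule it encodes are invented purely for illustration (NUnit-style C#); the point is that the test pins down behavior, so the implementation underneath is free to change when a later requirement demands it.

using NUnit.Framework;

// A deliberately trivial class standing in for "requirement A".
public class DiscountCalculator
{
    public decimal PriceFor(string customerType, decimal listPrice)
    {
        // The implementation behind this method can be rewritten during
        // refactoring; only the behavior checked below is pinned down.
        if (customerType == "preferred-customer")
            return listPrice * 0.90m;
        return listPrice;
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void PreferredCustomersGetTenPercentOff()
    {
        DiscountCalculator calc = new DiscountCalculator();
        Assert.AreEqual(90.00m, calc.PriceFor("preferred-customer", 100.00m));
    }
}

When requirement X arrives and forces the pricing rules into a database table or a different algorithm, the test stays the same and tells you immediately whether the refactoring broke anything.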

If you do have this confidence, you can design as you go along. You will be able to develop the application feature by feature.

This will avoid another issue as well: you never really understand the problem space until you try and write code to solve the requirement. Design documents have the tendency to gloss over the hard bits (I know this from my time at MS: my first design for the method parameter change refactoring was way cool, but also way too hard to implement). If you start writing code for experimental reasons (can this feature be implemented like this?) and then write a design document to reflect the results, I’d have to say why stop with the experiment? Continuing to implement and completing the code would be a better strategy, no?

The one thing that does make sense is to write up a description of where you got to in implementing a requirement and why you took the path you did. Perhaps this will help someone else understand the ultimate design decisions you made and why you made them. After all, unless you go spelunking through your source code control system, all the code shows you is the successful result; sometimes the failed experiments on the way there are more interesting.

The Zen of Computing

Many programmers have a tendency to cling to outmoded technologies. This unwarranted attachment to the past can lead to a great deal of suffering, most of which is self inflicted. The suffering can be anything from unneeded arguments with peers, to job loss, or even the end of a career. In this article I will discuss the relationship between Buddhism, attachment and computing.

It seems to me that I have read an unusually large number of technical books or articles with titles like The Zen of X, where X can be anything from graphics programming to Jeff Bezos. The one thing most of these texts have in common is that they rarely mention anything specific about Zen Buddhism. In this case, however, I truly want to talk about Buddhism and computers, and particularly about the Buddhist doctrine of non-clinging, or non-attachment. I believe this doctrine has a special application to those of us who work with advanced technologies.

The central tenets of Buddhism are found in the Four Noble Truths. The second of the Four Noble Truths states that most of the suffering in this world is caused by our attachment to things, people and ideas. One of the problems with clinging to the things of this world is that everything in life changes. Nothing stays the same. Inherent in the Four Noble Truths is the idea that nothing in life is permanent.

If nothing in life stays the same, then we suffer if we try to cling to something that is, by its very nature, destined to disappear. Our attachment to a particular computer technology may seem like a virtue for a time, but after a while it will cause us and others nothing but suffering.

Open Minds

When we first learn about computers, we are open to new ideas. As a result, many of the big movements in the programming world start with college students or the younger members of a corporation. These young individuals are not yet set in their ways.

Two or three years ago, I had occasion to go back to school for a time. One of the first surprises I had when I got to school was the overwhelming preference the students I encountered had for Java. In my professional life, I was one of a small group of people I knew who were taking Java seriously. But at the school I was attending, almost everyone, if given a choice, wanted to program in Java. The students I encountered were not yet set in their ways, and they were open to new technologies to which others in the professional world had closed their minds.

It is odd that we should become attached to a particular computer language. After all, a computer language is mostly syntax. How can one get irrationally, and sentimentally, attached to a bit of syntax? And yet we do.

I have heard people vehemently assert that a particular type of while loop is better than another kind. When declaring a class, one language might put the keyword class first, and then list the identifier that names the class. Other languages list the identifier first, and then the keyword class. Or perhaps they call it an object, rather than a class. There are arguments in favor of each keyword, and each technique, but surely no one would get emotionally attached to one technique or to one keyword? It is just too trivial an issue. Right?

Irrational

Our sometimes irrational attachment to seemingly trivial chunks of syntax has a serious side. If a programmer becomes attached to a particular language that dies out, then that programmer can lose their job, and be unable to find a new one.

The irony here is enormous. Engineers are supposed to be unemotional. They are supposed to look at life from an objective, scientific point of view. And yet when it comes to looking at computer languages, and computer platforms, many engineers take a deeply emotional stance. It is not at all uncommon to hear an engineer asserting categorically that their technology A is better than technology B, even though they have never taken a serious look at technology B. This happens all the time.

We get in the habit of using one type of syntax, or one technology, and then claim that it is better than another technology when what we mean to say is that it is more familiar to us. This xenophobic view of new technologies ultimately ends up stultifying the careers of individuals, and unnecessarily slowing down progress. In the worst cases, superior technologies never get adopted simply because they are new, and make some engineers uncomfortable.

Marketing Slime

The job of a marketing expert is to find ways to get people to become attached to a particular brand name. In the computer world, we are always being asked to cling to a particular technology. Marketers want us to believe that their language, their compiler, their OS, is the best technology, now and forever. After a while, all products become sticky and slimy, coated with layers of marketing hype. When that happens, we need to wash our hands, to clean the lenses through which we view a technology, and to see it as it really is, and not as marketers hope that we see it. After a time, technologies change, the world shifts, and we need to move on. Before we can move on, we need to stop clinging to what was once true, and open our eyes to see the world as it is now.

Everything changes, everything passes. It is easy to become blind to this basic fact. We need to learn to look dispassionately at the world as it is, and not cling sentimentally to a past that no longer exists.

Summary

Watching modern technology is like looking through a kaleidoscope. At some point in our youth, we peer through the kaleidoscope and see a particular pattern that happens to be present at that point in time. We find some particular part of the pattern, perhaps a bit of blue and green down in the bottom right, that appeals to us. Many of the best minds peering through the kaleidoscope at that time agree: of all the patterns visible at that moment in the kaleidoscope, the bit down in the bottom right is the most elegant, the prettiest, the most aesthetically sophisticated. So we make that bit of the pattern our study. Over the years, the kaleidoscope slowly revolves and that bit of the larger view changes. Patterns shift, and the blue and green bit starts to become more of a yellow, blue and green bit. And it is no longer quite as pleasing to the eye. And the bit up in the upper left quadrant has changed too. The muddy colors and awkward patterns in that part of the kaleidoscope have changed. Now most of the best minds agree that it is this bit in the upper left that is prettiest, and most elegant. But some will stubbornly cling to the bottom right of the kaleidoscope, even though the great minds who once proclaimed its significance have now moved on and focused on the upper left.

Technologies change. One day a particular language or OS is best, and then a few years later that language may still be very good, but its advantages have paled in comparison to new technologies which have built on it. We want to cling to the past, and we become blind, unable to see how the world is changing around us.

For a Buddhist, this panorama of changing landscapes is not just a phenomenon of the technological world, but a central fact of life. They see that the world is always changing, and that our suffering comes in large part because we want to cling to things that will not stay the same. We might want the world to stay the same, but it will not. None of it will stay the same. All of it will change.

Technology changes all the time. We can’t learn a new technology every day, so naturally we stay with a technology long enough to learn it well. During that time, we feel the temptation to start to cling, to become attached. But it is best to follow our Buddhist friends in this, and learn that there will always be a time when we have to let go, to stop grasping, and to move on. Our temptation to cling is only natural, but our refusal to let go is dangerous, even foolhardy. The Zen of not clinging is a hard lesson to learn, but if we can learn it, then we can be much happier, and much more successful.

Free the Delphi Help! The Dawn of the Delphi Wiki

Writing Help files, producing documentation and manuals is hard work for even the smallest application or component set. No one likes to do it. It’s usually put off as much as possible and ends up being the last thing done. Shoot, we developers don’t even like to comment our code, much less write a help file or (horrors!) a whole book about our application!

I can’t even imagine what it is like trying to manage it for a project as complex as Delphi. There are really three large tasks: documenting the product itself, the language, and the framework. Then, of course, there is the task of tying it all together into a single entity, making the help extensible, and integrating it into the IDE. The entire process represents a formidable task no matter how you look at it. The Delphi Help team is small, and I’m sure they’d all love to have about seventeen more members. And Lord knows they’ve taken a lot of heat publicly over the years. It seems no one is ever satisfied with Delphi’s documentation.

Recently, however, the team has made a bold move. They’ve undertaken the interesting task of re-architecting the whole she-bang. They’ve begun to abstract out the content into XML files so that the content of the help is separate from the presentation. That way, content can be managed separately from the presentation as a book, website, help file, or whatever. The people writing the help can now concentrate on writing good help, and stop worrying about the presentation layer. This is a really smart move. But the team is still small.

Interestingly, there is a large group that is very interested in high-quality documentation for Delphi and that has, in the past, displayed a willingness to help out in such areas. And that group is us. We can help make the Delphi documentation better.

And that is why I think that the Delphi help should be published as a Wiki. If you don’t know what a Wiki is you can follow the link and poke around, or you can read the next sentence. A wiki is basically a user-updateable and user-manageable website. Anyone can come and edit the content of a Wiki site. The most famous Wiki is the Wikipedia, a huge encyclopedia with thousands and thousands of contributors. Experts and non-experts alike can make entries, update entries, correct mistakes, add new and interesting information, and keep the whole thing updated and accurate. As far as I can tell, 100% of the Wikipedia has been written by volunteers. Articles are automatically linked to other articles. It’s a beautiful thing all around.

Now, imagine the entire Delphi help system published as a Wiki. All the current information could be published unchanged. We all could edit it, refine it, add to it, and improve it. We could add tons of illustrating code snippets. We could create link pages, and tons and tons of valuable cross references. Anyone in the world could do so at all hours of the day and night.

And I’d like it if, when I hit F1 in my IDE, I was sent to the Wiki instead of the help file if my computer is on the Internet. That way, I’d be sure to be seeing the most up to the minute information. I could also use that opportunity to make my own additions. If Borland could come up with some way of integrating the task of updating right into the IDE, then that would be even better. Making it easier to update the Wiki means that the Wiki gets updated more often.

Borland could set the Wiki up so that it automatically updated their current database of content, and then they could periodically take snapshots of the database and use those to publish their local help files and manuals. Those snapshots could even be made available for download, so that users have help during off-line periods.

Another Win/Win: Borland gets the help they need to make Delphi’s documentation top of the line, and the customer gets the help of the entire Delphi community.

Technical debts

Lost in the heat of an engaging programming assignment, sweet temptation can seduce us into coding a quick and dirty hack rather than crafting a more elegant solution. This article suggests, first by analogy, and then in more concrete terms, the possibility that sometimes it might be best to succumb to the allure of the hack.

Let’s begin by setting up an analogy. You’re in the market for a new car. You weigh the pros and cons of a few models, make your decision, and then go to the dealership to buy it. Without going into too much detail, you have two ways to pay for your new wheels: you either pay for it up front (lucky you!), or you take out a loan to help pay for it (I’m ignoring leases in this argument, since they can be viewed as loans with a final balloon payment).

If you go with the loan, then you are going to be assuming some financial debt. You have to make regular payments to the bank or financial institution to pay off the principal and pay the interest on the loan. Obviously, at the end of the day you will pay more overall for the loan than you would have if you’d just slapped down the cash at the outset. But depending on your situation, the loan can still be an attractive alternative despite the extra cost.

OK, bear that in mind, and consider this. You have some software to write; maybe some extra functionality for an existing system. You recognize that you have two choices.

  1. You can write it "properly"; that is, design it using the proper agile methodology, write good test cases, refactor code smells into oblivion, ensure that it is flexible and can accommodate some possible extensions, and so on.
  2. Or you can write a quick and inflexible hack that gets the immediate job done.

However, you also realize that if you were to do the latter, there will come a day when you will have to add some extra functionality to your quick and dirty module. At that time, you will almost inevitably encounter several problems related to the inflexibility of the hack. You will then have to do some, or maybe all, of the work that you managed to avoid by doing the hack in the first place.

Ward Cunningham called this idea "technical debt" to mirror the monetary analogue. If you do the hack work, you are essentially reaping the rewards of getting the job done quickly and then "paying interest" later on in the form of the additional work needed to extend the functionality. You are going to assume the technical debt; hack work is not debt-free.

If you want to pay for it all up front, then you’d take the time to design and write the software using test-driven principles. Of course, just as in the case where you buy the car outright, you have to have the resources to do so.

So sometimes it will make sense to write the code properly (pay up front) and sometimes (horrors!) it can make sense to write a quick and dirty hack (pay a little now, pay interest in the future). Your job as a software developer is to recognize the situations where it does make sense to incur technical debt rather than paying up front.

Don’t take the analogy too far, though. Unlike the monetary situation, where you can easily quantify the difference between paying up front and assuming interest payments, technical debt must be analyzed more qualitatively. Who knows how much extra maintenance effort a hack will incur compared with elegantly designed code?

As I said, use your judgment, but sometimes it can make sense to write a quick hack.

XP, On-site Customers, and Expertise

An "on-site" customer is an Extreme Programming term for a member of a client company that temporarily becomes a member of a development team. This person provides domain-specific knowledge to help guide the programmers while they are creating a product.

To begin an exploration of this subject, let’s draw a parallel between the role of the on-site customer and the role played by many members of the Delphi team when Delphi was first being created.

Delphi was Built in Delphi

Delphi was great even in its very first release, and stayed great year after year. Most members of the Delphi team would agree that this was true in part because "Delphi was built in Delphi." The developers on the team had to use Delphi, and more particularly the VCL, in order to create Delphi. This meant several things:

  • The team found many of their own bugs, rather than relying on others to report them.
  • The team knew which bugs to fix, because they knew which ones really caused them trouble.
  • The team was motivated to fix the bugs, because they needed working code so they could finish the VCL and the Delphi IDE.
  • The team knew what features were missing from the VCL, because they knew which ones were holding up development.
  • The team knew which features were missing from the IDE, and which ones weren’t really necessary.

Put this all together, and what you have is the idea that Delphi was great in part because "Delphi was built in Delphi."

The On-site Customer

According to Kent Beck in his book "Extreme Programming Explained," the on-site customer has two roles:

  • She can produce functional tests
  • She can make small-scale priority and scope decisions for the programmers

It is interesting to note how these two ideas parallel the benefits the Delphi team derived from building Delphi in Delphi. For instance, knowing how to make priority and scope decisions requires the same kind of skills the Delphi team employed when discovering which VCL bugs to fix and which IDE features were necessary. An on-site customer will know intuitively which features are missing and which bugs are important, much as a Delphi team member knew intuitively what features were missing from the VCL and which bugs really needed to be fixed. The point is that the on-site customer uses the product every day, just as the Delphi team used Delphi every day.

It is a little harder, perhaps, to see how writing functional tests can be similar to the Delphi experience. However, a customer who works with an XP team is going to have access to daily builds. In XP programming, remember, the goal is to release a new version of the product at least once a month, preferably once every few weeks. Therefore, the on-site customer can be constantly using, and hence testing, the product, much as the Delphi team was constantly using, and hence testing, the Delphi language, the VCL, and the Delphi IDE. This knowledge can make an onsite customer an excellent candidate for designing tests, and for giving the team direction, at least in terms of the many small matters that can make a product excellent.

If you make one customer employee a member of your development team, then they have the inside knowledge necessary to help a team develop a high quality product.

Egos, Expertise and the Lines of Communication

Many developers are unlikely to approve of the idea of having a customer as a member of their development team. This enmity to the on-site customer has two sources:

  1. There is an age old war between IT developers who create resources and the customers who consume those resources. Customers are used to being disappointed by IT products that don’t quite meet their needs, and so they distrust IT. Conversely, IT is used to being treated badly by customers, and so they have come to disdain the constant stream of complaints with which their work is often greeted.

  2. Developers will have a tendency to dislike customers because they view them as non-technical. Customers just don’t have the expertise, some developers would claim, to work on the same team as a group of developers. Conversely, customers disdain developers as nerds who often lack social skills and who frequently don’t really understand the business for which their application is being created.

I think the first issue is really a matter of communication. Traditionally, customers and developers distrust and dislike one another because the lines of communication between them are down. Having an on-site customer is one way to open up those lines of communication. The on-site customer should improve the quality of the applications that are developed, and developers can warm to the customer if they meet on friendly, non-hostile ground.

On an XP team, the lines of communication between developers are open. They work in pairs, and all the team members are located inside a single room. The pairs aren’t static, and so everyone gets a chance to work with everyone else. This can be hard on great programmers, but it can be very beneficial to average programmers. Knowledge inside a team is easily shared, everyone knows who has strengths or weaknesses in particular areas, and code gets reviewed by more than one set of eyes.

If programmers can learn to work with one another, they can learn to work with a customer. After a time, they may learn to value an on-site customer’s domain expertise. Knowledge comes in many forms. A non-technical person who has a deep and intuitive knowledge of the way a department operates can teach developers a lot about the kind of application that department needs.

Success is a question of keeping the lines of communication open. Having an on-site customer does not guarantee that the lines of communication will be open, but it at least makes it possible. Without the on-site customer, the gap between developers and customers is too large, and it is unlikely that true communication will ever occur.

Summary

The Delphi team succeeded in part because the lines of communication between the consumer of a product and the developer of a product were very good. In fact, frequently the same person who needed to use a feature of Delphi was the person who created the feature. In the worst case scenario, they were at least on the same team.

An on-site customer in an XP team provides the same level of communication for a corporate project that the Delphi team had when creating Delphi. The on-site customer is a member of the team, and a regular user of the product.

Of course, an on-site customer does not guarantee success for a project. Communication can collapse if egos get in the way, if programmers become insulated and withdrawn, or if the customer refuses to accept the expense of sharing a good employee during the development cycle.

Despite these potential problems, XP at least offers a potential solution to a problem that plagues many development projects. The problem of communication is age old. XP offers one potential solution.

Ever notice that CPU speed has stalled?

I just read Herb Sutter’s blog for the first time in a while. For some reason I hadn’t subscribed before, but I remember reading it in the past. Perhaps I’d lost interest because he’s one of the architects of Visual C++, and I’m not interested in that variant of the C family. If given the choice, for low-level Win32 code, I’d use Delphi, and for .NET development I’d use C#.

Anyway, his latest post references an article he’s written for the March 2005 issue of Dr. Dobb’s Journal that mentions a fact I’m sure we’ve all noticed in our subconscious: the speed of Intel CPUs in commodity PCs hasn’t changed all that much in the past year or more. It seems to be stuck at the 3GHz barrier (plus or minus a few tenths of a GHz).

There’s certainly a lot of chat around on how graphics cards are getting more and more powerful. In fact, some graphics cards now have more memory than the entry-level PCs from Dell. We’re hearing about the next WiFi after 802.11g. LCD screens are de rigueur. Fast disks? Pile ’em on. But CPU speed? Ho hum, everything goes quiet.

Herb believes that the lease is expiring on Moore’s Law. Intel et al just can’t get more paths through that silicon.

What does that mean for software developers? Well, sorry and all that, but the CPU bullet train has reached the terminus: we can no longer count on faster chips hiding our software performance problems. We should be rigorous in measuring performance and optimizing. Don’t micro-optimize, but optimize at an algorithmic level (hey, my specialty!).

Herb has another solution, one that would strike terror in a lot of developers’ hearts: use concurrency. It’s likely that commodity PCs will start to appear with two CPUs, possibly with gaming machines first, and then it’ll spread into the mainstream. Writing multi-threaded applications to take advantage of two CPUs is harder than writing single-threaded ones. Nevertheless, one way to get more performance from our applications would be to off-load some processing onto another CPU. If I were you, I’d start researching and experimenting with how to write multi-threaded algorithms, with how to write thread-safe code, and with how to deal with deadlocks.
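For what it’s worth, here is a minimal C# sketch of the kind of experiment I mean: off-loading half of a computation onto a thread-pool thread and protecting the shared result with a single lock. The scenario and the names are mine, invented for illustration; treat it as a starting point for experimentation, not a pattern from Herb’s article.

using System;
using System.Threading;

class ConcurrencyExperiment
{
    static long total;
    static readonly object totalLock = new object();

    static void Main()
    {
        ManualResetEvent done = new ManualResetEvent(false);

        // Off-load half of the work to a thread-pool thread...
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            long partial = SumRange(1, 5000000);
            lock (totalLock) { total += partial; }   // one lock, held briefly
            done.Set();
        });

        // ...while the main thread works on the other half.
        long myPartial = SumRange(5000001, 10000000);
        lock (totalLock) { total += myPartial; }

        done.WaitOne();   // wait for the background half before reporting
        Console.WriteLine("Total: {0}", total);
    }

    static long SumRange(int from, int to)
    {
        long sum = 0;
        for (int i = from; i <= to; i++)
            sum += i;
        return sum;
    }
}

Even in something this small you can see where the trouble starts: forget the lock and the result is silently wrong; take two locks in different orders on two threads and you have a deadlock. That is the territory worth exploring before dual-CPU machines land on every desk.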

Also be on the lookout for extensions that would help make writing multi-threaded apps easier and less error-prone, for error-prone they are. For C# check out Microsoft Research’s Comega prototype. For Delphi? Well, that’s a topic for another day.