VPNs for Small Businesses

New products make it more affordable for small and medium-size businesses to take advantage of virtual private network (VPN) technology. VPNs have attracted the attention of large, distributed enterprises because they let businesses create links across public and private networks to customers, branch offices, and telecommuters for less money than the cost of a traditional private network.

The choice of which VPN is best for a smaller business often comes down to how much programming you are willing to do. One such product that you can use to build a VPN inexpensively (although you’ll have to tinker with it a bit) is Microsoft’s BackOffice Small Business Server (SBS). SBS delivers elements of its parent BackOffice suite, including NT 4.0 Server, Exchange Server 5.0, SQL Server 6.5, Proxy Server 1.0, fax and modem services, and a simplified administration console. Another product, Lotus’s Domino Intranet Starter Pack (DISP) 2.0, includes the Domino 4.6.1 server, five licenses for either Notes or browser clients, and the SiteCreator tool for generating and managing 12 business applications.

Novell has Microsoft’s SBS in its sights with NetWare for Small Business (NSB) 4.11, which combines a single-site version of Novell Directory Services (NDS) with GroupWise 5.2, NetWare Multi-Protocol Router, Network Address Translator, Netscape FastTrack Web Server, and other third-party database, fax, virus, and backup products. Netscape has no small business suite; instead it partners with Concentric Network Corporation to offer Netscape Virtual Office, an on-line intranet center hosting private discussion, e-mail, calendaring, and other applications for a monthly fee. Microsoft continues to upgrade NT Server, which is part of SBS, with capabilities that improve its viability as a VPN platform. The NT 4 Option Pack, Routing and Remote Access Services (RRAS) update, and Service Pack 4 add an enhanced IIS 4.0, Microsoft Transaction Server (MTS), Microsoft Message Queuing Services (MSMQ), Index Server, Certificate Server and SSL 3.0, and Point-to-Point Tunneling Protocol (PPTP), all for free if you already have NT Server. With careful planning, you can upgrade Small Business Server to take advantage of NT’s new tools. RRAS lets you tunnel into a PPTP-enabled server, and then to any workstation on the internal network. However, this defeats the security provided by Proxy Server 1.0, forcing an upgrade to the 2.0 version that supports packet filtering.

You’ll also need to apply a new Proxy Server hotfix to repair support for multihoming (the ability to host more than one site on a server), as well as an SBS service pack to allow use of Internet Explorer 4.01. Still, the hotfix and service pack upgrades are free, and the cost of the Proxy Server upgrade is just $505, which makes this solution almost $2000 less than buying BackOffice 4.0. The SBS solution won’t be suitable for some scenarios. For example, SBS disables NT trusts between domains, limits SQL Server database size to 1GB, and does not support Exchange directory replication. These changes cripple SBS’s flexibility for use in satellite offices. Microsoft is readying an upgrade path from SBS to BackOffice 4.0 that will add the full version of SQL Server 6.5, Proxy Server 2.0, Exchange Server 5.5, Systems Management Server (SMS), SNA Server, and Site Server, but at press time, pricing was not determined. While SBS, RRAS, and the Option Pack provide the infrastructure for business-to-business communications, you need programming expertise, especially in Visual Basic and Visual InterDev, to make it all work. Domino Intranet Starter Pack, on the other hand, comes ready with a secure, browser-based application suite.

Contact management, customer tracking, company forms, job postings, project management, registration, discussion, and document library databases are all part of the package. You can manage sites remotely via a browser or the native Notes client. Lotus’s DISP includes the latest Domino server and on-line documentation, but you’ll need to buy the Notes Designer client to customize or add applications. Domino 4.6.1 comes with a Certificate Server and sample registration templates for SSL 3.0 client authentication, but DISP only uses the less-secure password technology. Multihoming will not work with SSL certificates; the work-around requires partitioning with the more costly ($1000 additional) Advanced Services version of the server. NSB, DISP, and SBS are tactical products, balancing a mix of features and services that evangelize their underlying architectures without cannibalizing full-blown suite sales. Novell is counting on GroupWise’s user friendliness and advanced document management tools to retain mind share in the face of NT’s application services momentum. Lotus continues to provide a Web-based application development environment that outperforms Microsoft in its own NT backyard. And Microsoft moves steadily forward, integrating security, messaging, indexing, standards-based file formats, and directory services that the competition can’t afford to give away.

Weaving the Post Factual Internet

I live in Santa Cruz, CA. When I left home this morning to drive over the hill to San Jose, I was surrounded by fog. Winding along the highway up into the Santa Cruz mountains, I rose above the mist into the sunlight. Green redwoods towered above the road. I looked back over my shoulder at the low-lying gray clouds that obscured the California coast. Dazzled by the sunlight, I thought back on the experience of driving through the shifting gray swirls of mist and decided that it was an experience not unlike browsing the Internet.

We live in the age of spin. Most people who write the prose we read on the Internet do not even pretend to be interested in telling the truth. Instead, they pride themselves on their ability to "spin the truth," that is, they are professional liars. These writers don’t like the sunlight. Instead, they prefer the mist, the fog, the places where facts are hard to discern, and nothing is clear.

In his famous essay, "Politics and the English Language," George Orwell taught us to shun cliches such as "spin the truth." He warned that people who indulge in slovenly language often end up, either consciously or unconsciously, hiding the truth in a swirling bank of fog.

The heart of George Orwell’s essay is a plea for simple, straightforward language. A phrase like "spin the truth" is not only a cliche, but it is also what Orwell calls a "verbal false limb." Orwell writes, "Instead of being a single word, such as break, stop, spoil, mend, kill, a verb becomes a phrase, made up of a noun or adjective tacked on to some general-purpose verb such as prove, serve, form, play, render." In this case, simple verbs such as lie or dissemble are replaced with a verb phrase such as "spin the truth."

We live in a post factual world. When confronted with an inconvenient fact, politicians, marketers and bloggers never acknowledge its truth. Instead, they attempt to "spin" the facts, to distort reality. Such behavior is so common that most of us no longer blink when confronted by even its most blatant expressions. We believe that WMD simultaneously exists and does not exist; we think the definition of the word "is" might be open to interpretation. The sin is not confined to any one party; it is universal. It is the mist in which we walk each day. It is easy enough to climb up out of the fog, but few take the trouble to make the hike.

Hotbeds of Post Factual Writing

The Internet provides a new twist to the problems outlined by Orwell. All over the web one can find specialized web sites designed to push a particular agenda.

One of my favorite web sites is called Slashdot. This is a place where people who like Linux and Open Source, but who oppose Microsoft and proprietary technology, can get together and talk shop. The site consists mostly of links to interesting articles about technology. On the Slashdot site, people can comment on the articles. There is a certain class of people on Slashdot who can be counted on to exercise all of the sins outlined in Orwell’s essay. In their attacks on Microsoft and proprietary software, they are often angry or even personal.

In email, in blogs, in the comments found on places like Slashdot, the Internet is rife with hostility masquerading as opinion. The art of writing clear prose that states facts without hostility is rare. People prefer to bludgeon one another with words rather than attempt to communicate. If no facts are handy, an insult will do just as well. It’s not enough that someone disagrees with an opinion; the opposition won’t rest until they have claimed the contrarian is stupid, immoral and sexually inadequate.

As I say, I enjoy Slashdot. Nevertheless, I often find it impossible to read the comments on the articles linked to from the site. The articles themselves are often interesting, but the community often seems to exist in a post literate world where being considerate is a form of weakness.

Slashdot, however, is only one of many sites that pander to particular world views. These sites are the frontline in the post factual Internet.

Even neutral sites such as Amazon are hotbeds of anger and deception. Go read the reviews of nearly any controversial political book found on Amazon, and you will find plenty of "reviews" written by people who clearly have never read the work in question. After all, if the book is written by Bill Clinton or Newt Gingrich, there is no need for many people to actually read the text in question. Instead, they pretend to have read it and offer up a series of insults and groundless abstractions that they believe add up to a meaningful review.

The Deceptive Power of Abstractions

George Orwell’s essay focuses on political speech. In today’s world, however, the political and the social have blended in a way that was hard to imagine in 1946, when Orwell wrote his essay. This blending of politics and social interaction is nowhere more evident than on the Internet.

Orwell writes: "Statements like Marshal Petain was a true patriot, The Soviet press is the freest in the world, The Catholic Church is opposed to persecution, are almost always made with intent to deceive. Other words used in variable meanings, in most cases more or less dishonestly, are: class, totalitarian, science, progressive, reactionary, bourgeois, equality."

Since 1946, when Orwell wrote his essay, most of us have forgotten Marshal Petain, the French war hero from World War I who helped the Nazis when they occupied France during World War II. But we know instinctively what Orwell means. We have all read phrases such as "Oliver North was a true patriot," or "Daniel Ellsberg was a true patriot." In each of these cases, the word patriot has been redefined to represent a post factual interpretation of the word "patriot." It doesn’t matter what the dictionary says; the word now has a new meaning that only the author fully understands. The list of words that Orwell specifies has morphed over the years, but we can replace them with modern tag lines such as paradigm, patriot, free market, relevant, free election, terrorist, freedom fighter, conservative or liberal.

Bringing this subject back down to technical matters, we often hear abstract words used to describe an OS or a computer language. By now, many of us have been trained to instinctively ignore phrases such as "Delphi is the best language," "Java is the best language," or ".NET is the greatest thing since sliced bread." Phrases or words that we have learned to distrust include paradigm, next generation, productive, user experience, user friendly, bottom line, critical mass, market momentum, exit strategy, investment climate, and so on.

Post Factual Marketing

While we all deplore the low standards found in email, blogs and some newsgroups, it is more interesting to explore carefully thought-out prose written by obviously intelligent people. Let’s go to a few particular pages on the web to see how it works. First we’ll look at marketing from Sun, and then from Microsoft.

I’ll quote first from a paragraph grabbed at random from the lead article on Sun’s web site. The topic in question is OpenSolaris, Sun’s open source version of their Solaris operating system. Here is the quote:

"’It’s easy to focus on the source, the tools, and all the other tangibles involved in open source. But at the end of the day, the true measure of success for the OpenSolaris project is the community,’ says Sun senior marketing manager, OpenSolaris, Claire Giordano. ‘It’s not about the source code–it’s about the conversation.’"

I think it is wonderful that Sun is releasing an open source version of their operating system. However, I don’t think they do it to foster a conversation. Instead, they are interested in promoting their product. If people are talking about their OS, then they are likely to use it. But we are so used to people "spinning" their point of view, that it almost seems impolite, or unfair, to point this out. I also distrust pat phrases such as "at the end of the day," and "the true measure of success." I also find it instructive to consider what the referent of the pronoun "it" might be in the phrase "It’s not about the source code — it’s about the conversation." In my opinion, the likely referent is "Our marketing plan," but most readers would never understand that from a simple reading of Claire’s words. Though in this case, the quote may be real, I know from my years in the corporate world that it is not at all uncommon for a marketing person to write out a quote, and then attribute it to someone higher up in their company. In today’s world, it is hard to know if these are really quotes from Claire, or just the output from a brainstorming session.

I will stop picking on Sun now, and turn my attention instead to Microsoft. Here is a paragraph chosen completely at random from the first page I came to on Microsoft’s web site. When selecting the quote, I picked one of the first two full paragraphs I could find describing the company’s push for Windows Vista:

"The Aero philosophy is not only to deliver a user experience that feels great but also to fundamentally change the way usability is measured. In the past, Windows focused heavily on usability as defined by such metrics as discoverability, time to task, and task completion. Aero continues to deliver on these metrics, but it will also enable Windows Vista and WinFX applications to generate a positive connection with users on first sight, on first use, and over the long term."

Forget the made-up words such as "discoverability," and mangled verb forms such as "deliver on." And don’t bother wondering whether or not a product called Aero can have a philosophy. These issues are merely the tip of a dangerous world that lies hidden beneath a seemingly benign cloud layer.

The claim of this paragraph is that the heart of the "Aero philosophy" is to "deliver a user experience that feels great." What is really going on here, however, is that the marketing people who promote this product want us to "feel great" about Windows Vista. No one really expects us to feel great when we click a button in the Windows UI. But such distinctions are lost in a swirling fog spun by the marketers.

The latter half of this paragraph is even more meaningless. It claims that the "Aero philosophy" is also to "fundamentally change the way usability is measured." This will be done by "creating a positive connection with users." That’s a funny way to measure usability. How in the world can you measure something as abstract and meaningless as "creating a positive connection?" Again, the desire to "create a positive connection with users" is not a goal of the product, but a goal of the marketers. Furthermore, this is another example of what Orwell called "a verbal false limb." And lord knows no one in the Aero development team even once considered changing the way "usability is measured." The whole riff on measuring usability is simply a verbal mist designed to disorient the reader. Changing the way usability is measured is simply not one of the goals of the product. Or at least I hope it was not their goal. Such an intent would imply an all too conscious and sinister Orwellian manipulation of the public.

The paragraph we have found on Microsoft’s web site is the written equivalent of a fog bank. It surrounds the reader in a swirling mist through which nothing can be clearly seen. A single screen shot would tell us much more.

The ultimate irony, however, is that the underlying message that the paragraph wants to convey is in fact entirely benign. Translated into simple English of the type Orwell advocates, the paragraph would read as follows: "We want to create an attractive interface that is easy to use." Reading that simple sentence is a bit like watching the fog burn off the California coast. The fog might have looked sinister, but it was covering a very pretty landscape.

Both Microsoft and Sun have a strong simple message, which is easy to convey. Why don’t they simply say it? Are they trying to deceive customers, their boss, themselves? Who knows? We can’t see through the fog to find out what they are actually thinking. And yet, though I personally prefer OpenSolaris to Microsoft Vista, when I compare these two prose samples, I would rather be exposed to Microsoft’s benign floundering than to Sun’s false altruism. But it is a fool’s choice.


Words are seductive. I’ve written a lot of them, and there is no sin described in this article that I have not committed. I’m writing from experience to remind myself and others of the dangers that lurk as close at hand as the nearest mail client.

Most of us know what it means to be good, and to be honest. Many of the same people who insult others on Slashdot probably go home at night and kiss their spouses and hug their children. The prose on Microsoft’s or Sun’s web sites may well be written by people who attend a church, synagogue, temple or mosque. We are so lost in the fog of our words that we can no longer see the clash of values between what we tell our spouse, minister or children, and how we act on the Internet.

When we start writing words, many of us get lost in a fog of abstractions. We use words not to communicate, but to bludgeon one another. We all do it, whether we are politicians making a national address, marketers trying to sell a product, or just ordinary citizens commenting on a blog. It is a disease indigenous to the modern world. We wander each day through a verbal mist designed to deceive. We are so used to it that we often distrust or disparage the simple truth when we chance to encounter it. It seems at times starkly impolite, and at other times naive.

We forget the damage created when we enter the post factual world of aggressive hostility and meaningless abstractions. The idea of stating a simple truth is foreign to us, and attempts to be helpful or polite now seem quaint and outmoded.

The solution is fairly simple. The first step is to read Orwell’s essay and to make a resolution to always use the simplest, most direct verbs and nouns that we can find. The second step is to place others first in our hearts. Is what we are writing going to hurt someone else unnecessarily? Are we being so selfish as to attempt to deceive others for personal gain? Are we trying to communicate, or are we just trying to win some imaginary competition? Do we really believe that the insults that we utter on a newsgroup are necessary, or are we just trying to cover up our own insecurities in a blustering fog of abstractions? If we have a talent for prose, are we using it to deceive or to inform?

Virtual Madness

I was going to write an article about virtual machines and performance. In particular, I intended to say that the Java VM and the Microsoft .NET virtual machines represented an extra level of code between applications and the operating system. I want my machine to run as quickly as possible, and I feel that having all these virtual machines between applications and the operating system takes up memory and slows my machine down. I was hoping that in the future improvements could be built into the OS rather than added on top of it in the form of a virtual machine.

However, a funny thing happened on the way to writing that article. I could explain to myself why the Java virtual machine existed: It helped provide a common platform for applications running on diverse operating systems. But trying to explain the existence of the Microsoft CLR proved more challenging, in part because I know so little about the subject. Certainly .NET code does run on multiple platforms because of the Mono project, but I didn’t feel that cross-platform support was the primary reason that Microsoft created .NET. Granted, Windows CE is a distinct platform from the main Windows platform, but I’m not sure it is so radically different or radically challenging as to mandate creating a new virtual machine for use on Windows XP or Windows Server 2003.

Lost in these ruminations, and unable to complete my article, I turned to my compatriots at Falafel for enlightenment. Unexpectedly, chaos ensued. It turned out that we could agree on almost nothing regarding the CLR, including whether or not it was a virtual machine.

As explained in the previous paragraphs, this is a subject about which I know very little. My main purpose here is simply to ask for your opinion. In particular, I want to ask you, the readers of CodeFez, “Why did Microsoft build the CLR, and is it a virtual machine?”

Is the CLR a Virtual Machine?

To help you get started, you might be interested in hearing some of the discussion we had at Falafel.

There were two main points of view:

  1. That the CLR was a virtual machine

  2. That the CLR was not a virtual machine

In the spirit of full disclosure, I’ll admit that I thought the CLR was a virtual machine. However, some of my compatriots argued vehemently that it was not a virtual machine. So one question I’d like to hear from you about is whether or not you think Microsoft .NET is run on top of a virtual machine.

I define a virtual machine as a set of code that provides services similar to those offered by an operating system, but which sits between a running application and an operating system. An application that runs on top of a virtual machine needs to first locate or load the virtual machine in memory; then it can begin to request services from the virtual machine in lieu of calling the OS directly.

I see virtual machines as providing a superset of the features found in virtual memory. Virtual memory is designed to isolate a program from other programs and from the hardware for security purposes. Virtual machines also have this ability to add memory protections, but they do more than that. We had virtual memory even back in the last days of DOS, but protected mode DOS was not the same thing as a Java virtual machine. Virtual machines provide more than just memory protection; they provide a complete set of services. At any rate, that’s my view of a virtual machine. What do you think? For help, and for a variation on my point of view, you might want to read the Wiki on this subject.

A distinction needs to be made between an API, such as the Delphi VCL, and a virtual machine, such as the JVM. The Delphi VCL provides printing services, and the JVM provides printing services, but the distinction between them is that the JVM represents a complete set of services loaded into memory between you and the operating system, while an API like the VCL can be linked directly into your application, and does not necessarily provide a complete set of services equivalent to an operating system.

In some cases, the CLR is hardwired to give you fairly direct access to the operating system. This is also true of the JVM. For instance, you can write Java code that directly accesses DirectX, thereby giving your program a big boost in graphics performance. If some parts of the JVM or CLR give you direct access to the hardware, does this mean they are not virtual machines? Is it possible there is no such thing as a virtual machine?

Why Did Microsoft Create the CLR?

Assuming that you think the CLR is a virtual machine, or something like a virtual machine, the second question is why Microsoft used that architecture rather than extending the OS itself. It is probably clear to all of us that COM could be improved upon, and that .NET represents such an improvement. Most of us would also agree that .NET has other nice features, such as good type checking, garbage collection, multi-language support, heap management, etc. These features all represent potential improvements over Win32 code.

The question, of course, is not whether the CLR has good features, but why those features had to be implemented in a virtual machine. After all, Microsoft has control of the operating system. If they wanted to add a new feature to Windows such as improved type checking, multi-language support, or garbage collection, then they could have built it right in to the OS itself. And engineers have implemented all of those features without building virtual machines. So there is not, at least from my point of view, any law that dictates that type checking is not available unless there is a virtual machine. But perhaps there is something about a virtual machine that makes it possible to build better type checking, better garbage collection, better heap management than can be built without a virtual machine? If so, what is it about virtual machines that makes it possible for them to offer better type checking than you can get without a virtual machine?

Let me add one last thought that occurred to me while exploring this subject. Trying to write my original article, flailing about in the darkness, looking for answers, asking others for help, I began to wonder if Microsoft needed a way to transition from the Win32 world to a new world in which .NET features could be built directly into the OS. For now, Microsoft has to continue to support Win32; there are too many applications that run on top of it. And they couldn’t just abandon overnight all that work and set about the daunting task of rewriting all those services. So perhaps they chose the CLR as a transitional step. Maybe they implemented their new improved OS at first as a virtual machine, leveraging the existing Win32 code as a starting point, and figuring that later they might build it into the OS itself? Is it possible that .NET as we know it is just a halfway house on the way to a real implementation of these services inside of Windows itself?

Not everyone agreed with the idea that the CLR was simply a step in a larger process. But I am still intrigued by this idea, and so I’ll ask you if you have an opinion on this matter. Is it possible that the CLR is just a transitional phase and that eventually .NET will be built into Windows and Windows CE?


Admitting that I know almost nothing about this subject, I would like to hear your opinion. Why did Microsoft create the CLR instead of building these services directly into the OS? Is the CLR a virtual machine? Is it an execution engine? A runtime? How do you define these terms? Why do you think the CLR is or is not a virtual machine, or a runtime, or an execution engine? Do you like virtual machines? Should we build more of them? Do we need to get rid of them? What about building them into the CPU itself?

Parochial vs Cosmopolitan Computing

There is an old saying that travel broadens the mind. I think that a wide experience of different technologies can have the same beneficial effect for computer users.

A person who has traveled can distinguish between human traits that are peculiar to a particular area, and those traits that are universal, that are part of human nature. Such knowledge gives them a broader, more sophisticated view of the world. Ultimately, it teaches them compassion, and acceptance. Such people gain a willingness to see the good in people with customs different from their own.

The same can be said of computer users who have experience with multiple operating systems and multiple tool sets. People who use only one operating system, and one set of tools, generally don’t have as deep an understanding of computing or computers as do people who have wide experience with several operating systems and several different tool sets. A specialist may have a deeper understanding of a particular field, but their overall understanding of computing in general may be limited. This limitation traps them in a series of narrow-minded prejudices which are both rude and limiting. It is hard for them to make good choices, because they don’t understand the options open to them.

There has long been a general prejudice in favor of people with a cosmopolitan or broad outlook and against people who have a parochial or narrow outlook. The reason a term like hick or yokel is considered derogatory is because people from rural areas who have not seen much of the world tend to have restricted or narrow points of view. For instance, there is something innately comic about a rural farmer from 100 years ago who lived off collard greens, chitlins and pigs’ feet reacting with disgust to the thought of a Frenchman eating snails. The joke was threefold:

  • Chitlins and collard greens are themselves exotic foods. There is something innately comic about people with exotic tastes making fun of someone else for having exotic tastes.

  • Though southern cooking can be delicious, it was not uncommon to see chitlins and collards prepared poorly, while French escargot, as a rule, was a delicacy prepared with exquisite refinement by some of the best cooks in the world.

  • The final, and most telling part of the joke was that southern cooking in general probably owed as much to French cooking as to any other single source. By deriding the French, our hapless yokel was unintentionally deriding his own heritage.

Most programmers start out using a particular computer language, such as Java, VB, C++ or Pascal. At first, their inclination is to believe that their language is the only "real" language, and that all other computer languages are "dumb." Take, for instance, a deluded Visual Basic programmer who tries to use a PRINT statement in C++, finds that it won’t compile, and comes away thinking that C++ is a hopelessly crippled language. The truth of the matter, of course, is that C++ does support simple IO routines like PRINT, but the syntax in C++ is different than in VB.

This kind of narrow computer prejudice is similar to the viewpoint of our rural farmer from a hundred years ago who is suddenly transplanted to Paris. She goes home and tells everyone that there is nothing to eat in Paris. "They just don’t serve real food there. They think we are supposed to live off snails!" Or perhaps she concludes that Frenchmen are cruel because they laughed when she started ladling up the flowers from her finger bowl with a spoon. What she forgets, of course, is that everyone back home in Muskogee will laugh at a Frenchman who tries to eat corn on the cob with a knife and fork.

There is an interesting moment in the life of many developers when they start to understand parochial computing. As stated above, programmers tend to start out by getting to know one particular language in great depth. To them, their language is the computer language, and all other languages pale in comparison.

Then one day, disaster strikes. The boss comes in and tells them that they have to work on a project written in a second language, let’s say Java. At first, all one hears from our hapless programmer is that Java "sucks." They are full of complaints. "You can’t do anything in this language. It doesn’t have feature X, it uses curly braces instead of "real" delimiters, the people who wrote this language must have mush for brains!"

Then, over time, the complaints lessen. After all, one can type a curly brace faster than the delimiters in one’s favorite language. That doesn’t make Java better than the developer’s favorite language, but it "is kind of convenient, in a funny kind of way." And after a bit, they discover that Java doesn’t support a particular feature of their favorite language because Java has another way of doing the same thing. Or perhaps the feature is supported, but the developer at first didn’t know where to look to find it. Of course, they are still heard to say that Java isn’t nearly as good as their favorite language, but the complaints lack the urgency of their initial bleatings.

Finally, after six months of struggling on the Java project, the big day comes: the developer has completed his module and can go back to work on a project using his favorite computer language. But a funny thing happens. At first, all goes swimmingly. How lovely it is to be back using his favorite editor and favorite language! But after an hour or so, curses start to be heard coming from his cube. "What’s the matter?" his friends ask. The programmer inaudibly mumbles some complaint. What he does not want to give voice to is the fact that he is missing some of the features in the Java language. And that Java editor, now that he comes to think of it, actually had a bunch of nice features that his editor doesn’t support! Of course, he is not willing to say any of this out loud, but a dim light has nonetheless been lit in the recesses of his brain.

Perhaps, if he is particularly judicious and fair minded, our newly enlightened programmer might suddenly see that though his language enjoyed some advantages over Java, Java was in some ways better than his own language! It is precisely at that moment that he begins to move out of the parochial world of prejudice and into the broader world of cosmopolitan computing.

The OS Bigot

The type of narrow viewpoint discussed here has no more common manifestation than in the world of operating systems. We have all heard from Microsoft fanatics who, when asked to defend their OS, say: "There are more Microsoft users than users of all other operating systems combined." Yes, that is true, but it is also true that there are more people in India than in the United States. But believe me, there are few Americans who want to go live amidst the poverty, technical backwardness, and narrow provincialism of even a "thriving" Indian city such as New Delhi.

Microsoft users might also complain that it is hard to install competing OS’s, such as Linux. When asked to defend their point of view, they will eventually confess that their opinion is based on experiences that they had some five years earlier, when it was in fact true that most Linux installations were difficult. Today, Linux usually installs more quickly, and with much less fuss, than Windows.

Of course, people on the other side are no less narrow minded. A Linux install may be simpler and faster than a Windows install, but Linux typically does not have as good driver support, particularly for new devices. Thus it is not unusual for a Linux user to have no trouble with his video and sound cards, but to have to work to get his CD burner or scanner working.

It is true that the Windows GUI environment is still better than the one found in Linux. But the advantage seems to shrink not just with each passing year, but with each passing month. For the last year, and for most of the last two years, the KDE Linux environment has been at least as good as the GUI environment found in Windows 98, and in some areas it is superior to that in Windows XP.

Conversely, just as Windows has a slight advantage in the GUI world, Linux has long enjoyed a significant advantage when working at the command prompt. A typical Windows user will say, "So what? Who wants to work at the command prompt?" That’s because they are used to using the Windows command prompt, which has historically been very bad. But watching a skilled user work at the command prompt in Linux can be a revelation. There are things you can do easily with the BASH shell that are hard, or even impossible, to do with the Windows GUI. But in recent years, even this truism has been shown to have its weaknesses. The command prompt in Windows XP is much improved over that found in Windows 98 or Windows 2000, and the porting of scripting languages such as Python and Perl to Windows has done much to enhance life at the Windows command prompt.


Linux users often argue that their software is free in two senses of the word:

  • It has zero cost

  • It comes with source code and can be freely modified

All that is true, but Windows has a wider range of available applications, and who would deny that there is a very real sense of freedom that one gets from using a beautifully designed piece of software?

And yet, if you are a student, or an older person on a limited income, you might not be able to afford all that fancy software. In such cases, you might be better off using Linux, where you can easily find free versions of the tools you need.

Again, one might read the above and come to the narrow conclusion that proprietary software is always better than open source software. But this is not always true. For instance, Mozilla is clearly a much better browser than Internet Explorer. It conforms more closely to the HTML standard, it handles popups better, it has a better system for handling favorites, and it has a feature, tabbed windows, that gives it a massive usability advantage over IE.

On the other hand, there is simply nothing in the open source world to compare to a tool like DreamWeaver. There are probably a hundred different open source web editors, but only the HTML editor in OpenOffice provides even the rudimentary features found in DreamWeaver.

The Historical Perspective

The ultimate irony, of course, comes when a person with a limited perspective imitates another culture, and goes about crowing about this borrowed sophistication as if he invented it himself.

I used to do this myself, back when I promoted Delphi for a living. Unknowingly, I often championed features in Delphi that were in fact borrowed from VB. I would say, "Delphi is better than VB because it has feature X." I didn’t know that VB not only had the same feature, but that the creators of Delphi had in fact borrowed the feature from VB.

I have seen the same thing happen when advocates of C# crow about how much better it is than Java, and then use one of the many features that C# borrowed from Java as proof of the fact. The same often happens when a user of a DotNet-based application approaches a Linux user and shows off the great features in his product. The fact that not only the feature but the entire product and its architecture were stolen directly from an open source application written in PHP is of course lost on the advocate of DotNet’s prowess.

In fact, it is generally true that Microsoft is a company that uses derived technologies. DotNet is just an attempt to emulate the features found in Java and PHP. C# is for the most part simply an imitation of Java with a few features from Delphi thrown in for good luck. IE is an imitation of the features found in the old Netscape browser. The Windows GUI is an imitation of the Mac GUI.

One of the signs of a cosmopolitan person is that they have an historical perspective, and can know something about where cultural habits originated, or from which sources they were derived. A provincial person thinks not only that his culture is best, but that his country invented the very idea of culture.

Of course, one should rise above even this insight. It is true that Microsoft is a company based on borrowed ideas. But Microsoft does a good job of borrowing technology. The old joke states that Microsoft begins by deriding new inventions, then imitates them, and ends up claiming they invented them. But what people forget is that Microsoft often does "reinvent" technologies in a meaningful way by implementing them very well, and by adding special touches that improve upon the original product.

So the correct perspective is to recognize that derivation lies at the heart of Microsoft technology, but to also recognize their technical expertise. Gaining that kind of nuanced world view is part of what it means to be a sophisticated computer user. Knowing such things can help you make informed decisions, rather than decisions based on prejudice.


Ultimately, the kind of narrow prejudice found among advocates of single platforms or single technologies offers a frighteningly restricted world view. Such people are indeed a bit like a hick or yokel from 100 years ago who arrives in the big city and feels overwhelmed by a kind of sophistication that they had never imagined and cannot comprehend. They dislike the big city not only because it is different, but because it threatens them. They are suddenly small fish in a big pond, and from the heart of their insecurity, they begin to mock the city sophisticates who swim in the urban sea.

This is not to say that our yokel might not have cultural advantages over a "snob" from the big city. For instance, it is well known that rural farmers in America 100 years ago were renowned for their friendliness. It is true that such people often worked together to help a neighbor through a tough time, and they often worked together and shared resources in ways that their friends from the big city could not even imagine, let alone imitate. And of course they would have a specialized knowledge of how to survive in their rural world that the Parisian could not match.

The key difference, of course, is that a truly cosmopolitan person could have the perspective to appreciate all this, while a person from a rural area would be more inclined to adopt a narrow, provincial point of view. The cosmopolitan person could admire both Parisian society, and rural America.

This is the perspective that Alexis de Tocqueville brought to his book Democracy in America. He understood both European and American culture, and that gave him the insight needed to write so trenchantly about American society.

The mark of the cosmopolitan is that she will:

    • Be gracious enough to help, without condescension, foreigners who are unfamiliar with the customs of her land.

    • Have enough perspective to laugh good-naturedly at herself when caught out not knowing the customs of a foreign land.

    • Have the perspective to see what is truly best in any one culture because her perspective is broad and informed.

A cosmopolitan person has these traits instinctively, and without self-consciousness. She knows that each land has its own customs, and that deep down where it counts, people are the same when it comes to matters of the heart and soul. They may have different habits, but it is narrow-minded, provincial, even parochial, to regard people with a different perspective as innately inferior to oneself.

Software developers who have broken out of the narrow prejudices formed when using their first language and first OS have the same advantages. They know what is best in multiple worlds, and therefore have the wisdom to search for those features on whatever platform they use. They don’t waste time embarrassing themselves by making snide, narrow-minded comments that polite people can’t even correct without sounding condescending or unintentionally hurting someone’s feelings. They have gained a sophistication, and a broader perspective, that makes them better at everything they do, regardless of their toolset.

Windows Search that doesn’t Suck

If you were recently in a temporary coma you may have missed the news about the release of Google Desktop Search, which leverages Google’s search technology on individual PCs by enabling quick and easy access to information buried in Outlook/OE email messages, MS Office documents, web history, IMs, and text files. After trying Lookout a few months back, I became totally addicted to actually being able to find email messages while I was still interested in the information they contained. I was eager to try out Desktop Search to see if it could do for other documents what Lookout did for email.

After the quick install, the product spent the better part of 2 days indexing the 55.8 gigs of occupied space on my laptop’s hard disk. However, unlike the porcine Index Server that comes with Windows, Google Desktop Search doesn’t peg my CPU trying to do its indexing work while I am in the middle of trying to do my work. Instead, Desktop Search waits until I am not using the PC, so, while the process took quite a while, the impact of the indexing process on my life was nil. Once complete, the utility had indexed a total of 60,578 unique items.

The application sits in the taskbar as a tray icon, its local menu containing options to search, set preferences, and so forth. Interestingly, but not surprisingly, the user interacts with the application using locally-served web pages with a look and feel similar to that of Google’s web site. So, for example, selecting the “Search” item from the tray icon’s local menu brings up a local web page that looks a lot like www.google.com.

So, how good is it? Well, searching for the string “codefez” brought me to a results page containing 35 emails, 5 office documents, and 93 pages from web history in less than a second. A more complex search string, such as “+falafel -lino” gave me 533 emails, 11534 files, and 3535 pages from web history in about a second. How good? Damn good.

Of course, performance like this doesn’t come for free. The index files necessary to accommodate those 60,578 unique items occupy a total of 485 megs of disk space on my laptop. For me, this is a small price to pay for actually being able to find things on my computer based on their contents. Imagine!

On a related note, Microsoft has announced their intention to ship a beta version of a similar tool before the end of 2004. It will be interesting to see what they can produce, but whatever it looks and smells like, one thing is certain: large, talented companies competing to build great free software can mean only goodness for consumers. Meanwhile, I’m sticking with Google Desktop Search.

Introduction to Yum

Learn how to use yum, a tool for automatically maintaining your system. You can use yum to make sure your entire system is up to date, or to automatically add or remove applications and services.


Installing and updating software can be one of the more unpleasant computer maintenance tasks. The process of inserting CDs, browsing for a particular app, answering install questions, looking for the right version, etc., can be boring and time-consuming.

What one wants, ideally, is to be able to say to the computer, “install the latest version of OpenOffice,” and then have the computer go out and do just that. Or, one might like to ask the computer to automatically “make sure you are up to date.” Linux doesn’t offer any features quite that fancy, but modern distributions come with tools like yum, urpmi, YaST and rpmapt (or Debian apt) which come close to providing these advanced features. The GUI-based up2date project that ships with Fedora and RedHat is also useful. However, I have found that up2date on Fedora Core 2 is not entirely bug-free, which is what led me to yum. After trying yum a few times, I found that it is much more powerful and useful than up2date.

These various tools are often associated with particular distributions. For instance, apt is native to Debian, urpmi to Mandrake, and yast to SUSE. Yum is usually associated with RedHat and Fedora, though like apt, it can be used on multiple distributions.

Yum is the Yellowdog Updater, Modified. It is very easy to use. For instance, if you have yum installed properly, then you can issue a command like this to install OpenOffice:

yum install openoffice

OpenOffice, and any dependencies on which it relies, will be automatically installed. In other words, all the packages necessary to install the most recent version of openoffice will automatically be downloaded from the Internet and installed.

To make sure your entire system is up to date, you can issue this command:

yum update

After issuing this command, any out of date files will be updated, and any missing dependencies will be installed. If a new version of one piece of software requires that another piece of software be updated, that task will be accomplished for you automatically.

The rest of this article will describe how yum works, how to install it, how to configure it, and how to perform routine tasks with it. If you understand how yum works, then you should have little trouble understanding either apt or urpmi.

Installing Yum

Yum is part of the Fedora Core standard install. If yum is installed, then you can become root and type yum to test it:

[root@somecomputer etc]# yum
    Usage:  yum [options] 
          -c [config file] - specify the config file to use
          -e [error level] - set the error logging level
          -d [debug level] - set the debugging level
          -y answer yes to all questions
          -t be tolerant about errors in package commands
          -R [time in minutes] - set the max amount of time to randomly run in.
          -C run from cache only - do not update the cache
          --installroot=[path] - set the install root (default '/')
          --version - output the version of yum
          --exclude=some_pkg_name - packagename to exclude - you can use
            this more than once
          --download-only - only download packages - do not run the transaction
          -h, --help this screen

If yum is not on your system, you can download it from the Duke web site. Here is a download directory where all the versions of yum are kept. Information on downloading Yum for RedHat 9 or 8 is available at the Fedora Wiki.

Yum usually comes in the form of an rpm file, which can be installed like this:

rpm -Uhv yum-2.0.7-1.noarch.rpm

RPM is the RedHat package manager, and it is used to install package files that are already present on your system. After you have installed yum, you can use yum to install or update all the other applications or services on your machine. In other words, you only have to use rpm manually once, to install yum; after that, yum drives rpm for you automatically. Yum is much more powerful and much easier to use than rpm.

Configuring Yum

Yum needs to know what software should be installed on your system. For instance, if you are using Fedora Core 2, then it needs to know what packages make up a standard install of Fedora Core 2. The packages needed for a particular Linux distribution are stored in repositories on the Internet. To properly configure yum, you need to open a file called /etc/yum.conf, and make sure it contains the proper information. In other words, you use yum.conf to point yum at the repositories on the Internet that define the files needed for your distribution of Linux.

If you have installed Fedora Core from CD, then you probably have a valid yum.conf file on your system already. However, at the end of this article you will find a simple yum.conf file for RedHat 9, and a more complex yum.conf file for Fedora Core. These are complete files, and can be used to replace your existing yum.conf file, though of course I would recommend backing up any file you wish to replace.
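As a rough sketch of the format (hedged: the baseurl below is a placeholder, not a real mirror, and only a handful of the available settings are shown), a yum.conf file uses INI-style sections: a [main] section for global settings, followed by one section per repository:

```ini
[main]
# Global settings: where yum caches headers/packages, and where it logs
cachedir=/var/cache/yum
logfile=/var/log/yum.log
debuglevel=2

[base]
# One section per repository; "name" is just a human-readable label.
# The baseurl is a placeholder -- substitute a real mirror for your
# distribution.
name=Fedora Core 2 - i386 - Base
baseurl=http://mirror.example.com/fedora/core/2/i386/os
```

Additional repositories are simply additional sections of the same shape.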

Additional information on config files can be found in various places across the web, including at the following URLs:

  • http://www.fedoraforum.org/forum/archive/index.php/t-2067.html
  • http://www.xades.com/proj/fedora_repos.html
  • http://dries.studentenweb.org/apt/

Yum Packages

I’ve talked several times in this article about yum packages. A package in yum is an rpm file. Each rpm file has a header that defines the contents of the file and any dependencies it might have. In particular, it defines the versions of the programs upon which the code in the rpm file depends. Using this header, it is possible to calculate exactly what packages (rpm files) need to be downloaded in order to successfully install a particular product.

When you first start yum by becoming root and typing yum list, it usually spends a long time (15 to 60 minutes) downloading not entire rpm files, but instead the headers for all the rpm files that define your distribution. After it has downloaded all these headers, then you can issue a command like yum update, and yum will compare the current contents of your system to the records found in the rpm headers it has downloaded. If some of the headers reference files that are more recent than the files currently installed on your system, then yum will automatically download the needed complete rpm files and use them to update your system.
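The comparison yum performs with those downloaded headers can be sketched in a few lines of Python. This is a toy illustration, not yum’s actual code: the package names and version tuples are invented, and real rpm version comparison (epoch, version, release) is considerably subtler.

```python
# Toy sketch of the check yum makes after downloading headers:
# compare the versions installed on the system against the versions
# advertised by the repository headers, and flag anything out of date.
# Illustrative only -- names and version tuples here are made up.

def needs_update(installed, repo_headers):
    """Return (name, newer_version) for each installed package whose
    repository header advertises a more recent version."""
    updates = []
    for name, repo_version in repo_headers.items():
        if name in installed and installed[name] < repo_version:
            updates.append((name, repo_version))
    return updates

installed = {"cups": (1, 1, 19), "libxml2": (2, 6, 15)}
repo_headers = {"cups": (1, 1, 20), "libxml2": (2, 6, 15)}

print(needs_update(installed, repo_headers))  # prints [('cups', (1, 1, 20))]
```

For each package flagged this way, yum then downloads the complete rpm file and hands it to rpm for installation.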

Besides the headers for your distribution, you can configure yum to reference other repositories that contain additional files that might interest you. For instance, you can ask yum to download all the headers for the files needed to install mono, or all the fedora extras, or all the files that are part of jpackage. Once the headers are in place, you can download all or part of the packages found in these repositories. You can also point yum at freshrpms, a location where yum is likely to find any number of packages that might interest a Linux user. The complex yum.conf file at the end of this article is set up to do most of these things automatically. In another technical article which will soon appear on this site, I will discuss configuring yum so that it will automatically install mono.

If you want to see the structure of a yum repository for yourself, you can browse one on the web; Mandrake, for instance, maintains a public yum repository.


The Yum Cache

Yum stores the headers and rpms that it has downloaded in a cache on your local system.


The cache is divided into two sections: the base files and the updates. The headers for each section are stored in one directory, and any downloaded packages in another directory.
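Sketched as a tree (the paths are illustrative; the actual location is governed by the cachedir setting in yum.conf), the layout looks something like this:

```text
/var/cache/yum/
    base/
        headers/      <- header files for the base repository
        packages/     <- rpm files downloaded from base
    updates/
        headers/      <- header files for the updates repository
        packages/     <- rpm files downloaded from updates
```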

If you look at the simple yum.conf file at the end of this article, you will see that it defines where the cache will be stored, and that it has two sections called base and updates. The more complex yum.conf file points at the same cache, but it has more repositories upon which it draws. As a result, using it will likely leave you with more than the two simple sections called base and updates. For instance, you might have sections called jpackage or updates-released.

Running Yum Update

As always, there is no better way to learn how yum works than simply getting your hands dirty at the command line by using it. The closest I can come to that experience in an article of this type is to show you the output at the command line of the simple command yum update. At the time I ran this command, my system was already reasonably up to date, so only a few files were downloaded. The complete run is shown in Listing 1.

Listing 1: A simple run of yum update has three parts, first contacting the servers, then downloading the headers and parsing them, then downloading and installing the needed packages.

[root@somecomputer etc]# yum update
Gathering header information file(s) from server(s)
Server: Fedora Core 2 - i386 - Base
Server: Fedora.us Extras (Stable)
Server: Fedora.us Extras (Testing)
Server: Fedora.us Extras (Unstable)
Server: Livna.org - Fedora Compatible Packages (stable)
Server: Livna.org - Fedora Compatible Packages (testing)
Server: Livna.org - Fedora Compatible Packages (unstable)
Server: macromedia.mplug.org - Flash Plugin
Server: Fedora Core 2 - i386 - Released Updates
Finding updated packages
Downloading needed headers
cups-libs-1-1.1.20-11.6.i 100% |=========================| 6.8 kB   00:00
redhat-artwork-0-0.96-2.i 100% |=========================| 102 kB   00:00
libxml2-0-2.6.15-2.i386.h 100% |=========================| 3.0 kB   00:00
libxml2-python-0-2.6.15-2 100% |=========================| 4.3 kB   00:00
cups-1-1.1.20-11.6.i386.h 100% |=========================|  23 kB   00:00
perl-HTML-Template-0-2.7- 100% |=========================| 2.2 kB   00:00
jhead-0-2.2-0.fdr.1.2.i38 100% |=========================| 1.7 kB   00:00
cups-devel-1-1.1.20-11.6. 100% |=========================| 7.0 kB   00:00
libvisual-devel-0-0.1.6-0 100% |=========================| 2.4 kB   00:00
libvisual-0-0.1.6-0.fdr.2 100% |=========================| 1.9 kB   00:00
perl-Glib-0-1.061-0.fdr.2 100% |=========================| 4.4 kB   00:00
libxml2-devel-0-2.6.15-2. 100% |=========================|  14 kB   00:00
Resolving dependencies
Dependencies resolved
I will do the following:
[update: cups-libs 1:1.1.20-11.6.i386]
[update: redhat-artwork 0.96-2.i386]
[update: libxml2 2.6.15-2.i386]
[update: libxml2-python 2.6.15-2.i386]
[update: cups 1:1.1.20-11.6.i386]
Is this ok [y/N]: y
Downloading Packages
Getting cups-libs-1.1.20-11.6.i386.rpm
cups-libs-1.1.20-11.6.i38 100% |=========================| 101 kB   00:00
Getting redhat-artwork-0.96-2.i386.rpm
redhat-artwork-0.96-2.i38 100% |=========================| 4.4 MB   00:28
Getting libxml2-2.6.15-2.i386.rpm
libxml2-2.6.15-2.i386.rpm 100% |=========================| 625 kB   00:03
Getting libxml2-python-2.6.15-2.i386.rpm
libxml2-python-2.6.15-2.i 100% |=========================| 435 kB   00:02
Getting cups-1.1.20-11.6.i386.rpm
cups-1.1.20-11.6.i386.rpm 100% |=========================| 2.5 MB   00:16
Running test transaction:
Test transaction complete, Success!
libxml2 100 % done 1/10
cups-libs 100 % done 2/10
redhat-artwork 100 % done 3/10
libxml2-python 100 % done 4/10
cups 100 % done 5/10
Completing update for cups-libs  - 6/10
Completing update for redhat-artwork  - 7/10
Completing update for libxml2  - 8/10
Completing update for libxml2-python  - 9/10
Completing update for cups  - 10/10
Updated:  cups-libs 1:1.1.20-11.6.i386 redhat-artwork 0.96-2.i386
libxml2 2.6.15-2.i386 libxml2-python 2.6.15-2.i386 cups
Transaction(s) Complete
[root@somecomputer etc]#

You can probably parse that output with no trouble on your own. However, I will take a few moments to break it apart, just so you can be absolutely clear about what happens when yum performs an operation of this type.

The first step is to contact the servers specified in your yum.conf file:

Gathering header information file(s) from server(s)
Server: Fedora Core 2 - i386 - Base
Server: Fedora.us Extras (Stable)
Server: Fedora.us Extras (Testing)
Server: Fedora.us Extras (Unstable)
Server: Livna.org - Fedora Compatible Packages (stable)
Server: Livna.org - Fedora Compatible Packages (testing)
Server: Livna.org - Fedora Compatible Packages (unstable)
Server: macromedia.mplug.org - Flash Plugin
Server: Fedora Core 2 - i386 - Released Updates

Yum then downloads the headers it found on the servers:

Finding updated packages
Downloading needed headers
cups-libs-1-1.1.20-11.6.i 100% |=========================| 6.8 kB   00:00
redhat-artwork-0-0.96-2.i 100% |=========================| 102 kB   00:00
libxml2-0-2.6.15-2.i386.h 100% |=========================| 3.0 kB   00:00
libxml2-python-0-2.6.15-2 100% |=========================| 4.3 kB   00:00
cups-1-1.1.20-11.6.i386.h 100% |=========================|  23 kB   00:00
perl-HTML-Template-0-2.7- 100% |=========================| 2.2 kB   00:00
jhead-0-2.2-0.fdr.1.2.i38 100% |=========================| 1.7 kB   00:00
cups-devel-1-1.1.20-11.6. 100% |=========================| 7.0 kB   00:00
libvisual-devel-0-0.1.6-0 100% |=========================| 2.4 kB   00:00
libvisual-0-0.1.6-0.fdr.2 100% |=========================| 1.9 kB   00:00
perl-Glib-0-1.061-0.fdr.2 100% |=========================| 4.4 kB   00:00
libxml2-devel-0-2.6.15-2. 100% |=========================|  14 kB   00:00

Next the dependencies are calculated and the user is asked whether she wants to download the needed packages:

Resolving dependencies
Dependencies resolved
I will do the following:
[update: cups-libs 1:1.1.20-11.6.i386]
[update: redhat-artwork 0.96-2.i386]
[update: libxml2 2.6.15-2.i386]
[update: libxml2-python 2.6.15-2.i386]
[update: cups 1:1.1.20-11.6.i386]
Is this ok [y/N]: y

If the user gives permission, then the needed packages are downloaded:

Downloading Packages
Getting cups-libs-1.1.20-11.6.i386.rpm
cups-libs-1.1.20-11.6.i38 100% |=========================| 101 kB   00:00
Getting redhat-artwork-0.96-2.i386.rpm
redhat-artwork-0.96-2.i38 100% |=========================| 4.4 MB   00:28
Getting libxml2-2.6.15-2.i386.rpm
libxml2-2.6.15-2.i386.rpm 100% |=========================| 625 kB   00:03
Getting libxml2-python-2.6.15-2.i386.rpm
libxml2-python-2.6.15-2.i 100% |=========================| 435 kB   00:02
Getting cups-1.1.20-11.6.i386.rpm
cups-1.1.20-11.6.i386.rpm 100% |=========================| 2.5 MB   00:16

Finally, some tests are run to make sure everything is as it should be.

Running test transaction:
Test transaction complete, Success!

If the calculations check out, then the packages are installed and the user is notified that the transaction is complete:

libxml2 100 % done 1/10
cups-libs 100 % done 2/10
redhat-artwork 100 % done 3/10
libxml2-python 100 % done 4/10
cups 100 % done 5/10
Completing update for cups-libs  - 6/10
Completing update for redhat-artwork  - 7/10
Completing update for libxml2  - 8/10
Completing update for libxml2-python  - 9/10
Completing update for cups  - 10/10
Updated:  cups-libs 1:1.1.20-11.6.i386 redhat-artwork 0.96-2.i386
libxml2 2.6.15-2.i386 libxml2-python 2.6.15-2.i386 cups
Transaction(s) Complete

Basic Commands Used with Yum


yum clean

Cleans up the local cache of downloaded headers and packages.


yum provides PackageName

Finds out which packages provide a particular feature.


yum install PackageName

Installs a package or group of packages.


yum update PackageName

Updates a package or group of packages.


yum update

Updates everything currently installed.


yum remove PackageName

Removes a package.


yum check-update

Sees whether any updates are available.


yum search

Useful if you know something about a package, but not its name.


yum list

Lists what packages are available. Many options.


yum info

Finds information on a package.


yum upgrade

Like update, but helpful for moving between distro versions; deprecated.

It has been reported that in FC3 you need only type yum list recent to learn of packages added to the repository in the last seven days. In general, you can run yum with no parameters or with -h as a parameter in order to get a sense of what you can do with any particular version of the product. Typing man yum is also a good way to learn more about the various commands you can give when using yum.

Can You Afford Not to Know Linux?

A split is beginning to emerge in the computer world between programmers, who use Linux and open source, and end users, who run Windows. This is not a fait accompli yet, only a trend. But increasingly, we find Linux on the backend and Windows on the desktop. If you are a programmer, then you are creating the apps that run on the backend. Though Windows still has a majority piece of this pie, it is a shrinking piece, and one that has an uncertain future. For programmers, this means that knowing Linux and following the open source community is not just an option, but more and more of a necessity.

Cutting edge technology companies that produce big results in the real world often use Linux. You’ve probably read about Industrial Light and Magic using Linux. Publishers are starting to use Linux when they create books.

IDC states that Linux has a 24% share of the server market today, and will have a 33% share in 2007. That compares with the 59% share of the market currently owned by Microsoft. The total market for Linux-based devices is currently 11 billion dollars, and is expected to grow to 35.7 billion dollars by 2008.

So we find that increasingly, Linux is showing up on servers. On the desktop, however, the Linux 3% market share still lags behind even the struggling Mac. Nevertheless, the Linux share of the desktop is expected to grow to 6 percent by 2007. If that trend continues, it will mirror Moore’s law, with the Linux share of the desktop doubling every two years. That means Windows will continue to dominate the desktop world for some time. Nevertheless, almost certainly it will be the technical users who will be switching to the more flexible Linux desktop, and the end users who lag behind, bound by their allegiance to the familiar.

IT and Foreign Markets

If you work as a programmer, you no doubt have noticed that one by one, many of the major corporations in the world are running their IT departments on Linux. They may give the end users in the company Windows boxes for desktop use, but the apps that run the big corporations are increasingly being built on Linux. Doc Searls over at the Linux Journal has been documenting this process, reporting how many of the biggest Fortune 500 companies are increasingly reliant on Linux for doing the heavy lifting in IT, while Windows is still out there on the desktop.

Many of the programmers I know run Linux, and use OpenOffice, Firefox, Mozilla, Apache, Eclipse, JEdit, and other open source tools. When they go home for vacation, they help their less tech-savvy parents and siblings configure their Windows boxes and Microsoft applications.

Driving this trend toward open source are governments ranging from China, which has already adopted Linux, to smaller countries such as Venezuela, which are considering a move to free software. As entire nations start running on open source software, it will become increasingly difficult for others to resist the inevitable.

If this trend continues, then in 5 years, most technical developers will be running Linux. After all, the percentage of servers running Linux is 24% now, and will be 33% in 2007; by 2010, it is easy to imagine over 50% of servers running Linux. As a rule, when the tech-savvy people pick up a trend, it is not long before the end users start to follow. We may soon reach the time when nearly all new IT development will be done on Linux, and Windows boxes will be kept around in IT shops primarily for legacy purposes. When that day comes, the future will be all about knowing Linux and open source.

The CLR and JVM: Peas in a Pod

The CLR is the core of Microsoft’s .NET technology. During the last week I’ve learned quite a bit about the CLR from CodeFez readers and other sources. I’m still not an expert on this difficult subject, but I now hope I know enough to at least advance the subject beyond last week’s article.

I’m now convinced that the CLR is a virtual machine. I have also come to have a deeper appreciation for virtual machines, and to understand that there are good reasons why virtual machines such as the JVM or CLR can at times equal or even outperform standard code.

As I’ll explain in depth later in this article, my reasons for believing the CLR is a virtual machine are threefold, though the first two reasons are closely related:

  1. The architecture of the CLR is remarkably similar to the architecture of the Java Virtual Machine. The two are not identical, but they share enough in common that if one wants to claim that JVM is a virtual machine, and most would agree that it is, then the CLR must also be a virtual machine.

  2. The CLR, like the JVM, is an abstract stack machine. As you will see later in this article, an abstract stack machine is, by its very definition, a virtual machine. Neither the JVM nor the CLR let you access the CPU, its registers, or its stack. Instead, they both present you with a view of an abstract stack and abstract heap which are both managed by either the CLR or the JVM. It is this abstract, virtual, machine, that is used when you program with either the JVM or the CLR. Since both the JVM and the CLR are abstract stack machines, then they both must be virtual machines. If you are short on patience, you can skip directly to the heart of this article, which is my discussion of abstract stack machines.

  3. There is a general consensus among many authorities that the CLR is a virtual machine. In particular, the words "virtual machine," and the word "virtual," are used over and over again when describing .NET and CLR technology. For instance, the Wikipedia defines the CLR as a virtual machine, Mono defines their implementation of the CLR as a virtual machine, and the CLR is mentioned as an example of virtual machine in the Wikipedia definition of virtual machines. It’s also interesting to note that a primary document from the Microsoft web site describes the CLR as a Virtual Execution System. The document in question is the ECMA specification, which was written in part by Microsoft. In that document, the VES is defined as "an environment for executing managed code. It provides direct support for a set of built-in data types, defines a hypothetical machine with an associated machine model and state, a set of control flow constructs, and an exception handling model."

The citations in the third point speak for themselves: it is common to refer to the CLR as a virtual machine. The issues discussed in the first two points are more complicated, but also more directly relevant, because the fact that some authorities call the CLR a virtual machine doesn’t prove that it is one. Those authorities, no matter how well qualified, could be wrong. It’s not likely that they are wrong, but it is possible, and a discussion of the similarities between the JVM and the CLR is stronger if it rests on more than an argument from authority. The same applies to a discussion of abstract stack machines. As a result, in this article, I’ll concentrate on those first two points. In particular, I’ll begin by showing similarities between the CLR and the JVM.

Compiling Code for the JVM and the CLR

There is a huge and obvious similarity between C# and Java code. However, this fact alone does not demonstrate that the CLR and the JVM have any deeper, architectural, similarities. C# is just one way of writing code for the .NET virtual machine, just as Java is one way to write code for the Java Virtual Machine. The machines themselves, however, are not defined in high level languages such as Java, C#, Python or Visual Basic. Instead, they are defined in IL, or intermediate language code. In other words, the machine depends not on the syntax of C# or Java, but on the syntax of intermediate language code.

NOTE: Compiling to the CLR or the JVM is usually at least a two step process. First source is compiled to an intermediate language (IL), and then later the IL is translated or compiled into machine code. As we will see, it is not normally possible to execute code in the CLR or JVM without first translating it into valid IL code. It is therefore the syntax and semantics of IL code that define these virtual machines. In fact, I’ll argue in this article that an understanding of the CLR begins with an understanding of the IL virtual machine.

More fertile ground is found when one turns away from high level languages, and begins looking at the compilation process. Both Microsoft and Sun use a specialized, multi-phase, compilation process. In particular, both compilers take multiple passes over the code:

  • First they both create IL code from a high level language such as Java or C#.

  • Then they use machine-specific optimizations to tune that code for a particular type of computer.

  • Finally, they both compile that code to machine code and perform additional optimizations.

When reviewing this process, it is important to understand that we are not comparing the abstract definitions of the CLR and the JVM, but the actual implementations of those machines as produced by Microsoft and Sun. In other words, there is nothing in the definition of the JVM or the CLR that insists that the compilation process must proceed as outlined here. In fact, there are implementations of the JVM and the CLR that do not follow this process in precise detail. However, Microsoft .NET and the Sun HotSpot technologies do implement the JVM and CLR as outlined here. In particular, they use what are called JIT’s, or Just in Time Compilers, to create optimized machine code in the last phases of this process.

To really understand the similarity between the Sun and Microsoft implementations of the JVM and the CLR, you need to look at the last step in some detail. Neither the CLR nor the JVM compiles the entire program to machine code at once. Instead, they both elect to compile the code one method or one class at a time. The first time you touch a method in either the CLR or the JVM, it is often compiled to machine code, and then need not be compiled again until the program is reloaded. This means that in both Java and in .NET, if you load two instances of a program at the same time, they will both need to be separately compiled to machine code. In both Java and .NET, if you unload a program and relaunch it, then the IL must be recompiled to machine code. Conversely, in both instances, after code has been touched once, it is typically not recompiled until the program is unloaded from memory. However, there are cases when both of these virtual machines might recompile code at runtime if such a recompilation will lead to faster or safer code.

It is also important to point out that both Microsoft and Java have special compilers that are designed to compile IL to machine code before the program runs. However, a discussion of the Microsoft NGEN technology and the related Java technologies is not included in this article. It should be noted, however, that compiling code in this way usually does not bring significant performance benefits to either the JVM or the CLR. The reasons for this will be explained briefly in the section on performance near the end of this article.

Does Microsoft Call the CLR at Runtime?

Many of the points outlined so far emerged during the on line discussion of last week’s article on virtual machines. However, one thoughtful participant in that discussion made an interesting observation. He said that the compilation process for the two systems might be similar, but that .NET was fundamentally different from the JVM because "As I understand it a Java app calls into the JVM while it is running, but a .NET app does not call into the CLR." If true, this statement would highlight a significant difference between the JVM and the CLR. So let’s take some time to see if it is true.

During this discussion, the key point to grasp is that the letters CLR stand for the Common Language Runtime. One would assume that something that is called a "runtime" is used at runtime. But let’s dig a little deeper to see exactly how it is used.

It turns out that most .NET applications do in fact explicitly call into the CLR. In particular, there is a part of the CLR called the Base Class Library, or BCL. The BCL is laid out as part of the CLR in the ECMA specification. In an interview on Channel Nine, Kit George, the Microsoft Program Manager for the BCL, takes obvious pride in asserting that the BCL is part of the CLR.

The Base Class Library includes System.Collections, as well as support for string handling and file I/O. In other words, if your .NET program uses collections, handles strings, or does file I/O, then it is likely calling into the CLR.
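The Java side of the comparison makes the point concrete. An ordinary program leans on the runtime's class library for collections, string handling, and output at nearly every line, just as a .NET program leans on the BCL. This is an illustrative sketch in Java, not .NET code:

```java
import java.util.ArrayList;
import java.util.List;

public class LibraryCalls {
    public static void main(String[] args) {
        // Collections come from the runtime's class library,
        // the rough analogue of System.Collections in the BCL.
        List<String> names = new ArrayList<>();
        names.add("Ada");
        names.add("Grace");

        // String handling and console I/O are more library calls:
        // even a trivial program is constantly entering runtime code.
        String joined = String.join(", ", names);
        System.out.println(joined); // prints "Ada, Grace"
    }
}
```

The same pattern holds for a C# program using System.Collections and System.String: the "application" and the "runtime" are interleaved from the first statement.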

But the tie between a .NET application and the CLR goes much deeper than simple calls into the CLR. You can’t load any .NET application or library without first loading the CLR into memory. The .NET PE format used in all .NET applications and libraries has a section in it called the CLR header. The CLR header is processed immediately after a .NET program is loaded into memory. In fact, the purpose of this header is to ensure that the CLR is already loaded into memory, or to load the CLR into memory if necessary. In other words, you can’t run a .NET program without having the CLR loaded into memory. The very file format of .NET applications, as defined in the ECMA specification, and as implemented by Microsoft .NET, includes a section designed to load the CLR into memory.

It would be interesting, and perhaps slightly humorous, if the CLR were simply loaded into memory and then disappeared into the background and never got used again. However, this is not the case. Instead, it is usually active throughout the run of an application. It is doing many things, but the primary thing it does is provide a framework for the execution of managed .NET code.

As explained in the ECMA specification created by Microsoft and other companies, at the heart of the CLR lies the Virtual Execution System. The VES "is responsible for loading and running programs written for the CLI. It provides the services needed to execute managed code and data, using the metadata to connect separately generated modules together at runtime (late binding)." In short, whether or not a program calls into the CLR, it is running inside of the CLR, and without the CLR it cannot execute any code. Since code execution in the CLR and JVM is a dynamic process that typically goes on continually throughout the lifetime of a program, it is obvious that the CLR and your program are bound together in ways that transcend the simple act of calling into the CLR. The most important and intimate relationship between your code and the CLR is not when you call into the CLR, but the fact that your code is hosted, managed, and executed by the CLR.

To me, this symbiotic relationship between .NET code and the Virtual Execution System fits the prototypical definition of a virtual machine. In other words, the CLR is a software machine, a virtual machine, designed to host .NET programs. However, to really understand why the CLR is a virtual machine, you need to go one step further and examine what Simon Robinson, in his book "Expert .NET 1.1 Programming" calls an abstract stack machine.

The IL Virtual Machine

Both the JVM and the CLR are abstract stack machines. What does this mean?

If you are writing 80×86 assembly code, then you are writing directly to the specification of a particular processor designed by Intel. This CPU has registers, and a stack, and supports the concept of pointers to memory.

Neither the JVM nor the CLR directly models its software on an Intel CPU. Neither of them even has an abstraction for the concept of a hardware register. Neither of them can access the hardware stack, which is managed by the CPU. In fact, neither the JVM nor the CLR gives you direct access to the system heap. Instead, you write to a virtual machine which has little in common with the underlying CPU. Simon Robinson points out that this virtual machine has seven parts:

  • An area for Static Fields

  • A Managed Heap

  • A Local Memory pool containing:

    • An Evaluation Stack

    • A Dynamic Memory Pool

    • A Local Variable Table

    • A Method Argument Table

The heart of this system is the evaluation stack.

The beautiful thing about an abstract stack machine is that it provides a very simple way to check the safety of any particular method call. Both the CLR and the JVM manage a virtual stack that is typically used to hold the parameters passed to methods, and to hold the results passed back from methods. Type safety in .NET and the JVM consists primarily of making sure that the code defined by the programmer is going to fit safely on that virtual stack. If it does fit safely on that stack, then the code compiles, and is blessed as safe, "managed" code. If it does not fit safely on that stack, then compilation fails. In other words, the virtual stack is defined in such a way that code cannot be placed on it unless it is type checked and considered safe to execute.
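To see the evaluation stack at work, consider a trivial Java method. The comments sketch the stack-oriented bytecode that javac typically emits for it; the CIL a C# compiler would emit for the equivalent method is nearly identical in spirit (ldarg, add, ret):

```java
public class StackDemo {
    // javac compiles this method to roughly:
    //   iload_0   // push the first int argument onto the evaluation stack
    //   iload_1   // push the second int argument
    //   iadd      // pop both values, push their sum
    //   ireturn   // pop the sum and return it
    // The verifier accepts the method because every instruction
    // leaves the stack in a well-typed, predictable state.
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3)); // prints 5
    }
}
```

Notice that nothing here names a register or a hardware stack slot; the instructions manipulate only the abstract evaluation stack, which is exactly what makes mechanical safety checking possible.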

This system provides two great virtues:

  1. It is not a real machine, but a virtual machine, and hence provides an abstraction away from any particular hardware or particular operating system. CLR and JVM code don’t need a particular piece of hardware, they just need an implementation of their virtual machine, of their abstract stack machine. They don’t need an Intel processor, and they don’t need Windows. They need their virtual machine. In one case the virtual machine is called the JVM, in a second case it is called the CLR. But they are both virtual machines. Because of these IL virtual machines, it is possible to compile IL code from both the JVM and the CLR to run on either Windows or Linux. You can’t run JVM code on the CLR, and you can’t run CLR code on the JVM. However, you can compile Java code to IL and execute it unchanged on any proper JVM, whether it is hosted on Linux, Windows, or a cell phone. Likewise, you can compile .NET code with Visual Studio to IL and run it unchanged on Windows, on a cell phone, or on Linux, using Mono.

  2. The second great thing about abstract stack machines is that they provide a simple, neat way to check the type safety of your code. All that matters is whether or not your code fits on the virtual stack created by either the CLR or the JVM. If it does, then the code will run on the CLR or JVM virtual machines.

Performance Issues

Now that we have established that the CLR is a virtual machine, the next step is to understand why performance on virtual machines can be so efficient. In the discussion that follows, I will try to highlight one or two prototypical examples of how virtual machine optimizations take place. This is not meant to be a complete list, but only to give examples that I feel are representative of the type of optimizations performed by both the CLR and the JVM.

The key reason why a virtual machine can sometimes be faster than normal code is that the entirety of a program need not be compiled in order to run inside the JVM or CLR. Instead, only code that is actually being executed will be compiled. This makes it possible to perform optimizations on small chunks of code that could not be performed on an entire program.

NOTE: In the discussion that follows, I talk about inlining a method. Normally, if you want to call a method, you must jump from one place in memory to the place where the called method resides. This jump takes time and resources. A compiler can, however, inline a method by moving it wholesale into the current memory location. It is therefore not necessary to jump in memory from one place to another. This can be a big performance boost in some cases.

Both the JVM and the CLR can inline methods on the fly, when necessary, without any request from the programmer. Normally, in a standard compiler, it is not possible to inline a virtual method. This can be a big problem for languages like Java and C#, where most methods are virtual by default. However, the JVM is smart enough to look at virtual methods, and decide if, in this particular case, on this particular machine, with this amount of code in play, it is possible to inline a particular virtual method. In particular, it will decide if it can resolve the address of the method at compile time, rather than at run time. If it can do so, then it will. If, later on, it decides that it is no longer possible to inline a virtual method, then it will stop doing so.
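As a sketch of the kind of call HotSpot typically devirtualizes, consider a tiny virtual accessor invoked in a hot loop. Whether inlining actually happens is a runtime decision the JIT makes from profiling data (it can be observed with diagnostic flags such as -XX:+PrintInlining), so the code below only illustrates the shape of a good candidate, not the optimization itself:

```java
public class InlineCandidate {
    static class Point {
        private final int x;
        Point(int x) { this.x = x; }
        // Virtual by default in Java, but small and, in this program,
        // called through only one receiver type, so the JIT will
        // usually resolve and inline it once the loop gets hot.
        int getX() { return x; }
    }

    public static void main(String[] args) {
        Point p = new Point(7);
        long sum = 0;
        // A hot loop: after enough iterations the enclosing method is
        // JIT-compiled, and the getX() call is typically inlined away.
        for (int i = 0; i < 1_000_000; i++) {
            sum += p.getX();
        }
        System.out.println(sum); // prints 7000000
    }
}
```

If a second subclass of Point were later loaded and used at this call site, the JVM could deoptimize and recompile the loop, which is the "stop doing so" case described above.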

Microsoft has not had time yet to advance to this level of optimizing virtual methods, but it does inline many methods where possible, so long as the entire method is smaller than 32 bytes. This means that the CLR, like the JVM, can optimize code that appears in loops by adding inlining. Over time, Microsoft will surely add the ability to automatically inline virtual methods.

Another obvious advantage of not having to compile the whole program ahead of time is that the whole compiled program need not be loaded into memory at start up. In other words, if your code at first only calls a few small methods or classes, then only those chunks of code will be compiled, and hence only that small amount of code will need to be loaded into memory. This leads to faster start up time, and helps explain why NGEN and related Java technologies are not necessarily faster than JITed code. If you compile the whole program ahead of time with NGEN, then the whole program must be loaded entirely into memory at startup. This can, in some cases, slow program execution. Also, decisions about how to optimize code based on which code is in memory at a particular time cannot be used in pre-compiled code, and hence programs compiled with a tool like NGEN can be slower than JITed code.

I should add that the Microsoft abstract stack machine is very clever about using registers. I mentioned earlier that the CLR and JVM abstract stack machines are not based on the real underlying machine, and that they do not model CPU registers or the hardware stack. However, the CLR is adept at placing some parts of the virtual stack in registers when possible. This can’t be done with objects, but it can be done with simple values such as integers. This can improve performance in an application. As far as I know, the JVM does not have a similar optimization.

So far, we have talked about how JITed code can outperform code compiled in a normal manner. Yet anyone who has used the CLR or JVM knows that not all programs run in a virtual machine are faster than normal programs. In part, this is because the CLR or JVM needs to be loaded into memory, which takes time and valuable RAM. However, to understand this subject in more depth we need to go back and think some more about the abstract stack machine.

Earlier in this article there was discussion about how the JVM and CLR compile IL down to machine code. It is, however, important to understand that this machine code is still executed inside the JVM or CLR virtual machines. That is, they follow the abstract stack machine model. It may be true that a particular JIT is capable of compiling code that does not fit on the abstract stack, but that fact is irrelevant because the code would never have been compiled to IL unless it fit inside the CLR or JVM virtual machines. This is why it is not always interesting to discuss the details of how JIT’s produce machine code, or whether or not that code calls into the CLR. It does in fact call into the CLR, and the CLR calls into it. But that is not important, because no .NET code, and no JVM code can be executed except on the abstract stack that is part of both the CLR and the JVM.

I should also point out that neither the CLR nor the JVM depends on the presence of a finely tuned JIT compiler. In fact, there is nothing in either the CLR or JVM specification that says that such a highly optimized compiler need exist. JIT’s are really an added feature that both Microsoft and Sun have introduced to improve performance. In short, when you are trying to understand the CLR or the JVM, concentrating on the machine code produced by a JIT can be a distraction. Both the CLR and the JVM have JIT’s, but that is not what is important about the CLR or the JVM. Instead, you should concentrate on the fact that both of these virtual machines are built around an abstract stack. It is this stack that makes the safety and portability of the CLR and the JVM possible. A JIT is a highly valuable feature, and I think everyone should use one, but they are not part of the core of either the JVM or the CLR.


This article described how both the Java Hot Spot JIT technology and the Microsoft CLR compile bytecode to machine code. Both the Java Virtual Machine and the Microsoft CLR virtual machine compile code only the first time it is run during a particular session. If you close down a .NET application, then the code needs to be recompiled when you launch it again. If you have two instances of a .NET application running at one time, then the code for each needs to be compiled separately. Both environments compile code on demand, that is, they do not compile the whole program at once, but compile on the fly and as necessary. In short, both the CLR and the JVM have very similar compilation systems.

It turns out that the compiled machine code in a .NET application has a very intimate relationship with the CLR. In many cases, it literally calls into the CLR. In all cases, it cannot run or execute without the assistance of the CLR.

Finally, at the end of this article, it became clear that the CLR sets up an IL virtual machine that is built around a single abstract stack. This stack provides a high level of code safety, and also provides running code with a virtual machine inside of which it can execute. This virtual machine isolates the program from the particular features of the machine on which it runs.

In writing this article, I have explained why the CLR and the JVM are both virtual machines. In the process, I have come to have a deeper understanding of, and appreciation for, what virtual machines like the CLR or JVM do for developers. In my previous article, I asked if virtual machines were worth the price that we pay for them in terms of memory usage and machine resources. At this stage, I’d be prepared to answer in the affirmative to this question. Sun did a remarkable thing when they created the JVM, and Microsoft’s implementation of this same technology is equally impressive. I have been using the JVM heavily for years, and I have been using various implementations of the CLR for over a year now. I plan to continue using both technologies, and my appreciation for them only grows as I learn more about them.

Some Links



A virtual machine is a machine completely defined and implemented in software rather than hardware. It is often referred to as a "runtime environment"; code compiled for such a machine is typically called bytecode.



From the Wikipedia definition of a virtual machine:

More modern examples include the specification of the Java virtual machine and the Common Language Infrastructure virtual machine, which is at the heart of the Microsoft .NET initiative.

Reference the IL Virtual Machine: http://www.microsoft.com/australia/events/teched2003/tracks/tools.asp

Two Kinds of Software, Two Kinds of Freedom

In this article, I ask the perennial question: Is your definition of free choice when a company CEO gets to make a decision or when you get to make a decision?

About five or six years ago, I attended my first Linux World. I went to book signing by a guy I kept hearing about named Eric S. Raymond. At the time, Raymond had a new book out called "The Cathedral and the Bazaar."

In the midst of a profound crisis of doubt about the future of Kylix, I decided that I had to talk to this guy, and figure out if this Linux thing really made any sense. In my inimitable way, I posed to him what must surely have been one of the stupidest questions he ever received: "I’m thinking of buying your book, but frankly, I’m a developer. I don’t want a book by a manager. Have you ever written any code, or is your experience just on the management side?" I now know that this was a bit like walking up to Anders Hejlsberg and asking him if he knew how to write a for loop.

Raymond reeled back in his chair. His mouth hanging open, he stared at me blankly for a moment. Finally he turned to the guy sitting next to him and said: "What do you think, am I a developer?" And to their credit, these two guys patiently explained to me that Eric S. Raymond had written more software than most people are ever likely to have time to use.

So I bought Raymond’s book, and he politely signed it "For Charlie," rather than writing: "Dear idiot," or whatever he was truly thinking. And I took the book home and swallowed it whole in one or two evenings.

Open Source and Open Standards

Raymond and other folks in the open source movement had found extremely fertile ground for their ideas. Everything they said about open standards and about open source resonated with me. I wanted to be on Linux, but my compiler wasn’t truly cross platform, and wasn’t written to an open standard. I felt trapped. Descriptions of Open Source and Open Standards showed me exactly how I was pinned.

Before that moment, I, with good reason, judged development languages in terms of ease of use and robustness of architecture. But suddenly I had a new yardstick by which to judge a tool:

  • Did it encourage free choice? If I chose the tool, could I use it on any platform, or was I locked into a particular platform?
  • Did I have the ability to fix bugs or change features in the product? In other words, did it come with source, and could I compile it?
  • What dictated product cycles? Were releases all about making money, or was the product released when it was ready to ship?

Money and Open Source

Many people get the Open Source movement confused with the Free Software movement. In other words, they confuse RMS (Richard Stallman) and his Free Software movement with ESR (Eric Steven Raymond) and the Open Source movement.

To get a quick overview of their differences, see this short article: http://www.catb.org/~esr/open-source.html. In it, Raymond explains that the Open Source movement has always been about moving into corporations. This is one of the core purposes of the whole movement.

People are expected to, and encouraged to, make money with Open Source software. It is, again, one of the core purposes of the entire movement.

This article from Information Week shows that there is not just a little bit of money in the open source movement. By 2008, Linux is expected to bring in over 35 billion dollars in annual revenues.

The goal of the open source movement is not to keep people from making money, it is to make it easier for them to make money. Developer tools should be released as open source because it helps developers make better software, and hence to make money more easily. Whenever you use proprietary software, you are selling out your chances to make money, your chances to be successful, so that some proprietary company can lock you into solutions that make them money.

If the paying software market were truly a free market, and were not controlled by one or two big companies, then the open source model would be ubiquitous. It is so much better than the closed source model that it would easily win out. But in the meantime, the only way to compete against some companies is to release free software. If you go into the "free market" with them, they will either buy you out, or build a competing product and release it for free.

There is currently only a severely limited form of competition in the software market, hence the importance of free software. But the coincidence of free software and open source software is just a by product of the current software market, it is not the purpose of the Open Source movement. The purpose of the movement is to create better software, and hence to make it easier for people to make money.

Making a Choice

If you can make money using open source software, then why not use it? Well, some people would argue that proprietary software is better than open source software. But is this true?

Is IIS better than Apache? Few people would claim that it is superior. Is the Borland C++ compiler better than GCC? In some ways yes, but in other ways no. Is Visual Basic a better language than Python? Not likely.

And yet, it is not always easy to answer these questions. For instance, MS SQL has more features and better tools than MySQL. But MySQL is much faster than MS SQL. The last major release of MS SQL was some five years ago. MySQL has gone through many cycles during that period, and in the process they have responded to the needs of developers who wanted important fixes or important new features. Which tool is better? It’s not always so easy to decide.

Of course, the great open source projects, such as Perl, Python, Ant, GCC and Apache, are released at extremely high levels of quality. There are thousands of eyes looking at these products, and as a result, they have very few defects. But in some cases, proprietary software has the edge. For instance, there is no open source version of Pascal that is as good as Delphi 7.

Some products, such as Python, Perl, GCC, and Apache, give you complete freedom of choice. They ship with source, and you can change them, fix them, even contribute new code to the project. Other products, such as Delphi or Java, ship with significant amounts of source, but place limitations on what you can do with the software. For instance, Delphi is not cross platform, there is no open standard for the language, and there is no free compiler to guarantee the product’s future regardless of what happens to Borland. Still other products don’t ship with source, don’t let you fix bugs, and don’t let you customize the software in any way. They are the embodiment of non-free software. The companies that make that kind of software are primarily interested in controlling the customer. There are still issues of free choice involved; it’s just that you don’t get to make the choice, the company does.


With Linux on the road to earning $35 billion a year, money is clearly no longer the deciding issue. People have always made money off "free" software like Java and Apache. The debate over open standards and open source was never about money.

The debate is over freedom of choice. Are you locking yourself into a proprietary solution, or are you working with open standards and open source? Is your definition of free choice when a company CEO gets to make a decision or when you get to make a decision?

Ultimately, the choice is yours.

Fedora Core 3 as a Multimedia Laptop OS

I recently upgraded my laptop, a Dell Inspiron 8200, to Fedora Core 3. Fedora Core 3 is a widely used Linux distribution and the successor to Red Hat Linux. If you are unfamiliar with Fedora, you can learn more at the following URLs:

  • http://fedora.redhat.com/

  • http://www.fedorafaq.org/

  • http://fedora.linux.duke.edu/fedorapeople/

This article is a review of Fedora Core 3, and takes the form of a description of my experiences upgrading my laptop. The events described here actually took place over the course of several days. I will not always present them in exact chronological order; instead I will smooth out the narrative some for the sake of clarity, while still giving you a fair sense of how long each stage took.

The big drama in this event was my desire to actually get a number of devices up and running on a laptop so that I could write this review. You should understand that Linux was originally a server OS, not a desktop OS. Furthermore, laptops present more challenges than do desktops. As a result, I was setting the bar high by asking Linux to perform not as a server, but as a multimedia workstation, and to run not on a desktop machine, but on a laptop.

In the past, I will confess that I often relied on Windows to perform certain tasks. For instance, if I couldn’t burn a CD, rip CDs, or play music on Linux, well, there was always a Windows machine around somewhere for those tasks. I just used the Linux desktop for programming and creating documents, tasks for which it was well suited. But my goal here was to write a review of Fedora Core 3 as a multimedia workstation, which meant I had to actually get my multimedia tools up and running on a Linux laptop.

Further impetus was provided by the fact that I also have been using my Linux laptop at work, where I really only have good access to one machine at a time. That meant my laptop was playing a larger role in my life than it had before. In other words, there was no Windows machine to turn to at work if things weren’t working for me.

The one final note I should add is that I am the sole Linux user in a very hardcore Windows shop. I had absolutely no one to turn to for advice. I had to figure everything out by taking strange and unusual steps like reading manuals and browsing the web.

So all bets were off this time. I was going to dedicate some time to seeing if I could get my laptop into a truly functional state using Linux, and Linux alone. As you will see, I ended up having a fair amount of success, but the price I paid was high. In short, I could do what I wanted to do, but it wasn’t easy.

Additional Caveats

Linux has always been a better server and networking platform than Windows. It has always been faster and more stable than Windows. For several years now it has had OpenOffice, a tool that opens, edits and saves Microsoft Office files and that competes reasonably well with Microsoft Office in terms of functionality. In the browser world, both Mozilla and Firefox are well ahead of Microsoft’s Internet Explorer. Just these features alone represent reason enough to leave Microsoft Windows and switch to Linux.

But my goals were not to test the tried and true features, but to push the envelope with the multimedia features. As a result, I don’t spend much time in this article talking about reliable, well tested features such as networking, OpenOffice and Mozilla, all of which work beautifully without any tweaking on the user’s part. Instead, I’m going to focus on the parts of Linux that still need work.

I should also add that I was working with a machine that had limited disk space. As a result, I did not elect to install everything during the initial setup. This meant that there were some things I had to install manually that the Fedora Anaconda installer might have done for me automatically. Performing a similar manual installation on Windows, however, would not have been nearly as time consuming.

Some features that will interest many readers are not tested in this review. I think the most important omissions are wireless networking and scanning. Basic USB functionality is flawless with the new kernel, and I had no trouble using a USB mouse.

The Install

Because of disk space issues, I wanted to repartition my laptop. As a result, I began the process by creating a huge tarball of my entire user directory, wrapping up all of /home/charlie into one compressed file. I then used SSH to copy that file to another system. (An alternate technique would have been to use NFS to connect the two machines and then iteratively copy the home directory to the second machine, but it is generally simpler to create a big tarball.)
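
The backup step can be sketched as follows. The directory and host names here are stand-ins, not the exact commands I ran; a small demo directory plays the role of /home/charlie:

```shell
# Wrap a home directory into a single compressed tarball, then (commented
# out) push it to another machine over SSH.
mkdir -p /tmp/demo-home/charlie
echo "sample settings" > /tmp/demo-home/charlie/.bashrc
tar czf /tmp/charlie-home.tar.gz -C /tmp/demo-home charlie
tar tzf /tmp/charlie-home.tar.gz          # list the archive contents to verify
# scp /tmp/charlie-home.tar.gz backuphost:/backups/   # off-machine copy over SSH
```

Restoring is simply the reverse: copy the tarball back and unpack it with tar xzf from inside /home.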

Once I had backed up my home directory, I downloaded the latest Fedora Core 3 ISO files from http://www.linuxiso.org. There was a DVD image of Fedora Core 3 on the site, but I do not have a DVD burner, so I opted to download all four CD images. I started the download before I went to sleep and woke up in the morning to find all the images on my Windows machine. I then used tools on my Windows box to burn the four ISO images to CD, popped the first one in the CD-ROM drive of my laptop, and rebooted.

My system came back up into the Fedora Core 3 install program's welcome screen. More out of habit than any real need, I ran the install program in text mode. It has been years since I have had any trouble running a Linux install in graphics mode, but habit led me to type the word text at the prompt on the first screen and press Enter to begin the install proper in text mode.

I ran across my first problem when I ran the media check on the first CD. It came back reporting that there was something wrong with the CD. I tried again and got the same result. So I looked up the published MD5 checksum for the image and used md5summer.exe to confirm that I had downloaded a valid copy. The image was fine, so I assumed the problem lay in the burn. I burned the CD a second time, but the new disc failed the test in the same way. By that point I was out of patience, so I elected to go ahead with the install regardless of the report. Since the install then proceeded flawlessly, I suspect there may be a bug in the CD media check application.
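
On Linux, the same integrity check can be done with md5sum. Here is a sketch using a tiny stand-in file in place of the ISO; on a real download, the checksum file is published alongside the images rather than generated locally:

```shell
# Verify a downloaded image against its published MD5 checksum.
# A small stand-in file plays the role of the CD image in this sketch.
echo "pretend ISO contents" > /tmp/fc3-disc1.iso
md5sum /tmp/fc3-disc1.iso > /tmp/MD5SUMS   # normally downloaded, not generated
md5sum -c /tmp/MD5SUMS                     # reports "OK" when the image is intact
```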

Because I wanted to repartition, I completely reformatted my hard drive and installed an entirely new image. The process went smoothly, and in 30 or 40 minutes I was up and running with Fedora Core 3. I then copied back the tarball of my home directory and restored most of it. This preserved all my settings for things like email and the web browser. As a result, I was up and running in a fully functional state in very short order.

First Impressions and Updating

My first impression was good. When I first signed in, the Gnome desktop was active. I still prefer KDE, though the differences have become much less stark. So I ran SwitchDesk from the shell prompt to switch over to KDE. I booted up smoothly into KDE with everything looking fine.

The next step was to make sure everything was up to date. To update the system, I found it simplest to go to the command line and use yum:

yum update

Of course, before running an update you should download an up-to-date yum.conf file. I got mine by heading to http://www.fedorafaq.org/, where I found a link pointing to the latest yum.conf, which I copied over the existing one in my /etc directory. This process was eased considerably by the presence of Firefox as the default browser. Firefox has a fantastic facility for searching through the contents of a web page. The little search prompt that appears on a status bar at the bottom of the browser is a significant improvement over the search dialogs in Mozilla, or in the Evil Empire’s browser.
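
For reference, the entries in yum.conf are simple INI-style repository stanzas. A hedged sketch of what one looks like, with a placeholder mirror URL (use the real ones from the fedorafaq.org file):

```
[base]
name=Fedora Core 3 - i386 - Base
baseurl=http://mirror.example.com/fedora/core/3/i386/os/
```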

Taking a Look Around

With the system installed and updated, I was ready to take a look around. All my major apps were running perfectly. The browser and mail settings had been preserved. OpenOffice ran smoothly. Tools such as gcc and Python were installed and in perfect working order. In short, were I not trying to use the machine as a multimedia desktop, I would have been happily up and running in very short order. The install and the basic desktop functionality were in perfect working order. If you left the multimedia business out of it, Linux and Windows were on a par in terms of ease of use and functionality.

The next thing I noticed was that there was a nice little icon on the system tray showing the state of the battery for my laptop. For some reason, this gave me a sense of comfort, as if there were thoughtful programmers working somewhere who had actually considered the possibility that users might be running Linux on a laptop.

Beneath the battery icon was a tool called KwikDisk. From inside this tool, I could launch KDiskFree, which gave me a graphical report on free disk space and also let me mount floppies and CD-ROMs.

The whole process of mounting floppies and CD-ROMs on Fedora Core 3 confused me at first, since their mount points are no longer /mnt/floppy and /mnt/cdrom, but /media/floppy and /media/cdrecorder. This seems like a simple enough change on the surface, but it was disorienting to go to /mnt and find only an empty directory, with no familiar floppy and cdrom subdirectories. It seems like a small thing, but I was stumped at first, and resorted to silly workarounds like issuing mount commands by hand: mount /dev/hdb /mnt/cdrom. This worked well enough, but it was not a very satisfying experience. Once I found KwikDisk, the mystery was resolved, and I found it easy to use floppies and CD-ROMs and to work from /media rather than /mnt.

The next step was to pop in a CD and see if I could play some music. I put in the Garden State soundtrack, and a moment later I was listening to Coldplay sing Don’t Panic. Looking around a bit more, I saw a tool under the Sound & Video menu called Sound Juicer. A few moments later I was using Sound Juicer to rip the CD to the default OGG file format. There were also options to rip to MP3.

If you want to play MP3 files in xmms (Sound & Video | Audio Player), enter the following command as root:

yum install xmms-mp3

I also installed mplayer so that I could watch QuickTime movies and listen to music stored in Microsoft formats. Finally I added the Linux version of RealPlayer 10, which allowed me to view yet more movies and listen to yet more online music. You can also use the Package Management tool to install HelixPlayer, which is nearly identical to, and forms the basis of, RealPlayer 10. You can access the Package Management tool through the System Settings | Add/Remove Applications menu.

All of the steps described here took time to perform, but none of them were particularly troublesome. As a rule, I was able to perform each task for the first time in under 15 minutes. This is much longer than it would have been in Windows, but not particularly painful. Probably the most complicated single step was installing mplayer. After doing some research, I went to the command prompt, became root, and typed:

yum install mplayer-gui
yum install mplayerplug-in

After doing that, I went to the following site and watched a video:


I was also able to listen to music streamed for Windows Media Player. For instance, I could listen to the short previews Amazon provides for CD tracks. In general, I found that mplayer worked and provided a valuable service, but that it was a bit unstable when embedded in a browser. In particular, moving off a web page while mplayer was running tended to all but disable my GUI for several minutes. After a time, I was able to kill the browser, and everything returned to normal. I found it wisest to open the browser in its own virtual desktop when using mplayer; that way the browser did not get hidden behind other windows, and I could more easily kill it if it locked up. When run in standalone mode, mplayer was more stable, but still not as solid as other Linux tools such as Rhythmbox or xmms. Still, it provides a very valuable service in terms of giving you access to a wide range of multimedia content.

As a final step, I successfully and quickly installed Flash Player 7. I will not detail that process, as it has been working smoothly on Linux for years.

After testing multimedia features such as streaming, video and playing music, the next logical step would have been to see if I could burn a CD. However, I just wasn’t mentally ready for that yet, so instead I set about updating my graphics capability. This should have been a simple step, but there was considerable drama awaiting me in this area.

High Performance Graphics

The trouble I had with my graphics card was the most extreme that I encountered while working with Fedora Core. I had no problem with graphics at home after my initial install, nor do I have trouble with graphics on my desktop (non-laptop) FC2 and Mandrake machines. But I did have problems when I took my laptop to work. There I had trouble plugging into an emachines monitor (eview 171). There was no trouble if I used the LCD screen only, but switching over to the emachines monitor after working with my Sylvania F97 monitor at home sent my screen output into blurry or pattern-strewn fits.

As mentioned before, my laptop uses an Nvidia video card. As many readers know, Nvidia has not released the specs for its cards. As a result, the drivers for Nvidia cards are supplied separately, and in binary form only, by Nvidia. This breaks the whole open source philosophy of Linux, and means that the problems outlined here are due to the proprietary nature of Nvidia’s license, not to problems with Linux itself. Nevertheless, the problems do exist, and many users encounter them.

I struggled with my monitor problems off and on for about two weeks, and fairly quickly found a painful but effective workaround. A workaround, however, is not a real solution. The real fix was to install the Nvidia drivers, a step which had not worked for me on Fedora Core 2 when I downloaded the drivers directly from the Nvidia web site and installed them with Nvidia’s easy-to-use custom install utility. In that case the install went smoothly, but the problem was not fixed. This time I avoided the Nvidia web site and instead used yum to install the drivers, issuing the following command:

yum install kernel-module-nvidia-2.6.9-1.681_FC3 

Installing the drivers this way solved my problems. I was able to type startx and bring up X11 (the xorg version) with no troubles both at home and at work. If you are working with a different kernel, or if you have a different type of machine, you can type the following command to get a hint as to what you want to download:

yum info kernel-module-nvidia*
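
The reason for the wildcard query is that the nvidia module package must match the running kernel exactly. A sketch of how you can derive the package name from the running kernel rather than guessing:

```shell
# The nvidia kernel module package name embeds the kernel version,
# so build the name from the output of uname.
KERNEL=$(uname -r)                   # e.g. 2.6.9-1.681_FC3
PKG="kernel-module-nvidia-${KERNEL}"
echo "$PKG"                          # the name to hand to yum install (as root)
```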

At this point, I thought all my troubles were past. But when I booted back into KDE and tried to run TuxRacer from the menu, the application closed suddenly without explaining what was wrong. I could not see the error message until I ran tuxracer from the command prompt. The error explained that I needed to read the following file:

less /usr/share/doc/nvidia-glx-1.0.6629/README

Inside that file I learned that I must edit the console.perms file. In particular, console.perms contains the following line, which needs to be deleted:

<dri>=/dev/nvidia* /dev/3dfx*

Finally, I needed to issue the following commands:

chmod 0666 /dev/nvidia* 
chown root /dev/nvidia*

The end result was that I was able to switch back and forth between my two monitors, and I was able to run high performance 3D games such as TuxRacer.

Though I was now up and running, I still had trouble logging into my machine as myself rather than as root. The best solution I could find was to restore console.perms to its original state, but to run the following command as root before launching any high performance graphics applications such as tuxracer:

chmod 0666 /dev/nvidia*

This means that with my machine in its current state I can log in and use both monitors with no trouble, but I have to run the chmod command once if I want to play any games that use high performance graphics. Hopefully this issue will be resolved soon. If it is, I will post a solution.
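
Until the issue is resolved, one way to avoid typing the chmod by hand is a tiny wrapper script. This is purely my own hypothetical convenience, not anything shipped with the drivers, and the chmod only takes effect when the wrapper is run with root privileges:

```shell
# Generate a small wrapper that fixes the /dev/nvidia* permissions and
# then launches the game. Run the wrapper as root (or via sudo) so the
# chmod succeeds.
cat > /tmp/tuxracer-wrapper.sh <<'EOF'
#!/bin/sh
chmod 0666 /dev/nvidia* 2>/dev/null   # harmless no-op if already set
exec tuxracer "$@"                    # hand control over to the game
EOF
chmod +x /tmp/tuxracer-wrapper.sh
sh -n /tmp/tuxracer-wrapper.sh        # syntax-check the generated script
```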

Burning a CD

I had never burned a CD from a Linux box before, and I approached this step with considerable trepidation, which turned out to be fully warranted.

I began by making sure that K3b, the excellent GUI front end for cdrecord, was installed. If K3b is not available on your system, you can use the Add/Remove Applications tool from the System Settings menu to install it, or else type the following command as root:

yum install k3b-mp3

To make a very long story short, K3b is easy to use, but it did not work on my system because the underlying cdrecord application that ships with FC3 was broken on my machine. I fixed the problem by downloading the original version of cdrecord from the site of its author, Jörg Schilling. The version that ships with Fedora has the word Clone in its version string:

Cdrecord-Clone 2.01-dvd

Schilling’s version, however, does not:

Cdrecord 2.0 (i686-pc-linux-gnu) Copyright (C) 1995-2002 Jörg Schilling

Unfortunately, Schilling’s version did not work correctly with K3b. This meant I had to do my work from the command line. To proceed, I first created an ISO image of the files I wanted to burn to my CD. I did this with the mkisofs program:

mkisofs -r -o GardenState.iso Garden_State/*

This produced a file called GardenState.iso. This file contained an image of the directory containing all my OGG files from the Garden State soundtrack.

I then mounted the ISO file on a scratch directory (bar/, created beforehand with mkdir) to be sure that it actually did contain the OGG files from my Garden State CD:

mount -t iso9660 -o ro,loop=/dev/loop0 GardenState.iso bar/

I then ran the following command to burn my CD:

cdrecord -v speed=4 dev=/dev/hdb -data GardenState.iso

To my utter surprise, this worked flawlessly. I burned all the OGG files to my CD as data files. I then mounted the CD and played the songs successfully. You could have knocked me over with a feather.

Networking: Samba and NFS

I haven’t used Samba in years, and had trouble with it in FC2, so I was hesitant to test this functionality. Figuring that I had to make an effort for this review, I first went to the command line and typed the following, substituting the name of my Windows machine for hostname:

smbclient -L hostname

I was then prompted for a password, entered it, and immediately got a list of shares from my local Windows machine. Emboldened by this success, I brought up the well designed Konqueror browser and typed smb:// in the address field. I was immediately presented with a hierarchical view of the shares on my Windows machine. Browsing through the shared folders was fast and simple. For instance, there was no perceptible delay between the time I pressed the plus next to the word SharedDocs and the time I saw a list of folders and files in the SharedDocs directory.


I then copied a file from my Windows machine to my Linux laptop, and copied a file from my Linux laptop to a share on my Windows machine that allowed writing. I did all of this with a series of right clicks and copy and paste operations between two convenient tabs in Konqueror. In Figure 1 you can see the tabs which allowed me to move back and forth between a view of my home Linux directory (charlie) and the smb:// view of the Windows shares.

Next I wanted to test NFS, which provides one of several methods of sharing data between Linux machines. I opened my /etc/exports file and typed in a line like the following:

/home clientbox(ro)   # clientbox stands in for the client machine's name or IP address

The line shown here states that I want to share my /home directory in read-only mode with a single client machine, identified by name or IP address. Then I started my NFS service, which I normally keep shut down for security reasons:

/etc/init.d/nfs start

Finally, I went to the System Settings menu, opened the easy-to-use, GUI-based Security Level tool, and turned off my firewall.

I then went to my other Linux machine and typed the following, substituting my FC3 machine’s IP address for serverbox:

mount serverbox:/home /mnt/share

I was immediately connected to the share on my remote Linux machine. Acting as a user with the same UID as the owner of the shared directory on my FC3 machine, I was able to read the files on my FC3 machine and copy some of them to the local machine. Because I had exported the directory read-only, I was not able to copy files back.

Satisfied that all was working correctly, I unmounted the shared drive, turned the firewall back on, and stopped the NFS service.

I won’t detail the process here, but I also had success using the much more secure SSH protocol to move files back and forth between machines. In general, I much prefer SSH to NFS for security reasons.

Obviously I have included this section to remind everyone that the non-multimedia features in Linux usually work smoothly out of the box. Even many of the multimedia features run smoothly out of the box. The problem I was facing was getting them to run smoothly on a laptop.

Conclusion

In the end I have mixed feelings about Fedora Core 3. I’m very pleased that I can have easy access to a powerful office suite like OpenOffice, to all of Linux’s advanced networking capability, to its astounding stability, to its fine browsers, and to its mail servers and clients. The addition of yum to the Fedora Core distribution has also been a major breakthrough. Many troubling install problems are now resolved quickly and easily with yum.

I am also amazed that I am finally able to watch movies, listen to music, and burn CDs on my Linux box. This is a huge accomplishment.

Unfortunately, I believe that the key multimedia features in Fedora Core 3 still need work. A non-technical person would have little chance of getting them up and running, and even a talented user from the Windows world would find it a challenging and time consuming process. I’ve been working with Linux for years, and it took me well over eight hours to get the right movie clients, music clients, and video drivers installed. Better problem solving skills would have helped me, as would more experience with Linux, yet still this is not a good situation. I am well aware that my experiences should help make the process much easier the next time through, but the first time through was a bit of a challenge.

The bottom line is this: if you are a power user or a developer, you will be able to get everything you want from Fedora Core 3. If you want to watch movies, rip CDs, listen to music, browse the web, create documents, or develop software, you will find all the tools you need. There is no longer any need to feel that you have to stick to Windows just to get the full range of features from your computer. However, Fedora Core 3 is still not ready for the mass market. In that world, computers are a commodity, and they are expected to work smoothly out of the box. An experienced user can get Fedora Core 3 running smoothly, but it doesn’t happen automatically, straight out of the box.