The Drive to Write Free Software. Part 1

I had lunch with a colleague the other day. We talked about a free, open source project that we use at CodeFez. We both agreed that the project was well designed and well crafted. But after a bit, my friend turned to me and said, with obvious sincerity, “But I just don’t get it! Why do people build free software? What motivates them? It doesn’t make any sense!” I had no definitive, irrefutable answer to that question. But it did seem the sort of question that led to interesting speculation.

Economics: Rounding Up the Usual Suspects

There are certain obvious, yet superficial, answers to the question of why the open source movement exists, and why people build free software. For instance, it is difficult to compete on an economic basis with companies that have a monopoly or near monopoly position in a market. In the absence of legislation limiting the scope of these monopolies, the only alternative is to build free software. The free market system collapses in the face of a monopoly. Free software is one alternative that promotes competition and choice in a market dominated by massive forces with virtually unlimited power.

A less dramatic force driving the free software movement can be seen in corporations where software developers need tools. Developers in corporations work for departments, and each department has a budget. As a rule, these budgets are not designed to be flexible, but instead set up a static framework in which developers are expected to work. Hampered by these budgets, it is often difficult, though by no means impossible, for developers to buy the tools they need. As a result, software developers have formed small international coalitions to develop the software tools that they need. Go to SourceForge and you can see tens of thousands of these tiny international coalitions creating software tools under the aegis of the open source movement.

As powerful and important as they are, the economic and legal forces discussed in this section are not really the basis of the free software movement. They answer some questions, but they leave too many others unanswered. Why aren’t people simply content to use the software provided by a monopoly? Why should employees bother to band together to solve their employer’s problems? It is clear that to understand free software, one needs to dig a little deeper.

The train is starting to leave the station

One of the most important things to come out of Microsoft in the last couple of years tends to be played down in the press, and sniped at by technology pundits. From my viewpoint, the industry’s tendency to ignore this technology is simply bizarre. The thing I’m talking about is the Tablet PC. Before you pooh-pooh my assertion, stay a while.
As developers, we’re constantly on the look-out for changes in the environments with which we work. So we’re interested in news about the next versions of our IDEs, about the various flavors of server technology coming out of Redmond and elsewhere, about Borland’s SDO, about .NET, about web services, about anything that will make our jobs easier.
But what about things which make our users’ lives easier? This is where the Tablet PC fits in. At the moment, the second generation of these machines is starting to appear. They’re faster, they have higher resolution screens and larger hard disks, and don’t cost that much more than normal laptops. Windows XP SP2 has a whole slew of improvements for tablet users, including a kick-ass handwriting recognition subsystem. Users are starting to take notice and new applications are starting to appear that have a richer user experience if they happen to be run on a tablet.
And why are users taking note of tablets? Well, the form factor and the way you use it are the main reasons: it’s just plain easier to write on a screen (and have your writing converted into ordinary text) and to tap with a pen than to use a mouse and keyboard. They’re lighter than the average laptop. It’s more fun to write emails than to type them, and you can include quick line drawings as well. It’s simply a more natural input interface.
Do you go to meetings? Do all those people who bring in their laptops and then proceed to type away loudly annoy you? And what happens if someone draws a diagram on the white board? Unless you have a pad of paper as well as your laptop, you’re toast. But what about taking in a digital notepad that you write on? It’s much quieter and less obtrusive. People don’t notice it (apart from the initial Wow! factor, I suppose) as much as they do a normal laptop. And at the end of the meeting you already have the notes for the meeting in a form you can use (archive them, email them to Australia, print and distribute them, blog about them).
Using a tablet instead of a laptop in face-to-face interviews (say surveys, gathering information for insurance quotes or for banking-type products, one-on-one teaching, and so on) is less intimidating and more open. Heck, you can even read on-line documentation in the smallest room.
My point here is that I believe the Tablet PC software industry is about to take off, not only in the corporate world, but also for individual users. In the first two years, over a million tablets have been sold; a small number when compared to the overall laptop market, sure, but it’s still impressive. I can foresee a day when the tablet technology is included in laptops as a matter of course, or at least as a low-cost option.
But at present the number of applications that can take advantage of tablets is small. Microsoft obviously has several (OneNote suddenly makes more sense when you use it on a tablet), and the third-party market, though still small, is growing and wide open.
I don’t know about you, but this market excites me. My latest work machine is one of the new Toshiba Portege M205s (1.8GHz Pentium M CPU, wireless B & G, 1400x1050 screen resolution, 7200rpm 60GB disk, 4 hours of battery life, 4.5 lbs). I’m now using a tablet every day, as my one and only development machine. I’m finding out what the issues are with them, how to use digital ink, and what applications or components are needed.

Who’s Buying Borland?

If I had a dollar for every rumor that has been circulated about Borland getting bought out by one company or another, I could buy the company myself.

The latest rumor has Microsoft buying Borland. In the past I’ve heard Novell, BEA, IBM, Corel (oh, wait, that rumor was true!), Oracle, CA, SAP, HP, and McDonald’s mentioned as buyers. Okay, I made that last one up. But nevertheless, every one of those rumors has been just that – a rumor. As far as I know, there hasn’t been a serious attempt to buy Borland since the Corel fiasco. Borland’s stock price has gone up and down on these rumors over the years, but no one aside from Corel has ever made a serious bid.

I’m no Mergers & Acquisitions expert, but it seems to me that if someone were going to buy Borland, they would have done so already. Borland is only getting stronger. I’d guess that all that money in the bank makes them tough to buy if they don’t want to be bought. And because Borland has one foot planted firmly in both the Java and .Net spaces, only half the company looks attractive to any given buyer. MS wouldn’t have a clue what to do with JBuilder, and BEA would look at Delphi like we all would look at a man from Mars. Borland has a lot of valuable parts, but the sum of those parts doesn’t really appeal to any one entity. In the end, it seems unlikely that anyone could or would really buy Borland. But it sure makes for interesting speculation on the Yahoo BORL board.

But let’s imagine that someone did buy Borland. Such a company would face an interesting conundrum: what to do with the widely disparate development tool sets that Borland owns? Should a Java-ish company try to jump into the .Net world with Delphi? Should a .Net-minded company try to do the same in the Java world?

The only concern I personally would have would be for the future of Delphi. A company buying Borland may or may not see the value in Delphi; thus the specter of Borland being bought is a bit scary to us Delphi fans. Delphi going away would be a Very Bad Thing™ for the developer community on the .Net side of things. Delphi’s demise would leave .Net developers at the mercy of one company – the dreaded Microsoft. And of course, we can’t have that, now, can we?

Borland is a much stronger company than the average IT “expert” seems to realize, and they do have more bases covered in the software development market than any other company, even Microsoft. Sometimes we developers forget that Borland’s product line covers many areas beyond development tools: they have StarTeam, CaliberRM, Together, Visibroker, and OptimizeIt. Borland has been doing more than merely preaching the ALM message; they’ve been acting on it, putting themselves years ahead of the competition in many areas. And in doing so, they’ve made themselves large enough and diverse enough that they would be a hard pill to swallow.

In the end, I’m inclined to believe that rumors of Borland’s acquisition have been greatly exaggerated.

Community Beats Borland to the Punch with a C++ Open Letter

Slashdot recently posted an article highlighting the unhappiness and frustration of the Borland C++Builder community at the lack of attention Borland has paid to the product line. The community voiced its collective opinion in an open letter, which details some of the large organizations relying on BCB today and the impact of Borland’s inaction upon those organizations. One of the chief organizers of this effort, Paul Gustavson, also wrote about the predicament in a blog entry this week.

The BCB community’s complaints regarding the product line seem quite reasonable and valid, and they can be boiled down to the following:

  • Lack of product updates for C++Builder 6, leaving key issues unaddressed and users without the latest development features.

  • Minimal support for C++Builder features in the newer C++Builder X product line, including no support for VCL-based projects or C++Builder 6 project files.

  • Many failures in communication with Borland’s C++ user community, most notably a much-promised open letter to the community that was never delivered.

I have to agree that these guys have a legitimate beef. Borland’s C++Builder user community has been treated rather poorly. It’s one thing for a company to simply stop updating a product, but it’s quite another to release new versions of similar products that seemingly abandon existing users and then to compound the problem by remaining mum on what the plans are for those existing users. It’s clear that somebody wasn’t minding the C++ store at Borland.

At the same time, I have to wonder just how effective the community’s open letter will ultimately be, seeing as how it seems to be written more from their hearts than from their minds. Yes, large companies and government organizations depend on C++Builder, and yes, their efforts may be hamstrung by Borland’s inattention to this product line. However, what the letter fails to do is make a strong business case for continued investment in C++Builder technology. It’s not enough just to say that if Borland doesn’t take care of C++Builder users they might lose some customers. There needs to be a legitimate case for making money with C++Builder technology. The list of signatories for the open letter is impressive, but we all know that it doesn’t necessarily translate into sales.

Let’s face it: Borland isn’t going to invest much more than lip service in C++Builder as a community service. Their grandiose past notwithstanding, Borland is a relatively small company with comparatively modest resources. As such, their management is going to insist – rightfully – that business units invest in endeavors that pay real cash dividends. We can find wisdom in the Flying Lizards’ 1979 hit here. The community’s love may give Borland a thrill, but it don’t pay the bills. They want your money.

As an occasional user of C++Builder, and one of the developers of the tool during my own days at Borland, I sincerely would like to see this situation work out in such a way that the technology lives on. For this to happen, the C++ product team needs to be able to build a business case around it. If I may offer my advice to the C++Builder community, this business case would be a great place to focus their own evangelism efforts. For example, what evidence is there that producing a new C++Builder 7 would sell well enough to make it worth the effort? How could adding VCL support to C++Builder X result in more sales? Does open sourcing some of the technology make sense? Can a case be made for C++ support in Borland Developer Studio, supporting VCL and VCL.NET? Microsoft all but admitted they dropped the ball with managed C++ in the 1.x version of .NET, so there is certainly opportunity here.

Borland has committed to making a final call on the C++Builder product line by December 14, 2004. That’s a little more than a month away. No matter how the situation is resolved, at least we won’t have to wait forever this time around to find out.

Parochial vs Cosmopolitan Computing

There is an old saying that travel broadens the mind. I think that a wide experience of different technologies can have the same beneficial effect for computer users.

A person who has traveled can distinguish between human traits that are peculiar to a particular area, and those traits that are universal, that are part of human nature. Such knowledge gives them a broader, more sophisticated view of the world. Ultimately, it teaches them compassion, and acceptance. Such people gain a willingness to see the good in people with customs different from their own.

The same can be said of computer users who have experience with multiple operating systems and multiple tool sets. People who use only one operating system, and one set of tools, generally don’t have as deep an understanding of computing or computers as do people who have wide experience with several operating systems and several different tool sets. A specialist may have a deeper understanding of a particular field, but their overall understanding of computing in general may be limited. This limitation traps them in a series of narrow minded prejudices which are both rude and limiting. It is hard for them to make good choices, because they don’t understand the options open to them.

There has long been a general prejudice in favor of people with a cosmopolitan or broad outlook and against people who have a parochial or narrow outlook. The reason a term like hick or yokel is considered derogatory is that people from rural areas who have not seen much of the world tend to have restricted or narrow points of view. For instance, there is something innately comic about a rural farmer from 100 years ago who lived off collard greens, chitlins and pigs feet reacting with disgust to the thought of a Frenchman eating snails. The joke was threefold:

  • Chitlins and collard greens are themselves exotic foods. There is something innately comic about people with exotic tastes making fun of someone else for having exotic tastes.

  • Though southern cooking can be delicious, it was not uncommon to see chitlins and collards prepared poorly, while French escargot, as a rule, was a delicacy prepared with exquisite refinement by some of the best cooks in the world.

  • The final, and most telling part of the joke was that southern cooking in general probably owed as much to French cooking as to any other single source. By deriding the French, our hapless yokel was unintentionally deriding his own heritage.

Most programmers start out using a particular computer language, such as Java, VB, C++ or Pascal. At first, their inclination is to believe that their language is the only "real" language, and that all other computer languages are "dumb." Take, for instance, a deluded Visual Basic programmer who tries to use a PRINT statement in C++, finds that it won’t compile, and comes away thinking that C++ is a hopelessly crippled language. The truth of the matter, of course, is that C++ supports simple IO routines that do exactly what PRINT does; the syntax is simply different from VB’s.

This kind of narrow computer prejudice is similar to the viewpoint of our rural farmer from a hundred years ago who is suddenly transplanted to Paris. She goes home and tells everyone that there is nothing to eat in Paris. "They just don’t serve real food there. They think we are supposed to live off snails!" Or perhaps she concludes that Frenchmen are cruel because they laughed when she started ladling up the flowers from her finger bowl with a spoon. What she forgets, of course, is that everyone back home in Muskogee will laugh at a Frenchman who tries to eat corn on the cob with a knife and fork.

There is an interesting moment in the life of many developers when they start to understand parochial computing. As stated above, programmers tend to start out by getting to know one particular language in great depth. To them, their language is the computer language, and all other languages pale in comparison.

Then one day, disaster strikes. The boss comes in and tells them that they have to work on a project written in a second language, let’s say Java. At first, all one hears from our hapless programmer is that Java "sucks." They are full of complaints. "You can’t do anything in this language. It doesn’t have feature X, it uses curly braces instead of "real" delimiters, the people who wrote this language must have mush for brains!"

Then, over time, the complaints lessen. After all, you can type a curly brace faster than the delimiters in their favorite language. That doesn’t make Java better than the developer’s favorite language, but it "is kind of convenient, in a funny kind of way." And after a bit, they discover that Java doesn’t support a particular feature of their favorite language because Java has another way of doing the same thing. Or perhaps the feature is supported, but the developer at first didn’t know where to look to find it. Of course, they are still heard to say that Java isn’t nearly as good as their favorite language, but the complaints lack the urgency of their initial bleatings.

Finally, after six months of struggling on the Java project, the big day comes: the developer has completed his module and can go back to work on a project using his favorite computer language. But a funny thing happens. At first, all goes swimmingly. How lovely it is to be back using his favorite editor and favorite language! But after an hour or so, curses start to be heard coming from his cube. "What’s the matter?" his friends ask. The programmer inaudibly mumbles some complaint. What he does not want to give voice to is the fact that he is missing some of the features in the Java language. And that Java editor, now that he comes to think of it, actually had a bunch of nice features that his editor doesn’t support! Of course, he is not willing to say any of this out loud, but a dim light has nonetheless been lit in the recesses of his brain.

Perhaps, if he is particularly judicious and fair minded, our newly enlightened programmer might suddenly see that though his language enjoyed some advantages over Java, Java was in some ways better than his own language! It is precisely at that moment that he begins to move out of the parochial world of prejudice and into the broader world of cosmopolitan computing.

The OS Bigot

The type of narrow viewpoint discussed here has no more common manifestation than in the world of operating systems. We have all heard from Microsoft fanatics who, when asked to defend their OS, say: "There are more Microsoft users than users of all other operating systems combined." Yes, that is true, but it is also true that there are more people in India than in the United States. But believe me, there are few Americans who want to go live amidst the poverty, technical backwardness, and narrow provincialism of even a "thriving" Indian city such as New Delhi.

Microsoft users might also complain that it is hard to install competing OS’s, such as Linux. When asked to defend their point of view, they will eventually confess that their opinion is based on experiences they had some five years earlier, when it was in fact true that most Linux installations were difficult. Today, Linux usually installs more quickly, and with much less fuss, than Windows.

Of course, people on the other side are no less narrow minded. A Linux install may be simpler and faster than a Windows install, but Linux typically does not have as good driver support, particularly for new devices. Thus it is not unusual for a Linux user to have no trouble with his video and sound cards, but to have to struggle to get his CD burner or scanner working.

It is true that the Windows GUI environment is still better than the one found in Linux. But the advantage seems to shrink not just with each passing year, but with each passing month. For the last year, and for most of the last two years, the KDE Linux environment has been at least as good as the GUI environment found in Windows 98, and in some areas it is superior to that in Windows XP.

Conversely, just as Windows has a slight advantage in the GUI world, Linux has long enjoyed a significant advantage when working at the command prompt. A typical Windows user will say, "So what? Who wants to work at the command prompt?" That’s because they are used to using the Windows command prompt, which has historically been very bad. But watching a skilled user work at the command prompt in Linux can be a revelation. There are things you can do easily with the BASH shell that are hard, or even impossible, to do with the Windows GUI. But in recent years, even this truism has been shown to have its weaknesses. The command prompt in Windows XP is much improved over that found in Windows 98 or Windows 2000, and the porting of scripting languages such as Python and Perl to Windows has done much to enhance life at the Windows command prompt.

Freedom

Linux users often argue that their software is free in two senses of the word:

  • It has zero cost

  • And it comes with source and can be freely modified

All that is true, but Windows has a wider range of available applications, and who would deny that there is a very real sense of freedom in being able to use a beautifully designed piece of software?

And yet, if you are a student, or an older person on a limited income, you might not be able to afford all that fancy software. In such cases, you might be better off using Linux, where you can easily find free versions of the tools you need.

Again, one might read the above and come to the narrow conclusion that proprietary software is always better than open source software. But this is not always true. For instance, Mozilla is clearly a much better browser than Internet Explorer. It more closely conforms to the HTML standard, it handles popups better, it has a better system for handling favorites, and it has a feature, tabbed windows, that gives it a massive usability advantage over IE.

On the other hand, there is simply nothing in the open source world to compare to a tool like DreamWeaver. There are probably a hundred different open source web editors, but only the HTML editor in OpenOffice provides even the rudimentary features found in DreamWeaver.

The Historical Perspective

The ultimate irony, of course, comes when a person with a limited perspective imitates another culture, and goes about crowing about this borrowed sophistication as if he invented it himself.

I used to do this myself, back when I promoted Delphi for a living. Unknowingly, I often championed features in Delphi that were in fact borrowed from VB. I would say, Delphi is better than VB because it has feature X. I didn’t know that VB not only had the same feature, but that the creators of Delphi had in fact borrowed the feature from VB.

I have seen the same thing happen when advocates of C# crow about how much better it is than Java, and then use one of the many features that C# borrowed from Java as proof of the fact. The same often happens when a user of a DotNet based application approaches a Linux user and shows off the great features in their product. The fact that not only the feature, but the entire product and its architecture was stolen directly from an open source application written in PHP is of course lost on the advocate of DotNet’s prowess.

In fact, it is generally true that Microsoft is a company that uses derived technologies. DotNet is just an attempt to emulate the features found in Java and PHP. C# is for the most part simply an imitation of Java with a few features from Delphi thrown in for good luck. IE is an imitation of the features found in the old Netscape browser. The Windows GUI is an imitation of the Mac GUI.

One of the signs of a cosmopolitan person is that they have an historical perspective, and can know something about where cultural habits originated, or from which sources they were derived. A provincial person thinks not only that his culture is best, but that his country invented the very idea of culture.

Of course, one should rise above even this insight. It is true that Microsoft is a company based on borrowed ideas. But Microsoft does a good job of borrowing technology. The old joke states that Microsoft begins by deriding new inventions, then imitates them, and ends up claiming they invented them. But what people forget is that Microsoft often does "reinvent" technologies in a meaningful way by implementing them very well, and by adding special touches that improve upon the original product.

So the correct perspective is to recognize that derivation lies at the heart of Microsoft technology, but to also recognize their technical expertise. Gaining that kind of nuanced world view is part of what it means to be a sophisticated computer user. Knowing such things can help you make informed decisions, rather than decisions based on prejudice.

Summary

Ultimately the kind of narrow prejudice found among advocates of single platforms or single technologies offers a frighteningly restricted world view. Such people are indeed a bit like a hick or yokel from 100 years ago who arrives in the big city and feels overwhelmed by a kind of sophistication that they had never imagined and cannot comprehend. They dislike the big city not only because it is different, but because it threatens them. They are suddenly a small fish in a big pond, and from the heart of their insecurity, they begin to mock the city sophisticates who swim in the urban sea.

This is not to say that our yokel might not have cultural advantages over a "snob" from the big city. For instance, it is well known that rural farmers in America 100 years ago were renowned for their friendliness. It is true that such people often worked together to help a neighbor through a tough time, and they often worked together and shared resources in ways that their friends from the big city could not even imagine, let alone imitate. And of course they would have a specialized knowledge of how to survive in their rural world that the Parisian could not match.

The key difference, of course, is that a truly cosmopolitan person could have the perspective to appreciate all this, while a person from a rural area would be more inclined to adopt a narrow, provincial point of view. The cosmopolitan person could admire both Parisian society, and rural America.

This is the perspective that Alexis de Tocqueville brought to his book Democracy in America. Alexis de Tocqueville understood both European culture, and American culture, and that gave him the insight needed to write so trenchantly about American society.

The mark of the cosmopolitan is that she will:

    • Be gracious enough to help, without condescension, foreigners who are unfamiliar with the customs of her land.

    • Have enough perspective to laugh goodnaturedly at herself when caught out not knowing the customs of a foreign land.

    • Have the perspective to see what is truly best in any one culture because her perspective is broad and informed.

A cosmopolitan person has these traits instinctively, and without self consciousness. She knows that each land has its own customs, and that deep down where it counts, people are the same when it comes to matters of the heart and soul. They may have different habits, but it is narrow minded, provincial, even parochial, to regard people with a different perspective as innately inferior to oneself.

Software developers who have broken out of the narrow prejudices formed when using their first language and first OS have the same advantages. They know what is best in multiple worlds, and therefore have the wisdom to search for those features on whatever platform they use. They don’t waste time embarrassing themselves by making snide, narrow minded comments that polite people can’t even correct without sounding condescending or unintentionally hurting someone’s feelings. They have gained a sophistication, and a broader perspective, that makes them better at everything they do, regardless of their toolset.

Windows Search that doesn’t Suck

If you were recently in a temporary coma you may have missed the news about the release of Google Desktop Search, which leverages Google’s search technology on individual PCs by enabling quick and easy access to information buried in Outlook/OE email messages, MS Office documents, web history, IMs, and text files. After trying Lookout a few months back, I became totally addicted to actually being able to find email messages while I was still interested in the information they contained. I was eager to try out Desktop Search to see if it could do for other documents what Lookout did for email.

After the quick install, the product spent the better part of two days indexing the 55.8 gigs of occupied space on my laptop’s hard disk. However, unlike the porcine Index Server that comes with Windows, Google Desktop Search doesn’t peg my CPU trying to do its indexing work while I am in the middle of trying to do my work. Instead, Desktop Search waits until I am not using the PC, so, while the process took quite a while, the impact of the indexing process on my life was nil. Once complete, the utility had indexed a total of 60,578 unique items.

The application sits in the taskbar as a tray icon, its local menu containing options to search, set preferences, and so forth. Interestingly, but not surprisingly, the user interacts with the application using locally-served web pages with a look and feel similar to that of Google’s web site. So, for example, selecting the “Search” item from the tray icon’s local menu brings up a local web page that looks a lot like www.google.com.

So, how good is it? Well, searching for the string “codefez” brought me to a results page containing 35 emails, 5 office documents, and 93 pages from web history in less than a second. A more complex search string, such as “+falafel -lino” gave me 533 emails, 11534 files, and 3535 pages from web history in about a second. How good? Damn good.

Of course, performance like this doesn’t come for free. The index files necessary to accommodate those 60,578 unique items occupy a total of 485 megs of disk space on my laptop. For me, this is a small price to pay for actually being able to find things on my computer based on their contents. Imagine!

On a related note, Microsoft has announced their intention to ship a beta version of a similar tool before the end of 2004. It will be interesting to see what they can produce, but whatever it looks and smells like one thing is certain: large, talented companies competing to build great free software can mean only goodness for consumers. Meanwhile, I’m sticking with Google Desktop Search.

Introduction to Yum

Learn how to use yum, a tool for automatically maintaining your system. You can use yum to make sure your entire system is up to date, or to automatically add or remove applications and services.

Introduction

Installing and updating software can be one of the more unpleasant computer maintenance tasks. The process of inserting CDs, browsing for a particular app, answering install questions, looking for the right version, etc, can be boring and time consuming.

What one wants, ideally, is to be able to say to the computer, “install the latest version of OpenOffice,” and then the computer would go out and do just that. Or, one might like to ask the computer to automatically “make sure you are up to date.” Linux doesn’t offer any features quite that fancy, but modern distributions come with tools like yum, urpmi, YaST and apt-rpm (or Debian apt) which come close to providing these advanced features. The GUI based up2date tool that ships with Fedora and RedHat is also useful. However, I have found that up2date on Fedora Core 2 is not entirely bug free, which is what led me to yum. After trying yum a few times, I found that it is much more powerful and useful than up2date.

These various tools are often associated with particular distributions. For instance, apt is native to Debian, urpmi to Mandrake, and YaST to SUSE. Yum is usually associated with RedHat and Fedora, though, like apt, it can be used on multiple distributions.

Yum is the Yellowdog Updater, Modified. It is very easy to use. For instance, if you have yum installed properly, then you can issue a command like this to install OpenOffice:

yum install openoffice

OpenOffice, along with any packages on which it depends, will be automatically installed. In other words, all the packages necessary to install the most recent version of OpenOffice will automatically be downloaded from the Internet and installed.

To make sure your entire system is up to date, you can issue this command:

yum update

After issuing this command, any out of date files will be updated, and any missing dependencies will be installed. If a new version of one piece of software requires that another piece of software be updated, that task will be accomplished for you automatically.

The rest of this article will describe how yum works, how to install it, how to configure it, and how to perform routine tasks with it. If you understand how yum works, then you should have little trouble understanding either apt or urpmi.

Installing Yum

Yum is part of the Fedora Core standard install. If yum is installed, then you can become root and type yum to test it:

[root@somecomputer etc]# yum
    Usage:  yum [options] 
         Options:
          -c [config file] - specify the config file to use
          -e [error level] - set the error logging level
          -d [debug level] - set the debugging level
          -y answer yes to all questions
          -t be tolerant about errors in package commands
          -R [time in minutes] - set the max amount of time to randomly run in.
          -C run from cache only - do not update the cache
          --installroot=[path] - set the install root (default '/')
          --version - output the version of yum
          --exclude=some_pkg_name - packagename to exclude - you can use
            this more than once
          --download-only - only download packages - do not run the transaction
          -h, --help this screen

If yum is not on your system, you can download it from the Duke web site. Here is a download directory where all the versions of yum are kept. Information on downloading Yum for RedHat 9 or 8 is available at the Fedora Wiki.

Yum usually comes in the form of a rpm file, which can be installed like this:

rpm -Uhv yum-2.0.7-1.noarch.rpm

RPM is the RedHat package manager, and it is used to install package files that have already been downloaded to your system. After you have installed yum, you can use yum to install or update all the other applications and services on your machine. In other words, you only have to use rpm manually to install yum itself; after that, yum drives rpm for you automatically. Yum is much more powerful and much easier to use than rpm on its own.

Configuring Yum

Yum needs to know what software should be installed on your system. For instance, if you are using Fedora Core 2, then it needs to know what packages make up a standard install of Fedora Core 2. The packages needed for a particular Linux distribution are stored in repositories on the Internet. To properly configure yum, you need to open a file called /etc/yum.conf, and make sure it contains the proper information. In other words, you use yum.conf to point yum at the repositories on the Internet that define the files needed for your distribution of Linux.

If you have installed Fedora Core from CD, then you probably have a valid yum.conf file on your system already. However, at the end of this article you will find a simple yum.conf file for RedHat 9, and a more complex yum.conf file for Fedora Core. These are complete files, and can be used to replace your existing yum.conf file; though of course I would recommend backing up any file you wish to replace.
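
By way of illustration only (and not a substitute for the complete sample files mentioned above), a minimal yum.conf for a Fedora Core 2 system might look roughly like the sketch below. The mirror.example.com URLs are placeholders, not real repositories; you would substitute the baseurl of an actual mirror for your distribution:

[main]
cachedir=/var/cache/yum
debuglevel=2
logfile=/var/log/yum.log

[base]
name=Fedora Core 2 - i386 - Base
baseurl=http://mirror.example.com/fedora/core/2/i386/os/

[updates]
name=Fedora Core 2 - i386 - Released Updates
baseurl=http://mirror.example.com/fedora/core/2/i386/updates/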

Additional information on config files can be found in various places across the web, including at the following URLs:

  • http://www.fedoraforum.org/forum/archive/index.php/t-2067.html
  • http://www.xades.com/proj/fedora_repos.html
  • http://dries.studentenweb.org/apt/

Yum Packages

I’ve talked several times in this article about yum packages. A package in yum is an rpm file. Each rpm file has a header that defines the contents of the file and any dependencies it might have. In particular, it defines the versions of the programs upon which the code in the rpm file depends. Using this header, it is possible to calculate exactly which packages (rpm files) need to be downloaded in order to successfully install a particular product.

When you first start yum by becoming root and typing yum list, it usually spends a long time (15 to 60 minutes) downloading not entire rpm files, but instead the headers for all the rpm files that define your distribution. After it has downloaded all these headers, then you can issue a command like yum update, and yum will compare the current contents of your system to the records found in the rpm headers it has downloaded. If some of the headers reference files that are more recent than the files currently installed on your system, then yum will automatically download the needed complete rpm files and use them to update your system.

Besides the headers for your distribution, you can configure yum to reference other repositories that contain additional files that might interest you. For instance, you can ask yum to download all the headers for the files needed to install mono, or all the fedora extras, or all the files that are part of jpackage. Once the headers are in place, you can download all or part of the packages found in these repositories. You can also point yum at freshrpms, a location where yum is likely to find any number of packages that might interest a Linux user. The complex yum.conf file at the end of this article is set up to do most of these things automatically. In another technical article which will soon appear on this site, I will discuss configuring yum so that it will automatically install mono.

If you want to visit a yum repository to see its structure, you can do so. Here, for instance, is the yum repository for Mandrake:

http://mirrors.usc.edu/pub/yum-repository/mandrake/

The Yum Cache

Yum stores the headers and rpms that it has downloaded on your system. Here is the directory structure that yum uses for its cache on one of my old RedHat systems:

var/cache/yum
var/cache/yum/base
var/cache/yum/base/headers
var/cache/yum/base/packages
var/cache/yum/updates
var/cache/yum/updates/headers
var/cache/yum/updates/packages

As you can see, the cache is divided up into two sections, the base files and the updates. The headers for each section are stored in one directory, and any downloaded packages in another directory.

If you look at the simple yum.conf file at the end of this article, you will see that it defines where the cache will be stored, and that it has two sections called base and updates. The more complex yum.conf file points at the same cache, but it draws on more repositories. As a result, using it will likely leave you with more than just the two simple sections called base and updates. For instance, you might also have sections called jpackage or updates-released.

Running Yum Update

As always, there is no better way to learn how yum works than simply getting your hands dirty at the command line by using it. The closest I can come to that experience in an article of this type is to show you the command line output of the simple command yum update. At the time I ran this command, my system was already reasonably up to date, so only a few files are downloaded. The complete run is shown in Listing 1.

Listing 1: A simple run of yum update has three parts, first contacting the servers, then downloading the headers and parsing them, then downloading and installing the needed packages.

[root@somecomputer etc]# yum update
Gathering header information file(s) from server(s)
Server: Fedora Core 2 - i386 - Base
Server: Fedora.us Extras (Stable)
Server: Fedora.us Extras (Testing)
Server: Fedora.us Extras (Unstable)
Server: Livna.org - Fedora Compatible Packages (stable)
Server: Livna.org - Fedora Compatible Packages (testing)
Server: Livna.org - Fedora Compatible Packages (unstable)
Server: macromedia.mplug.org - Flash Plugin
Server: Fedora Core 2 - i386 - Released Updates
Finding updated packages
Downloading needed headers
cups-libs-1-1.1.20-11.6.i 100% |=========================| 6.8 kB   00:00
redhat-artwork-0-0.96-2.i 100% |=========================| 102 kB   00:00
libxml2-0-2.6.15-2.i386.h 100% |=========================| 3.0 kB   00:00
libxml2-python-0-2.6.15-2 100% |=========================| 4.3 kB   00:00
cups-1-1.1.20-11.6.i386.h 100% |=========================|  23 kB   00:00
perl-HTML-Template-0-2.7- 100% |=========================| 2.2 kB   00:00
jhead-0-2.2-0.fdr.1.2.i38 100% |=========================| 1.7 kB   00:00
cups-devel-1-1.1.20-11.6. 100% |=========================| 7.0 kB   00:00
libvisual-devel-0-0.1.6-0 100% |=========================| 2.4 kB   00:00
libvisual-0-0.1.6-0.fdr.2 100% |=========================| 1.9 kB   00:00
perl-Glib-0-1.061-0.fdr.2 100% |=========================| 4.4 kB   00:00
libxml2-devel-0-2.6.15-2. 100% |=========================|  14 kB   00:00
Resolving dependencies
Dependencies resolved
I will do the following:
[update: cups-libs 1:1.1.20-11.6.i386]
[update: redhat-artwork 0.96-2.i386]
[update: libxml2 2.6.15-2.i386]
[update: libxml2-python 2.6.15-2.i386]
[update: cups 1:1.1.20-11.6.i386]
Is this ok [y/N]: y
Downloading Packages
Getting cups-libs-1.1.20-11.6.i386.rpm
cups-libs-1.1.20-11.6.i38 100% |=========================| 101 kB   00:00
Getting redhat-artwork-0.96-2.i386.rpm
redhat-artwork-0.96-2.i38 100% |=========================| 4.4 MB   00:28
Getting libxml2-2.6.15-2.i386.rpm
libxml2-2.6.15-2.i386.rpm 100% |=========================| 625 kB   00:03
Getting libxml2-python-2.6.15-2.i386.rpm
libxml2-python-2.6.15-2.i 100% |=========================| 435 kB   00:02
Getting cups-1.1.20-11.6.i386.rpm
cups-1.1.20-11.6.i386.rpm 100% |=========================| 2.5 MB   00:16
Running test transaction:
Test transaction complete, Success!
libxml2 100 % done 1/10
cups-libs 100 % done 2/10
redhat-artwork 100 % done 3/10
libxml2-python 100 % done 4/10
cups 100 % done 5/10
Completing update for cups-libs  - 6/10
Completing update for redhat-artwork  - 7/10
Completing update for libxml2  - 8/10
Completing update for libxml2-python  - 9/10
Completing update for cups  - 10/10
Updated:  cups-libs 1:1.1.20-11.6.i386 redhat-artwork 0.96-2.i386
libxml2 2.6.15-2.i386 libxml2-python 2.6.15-2.i386 cups
1:1.1.20-11.6.i386
Transaction(s) Complete
[root@somecomputer etc]#

You can probably parse that output with no trouble on your own. However, I will take a few moments to break it apart, just so you can be absolutely clear about what happens when yum performs an operation of this type.

The first step is to contact the servers specified in your yum.conf file:

Gathering header information file(s) from server(s)
Server: Fedora Core 2 - i386 - Base
Server: Fedora.us Extras (Stable)
Server: Fedora.us Extras (Testing)
Server: Fedora.us Extras (Unstable)
Server: Livna.org - Fedora Compatible Packages (stable)
Server: Livna.org - Fedora Compatible Packages (testing)
Server: Livna.org - Fedora Compatible Packages (unstable)
Server: macromedia.mplug.org - Flash Plugin
Server: Fedora Core 2 - i386 - Released Updates

Yum then downloads the headers it found on the servers:

Finding updated packages
Downloading needed headers
cups-libs-1-1.1.20-11.6.i 100% |=========================| 6.8 kB   00:00
redhat-artwork-0-0.96-2.i 100% |=========================| 102 kB   00:00
libxml2-0-2.6.15-2.i386.h 100% |=========================| 3.0 kB   00:00
libxml2-python-0-2.6.15-2 100% |=========================| 4.3 kB   00:00
cups-1-1.1.20-11.6.i386.h 100% |=========================|  23 kB   00:00
perl-HTML-Template-0-2.7- 100% |=========================| 2.2 kB   00:00
jhead-0-2.2-0.fdr.1.2.i38 100% |=========================| 1.7 kB   00:00
cups-devel-1-1.1.20-11.6. 100% |=========================| 7.0 kB   00:00
libvisual-devel-0-0.1.6-0 100% |=========================| 2.4 kB   00:00
libvisual-0-0.1.6-0.fdr.2 100% |=========================| 1.9 kB   00:00
perl-Glib-0-1.061-0.fdr.2 100% |=========================| 4.4 kB   00:00
libxml2-devel-0-2.6.15-2. 100% |=========================|  14 kB   00:00

Next the dependencies are calculated and the user is asked whether she wants to download the needed packages:

Resolving dependencies
Dependencies resolved
I will do the following:
[update: cups-libs 1:1.1.20-11.6.i386]
[update: redhat-artwork 0.96-2.i386]
[update: libxml2 2.6.15-2.i386]
[update: libxml2-python 2.6.15-2.i386]
[update: cups 1:1.1.20-11.6.i386]
Is this ok [y/N]: y

If the user gives permission, then the needed packages are downloaded:

Downloading Packages
Getting cups-libs-1.1.20-11.6.i386.rpm
cups-libs-1.1.20-11.6.i38 100% |=========================| 101 kB   00:00
Getting redhat-artwork-0.96-2.i386.rpm
redhat-artwork-0.96-2.i38 100% |=========================| 4.4 MB   00:28
Getting libxml2-2.6.15-2.i386.rpm
libxml2-2.6.15-2.i386.rpm 100% |=========================| 625 kB   00:03
Getting libxml2-python-2.6.15-2.i386.rpm
libxml2-python-2.6.15-2.i 100% |=========================| 435 kB   00:02
Getting cups-1.1.20-11.6.i386.rpm
cups-1.1.20-11.6.i386.rpm 100% |=========================| 2.5 MB   00:16

Finally, some tests are run to make sure everything is as it should be.

Running test transaction:
Test transaction complete, Success!

If the calculations check out, then the packages are installed and the user is notified that the transaction is complete:

libxml2 100 % done 1/10
cups-libs 100 % done 2/10
redhat-artwork 100 % done 3/10
libxml2-python 100 % done 4/10
cups 100 % done 5/10
Completing update for cups-libs  - 6/10
Completing update for redhat-artwork  - 7/10
Completing update for libxml2  - 8/10
Completing update for libxml2-python  - 9/10
Completing update for cups  - 10/10
Updated:  cups-libs 1:1.1.20-11.6.i386 redhat-artwork 0.96-2.i386
libxml2 2.6.15-2.i386 libxml2-python 2.6.15-2.i386 cups
1:1.1.20-11.6.i386
Transaction(s) Complete

Basic Commands Used with Yum

  • yum clean: cleans up yum’s local cache of downloaded headers and packages.

  • yum provides PackageName: finds out which packages provide a particular file or feature.

  • yum install PackageName: installs a package or group of packages.

  • yum update PackageName: updates a package or group of packages.

  • yum update: updates everything currently installed on the system.

  • yum remove PackageName: removes a package.

  • yum check-update: checks whether any updates are available.

  • yum search: useful if you know something about a package, but not its name.

  • yum list: lists which packages are available; takes many options.

  • yum info: finds information on a package.

  • yum upgrade: like update, but helpful when moving between distribution versions; now deprecated.

It has been reported that in FC3 you need only type yum list recent to learn of packages added to the repository in the last seven days. In general, you can run yum with no parameters or with -h as a parameter in order to get a sense of what you can do with any particular version of the product. Typing man yum is also a good way to learn more about the various commands you can give when using yum.

The Varieties of Testing Experience

There are many different types of tests that are commonly discussed in the literature on this subject. In fact, there are at least six main types of tests, and the boundaries between them can be fuzzy at times. Nevertheless, it is possible to make at least a few generalizations which can help you navigate the lingo found in most texts and web sites. This article is designed to give you basic definitions for each of the following types of tests:

  • Unit Tests
  • Integration Tests
  • Functional Tests
  • Stress, Load and Performance Tests
  • Acceptance Tests
  • Regression Tests

Unit Tests

As you will see, unit tests can be used to perform any of the six types of test. However, there is a pure form of unit testing that exists on a separate plane from the five other types of tests. When defined in this strict manner, unit tests represent the lowest level of tests. This “pure” or “strict” form of unit testing is very fine grained, and works on a method by method basis. One class is tested at a time, with mocks standing in for other classes. Only public methods are covered. Other rules, many of which will be discussed in depth, are covered in various sections throughout the rest of this text. For now, the key point to grasp is that unit testing, defined in the strictest sense, represents the lowest, and most detailed, level of the six common types of tests. It is possible to use unit tests to perform other types of tests, such as integration or functional tests. However, the “pure” form of unit testing represents a unique kind of test distinct from any other form of testing.

NOTE: When I say that unit tests can be very fine grained, or very low level, I do not mean to imply that unit tests are difficult to write. They are, in fact, quite easy to write. Nevertheless, they work at a very detailed and precise level of the development process.
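
To make this concrete, here is a minimal sketch of a “strict” unit test. It is written in Python with the standard unittest module purely for illustration, and the Calculator class is invented for this example rather than taken from any real library:

import unittest

class Calculator:
    """A tiny class invented purely for this illustration."""
    def add(self, a, b):
        return a + b

class CalculatorTest(unittest.TestCase):
    # One class under test, one public method exercised per test,
    # no collaborators involved.
    def test_add_returns_the_sum(self):
        calc = Calculator()
        self.assertEqual(calc.add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()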

Integration Tests

While unit tests work on a method by method level, integration tests explore the ways that larger entities work together. The classic example would be testing whether two classes can interoperate. But really any two abstractions, such as a class, package, assembly, service, or subsystem, would do. The point is that two distinct entities are being tested to see if they play well together. A lot of people, including myself, use unit test technologies to run many integration tests. However, there are some kinds of integration tests that can best be run without using unit tests. A good tester is in part defined or recognized by their ability to know when to use unit testing, and when to turn to some other technology.

NOTE: When running unit tests, people frequently use mock objects to simulate what happens when one class needs to interact with another class. Two classes are definitely involved in a typical mock test scenario. However, I think it is simplest to think of mock tests as being part of unit testing, and not a true example of an integration test.
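
The distinction is easier to see in code. Below is a rough sketch, again in Python with the standard unittest and unittest.mock modules; the Database and Service classes are invented for the example. The first test replaces the collaborator with a mock and so remains a unit test, while the second exercises the two real classes together and is a true, if tiny, integration test:

import unittest
from unittest.mock import Mock

class Database:
    """Stands in for a class that would normally talk to a real data store."""
    def lookup(self, key):
        return {"answer": 42}[key]

class Service:
    """A class that collaborates with Database."""
    def __init__(self, db):
        self.db = db
    def answer(self):
        return self.db.lookup("answer")

class ServiceUnitTest(unittest.TestCase):
    # Unit test: the collaborator is replaced by a mock object.
    def test_answer_with_mock(self):
        db = Mock()
        db.lookup.return_value = 42
        self.assertEqual(Service(db).answer(), 42)

class ServiceIntegrationTest(unittest.TestCase):
    # Integration test: the two real classes are exercised together.
    def test_answer_with_real_database(self):
        self.assertEqual(Service(Database()).answer(), 42)

if __name__ == "__main__":
    unittest.main()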

Functional Tests

Functional tests demonstrate whether a subsystem or class meets a requirement. For instance, a functional test on a compiler might check to see whether the compiler can compile Hello World. A functional test on an FTP client might test whether the client could connect, download, or list a directory.

Functional tests are sometimes run on whole applications, while unit tests and integration tests are run on subsystems or methods. A lot of people, including myself, use unit testing technology to run at least some functional tests. Once you start testing a whole application, however, you are not really in the realm of unit testing any longer.

Both unit tests and integration tests are often run at a fairly fine grained level that is very close to the metal, and generally of interest primarily to programmers. Functional tests exist on a somewhat higher, and less technical, level. It is not just a programming team or QA that is interested in the results of these kinds of tests. Managers like functional tests. Managers are frequently working their way through a checklist of requirements, and they can use functional tests to see if there are items on their list they can check off.
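
As a sketch of what the FTP client example above might look like in practice, here is a small functional test written in Python with the standard ftplib module. The host name is a hypothetical test server, not a real address; the test simply checks the requirement “the client can connect, log in anonymously, and list a directory”:

import unittest
from ftplib import FTP

TEST_HOST = "ftp.example.com"  # hypothetical test server, not a real address

class FtpFunctionalTest(unittest.TestCase):
    # Functional test: does the FTP client meet the stated requirement?
    def test_connect_and_list_directory(self):
        ftp = FTP(TEST_HOST)
        try:
            ftp.login()            # anonymous login
            listing = ftp.nlst()   # list the contents of the current directory
            self.assertIsInstance(listing, list)
        finally:
            ftp.quit()

if __name__ == "__main__":
    unittest.main()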

Stress, Load and Performance Tests

Stress, load and performance tests verify that an application can handle a given load, perform within a predefined period of time, or behave correctly under particular circumstances. Can a server handle 50 clients? How about 250? Or consider the compiler project mentioned in previous paragraphs. How long does it take to compile Hello World? How long does it take to compile an application with 500,000 lines of code? Can it be compiled in a reasonable period of time?

You generally can’t use unit tests to do this kind of thing, though there may be a few simple tests you could run inside a unit testing framework. A number of powerful commercial applications, including TestComplete (which is owned in part by my employer, Falafel), are designed to make this kind of testing easy. A whole class of tools, called profilers, is also designed to fill this role in the development cycle.
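
As an example of one of those few simple checks that can live inside a unit testing framework, here is a rough Python sketch. The compile_hello_world function is a stand-in invented for the illustration, and the one-second budget is an arbitrary threshold you would replace with a real requirement:

import time
import unittest

def compile_hello_world():
    """Stand-in for the operation being timed; invented for this illustration."""
    time.sleep(0.01)

class PerformanceTest(unittest.TestCase):
    # A crude performance test: the operation must finish within a time budget.
    def test_completes_within_one_second(self):
        start = time.time()
        compile_hello_world()
        elapsed = time.time() - start
        self.assertLess(elapsed, 1.0)

if __name__ == "__main__":
    unittest.main()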

Acceptance Tests

Acceptance tests represent the highest, most abstract level of testing. They reside at the opposite end of the spectrum from the “strict” unit test as defined earlier in this section. Unit tests are typically very detailed and very technical, while acceptance tests work with entire applications, and usually concern the major features found in the interface of an application.

Acceptance tests are often run by the customer, and are designed to show whether or not the application meets an agreed upon specification. Acceptance tests are one way of helping a customer decide whether or not they are satisfied with a delivered product. This kind of test often includes at least some functional and some performance tests, but only those that directly impact the end user of an application. Issues like appearance, ease of use, etc, are part of this level of testing. I think of acceptance tests as being run by the client, or by a very disengaged manager. Unit, integration, and functional tests help the developer get the application ready to pass an acceptance test.

Regression Tests

Regression tests are not really another type of test so much as another way of looking at the testing process. These tests are designed to detect whether new bugs are being introduced into existing code. If a programmer changes an existing code base, then comes back and runs tests to confirm that everything still works, then he or she is running a regression test. Unit, integration, functional, stress and acceptance tests can all be run as regression tests. All that matters is that you are looking back to confirm that nothing is broken after a change was made to the application. The change might be the addition of a new feature, the removal of an existing feature, or simply the refactoring of existing code without adding or deleting any features. Regardless of the type of change that was made, we run regression tests to confirm that the change did not break any existing code.

Unit tests of all kinds are frequently run as regression tests. As you will see, developers automate unit tests to run at least daily. By viewing the results of these automated tests, developers can confirm that nothing has been broken by any recent changes introduced into a code base. When used in this manner, unit tests can be viewed as a common, and very powerful, form of regression test.

Summary

All six types of tests covered in this section of the text are important. An experienced team of developers will use a wide variety of tests to ensure that their code is working properly. Different types of tests will appeal to different types of developers. It is not possible to assert that any one type of test is more important than another. Nevertheless, there is no question that many programmers find themselves particularly interested in unit testing since it can become so intimately bound up with the process of developing an application. Though unit testing is probably the most recently developed of all six types of tests, it appears to me that it is written about more often, and discussed more heatedly, than all the other types of tests combined. This does not mean that it is necessarily more important than the other types of tests, but only that it is more interesting, and of more direct importance to programmers from the perspective of their day to day development cycle.

What is Aspect Oriented Programming?

This article will give you a brief introduction to Aspect Oriented Programming using an upcoming library product that’s being developed at RemObjects. The project is currently codenamed “RemObjects Taco”. Taco is a library that will enable you to leverage concepts of Aspect Oriented Programming (AOP) in your .NET applications. Unlike other aspect oriented tools available, Taco is a language-independent library and will allow you to both use and implement aspects using the .NET language of your choice.

Aspect Oriented Programming is based on the concept of expanding and specializing class implementations not by extending their code or using traditional inheritance models, but by attaching separate pieces of code, called Aspects, to them.

Assume that you have a fairly extensive class with many methods, and you are now faced with the task of applying a certain layer of logic to that class. This layer could involve thread synchronization, security checks, or something as simple as logging all method calls to a log file for debugging or audit purposes.

If the class in question is fairly extensive and contains a large number of methods, adding code for that purpose to each and every method would be a huge amount of work. It would also mean adding a lot of duplicate code to your library, and adding it in places where it does not really belong. (After all, a method should focus on the task at hand and should not be weighed down with external “plumbing.”)

With Aspect Oriented Programming, you implement your logic in a separate Aspect class, independently of the class (or classes) you will later want to augment. Once the aspect is implemented, you can easily attach it to any given class in your class library, and your logic will be applied to all (or selected) calls made into the class. Neither the class nor the caller needs to worry about, or even be aware of, the aspect. For instance, you can use this technology to add a critical section, code for checking the user’s access rights, or code for writing data to a log file. All of this is implemented in a separate class called an Aspect and does not clutter up your primary code.

An Example

Let’s look at an example to illustrate this concept by implementing an aspect that performs thread synchronization. Taco already comes with a prebuilt Synchronize aspect that provides this functionality with a lot more flexibility than the example shown here; but for the purposes of this article, let’s assume that thread synchronization represents some custom logic you want to implement yourself.

Let’s assume that you have a (completely contrived) MyData class already implemented. While scaling up your application to be multi-threaded, you find that it would be helpful if the MyData class was thread-safe (which the current implementation isn’t). Here is your existing code for the MyData class:

type
  MyData = class(MyBaseClass)
  private
    fValue: integer;
  public
    method Calculate;
    property Value: integer read fValue write fValue;
  end;

implementation

method MyData.Calculate;
begin
  fValue := (fValue+3)*5;
end;

To make even this simplistic class thread-safe using conventional programming techniques would involve an amount of code that far exceeds the current class implementation. You would have to do all of the following (a rough sketch appears after the list):

  • add a private field to hold a CriticalSection or Mutex
  • add a constructor to initialize the critical section
  • add calls to CriticalSection.Enter/Exit with corresponding try/finally blocks to all methods
  • add getter/setter methods for the property so that you could acquire the critical section for property access.
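
The sketch below shows roughly what that hand-written version might look like. It is only an illustration, not Taco code: it uses a Mutex initialized inline rather than in a separate constructor, and wraps the method and the property accessors in WaitOne/ReleaseMutex calls guarded by try/finally blocks:

type
  MyData = class(MyBaseClass)
  private
    fLock: Mutex := new Mutex();   // synchronization object, one per instance
    fValue: integer;
    method GetValue: integer;
    method SetValue(aValue: integer);
  public
    method Calculate;
    property Value: integer read GetValue write SetValue;
  end;

implementation

method MyData.Calculate;
begin
  fLock.WaitOne();                 // acquire the lock before touching state
  try
    fValue := (fValue+3)*5;
  finally
    fLock.ReleaseMutex();          // always release, even if an exception occurs
  end;
end;

method MyData.GetValue: integer;
begin
  fLock.WaitOne();
  try
    result := fValue;
  finally
    fLock.ReleaseMutex();
  end;
end;

method MyData.SetValue(aValue: integer);
begin
  fLock.WaitOne();
  try
    fValue := aValue;
  finally
    fLock.ReleaseMutex();
  end;
end;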

In contrast, using Aspect Oriented Programming and assuming that you have implemented your Synchronize aspect, you would add exactly one line of code to your class definition, and it would be thread-safe automatically:

type
  [Synchronize]
  MyData = class(MyBaseClass)
  private
    fValue: integer;
  public
    method Calculate;
    property Value: integer read fValue write fValue;
  end;

implementation

method MyData.Calculate;
begin
  fValue := (fValue+3)*5;
end;

With the exception of the [Synchronize] attribute added to the class declaration, the code is completely identical to the original version. The individual methods and properties are unchanged and not cluttered with synchronization code.

Implementing the Aspect

Now that you’ve seen how to augment a class with an aspect, let’s take a look at what’s involved in writing a custom aspect (in this case, we’ll implement a simple version of the Synchronize aspect used above).

Taco provides a base class (RemObjects.Taco.Aspect) for you to descend from to implement your own aspects. All we need to do is create a descendant, instantiate a Mutex object, and implement the PreprocessMessage and PostprocessMessage methods to acquire and release the Mutex, respectively:

type
  SynchronizeAspect = assembly class(Aspect)
  private
    fLock: Mutex := new Mutex();
  protected
    method PreprocessMessage(aMessage: CallMessage); override;
    method PostprocessMessage(aMessage: ReturnMessage); override;
  end;

implementation

method SynchronizeAspect.PreprocessMessage(aMessage: CallMessage);
begin
  fLock.WaitOne();
end;

method SynchronizeAspect.PostprocessMessage(aMessage: ReturnMessage);
begin
  fLock.ReleaseMutex();
end;

The PreprocessMessage method of your aspect will be executed prior to any call into the classes augmented with your aspect, and the PostprocessMessage method will be executed after each call returns, whether it returns successfully or is aborted by an exception. Note that the aMessage parameter also gives you access to details about the call being made (such as which object is being called, which method, and what parameters are being passed or returned). While this simple aspect didn’t need that information, it is available for more complex aspects.
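
For instance, a logging aspect could use that information to record each incoming call. The fragment below is purely hypothetical: it assumes that CallMessage exposes the name of the called method through a property (called MethodName here), and the actual Taco API may use a different name or shape:

type
  LoggingAspect = assembly class(Aspect)
  protected
    method PreprocessMessage(aMessage: CallMessage); override;
  end;

implementation

method LoggingAspect.PreprocessMessage(aMessage: CallMessage);
begin
  // MethodName is an assumed property; check the actual CallMessage interface.
  Console.WriteLine('Calling ' + aMessage.MethodName);
end;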

How Does this Work?

Under the hood, Taco uses .NET’s messaging architecture, which is also used by .NET Remoting, to enable the injection of code. Essentially, every method or property call made to your augmented object is run through a number of message sinks before reaching the actual method. Taco aspects hook into this chain to execute the logic you provide in the PreprocessMessage and PostprocessMessage methods.

Converting the method call to a message that can be processed by your aspect, and back again, does of course introduce a small amount of overhead. This hit, however, is not serious: it is comparable to calling an object in a different AppDomain (which uses essentially the same technique) or a COM object hosted in the COM+ runtime. For typical “business logic” object hierarchies this overhead is negligible, but you probably would not want to use AOP inside the core rendering engine of your new first-person shooter game!

The above code snippets are written in Chrome, but the same principles apply to other .NET languages.

Please also note that Taco is currently in an early alpha state, so the exact class interfaces and syntax shown in the code snippets above might still change before public release. If you’re interested in joining the Taco beta program when it becomes available, please drop me an email.

Further Reading

Hopefully this article has given you a quick introduction to Aspect Oriented Programming and a sense of what Taco will provide for .NET developers seeking to use AOP.

The links below provide some more general information on AOP:

  • AOP: Aspect-Oriented Programming Enables Better Code Encapsulation and Reuse — MSDN Magazine, March 2002
  • Aspect-Oriented Software Development Community
  • LOOM.NET