The top 10 telecommuting traps

Most IS managers focus on the myriad technical details when developing a telecommuting program. However, personnel, psychological, and legal issues can overwhelm even the most technically perfect program. We discuss the top 10 reasons why telecommuting programs fail and how to prevent them. Issues are presented in reverse order of importance.

10 Insufficient Support Infrastructure: Because they often work extended hours or in a different time zone, teleworkers can stretch an enterprise’s support infrastructure. Teleworkers cannot easily give their machines to technical support when problems arise, nor can technical support use remote-control tools to troubleshoot remote computers if the employee’s problem is with remote access. Solution: Train telecommuters on remote workstation configuration and maintenance before they begin to work from their home offices. Train the support staff on the remote-access environment and consider expanding the hours for which technical support is available.

9 Insufficient Security Policies: Teleworkers typically require full access from home to all the system resources that would be available to them within the enterprise. But it can be difficult to validate the teleworker’s identity. Solution: Revise security policies to address the issues regarding employees working in a home environment (e.g., corporate use of personal computers and personal use of corporate computers should be discouraged, and sign-on and authentication procedures should be strengthened).

8 Union Difficulties: Many unions feel that telecommuting interferes with their representation and collective bargaining power. Solution: Approach union leaders early to construct a program that is acceptable to both the enterprise and union.

7 “Quantifiable” Productivity Gains Aren’t Achieved: Too frequently, the enterprise embraces telecommuting to attain a mythical 20 percent increase in user productivity. However, changes in productivity are difficult to measure; many knowledge workers don’t have quantitative (or even objective) performance metrics. Solution: Rewrite performance metrics for all eligible job roles to focus on objective, output-oriented metrics, and train managers to use the new performance metrics.

6 Teleworker Productivity Declines: Telecommuter productivity usually declines in the first six to ten weeks of the program’s implementation. These decreases are due to insufficient training in using the remote workstation, isolation from the workgroup, and inexperience in filtering out distractions at home. The productivity decline is generally temporary but can dishearten the telecommuter (and the enterprise), leading to high dropout rates. Solution: Minimize the impact and duration of the productivity decline with proper training. A telecommuter training lab can provide an excellent introduction to telecommuting, and lets employees practice setting up and maintaining remote equipment.

5 Overall Productivity Declines: Without sufficient workgroup tools to support on-line and off-line collaboration, overall productivity will decrease as the workgroup disintegrates. Solution: Encourage communication by publishing home office numbers and work-at-home schedules so that coworkers feel more comfortable calling the teleworkers. Longer term, modify workgroup processes to take advantage of collaboration tools.

4 Employee Morale Drops: Without formal policies that define employee eligibility, available equipment, the amount of telecommuting that will be supported, and other details, a telecommuting program can result in lower employee morale. Unevenly distributed telecommuting privileges can lead to frustration. Solution: Establish policies that outline eligibility requirements.

3 Budget Overruns: Although many think telecommuting can help reduce operating expenses, telecommuters are more expensive to support than their office-bound counterparts. According to GartnerGroup’s 1998 study of remote-access total cost of ownership, a full-time telecommuter can cost as much as 124 percent more than an office-bound worker in terms of equipment, support, and voice and data communications. Solution: Perform a thorough cost/benefit analysis at the beginning of the project and allocate enough money to support the program.

2 Legal Morass: When deploying telecommuting, the enterprise must ensure that it is in compliance with all local, regional, and national regulations. Solution: The legal department should provide guidance in all stages of the telecommuting program and should review all telecommuting policies.

1 Management Reprisal: Many telecommuting programs (even those initially driven by end-user demand) find a surprisingly small number of volunteers for the program’s pilot or deployment stage. This is mostly due to employee fear that management will look harshly at people who do not work in the office. A lack of consistent productivity metrics enhances the fear that “out of sight” will mean “out of mind.” Without sufficient participation, telecommuting programs tend to be canceled after about a year. Solution: Managers must be convinced of telecommuting’s benefits and should be trained on how to work with remote employees. Management buy-in is the single most important prerequisite.

Stress Reduction Through Unit Testing

Big projects can have frustrating levels of complexity. Unit tests can help us cope with this frustration through two interrelated means:

  1. They force us to break the project down into small, testable pieces that are fun to code. In short, they help us find the simple parts of a big, complex project.
  2. The search for code that is easy to test can help us discover an architecture for our program that is simple and clean. Such architectures can be achieved by refactoring existing code.

In this article I will spend a few paragraphs explaining each of these ideas, and showing why they are important.

The Frustration of Complexity

Programmers get frustrated when faced with complexity. Frustrated people are not productive. If I’m frustrated by a project, I don’t want to work on that project. If I don’t work on a project, then I never get anything done.

To write a unit test, any unit test, a programmer should find a simple problem that can be easily tested. It is possible to write complex unit tests, but such tests are, by definition, poorly designed.

Even complex programs have some code in them that can be easily tested through the simple mechanism of writing a unit test. By finding this simple code, even a frustrated programmer can find a bit of work that can be done easily and quickly. Work of that kind is innately enjoyable, and innately satisfying. It is the kind of work that compels one to go on working.
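As a minimal sketch of this idea (the `word_count` helper here is hypothetical, a stand-in for the simple code you find inside a larger program), the easily tested piece might look like this:

```python
import unittest

def word_count(text):
    """Count whitespace-separated words; a small, easily tested unit."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

    def test_counts_whitespace_separated_words(self):
        self.assertEqual(word_count("unit tests reduce stress"), 4)
```

Run with `python -m unittest` and you get an immediate, satisfying bit of finished work: a small function, proven correct, in minutes.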

After having found one simple unit test to write, a programmer can begin to see what their code ought to look like. Scanning through the code, they may find other simple tests that can be written.

Sometimes, however, a programmer can reach a point where it is no longer possible to find other simple tests that can be written. In such a case, the only way to move forward is to begin to refactor existing code that is overly complex or hard to test.

Reduce Complexity to Simplicity by Refactoring

It is always possible to refactor code to make it testable. Even the most recalcitrant code can be rewritten to be tested. It only takes patience, and a will to succeed.

Over and over again, we hear people come up with excuses for why they cannot test their code. The most common reason for claiming code is untestable is the inability to separate the interface of a program from the business rules that drive a program. This is a serious problem, but fortunately it is one that is usually easy to solve.

When faced with such a problem, one can always find the code that needs to be tested. In almost all cases, it will be wrapped up inside an interface event handler such as a button click method. Once you find the code, you can move it into a discrete object that can be called from either the event handler or from a unit test.

Discovering what objects need to be created to handle the code that was part of your interface is not always easy. In fact, it is in this stage of programming that a developer’s skill is revealed. Not everyone has the ability to create simple, easy-to-use objects that can be called from either an event handler or from a unit test.
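A sketch of this refactoring, with entirely hypothetical names (no particular GUI framework is assumed): suppose a button’s click handler both reads the UI and computes a discount. Moving the computation into its own object lets both the handler and a unit test call it:

```python
class DiscountCalculator:
    """The business rule, extracted from the event handler so it can be tested directly."""
    def discount(self, order_total):
        # Illustrative rule: orders of 100 or more get 10% off; smaller orders get nothing.
        return order_total * 0.10 if order_total >= 100 else 0.0

# The handler becomes a thin wrapper: it only moves data between the UI and the object.
def on_calculate_click(form):
    calc = DiscountCalculator()
    form.result_label = str(calc.discount(form.total_field))

# A unit test can now exercise the rule without any UI at all.
def test_discount():
    calc = DiscountCalculator()
    assert calc.discount(50) == 0.0
    assert abs(calc.discount(200) - 20.0) < 1e-9
```

The design choice is the point: the event handler keeps no logic worth testing, and the object keeps no knowledge of the interface.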

Though the skill may be a difficult one to master, it can also be an enjoyable task to perform. Once the creative work is done, the actual work of creating a testable object can be relatively simple and enjoyable.

While engaged in creating such objects, a developer may feel that no real work is being accomplished. However, this is not true. One is taking complex, hard to understand code, and creating in its place code that is easy to use. One is taking a program that is innately frustrating to work on, and converting it into a program that is easy to understand, and hence enjoyable.

After creating one set of testable objects, it is often possible to perceive how these existing objects can be decomposed into yet simpler objects. If a particular programmer lacks skills in such an area, then often they can spend a few hours with a better programmer who can give them tips on how to proceed. In this way a well designed team can work together to use everyone’s skills to convert poorly written code into well written code.

Well written code is easy to test, and fun to program. If programming is enjoyable, then programmers will put in long fruitful hours engaged in a task that is both pleasing and productive.


The process of creating simple code described in this article is driven by the desire to write unit tests. Programmers like to write such tests because the act of creating them is both enjoyable and satisfying. In fact, most programmers enjoy their work — so long as it is not overly frustrating.

We can happily write unit tests so long as we can find code that is simple to test. When we find that there is no code left that is simple to test, then that means either that the job is done, or else that it is time to begin refactoring code so that it can be made easy to test, and therefore also easy to understand.

When we are stymied by a project that seems overly complex, overly frustrating, the way out is to begin writing unit tests. The act of writing the tests will lead us, slowly but surely, to the solution to our problems.

Stuff that Bugs Me

For some reason, despite the fact that it’s the Christmas season, I’m feeling a bit grumpy. I don’t know why, but stuff has really been bugging me lately. So naturally, I have to fire up the word processor and make a list!

  1. People who try to tell you that C# is better than Delphi. Now that Delphi 2005 is out, with its new language features, Delphi can easily hold its own against C#, and actually surpasses it in lots of ways.

  2. People who come on the newsgroups and act like TeamB “runs off” people who criticize Borland. This is silly. TeamB might disagree with your criticism but we don’t “run off” anybody.

  3. The use of “with” statements. That really bugs me.

  4. People who think that Borland has gone out of business.

  5. The really bad support for streams in the FCL. There are no methods on the Stream class to copy from one stream to another, for crying out loud. Dealing with streams in .Net is just a pain in the ass when compared with the ease of using them in the VCL.

  6. Mail-in rebates for computer items. These things are a pain in the rear, and you invariably miss some key instruction and thus don’t get the rebate. I hate that.

  7. People who think that we are “forced” to buy stuff. There are actually people in the world who think that Microsoft forces them to buy Windows. Here’s a fact: it is against the law for /any/ company to force you to buy /anything/. Unless you live in some crazed totalitarian dictatorship, you don’t have to buy anything that you don’t want to buy.

  8. While we’re at it, people who think that Microsoft is some sort of dictatorial, evil empire bug me. All Microsoft can do is offer things for sale. They can’t and don’t do anything more than that. See #7 above for the response that virtually everyone in the world can have to Microsoft’s offer. The notion that Microsoft is some sort of governmental entity that controls people’s actions and dictates what people have to do is silly.

  9. I hate cryptic and semi-cryptic variable names like ‘lSt’ and ‘buf’. Jeez, work your fingers a little bit and use “Buffer”. And while you are at it, use capital letters in your variable names.

  10. People who type in all the same case, whether it be upper or lower. We have upper and lower case for a reason, people.

  11. Case-sensitive programming languages bug me. (Case is really bugging me today, eh?) It bugs me when people declare variables with the same name as their type, but different case.

  12. Code that tries to do three things in one line. Break out that code into multiple lines. It’s easier to read, and you’ll thank yourself when you go back and try to read it six months later.

  13. The fact that in .Net, the getters and setters for a property have to have the same visibility as the property bugs me. I understand why, but it still bugs me. It also bugs me that they have to be prefaced by case-sensitive(!) prefixes.

  14. Okay, C# bugs me. It bugs me that all these C# dudes out there think properties and “partial classes” and all the other stuff Delphi has been doing for years is so new and cool. It bugs me that we’ve been doing object-oriented programming for fifteen years, and now tons of VB guys doing C# suddenly think true object-oriented programming is cool. It bugs me that C# programmers think that theirs is the “definitive” language against which all other languages should be measured.

And the really scary thing is that I quit there. I could have kept going. 😉

Fixed price and scope contracts

You’re the CIO in a company. You identify that you need some software for a particular business reason. What do you do? Well, you could try and get it written in-house (either by your own developers, or by a group of contractors working under one of your IT project managers), or you could farm it out to a software consultancy company. Your internal IT staff are swamped, so you decide to go outside. You’re worried about expenses, so you push for a fixed price contract that’s also fixed in scope and time: here’s the spec, here’s $X thousand, see you in Y months. Is this really the best strategy for getting the best software?

You’re the CEO in a software consultancy, and you use agile methodologies throughout the development process since it means that you produce better software and thereby increase your reputation. You get approached to write an application. The fixed price/fixed scope contract comes with what looks to be a comprehensive spec, a check for $X thousand, and Y months to complete it. You accept. Is this really the best deal for your company and for the customer?

My personal viewpoint these days tells me to answer no to both questions. In an agile development process, you never plan too far ahead. There’s no way you can understand a comprehensive spec well enough to be able to bid on a fixed scope/time/price contract, because the spec is never comprehensive enough. Never. It’s a fool’s errand to think anyone will be able to spec a system out to the last dotted i and crossed t. Requirements always change. New ones arrive, old ones get morphed into something else or get dropped. The customer’s business or environment changes constantly, and so do the requirements for software for that business or environment. Instead you view a spec as a road map showing where you want to be, with the full awareness that the path may change as you proceed.

Having a fixed price contract (which generally also means fixed in scope and time as well as price) is a recipe for disaster. The customer will get software that doesn’t suit his changing expectations. And, make no mistake, he will change his expectations: either the business will change, or interim versions of the software will reveal unforeseen but attractive possibilities, and doing A instead of B will turn out to be better. Every change he wants will be a negotiation battle to decide whether it’s in the spec or is extra to the spec and therefore needs extra payment or requires that another piece of the spec be dropped. This will be even worse if the spec is ambiguous, leaks like a sieve, or is not fully developed.

Note that fixed price contracts that are not fixed in scope (or necessarily time) have more of a chance of succeeding, providing that the customer agrees to some of the tenets of agile methodologies such as regular contact with the development team and the ability to provide timely feedback.

The essential strategy for these fixed price/flexible scope contracts is different: here’s $X thousand and you have Y months, now how can we work together to ensure that the best software is written with compelling enough features by the end of that time? In essence, by working together, issues, expectations and requirement changes are dealt with earlier and it will be possible to produce better software (that is, providing more value to the business) for the time and budget.

On the other hand, if some intractable problem does occur, the customer can either change the overall scope of the project, or make the (admittedly) hard decision to cancel the project, and do it earlier.

But, and here I reiterate, fixed price/flexible scope contracts will require the customer to participate, and participate regularly and often. Timely feedback on incremental revisions means that the project as a whole can be better guided. Regular contact with the evolving software implementation gives the customer either confidence that the project is on track, or confirmation that the consultants are hopelessly bad and should be ditched.

So, all in all, I believe that fixed price contracts that are also fixed in scope and time are never the best deal for either the customer or the developers. The customer wants the software that will provide the best value for the business, but won’t get it because he’s fixed the scope. The developer wants to provide good software that is well tested, but can’t because the original spec is deficient in some way and requirements-creep has not been taken into account. Removing the requirement for fixed scope does enable customer and developer to produce the best software for a fixed price contract, but it will require greater interaction between the players.

How Free Software Helps Small Business

By providing low cost, easily accessible resources, free software helps small business gain a foothold in markets that would otherwise be dominated by large corporations.

In the computer industry, it has become increasingly hard for small businesses to compete against big corporations. The resources and marketing clout of big corporations make it difficult for small companies to make competitive bids for big projects. Free, open source software is one way small companies can fight back.

The Corporate Resource Advantage

Leaving marketing and legal issues aside for now, one of the advantages big corporations bring to the table is substantial resources. If a big corporation wants to move into a particular market, they can afford to invest in developing a reusable solution custom made for that market. Then they go to clients that need the resource, and simply plug in their solution, thereby solving the problem quickly and easily.

In this scenario big corporations have three advantages not available to small corporations:

  • They have both the human and financial resources necessary to assign their best people to the difficult task of creating an easily reusable solution.

  • Once the solution is created, they can pay lower wages to employ average programmers to perform the much simpler task of plugging in their solution so it solves the needs of a client.

  • They can quickly finish the project. Despite the fact that the programmers onsite may be less skillful than those found in small companies, corporate programmers have the advantage of working with a pre-built solution easily customized to meet a client’s needs. This allows less talented developers to quickly finish the job.

When a small company comes in to bid on the same job, they have to look at solving the problem from scratch. This means they have to take months to create software that a large corporation might already have available. As a result, they have to work under extreme financial and time constraints. Their solution may therefore tend to be both more expensive to build, and less robust.

The Free Software Solution

Free Open Source projects released under reasonable licenses provide a solution to this problem. In particular, these projects create free software that comes with source and can be plugged into projects to help solve complex problems.

Let’s take two perhaps over-simplified examples that can help show this process in action. In projects that I have been working on recently, there has been a need for tools that aid in FTP transfer, and in AS2 communications. Big corporations are likely to have the resources to have solved both these problems while working on similar projects. Their teams therefore have the luxury of simply plugging in pre-built solutions left over from previous projects.

A small company, on the other hand, may not have such resources in house. As a result, they might need to add a month or more of in-house development in order to create these tools.

Enter the world of free, open source software. In both the cases mentioned here, the following free, open source projects can be used by businesses to solve these problems quickly and easily:

  • The Open AS2 Project: Released under BSD License

  • C# FTP Library: Released under LGPL

With the aid of these libraries, a small business can save at least one to three months of programmer time, thereby helping to bring a project in on budget, on time, and at a competitive price.

Big Ticket Software

Another way that big corporations can come to dominate a market is through the use of big ticket software. A few years ago, big corporations had an advantage in the application server market because only they could afford the huge bills associated with using such expensive software.

The classic way this system worked was for a big corporation to create its own application server and assign it a huge price tag in the six-figure range. Most small companies want to bring in a whole project for about the same price that these big corporations charge for their tools alone. A big corporation could therefore come into an account, give away their expensive application server for a nominal fee, and charge only for programmer time. This meant they had three advantages:

  1. They could underbid the competition because they "got" the application server at a bargain price.

  2. They could claim superior expertise in a tool which they, after all, developed in house.

  3. They had the source to the product, and could modify it or fix bugs if needed.

These advantages could be multiplied several times in the case of some companies. For instance, Oracle can promise to deliver their database at a reduced rate, they can promise to provide their development environment, JDeveloper, at a reduced price, and they can provide years of experience and other libraries at no additional cost.

Given all these advantages, how can a small business hope to compete? Well, one answer is to use open source:

  1. Small business can save money by using the free, open source MySQL database rather than Oracle. These savings can be passed on to the customer, thereby lowering the cost of the whole project. For many projects, MySQL’s performance is comparable to Oracle’s, so the switch costs the client little in terms of performance.

  2. Small business can use the free, open source Eclipse development environment, thereby lowering expenses, and helping the small business to be able to afford a lower bid.

  3. Small business can use the free, open source JBoss application server, again saving money for the client, and helping to lower the overall cost of the bid. It is arguable that JBoss is the best tool of its kind, therefore giving small business an advantage over the clunky tools used by big corporations.

The end result is that free, open source software can help make a small company more competitive. Of course, the big companies can also use free software. But if both the big company and the small company are using the same tools, then the playing field is considerably more level. This makes it possible for clients to choose solutions based on the talents of the developers, rather than the relative clout of the two companies.

On a level playing field, talented developers will tend to break away from big corporations, thereby fostering the growth of small business, and promoting the democratic ideal as represented by truly free markets.


Some, but not all, mature industries tend to consolidate around a few large corporations that gain control of markets. This occurs both because of market forces and because of recent trends in the American economy. The end result is that market forces make it difficult for small companies to get sufficient leverage to compete in many mature markets.

This article has helped to show one way in which free, open source software can help to promote competition, foster the free market, and create a more open economy that provides room for individual talent to emerge.

Intelligent small companies will not only use free software, but they will help develop it. By doing so, they provide themselves, and other small businesses with tools that can be used to compete against the dehumanizing effects of big business and corporate culture. If a small company is lucky and talented enough to take the lead in developing a successful open source project, then they can have the best of both worlds: They get to use the software for free, and they get the publicity associated with having publicly demonstrated their expertise.

In future articles, I will take up the subject of legal hurdles that have been placed in the way of small business. In particular, the Sarbanes-Oxley Act and other legal developments have pushed many IT departments to standardize on big-business vendors. Future articles will show how open source projects can be used to help level the playing field. A future article will also give a few simple tips to help guide developers who are concerned about the licensing issues involved with using free software.

Iterative Design

Test-driven development teaches you not to do all your design up front. Your implementation is organic, it evolves as you write test cases and implement code. Refactoring functionality in your IDE helps a great deal. By taking small steps and rigorously refactoring out duplicate code and other code smells, you polish your implementation. Seldom does the end product look like what you might have designed up front.
The opposite of this is the waterfall method of design: capture all your requirements, design the heck out of the proposed system, and then code it.
I’ve always found this latter methodology to be extremely suspect: it’s not very resilient when confronted by changes brought about by learning about the problem space as you go along, or to changes brought about by the business need evolving, and so on.
I’ve been involved in several projects where the users captured all their needs for the software as requirements. They used either Word or products like CaliberRM. Of course, some requirements were deemed no longer necessary halfway through the project, some were ambiguous in the extreme and resulted in a lot more work, some weren’t properly fleshed out, and others changed because a new approach to development was deemed superior.
Let’s face it, writing requirements is ruddy hard. Thinking about a problem in a virtual vacuum always is: sometimes you just need to see the different possibilities before you can make a decision. All of this experience in the school of hard knocks has taught me to be wary of projects where too much design was done up front.
Better software is written if there is just enough design at the start to provide an overall goal for the software. You should gather enough requirements to indicate whether or not to use a multi-tier type implementation, or whether you should be using web services, or whether you need to support a web farm, or whether a simple Win32 app will suffice. These kinds of decisions are hard to change once a substantial amount of code has been written.
But once this goal has been agreed on, once the path seems fairly well indicated, you should concentrate on building well-designed classes that cooperate well and that are well supported by unit tests, and do it in a step-wise manner. This way you’ll be able to quickly and easily provide simple prototypes to the users. Faster and more regular prototypes elicit better feedback. Better feedback will help guide you towards better design. The code you write will be easier to adapt as you (and your users) learn more about the system.
Having worked for a library and tools vendor in the past, I can firmly say that this is a much better way to develop component library code. The exercise of writing a unit test, as if the library code already existed, is a great way to visualize how your library will be used. By writing different unit tests, you can experiment with different ways of calling the library without having reams of code creating inertia, or causing resistance to change. It also allows you to refine how the interface to your library should look.
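A sketch of that exercise, using an invented `Slugifier` library that does not yet exist (the name and API are hypothetical): the test is written first, as if the library were already there, which pins down how the call site should read before any implementation inertia sets in.

```python
# Written first, as if the library already existed; this is API design by example.
def test_slugify_api():
    s = Slugifier(separator="-")
    assert s.slugify("Iterative Design") == "iterative-design"

# Only then is the library written to satisfy the call site the test sketched out.
class Slugifier:
    def __init__(self, separator="-"):
        self.separator = separator

    def slugify(self, title):
        # Lower-case the title and join its words with the chosen separator.
        return self.separator.join(title.lower().split())
```

If the test reads awkwardly, the interface is awkward, and it can be changed now, while there are no reams of client code resisting the change.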
There are several downsides to working this way, of course. One is that it requires a lot of user involvement. "I know that lots of things don’t work yet, but is this along the lines of what you wanted? What’s wrong with it? I can produce a different prototype tomorrow, could you test it and comment on it?"
Another downside is that it’s hard to estimate how long a particular feature would take, which makes it hard to decide how to write a fixed-price contract, and so on.
Nevertheless, I’m of the firm belief that better software is produced by an iterative design methodology as well as iterative implementation through TDD.

Creating Projects in Subversion: Trunk, Tags, Branches

This article explains how to create projects and repositories in a free, open source version control system called Subversion. The online documentation for Subversion is excellent, but the information found here may help people who are still getting up to speed with the product. I will assume that you have already installed Subversion and TortoiseSVN.

I run Subversion on a Linux box, but frequently use it from Windows. In this article I will describe how to create both the repository and your projects while running on Windows or on Linux. In general, any commands you give from the command line will work on both Windows and Linux, while the commands you give from inside TortoiseSVN will work only on Windows. This is not a shortcoming in TortoiseSVN, but simply reflects the fact that TortoiseSVN is a Windows-only tool which runs as a shell extension to the Windows Explorer.

If you want to learn both how to create a repository, and how to create a project in the repository, then read this whole article. If your repository already exists, and all you want to do is create a project inside the repository, then please skip ahead to the section of this article on adding projects to a repository. There you will learn about the trunk, tag and branch directories and why they should be part of all Subversion projects.

This article also describes the svn, svnadmin and svnserve command line programs that ship with the default installation of Subversion. If you are running on Linux, or if the Subversion\bin directory is on your path, you should have no trouble running these programs from the command line. I discuss none of the programs in detail, but the context in which they appear in this article should make their purpose clear. In particular, if you want to access Subversion from the client side, be sure to read carefully the sections of this text that describe the svn program. The other two programs are server side tools.

Creating the Repository

Repositories are created by a user who has direct access to the server. If you are using Windows as a server, then the simplest way to create the repository is with TortoiseSVN. Open the Windows Explorer and browse to the place where you want to create the repository. Use the Explorer to create the directory where your repository will reside. Right click on the new directory and choose TortoiseSVN | Create repository here. The Create Repository dialog will appear, as shown in Figure 1. Select the default Native FileSystem (FSFS) option and click the OK button. The repository will be created automatically.


Figure 1: Choose the Native file system for best performance. If you like working with databases, then you can choose Berkeley DB.

If you want to create the repository from the command line on Windows, simply type something similar to the following:

svnadmin create c:\MyRepository

This command will first create the directory called MyRepository, and then set it up properly as a Subversion repository. On Linux, you might issue this command:

svnadmin create /usr/local/repository

Regardless of whether you created the repository with TortoiseSVN or with svnadmin, the end result is a directory with the following structure:

10/26/2005  15:39         <DIR>    conf
10/26/2005  15:39         <DIR>    dav
10/26/2005  15:39         <DIR>    db
10/26/2005  15:39         <DIR>    hooks
10/26/2005  15:39         <DIR>    locks
10/26/2005  15:39               2  format
10/26/2005  15:39             388  README.txt

To start the Subversion service running on your machine, go to any directory on the drive on which you created the repository, and type svnserve -d. For instance, if you created your repository on the C drive, then move to any directory on the C drive, and type svnserve -d. It might make a certain sense to start svnserve from inside your repository, but that is not necessary.
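If you would rather not depend on the directory where svnserve happens to be started, you can pin the served root explicitly with the -r (root) switch, so client URLs never include the full filesystem path. Here is a minimal sketch; the port and the temporary paths are illustrative, and the snippet skips itself if Subversion is not installed:

```shell
# Sketch: serve a repository with an explicit root (-r), so clients
# can use short URLs like svn://host/repo. Port and paths illustrative.
command -v svnserve >/dev/null 2>&1 || exit 0   # skip if Subversion absent
command -v svnadmin >/dev/null 2>&1 || exit 0

WORK=$(mktemp -d)
svnadmin create "$WORK/repo"

# -r makes $WORK the virtual root of the server.
svnserve -d -r "$WORK" --listen-port 39690 --pid-file "$WORK/svnserve.pid"
sleep 1   # give the daemon a moment to start listening

# Note: no filesystem path in the URL, just the repository name.
INFO=$(svn info "svn://localhost:39690/repo")
echo "$INFO"

kill "$(cat "$WORK/svnserve.pid")"
```

Started this way, the server answers svn://localhost:39690/repo regardless of the current directory or drive.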

Now go to another machine and try to access your repository. Assuming you created your repository in C:\MyRepository, then you might type something like this:

svn info svn://MyServer/MyRepository

If you created your repository in the D:\Temp\Info\MyRepository directory, then you would start svnserve on the D drive, and type something like this:

svn info svn://MyServer/Temp/Info/MyRepository

The point is that svnserve will look for your repository on the drive on which it was started. There is no need to specify a drive letter. In fact, I have never had any luck trying to pass drive information via svn. Instead, I just start svnserve on the appropriate drive, and then assume the path to the repository automatically references the relevant drive. Needless to say, all this talk about drives is not relevant if you are running the server on Linux.

The result of the svn info command should be something at least vaguely like the following:

[D:\temp\Tort2]svn info svn://rohan/usr/local/repository/CodeFez
Path: CodeFez
URL: svn://rohan/usr/local/repository/CodeFez
Repository Root: svn://rohan/usr/local/repository
Repository UUID: b062d7d2-2303-0410-96a2-dd3f728f4100
Revision: 13
Node Kind: directory
Last Changed Author: Charlie
Last Changed Rev: 13
Last Changed Date: 2005-10-16 16:28:59 -0700 (Sun, 16 Oct 2005)

If you get an error when you run svn info, then it is possible that your firewall is blocking port 3690. If you are using the standard Windows built in firewall, then the install of Subversion should have opened up the port for you. However, if you have an external firewall, then you may need to punch a hole in it.

NOTE: If the info command returns absolutely nothing, then the repository is probably fine. Subversion is simply saying that there is nothing to report. Had there been an error, it would have been reported. If you are querying a repository that already has at least one project in it, you can try issuing the list command instead. If there is something in your repository, then you should get a list of the contents. In general, however, Subversion is a well-behaved Linux application that returns nothing if everything is okay, and an error message if there is a problem. Immediately after you create a repository, however, you should get a listing like the one shown above.

At this stage, you should have your repository set up correctly and you should be able to access it. The next step will be to set up permissions in your repository.

Setting up Permissions in Your Repository

By default, you will be able to read your repository, but not write to it. Here are the steps necessary to give password protected, read or write access to your repository. In the conf directory of your repository, you will find a file called svnserve.conf. You should edit this file so that it looks like this:

anon-access = read
auth-access = write
password-db = users
realm = My First Repository

The actual file on your drive will have some comments demarcated by ### signs. I have omitted those comments here so as to make it easy for you to see the important parts of this file. In your version of the file, you will want to keep the comments, as they are useful. But be sure to remove the comments in front of the lines shown above.

NOTE: Change anon-access = read to anon-access = none if you want users to have to sign in before reading from the repository. Otherwise, anyone will be able to read from the repository, but only those with proper rights will be able to write to it.

The next to the last line in svnserve.conf looks like this:

password-db = users

This line represents a contract with Subversion in which you promise to create a file called users, placed in the conf directory, with the following format:

user1 = foobar
user2 = foobar

The name of the subversion user is on the left, and the password for the user is on the right. When you attempt to write to this subversion repository, you will automatically be prompted for a user name and password. In this case, if you entered user1 as the name, and foobar as the password, then you would be granted permission to write to the repository.
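As a sketch, both files can be written from a shell like this (the repository path is illustrative). Note the [general] and [users] section headers: the files that Subversion generates include them, and svnserve expects them, even though I omitted them from the excerpts above:

```shell
# Sketch: write a minimal svnserve.conf and users file by hand.
# The repository location is illustrative.
REPO=/tmp/MyRepository
mkdir -p "$REPO/conf"

# svnserve.conf keeps its settings under the [general] section.
cat > "$REPO/conf/svnserve.conf" <<'EOF'
[general]
anon-access = read
auth-access = write
password-db = users
realm = My First Repository
EOF

# The password database keeps its entries under a [users] section.
cat > "$REPO/conf/users" <<'EOF'
[users]
user1 = foobar
user2 = foobar
EOF
```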

Adding Projects: Branches, Tags and Trunk

Subversion supports advanced version control features called branching and tagging. In this article, I will not have room to explain the simple steps for branching or tagging your source in Subversion. Nevertheless, I will take a moment to make sure you understand what a branch is, and what it means to tag a version of your source.

  • Tagging: You might tag your project when you reach Version 1.0. Then you can go on making changes to your project, but if you ever need to get back to Version 1.0, you can always use the tagged version of your project to retrieve all the source files exactly as they looked when you reached Version 1.0.
  • Branching: If you are working on a project and want to try some experiments, but you aren’t sure you want to keep them, then you can branch the project. After branching, you will have two copies of your project: the main trunk and the branched version. Work can then continue on both the main trunk and the branch. Changes to the branch will not be seen in the main trunk, and changes to the trunk will not appear in the branch. Both branches of the code will have access to any changes that occurred before the project was branched. As I will explain below, branching is handled in a way that ensures only a minimal amount of server-side disk space is used.

Now that you understand branching and tagging, you are ready to create a project. Here are the basic steps to add a project to a repository when working from the command line:

$ mkdir MyProject
$ mkdir MyProject/trunk
$ mkdir MyProject/branches
$ mkdir MyProject/tags
$ svn import MyProject svn://rohan/usr/local/repository/MyProject -m "info"

As you can see, you create three directories under your main project directory. These directories are called branches, trunk and tags. You then create the project in the repository itself by issuing the svn import command shown in the last line.

svn import MyProject svn://rohan/usr/local/repository/MyProject -m "info"

The last bit of this code, which reads -m "info", is simply a place for you to enter a comment that will be recorded in the log. If you don’t provide this information, then Subversion will prompt you to load an editor and write a comment inside of it. I definitely prefer to add the -m option to my import statements so as to avoid any unnecessary fuss with an editor.
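The same import can be sketched end to end against a scratch repository. I use a file:// URL here so that no server is needed; the paths are illustrative, and the snippet skips itself if Subversion is not installed:

```shell
# Sketch: import a trunk/branches/tags skeleton, then verify it.
command -v svnadmin >/dev/null 2>&1 || exit 0   # skip if Subversion absent

WORK=$(mktemp -d)
svnadmin create "$WORK/repo"

mkdir -p "$WORK/MyProject/trunk" "$WORK/MyProject/branches" "$WORK/MyProject/tags"
svn import "$WORK/MyProject" "file://$WORK/repo/MyProject" -m "Initial project layout"

# The three standard directories should now be in the repository.
svn list "file://$WORK/repo/MyProject"
```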

It is nearly as simple to create a project using TortoiseSVN as it is to create it from the command line. First open up the Windows Explorer. Now create an empty directory. Give the directory the name of your project, such as MyProject. Inside the directory create three subdirectories called trunk, branches and tags. Now select the MyProject directory, right click, and choose TortoiseSVN/Import from the menu. A dialog like the one shown in Figure 2 will appear. In this dialog, type in the URL of the project you want to create in the repository. Add a message that explains something about your import. This last step parallels the -m "info" portion of the command line import statement shown above.


Figure 2: Importing a project into the repository using TortoiseSVN. Specify the complete URL where you want to place the repository, and add a brief comment in the Message section.

In the URL section, I have included both the path to the repository and the name of the project that I want to create in the repository. Don’t be fooled into thinking that because you right-clicked on the name of the directory you wanted to import, a directory of that name will be created for you automatically. Instead, specify the name of the project in the URL.

NOTE: In the examples I show, I prefer to put the name of the repository in the URL. However, you could also create a series of nested directories, and end up with much the same result. In short, there is more than one way to achieve the result shown here. It seems to me that neither technique is perfect, but the one shown here is most intuitive to me. In any case, if you don’t like the structure you created in the repository, just delete those directories and try again. It is easy to delete directories in the repository browser, shown in Figure 3. Later in this article I will describe how to launch the Repository Browser.

When you are done with your import, the repository should have a structure that looks like the image shown in Figure 3.


Figure 3: Viewing the structure of your project in the repository. I will show you how to pop up this dialog later in this article.

Don’t put the files directly in the MyProject directory; instead, put them in MyProject/trunk.


If you want to branch or tag your project, then you will use the directories called branches or tags.


NOTE: Subversion makes a "light" copy for the branched and tagged versions of your source. It doesn’t really copy all the files; it records only what has changed relative to the copied directory, which keeps the repository small. From the client point of view, however, it appears that the branches and tags directories contain full copies of your project.
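Although the detailed steps are beyond the scope of this article, both tagging and branching come down to a single svn copy command, which creates exactly the cheap copy described in the note above. Here is a sketch under illustrative paths, using file:// instead of a server:

```shell
# Sketch: tag and branch via cheap server-side copies.
command -v svnadmin >/dev/null 2>&1 || exit 0   # skip if Subversion absent

WORK=$(mktemp -d)
svnadmin create "$WORK/repo"
mkdir -p "$WORK/MyProject/trunk" "$WORK/MyProject/branches" "$WORK/MyProject/tags"
svn import "$WORK/MyProject" "file://$WORK/repo/MyProject" -m "Initial layout"

# A tag is just a copy of trunk into tags/.
svn copy "file://$WORK/repo/MyProject/trunk" \
         "file://$WORK/repo/MyProject/tags/1.0" -m "Tag Version 1.0"

# A branch is the same operation aimed at branches/.
svn copy "file://$WORK/repo/MyProject/trunk" \
         "file://$WORK/repo/MyProject/branches/experiment" -m "Branch for experiments"

svn list "file://$WORK/repo/MyProject/tags"
```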

Checking Files into the Repository

You now have access to both a repository and a project. Unfortunately, your project is currently empty.

To put files in your project, you need to take a step which is somewhat counter-intuitive: you need to check out the project that you just created. The MyProject directory that you created in the previous section is no longer useful. To get your hands on a blessed, completely approved, and fully loaded Subversion directory, you need to check it out from your repository.

From the command line, navigate to the place in your system where you want to place your project. For instance, if you want the project to be in C:\Src\MyProject, then you should navigate to C:\Src. Now type the following command:

svn co svn://rohan/usr/local/repository/MyProject 

The project will be checked out into the C:\Src\MyProject directory.

If you prefer to work from TortoiseSVN, the process is equally straightforward. If you want your project to be in D:\temp\MyProject, then use the Windows Explorer to navigate to D:\temp. Right click on the temp directory, and choose SVN Checkout. A dialog will appear, as shown in Figure 4. Fill in the URL of your repository: svn://rohan/usr/local/repository/MyProject. Also fill in the name of the directory where you want the project to reside.


Figure 4: Checking out from the repository. Put the URL of the project you want to check out on top, and the directory where you want the project to reside beneath it.

When you are done, you should have a directory called C:\Src\MyProject that contains three subdirectories called tags, branches and trunk.

To copy files into your repository, either create the files one by one in the c:\Src\MyProject\trunk directory, or else copy the whole source tree of an existing project into the trunk directory. You are now ready to check your source files into the repository.

Suppose you added a complex directory called dbo, with many subdirectories, to your C:\Src\MyProject\trunk directory. From the command line, navigate to the C:\Src\MyProject\trunk directory and issue the following command:

[C:\Src\MyProject\trunk]svn add dbo

NOTE: In the command shown here, you no longer need to specify the path to your repository when calling svn. The information about the path to your server and your repository is kept in a series of hidden directories inside your project called .svn. You need not concern yourself with those directories in most cases. But if you want to explore them, just go to the command line, enter a directory of your project, and type cd .svn. You will find yourself in a directory where the information about your Subversion repository is stored in a series of files and directories.

After adding the files (svn add only schedules them for addition), you need to commit your work:

[C:\Src\MyProject\trunk]svn commit -m "Added dbo directory"

Now the dbo directory, and all the files in it, will be added to your repository.
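The whole add-and-commit sequence can be sketched against a scratch repository. The dbo contents and the file:// URL are illustrative; running svn status before the commit is a good habit, since it shows exactly what is scheduled:

```shell
# Sketch: check out a working copy, add a directory, and commit it.
command -v svnadmin >/dev/null 2>&1 || exit 0   # skip if Subversion absent

WORK=$(mktemp -d)
svnadmin create "$WORK/repo"
mkdir -p "$WORK/MyProject/trunk" "$WORK/MyProject/branches" "$WORK/MyProject/tags"
svn import "$WORK/MyProject" "file://$WORK/repo/MyProject" -m "Initial layout"

svn checkout "file://$WORK/repo/MyProject/trunk" "$WORK/wc"
cd "$WORK/wc"

mkdir dbo
echo "create table t (id int);" > dbo/schema.sql

svn add dbo                          # schedules dbo and everything in it
svn status                           # 'A' marks scheduled additions
svn commit -m "Added dbo directory"  # this is what actually sends the files
```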

You can also add files to a repository from inside the Windows Explorer by navigating to the C:\Src\MyProject\trunk directory and right clicking on the dbo directory. Choose TortoiseSVN/Add from the pop-up menu. The dialog shown in Figure 5 will appear. You can check or uncheck the files listed in this dialog depending on whether or not you actually want to place them in the repository. When you have things set up the way you want, click the OK button.


Figure 5: Adding files to the repository using TortoiseSVN.

After adding the files to the repository, right click again in the Windows Explorer and choose SVN commit. The dialog shown in Figure 6 will appear.


Figure 6: The commit dialog prompts you for a short string that will be added to the repository as a log message. In the lower window you can specify which files you wish to commit by ensuring that there is a check mark before them.

You now know two ways to add files or directory trees to your repository. It probably comes as no surprise to learn that you could have added files during the initial import process described in the previous section. However, I wanted to show you the process of adding files to a repository after you had created a project.

If you make any changes to the files that you have added to the repository, then they will have a little red icon next to their names in the Windows Explorer. Just choose SVN Commit from TortoiseSVN to post your changes. Choose SVN Update from TortoiseSVN to check out files from the repository that may have been changed by other users. Needless to say, the command line versions of these commands are svn commit and svn update. More details on managing files that are part of a repository will be explained in future articles. But these simple commands should be enough to get you up and running.
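The commit/update cycle between two users can be sketched with two working copies of the same scratch repository (names and paths are illustrative):

```shell
# Sketch: one user's commit reaches another user via svn update.
command -v svnadmin >/dev/null 2>&1 || exit 0   # skip if Subversion absent

WORK=$(mktemp -d)
svnadmin create "$WORK/repo"
svn mkdir "file://$WORK/repo/trunk" -m "Create trunk"

# Two independent working copies of the same trunk.
svn checkout "file://$WORK/repo/trunk" "$WORK/alice"
svn checkout "file://$WORK/repo/trunk" "$WORK/bob"

# Alice adds and commits a file.
echo "hello" > "$WORK/alice/readme.txt"
svn add "$WORK/alice/readme.txt"
svn commit "$WORK/alice" -m "Add readme"

# Bob does not see the file until he updates.
svn update "$WORK/bob"
```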

You now know how to create a repository, add a project to it, and add one or more files to the project in your repository. In the final section of this article, I will add a few useful tips, most of which concern viewing the files in your repository.

Viewing the Repository, and a Comment on Revision Numbers

To view your repository and its projects, use the list command:

[D:\]svn list svn://rohan/usr/local/repository/MyProject2/trunk
dbo/

Here we see the listing of the MyProject2/trunk directory. As you can see, it contains one directory, called dbo. If you do the same thing in TortoiseSVN, you get a somewhat clearer view of what is going on. Select the directory on your system that you want to browse in the repository, right click, and choose TortoiseSVN/Repo-Browser; the dialog shown in Figure 7 appears.


Figure 7. The TortoiseSVN/Repo-Browser option gives you a good view of your repository and its contents.

In the image shown in Figure 7, you can see the URL input control at the top of the dialog. In this case, the information in this control was filled in automatically because I picked a directory on my system that was checked out from a Subversion repository. Had I just brought up the browser from a random location on my hard drive, I probably would have had to type in the URL, or else pick it from the drop-down list.

I want to end by explaining one peculiarity of Subversion. Each time the repository is changed, the revision number of the repository as a whole is updated, not just the version number of a particular file. This behavior is a bit odd, but there is at least some reason behind it: in a project, if one source file changes, it can affect the whole project, so the version number of the whole project is updated even if only one file changes. For detailed information about the history of a particular file, right click on it in TortoiseSVN, and choose TortoiseSVN | Revision Graph.
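This repository-wide revision number is easy to see in a scratch repository (the file:// URL and directory names are illustrative): commit a change under one directory, and svn info on a sibling directory still reports the new repository revision:

```shell
# Sketch: the revision number belongs to the repository, not the file.
command -v svnadmin >/dev/null 2>&1 || exit 0   # skip if Subversion absent

WORK=$(mktemp -d)
svnadmin create "$WORK/repo"
svn mkdir "file://$WORK/repo/a" -m "Create a"   # creates revision 1
svn mkdir "file://$WORK/repo/b" -m "Create b"   # creates revision 2

svn checkout "file://$WORK/repo" "$WORK/wc"
echo "x" > "$WORK/wc/a/one.txt"
svn add "$WORK/wc/a/one.txt"
svn commit "$WORK/wc" -m "Touch only a"         # creates revision 3

# b/ was not touched, but the repository as a whole is now at revision 3.
svn info "file://$WORK/repo/b" | grep "^Revision:"
```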

Here is a quote from the TortoiseSVN manual: "For unrelated projects you may prefer to use separate repositories. When you commit changes, it is the revision number of the whole repository which changes, not the revision number of the project. Having two unrelated projects share a repository can mean large gaps in the revision numbers. The Subversion and TortoiseSVN projects appear at the same host address, but are completely separate repositories allowing independent development, and no confusion over build numbers."


In this article you have had a chance to see how Subversion works. You have learned how to create repositories, how to create projects, how to check in code, and how to view the code you have checked in. I also briefly discussed updating files that have been changed. I have tried to lay things out here as clearly as possible; if you have more questions, the docs give a good overview of these subjects and provide more detail than I give here. In future articles I will discuss subjects such as branching and tagging, as well as techniques for updating your source and running diff operations on files that have been changed by two users.

Subversion is an excellent version control system that provides all the tools most programmers will need to manage their code. I hope this article gives you enough information to get over some of the initial hurdles that can confuse new users.

Delphi 2005, first class!

Last week in Santa Clara, CA, I conducted my first class on Delphi 2005. Thanks to the students from Sacramento for making the week so pleasant.

So, you ask, how was the week with Delphi 2005? How did it stand the test of exercising most of its features during the week? Did it crash at any time? Did we find weak areas? Did we find superb areas? Read on to find out.

First Impressions

It was so great to find myself teaching Delphi again. When I tried teaching Delphi 8 shortly after its release, the process was painful and demoralizing. We quickly made the decision not to do that anymore, so as to eliminate the sense of frustration experienced by the students and their instructor.

But working with Delphi 2005 was a very different experience. In fact, I have to say the week of teaching Delphi 2005 was pleasant! We found a lot of problems together, but I can honestly say that the product was usable and productive. Frustrating at times, but hey, it looks like we are getting a patch any day now in December 2004.


ECO

ECO is a cool product. I like it a lot! It is very powerful! We ran two different exercises during the class:

  1. First we generated the model, code and UI based on the Northwind Database in MS SQL Server 2000. This worked beautifully.
  2. Then we built the whole model from scratch. We generated the code and the UI, and then exercised the Persist to XML option. It also worked beautifully!

My only comment on ECO is that the product is screaming for some wizards: there are way too many steps to remember. If you open the wrong model or file it is easy to get lost, and it can be difficult to get back on track.  I am grateful to Anthony Richardson for writing tutorials that saved the day!

Project Manager

The Project Manager is flaky! During the .NET Remoting chapters, having four projects open under one Project group caused weird problems during compilation. We kept getting errors that one project could not see the assembly of another in the same group. This happened several times.  Closing the Project group completely and reopening it fixed these problems, but it was annoying having to close and reopen the entire Project Group so many times during that exercise.

Import …where?

The students were very interested in Interop because of the amount of ActiveX and Win32 code they have. They knew that their migration would occur slowly over time; it would be too difficult to do a complete rewrite all at once. I looked for the menu item “Import Type Library” everywhere, under .NET and under Win32: no go! I can’t believe there is no “Import Type Library” in Delphi 2005! It is especially important in Win32, although it is much needed in .NET as well.

The bigger surprise was not finding “ActiveForm” under the ActiveX tab in Delphi Win32. That was disturbing to the students, especially those who use ActiveForms a lot. But I quickly cooled the flames by reminding them: “COM is like smoking! If you are doing it, you need to stop! And if you are not, then you don’t need to go there!”


The Editor

The Editor was nice and behaved well during the week. The one exception was Code Snippets. I would have liked to be able to highlight some code and drag it to the Code Snippets window to create a new entry. It seems that this was intended to be a feature, but it is not working yet in the IDE. Personally, I believe it is a potentially great feature, and it would be very nice to have it working.


Borland Data Providers

The Borland Data Providers appeared to us to be very nice indeed, and to work well. We did not stress them out by any means. We did, however, successfully work through several examples that deal with the Connection, DataAdapter and Command objects.


Deployment Manager

This feature is MUCH more stable than in Delphi 8. I like the Deployment Manager a great deal. It allows the developer to easily synchronize and deploy projects to any directory or FTP site. It even has pieces of the old check-in tool we used on the team at Borland to show diffs and a visual representation of changes.


The Debugger

The debugger was very stable. I was impressed by the fact that the CPU window, now docked into the editor, can show IL, ASM and Delphi code all at once. This really gives developers a better understanding of what is happening in their code. I also found the in-place editing of breakpoint conditions to be a cleanly executed and highly useful feature.


Namespaces

I am still not convinced that the changes from Delphi 8 to 2005 in the namespace area are sufficient to bring Delphi up to speed as a first class citizen in the world of namespaces. The language is screaming for a new keyword, namespace, that will fix the problems once and for all. Danny Thorpe explained to me during our trip to Toronto that he is considering that approach. He said, however, that it will be a major job to implement that feature in the compiler. Furthermore, he was worried that it would not work well with the fast one-pass compiler we have now. I just don’t like to see all the stuff I see when I open a Delphi-written assembly with Reflector. When viewing an assembly written in C#, I see the namespaces exactly as they were declared, and I would like the same for Delphi.


All in all, both the class and I were pleased with the experience we had while playing and working with Delphi 2005 for the whole week. There were a few rough spots, but the product stood up well to our fairly extensive testing.