Thinking about dynamically-typed languages

Every now and then I browse the Delphi newsgroups, including the notorious b.p.d.non-technical. In visiting this one you either lurk and never respond, or you wade in wearing rubber boots to your thighs and flame-proof jacket: the one thing you can say about lots of Delphi developers is that they’re fiercely loyal to their language. Valid arguments of any shape or form against Delphi are mercilessly trampled on without warning.

The same thing happens, to a lesser extent, with C# and VB developers. They have a constant battle about which language is better, on which language Microsoft should be spending more time, and, let’s not forget, in which language the .NET Framework should be developed.

However, from the sidelines, I have been watching another development take place: the rise of dynamically-typed languages — a development that I think renders some of these turf wars moot.

Like many readers, and like the majority of developers everywhere, I cut my teeth on statically-typed languages. In these languages (C++, Delphi, Java, C#), strong typing is the way you save yourself from shooting yourself (and innocent bystanders) in the foot. Many times during my career, I’ve passed untyped pointers around and forgotten what they were pointing to, with the consequent crashes and nasty debugging sessions deep into the night.

OOP seemed to help a lot here: it forced us to think about class hierarchies, about abstraction, about inheritance, and so on. Interfaces continued the process. Strong typing became more and more our friend, and the only problems seemed to stem from null pointers and referencing objects after they’d been disposed.

Strong typing brought us safety. The compiler made sure that if routine A could only accept an instance of Foo, you could only pass an instance of Foo to it. You were weaned off untyped pointers by the lure of safety-through-the-language. Once strong type-checking came in with the compiler and was enforced by the run-time, we had to really jump through hoops to break our code, at least as far as type bugs were concerned. Safe at any speed, right?

Oh, how we scoffed at bizarre VB code from the pre-.NET era that let you write routines returning either an integer or a boolean. Heh, those wacky VB-ers, eh?

But while all this strong-typing infrastructure was coming into being, a couple of other movements were happening: one in a language-neutral dimension altogether, the other orthogonal to strongly-typed languages.

The former was test-driven development (TDD), or the practice of writing unit tests at the same time as writing your code. This methodology helped us ensure that our code worked the way we intended. No implementation code without the tests to support it.

The latter was a new set of languages, Python, Perl, Ruby and the like. Originally conceived as languages for quick prototyping, for writing simple text-file analysis tools, and the like, they’ve now grown into languages for writing major applications. And they’re dynamically typed. Casts are a thing of the past, var blocks are so Victorian.

Recipe for disaster, right? Not if you use TDD and write unit tests. If you write your code using TDD, then I can guarantee that type-safety will be a non-issue for you. Your unit tests will impose another kind of safety on your code, run-safety for want of a better word. Your code works and you can prove it works by running the tests.
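Here’s a minimal sketch of what I mean (the `Item` class and function names are my own invention, not from any real project): in a dynamically-typed language, the unit tests are what catch the type mistake — at test time rather than compile time.

```python
# Nothing here declares a type; the tests pin down the behaviour instead.
def add_line_item(cart, item):
    """Append an item's name to a cart list.

    Works for any object that has a .name attribute (duck typing)."""
    cart.append(item.name)
    return cart

class Item:
    def __init__(self, name):
        self.name = name

# The "run-safety" checks: these would live in a unittest suite,
# shown here as bare assertions for brevity.
assert add_line_item([], Item("widget")) == ["widget"]

# Pass the wrong kind of object and the test fails at run time --
# the same bug a static compiler would have caught at compile time.
try:
    add_line_item([], 42)        # ints have no .name attribute
    assert False, "should have raised"
except AttributeError:
    pass
```

The point isn’t that the error goes away; it’s that the test suite you’re writing anyway is the net that catches it.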

And you can reap the other main benefit of dynamically-typed languages: their flexibility. Writing code in these languages is just easier; you don’t have to explicitly declare variables, you don’t have to up- or downcast (or worry about the difference). Co- and contravariance hold no terrors for you.
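That flexibility is easy to see in a sketch (the class names here are made up for illustration): one routine serves any object that responds to the right method, with no common base class, no interface declaration, and no casting.

```python
# Two unrelated classes -- no shared ancestor, no interface.
class WavFile:
    def play(self):
        return "playing wav"

class Mp3Stream:
    def play(self):
        return "playing mp3"

def play_all(sources):
    # Duck typing: anything with a play() method will do.
    return [s.play() for s in sources]

print(play_all([WavFile(), Mp3Stream()]))
```

In a statically-typed language you’d need an `IPlayable` interface (or up-casts to a common base) before `play_all` would compile; here the question of variance simply never comes up.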

Reading dynamically-typed code is also a lot easier. No casts for a kick-off. No extraneous keywords or helpful hints to the compiler to assure it that, yes, you really know what you’re doing. The intent of the code is revealed in a much clearer manner.

(Note: if you don’t believe me, think about all that code you’ve written where you cast an object of some description to a type and then check whether the result is null. In my recent ASP.NET project, I took the time to check on my usage of the Session object. I was amazed at the amount of casting code I’d been writing. Generics in .NET will help in certain cases: for example, removing at compile time the vague possibility of inserting a Bar object into a List of Foos. But, being honest now, when was the last time you actually added a Bar object to an ArrayList or a TList that contained just Foos? Answer: never, right? Besides which, I would hope that your unit tests would preclude this remote possibility anyway.)
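For comparison, here’s a hypothetical Python version of that Session lookup (a plain dict standing in for ASP.NET’s Session, with an invented ShoppingCart class): the stored object comes back as whatever it is, so the cast-then-null-check dance simply disappears.

```python
# A stand-in for ASP.NET's Session: string keys mapped to arbitrary objects.
session = {}

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

session["cart"] = ShoppingCart()

# No cast needed: the retrieved object is already a ShoppingCart.
# The C# equivalent would be: var cart = (ShoppingCart)Session["cart"];
# followed by a null check.
cart = session["cart"]
cart.add("widget")
print(len(cart.items))  # prints 1
```

A lookup under a missing key still fails, of course (with a KeyError) — but as above, that’s exactly the kind of failure your unit tests exercise.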

Think of it this way: there’s always going to be a line in the sand where your problems transcend the language syntax and become program semantics instead. So, by accepting that you already dynamically check your application code anyway through the medium of your unit tests, why not defer type-checking to run time as well? Move the line in the sand so that run-time checking becomes more important and compile-time type checking less. You’ll find that your code is easier to write and a lot easier to read (and, remember, code is read more than it is written). Dynamically typed languages are the only players here.