Category Archives: Technology

Just general comments about technology past, current, and future.

How do you start a new project with TDD?

Bobby Johnson (@NotMyself on Twitter) wrote a post detailing how and why he “tests reality” when he starts a new project and gives two examples of what he means.


'use strict';

var assert = require('assert');

exports.test_reality = function (test) {
    test.equals(true, true, 'true should still be true');
    test.done();
};


using NUnit.Framework;

namespace Simple.Data.SqliteTests
{
    [TestFixture]
    public class RealityTests
    {
        [Test]
        public void true_should_be_true()
        {
            Assert.IsTrue(true);
        }
    }
}

At first glance, these tests look silly because all they do is verify that the test framework itself is working. But Bobby has a different reason for including them.

So when I am setting up my basic project structure and automation, I like to have at least one unit test ready to run. This allows me to test the automation script and ensure failures in tests cause failures in builds. In .NET I want to confirm that the compilation step fails properly and the unit tests fail properly. In node.js, I want to ensure that linting errors fail as well as unit tests.

That’s a valid point; however, this is really a configuration test of your infrastructure. I personally wouldn’t write a test like this. I’d want the test to have some value before I committed it to the main repository or build/CI server.

I strongly feel that developers should use the same build procedure on their desktops as on their CI server. IDEs like Visual Studio abstract away the building of the application, so we often have to write scripts (rakefiles, psake files, whatever) to do all the things Visual Studio doesn’t do when it builds your application: running the tests and reporting the results, building an installer, or setting test/staging/production values in config files. I like to have a repeatable build set up for the projects I’m working on, so I tend to re-use scripts or keep them really simple. That gives me confidence that the build script is configured correctly and will work when I put it on my CI server, which lets me start focusing on the design of my application sooner.
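The idea translates naturally into the build script itself. As a rough sketch (the check names and structure here are illustrative, not from Bobby’s post), a node-based build step can turn any failing check into a failing build by exiting non-zero:

```javascript
'use strict';

// Sketch of a build step: run each check, collect failures, and exit
// non-zero so the CI server marks the build as failed.
function runChecks(checks) {
    var failed = checks.filter(function (c) { return !c.run(); })
                       .map(function (c) { return c.name; });
    return { ok: failed.length === 0, failed: failed };
}

var result = runChecks([
    { name: 'lint', run: function () { return true; } },
    { name: 'unit tests', run: function () { return true; } }
]);

if (!result.ok) {
    console.error('Build failed: ' + result.failed.join(', '));
    process.exit(1); // a non-zero exit is what fails the CI build
}
```

A “reality test” like the ones above is exactly what proves the unit-tests check can flip the build to red.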

Must reads for new programmers

Iris Classon has a nice post up listing her top ten books to read this year.

The Little Schemer – It’s not about the Scheme language at all, it’s about teaching recursive thinking. Once you “get it”, it changes the way you think about programming.

Code by Charles Petzold – Explains how computers work by starting with Boolean arithmetic and working towards RAM and video cards. Short read, but essential in today’s throw-away culture.

Don’t Make Me Think – I’ve bought this book 3 times, every time someone borrows it they keep it. Essential if you are doing ANY kind of UI or UX work.

Test-Driven Development: By Example – Nice and practical. Resolves a lot of the questions that surround TDD like “How much should I test?” and “What should I test?”. Shows how TDD is less about the tests and more about the design of your code.

Writing Secure Code (2nd Edition) – You won’t work for me or with me if you don’t own this unless there is a threat involved.

Clean Code – Robert Martin – An excellent book for learning to recognize bad software.

Working Effectively with Legacy Code – Michael Feathers – Useful refactoring techniques combined with useful testing patterns. As a programmer, new or veteran, most of your time will be spent working on code you didn’t write.

My Git, Mercurial and Powershell setup

I’ve been using both Git and Mercurial for a while and I’ve been fine with the standard command line tools for both. Last year, prompted by a co-worker, I started to look at using alternative consoles on Windows. I tried using just a standard Powershell prompt and that worked for a while, but I wanted a little bit more power and configuration. So I looked at Console2 and ConEmu. I’ve settled on ConEmu starting a Powershell prompt for now, but I wanted to focus on how I customize the Powershell prompt, and my .gitconfig, to allow me to work more efficiently with Git and Mercurial.

First I looked into custom Git prompts, starting with posh-git. Keith has done a wonderful job creating a custom Powershell prompt, as well as enhancing the overall Git experience through tab completion. I personally found it to be too slow on most of the repos I work with, so I ditched it (for now). If you like posh-git, I’d recommend these two posts by Phil Haack as excellent starting points for installing and configuring it.

I’ve been using the combined Mercurial and Git Powershell prompt written by Matthew Manela and I really like it. It displays unstaged changes better than posh-git does, and it has been really fast no matter how large the repository. The only line I added to my Powershell profile is a call to load the Visual Studio environment variables:

cmd /c """C:\Program Files `(x86`)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat""" ""x86""

I mentioned that I use Git a lot at work; well, that’s half-true. I use the Git-SVN bridge a lot at work. It gives me the flexibility to create local branches while still connecting to my group’s SVN repository. Someday we’ll move to a full Git repository, but we just moved a lot of developers off of TFS and onto Subversion, and we want to wait a while before shaking up their entire world again. I use a lot of custom aliases in my .gitconfig and a global .gitexcludes file.

[alias]
    aa = add --all .
    st = status
    br = branch
    cl = clone
    co = checkout
    ci = commit
    sr = svn rebase
    sci = svn dcommit
    fu = reset --hard
    lg = log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --
    pu = !"git fetch origin -v; git fetch upstream -v; git merge upstream/master"
[diff]
    tool = kdiff3
    guitool = kdiff3
[core]
    autocrlf = true
    excludesfile = "~/.gitexcludes"
[difftool "kdiff3"]
    path = "C:/Program Files (x86)/KDiff3/kdiff3.exe"
[mergetool "kdiff3"]
    path = "C:/Program Files (x86)/KDiff3/kdiff3.exe"
    rmdir = true

Most of the aliases are self-explanatory and are pretty common in .gitconfig files. The two most useful aliases I have in that file are the “pu” and the “fu” aliases.

git fu – Does a hard reset, dumping all of my changes and putting me back at HEAD.
git pu – Fetches from my origin remote, then from the upstream remote, and finally merges upstream/master into my current branch. It’s a handy shortcut for bringing my forks up to date.


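The listing of the file itself seems to have been lost here; below is a minimal sketch of a global excludes file consistent with what this post describes (the “CI” directory and .DS_Store are mentioned in the text; the Visual Studio entries are typical assumptions, not the original contents):

```gitignore
# Build output (typical Visual Studio suspects; assumed)
bin/
obj/
*.suo
*.user

# Where our Continuous Integration builds land when the build script runs
CI/

# OS X desktop metadata
.DS_Store
```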
My .gitexcludes file covers most of the usual suspects I’ve run into. The “CI” entry is custom to our work environment; that’s where our Continuous Integration builds end up when you run our build script. And .DS_Store is a blight on humanity that Apple needs to eradicate. There is a work-around for network drives at least, but if you plug in a USB drive, OS X will still create .DS_Store files on it, and the work-around will not erase existing .DS_Store files on network shares.

I haven’t set up any aliases in my Mercurial config; I haven’t really found the need to. I find the Mercurial commands to be much more intuitive and easier to remember than the Git commands. Mostly because “Git hates developers”.

* I’ve hesitated writing this for a while because I may seem negative about a very popular Powershell module for Git called posh-git. I understand that it works well for some people, and it has gotten faster since I first used it, but it is still too slow for my daily usage at work. I used it at home for a while, but I started to notice large pauses more and more whenever I entered a Git directory, so I switched to the module I talk about above. There is a way around the prompt performance issues on a per-repo basis, but if I have to turn off the custom prompt completely, posh-git loses some of its appeal to me. We had Keith, and some other Git folks, on the podcast a while back to discuss the entire Git-on-Windows situation, including posh-git.

How far behind are Microsoft developer frameworks in terms of design?

From the article “Design Patterns 15 Years Later: An Interview with Erich Gamma, Richard Helm, and Ralph Johnson”:


Erich Gamma: Yes, and it is funny that you mention the iPhone. The iPhone SDK is based on the NeXTStep object-oriented frameworks like the AppKit. It already existed when we wrote Design Patterns 15 years ago and was one source of inspiration. We actually refer to this framework in several of our patterns: Adapter, Bridge, Proxy, and Chain of Responsibility.

Richard: Which is a great example of the enduring nature of good design, and how it survives different technical manifestations.


Emphasis mine.

In the Microsoft developer community, we are just now getting around to implementing patterns like MVC, Adapter, and Observer. People still argue over whether or not the MVC pattern is “necessary” to build a “working application”.


I guess it depends on whether or not you like good design?

Mocks versus stubs and fakes

I dislike using dynamic mocking/stubbing frameworks because they mean my tests have an extra dependency beyond just the SUT (System Under Test). I often find myself spending more time getting the mock to work correctly than working on my app code. The lambda + generics based mocking suites (Moq, RhinoMocks, etc.), IMO, complicate the test and make it unreadable in some situations.


Compare the two examples in this post. One uses RhinoMocks to create a stub of IDataReader and the other uses the DataTableReader to create a stub for the test. Which example is simpler and has less chance to fail due to the stub?

Using RhinoMocks

IDataReader reader = MockRepository.GenerateStub<IDataReader>();
reader.Stub(x => x.Read()).Return(true).Repeat.Times(1);
reader.Stub(x => x.Read()).Return(false);
reader.Stub(x => x["ID"]).Return(Guid.Empty);
reader.Stub(x => x["FullName"]).Return("Test User");

Using DataTableReader

DataTable table = new DataTable();
table.Columns.Add(new DataColumn("ID"));
table.Columns.Add(new DataColumn("FullName"));
DataRow row = table.NewRow();
row["ID"] = Guid.Empty;
row["FullName"] = "Test User";
table.Rows.Add(row);
DataTableReader reader = new DataTableReader(table);

Stubs/fakes give me more control over HOW the test fails and result in a test fixture that is easier to read. I’m not saying that mocks aren’t useful in certain situations, but I would favor a stub over a mock. IMO, your test should only fail because of the code it is testing, not because of a mock.
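The same preference carries outside .NET. A hand-rolled stub is just an object exposing the members the code under test actually touches. This JavaScript sketch (the names and shape are illustrative, not from any of the posts above) stubs a reader the same way the DataTableReader does:

```javascript
'use strict';

// Code under test: drains a reader and collects the FullName column.
function readNames(reader) {
    var names = [];
    while (reader.read()) {
        names.push(reader.get('FullName'));
    }
    return names;
}

// Hand-rolled stub: no framework, behavior driven by plain data.
function makeReaderStub(rows) {
    var i = -1;
    return {
        read: function () { i += 1; return i < rows.length; },
        get: function (col) { return rows[i][col]; }
    };
}

var names = readNames(makeReaderStub([{ FullName: 'Test User' }]));
// names is ['Test User']
```

If this test fails, it fails because of readNames, not because a framework was configured incorrectly.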


Although it is fun to say "Mock ME? No mock YOU!".

update: I forgot to link to Rob’s post that inspired this post. “Using Dependency Injection and Mocking For Testability

update to the update: Jeremy Miller and Nikola Malovic both pointed out that I’m using the terminology incorrectly. It turns out I don’t specifically hate mocks themselves; I dislike using dynamic mocking/stubbing frameworks due to the extra dependency they introduce into my tests. Thanks for the corrections. Back to reading Fowler for me!

JavaScript: Not for the faint at heart?

JavaScript: A tool too sharp?

Script# (Script Sharp) – writing javascript in C#

Both Jimmy and Roy have great posts discussing JavaScript. Roy looks at it as a C# developer, lured by the many, many articles claiming jQuery is the only thing that makes JavaScript worth using, and at using Script# to abstract away some of the messiness and pain usually associated with writing JavaScript. Jimmy discusses the merits of JavaScript itself and how it has changed how he approaches writing C# code.

One thing I like to point to is a great quote I heard on Twitter

Java is to JavaScript as ham is to hamster


JavaScript actually has more in common with Scheme or Lisp than it does with Java or C#. I first realized this when I saw that Douglas Crockford had re-written all of the examples in The Little Schemer in JavaScript. It’s easy to miss that fact when you see all of the pseudo-OOP noise like “var foo = new Foo();”. But when you see how trivial it is to implement something like a map method in JavaScript, you realize how powerful the language can be. Most of the hatred for JavaScript, I’ve found, comes from two things:

  1. Broken DOM implementations – every browser’s implementation of the DOM is broken in one respect or another.
  2. A misunderstanding of either scope or inheritance.
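To see how trivial a map really is, here is a sketch (not Crockford’s version, just a minimal illustration):

```javascript
'use strict';

// A minimal map: apply fn to each element, returning a new array.
function map(list, fn) {
    var result = [];
    for (var i = 0; i < list.length; i += 1) {
        result.push(fn(list[i], i));
    }
    return result;
}

var doubled = map([1, 2, 3], function (n) { return n * 2; });
// doubled is [2, 4, 6]
```

Functions are plain values that can be passed around, which is the Scheme heritage showing through.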

Roy has a great point about the lack of good tooling surrounding JavaScript. There are excellent libraries like jQuery and PrototypeJS, but the usual tool support (IntelliSense, refactoring, profiling) is a little harder to come by. I’ll address this in another post, as I feel a lot of people are new to JavaScript and struggling along with substandard tools.