Category Archives: Tutorials

Modern Javascript Development: The arguments object, overloads and optional parameters

Reassign JavaScript Function Parameters In Reverse Order, Or Lose Your Params – In which Derek discovers that the arguments object is a data structure with an identity crisis. 😉

That’s just bad, and it’s not specific to node.js. In my opinion, you should use the short-circuiting behavior of the || operator to assign default values to your parameters instead of relying on ordinal indexes into the arguments pseudo-array.
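Something like this (a minimal sketch; the function and parameter names are made up):

function greet(name, greeting) {
    // || short-circuits: if the left side is falsy, the right side is used
    name = name || "World";
    greeting = greeting || "Hello";
    return greeting + ", " + name + "!";
}

greet();              // "Hello, World!"
greet("Derek", "Yo"); // "Yo, Derek!"

One caveat: || treats any falsy argument ("", 0, false) as missing, so this pattern only works when those are never legitimate values.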

Modern Javascript Development: constructors and objects

The premise that someone would pass in b and c but not a is also weird. But, whatever. It does cause a problem due to the weird nature of the arguments object. The big gotcha here is that JavaScript doesn’t support overloads.

In most languages that do support overloads, you would just define two different functions: one that takes three arguments and one that takes two. But that won’t work in JavaScript, since function definitions are processed top-down and the last definition of a name clobbers any previous one. So in the Fiddle posted above, the two-argument function is the only one that exists.
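A quick sketch of the clobbering (hypothetical function names):

function add(a, b, c) {
    return a + b + c;
}

// This declaration silently replaces the one above
function add(a, b) {
    return a + b;
}

add(1, 2, 3); // 3, because the three-argument version no longer exists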

Derek is correct that the arguments object is funky. It’s array-like, but not really an array: it has a “length” property, but none of the really useful methods like pop or push. So assigning the variables in reverse order does work, for the reason you would expect knowing that JavaScript is interpreted top-down. I think a better way would be to convert the arguments object to a REAL LIVE BOY ARRAY and access the values from that.
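Something like this (a minimal sketch, not Derek’s exact code):

function muhFunction() {
    // Borrow slice from Array.prototype to copy arguments into a real array
    var args = Array.prototype.slice.call(arguments);

    // Now the full Array API is available: pop, push, reverse, etc.
    var last = args.pop();
    console.log(args.length, last);
}

muhFunction("a", "b", "c"); // logs: 2 "c"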

It has the same effect, but in my opinion it’s a little easier to understand.

Methods should include the state needed to execute them as parameters

I’ve been seeing the following pattern in code that I’ve been working on lately.

var muhClass = new MuhClass();
muhClass.MuhProp = "foo";
muhClass.MuhOtherProp = "bar";

muhClass.MuhMethod();


public class MuhClass
{
    public string MuhProp { get; set; }
    public string MuhOtherProp { get; set; }

    public void MuhMethod()
    {
        Console.WriteLine(MuhProp + MuhOtherProp);
    }
}

This annoys me. I don’t like having to set properties before I call a method; I would prefer the method signature to contain all of the data needed to execute the method.


public void MuhMethod(string muhProp, string muhOtherProp)

In fact, if you can use default values, use them, or use overloads.

public void MuhMethod(string muhProp, string muhOtherProp = "BallZacks")

public void MuhMethod(string muhProp)
{
    MuhMethod(muhProp, "NutButter");
}

How do you start a new project with TDD?

Bobby Johnson (@NotMyself on Twitter) wrote a post detailing how and why he “tests reality” when he starts a new project and gives two examples of what he means.

node.js:

'use strict';

var assert = require('assert');

exports.test_reality = function(test) {
    test.equals(true, true, 'true should still be true');
    test.done();
};

C#:

using NUnit.Framework;

namespace Simple.Data.SqliteTests
{
    [TestFixture]
    public class RealityTests
    {
        [Test]
        public void true_should_be_true()
        {
            Assert.That(true, Is.True);
        }
    }
}

At first glance, those tests look silly because all they are doing is testing that the test framework is working correctly. But Bobby has a different reason for including them.

So when I am setting up my basic project structure and automation, I like to have at least one unit test ready to run. This allows me to test the automation script and ensure failures in tests cause failures in builds. In .NET I want to confirm that the compilation step fails properly and the unit tests fail properly. In node.js, I want to ensure that linting errors fail as well as unit tests.

That’s a valid point; however, this is more of a configuration test of your infrastructure. I personally wouldn’t write a test like this. I’d want the test to have some value before I committed it to the main repository or the build/CI server.

I strongly feel that developers should use the same build procedure on their desktops as on the CI server. IDEs like Visual Studio abstract away the building of the application, so we often have to write scripts, rakefiles, psake files, whatever, to do all the things that Visual Studio doesn’t do when it builds your application: running the tests and reporting the results, building installers, or setting test/staging/production values in config files. I like to have a repeatable build system set up for the projects I’m working on, so I tend to re-use scripts or keep things really simple. That gives me confidence that I have configured the build script correctly and that it will work properly when I put it on the CI server, which lets me start focusing on the design of my application sooner.
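To make the node.js side of that concrete, here is the kind of build script I mean. This is a hedged sketch: the jshint and nodeunit commands and the lib/ and test/ paths are assumptions, so substitute whatever lint and test tools your project actually uses.

'use strict';

var exec = require('child_process').exec;

// Run lint first, then the tests; exit non-zero so the CI build fails too.
exec('jshint lib/ test/', function (lintErr) {
    if (lintErr) {
        console.error('Lint failed. Failing the build.');
        process.exit(1);
    }
    exec('nodeunit test/', function (testErr) {
        if (testErr) {
            console.error('Tests failed. Failing the build.');
            process.exit(1);
        }
        console.log('Build OK.');
    });
});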

Must reads for new programmers

Iris Classon has a nice post up listing her top ten books to read this year.

The Little Schemer – It’s not about the Scheme language at all, it’s about teaching recursive thinking. Once you “get it”, it changes the way you think about programming.

Code by Charles Petzold – Explains how computers work by starting with Boolean arithmetic and working towards RAM and video cards. Short read, but essential in today’s throw-away culture.

Don’t Make Me Think – I’ve bought this book three times; every time someone borrows it, they keep it. Essential if you are doing ANY kind of UI or UX work.

Test-Driven Development: By Example – Nice and practical. Resolves a lot of the questions that surround TDD like “How much should I test?” and “What should I test?”. Shows how TDD is less about the tests and more about the design of your code.

Writing Secure Code (2nd Edition) – You won’t work for me or with me if you don’t own this, unless there is a threat involved.

Clean Code – Robert Martin – An excellent book for learning to recognize bad software.

Working Effectively with Legacy Code – Michael Feathers – Useful refactoring techniques combined with useful testing patterns. As a programmer, new or veteran, most of your time will be spent working on code you didn’t write.

My Git, Mercurial and Powershell setup

I’ve been using both Git and Mercurial for a while, and I’ve been fine with the standard command line tools for both. Last year, prompted by a co-worker, I started to look at alternative consoles on Windows. I tried using just a standard Powershell prompt, and that worked for a while, but I wanted a little more power and configurability. So I looked at Console2 and ConEmu, and I’ve settled on ConEmu hosting a Powershell prompt for now. But here I want to focus on how I customize the Powershell prompt, and my .gitconfig, to work more efficiently with Git and Mercurial.

First I looked into custom Git prompts, starting with Posh-Git. Keith has done a wonderful job creating a custom Powershell prompt, as well as enhancing the overall Git experience through tab completion. I personally found it to be too slow on most of the repos I work with, so I ditched it (for now). If you like posh-git, I’d recommend these two posts by Phil Haack as excellent starting points for installing and configuring it.

I’ve been using the combined Mercurial and Git Powershell prompt written by Matthew Manela, and I really like it. It displays unstaged changes better than posh-git does, and it has been really fast no matter how large the repository. The only line I added to my Powershell profile is a call to load the Visual Studio environment vars.

cmd /c """C:\Program Files `(x86`)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat""" ""x86""

I mentioned that I use Git a lot at work; well, that’s half-true. I use the Git-SVN bridge a lot at work. It gives me the flexibility to create local branches while still connecting to my group’s SVN repository. Someday we’ll move to a full Git repository, but we just moved a lot of developers off of TFS and onto Subversion, and we want to wait a little while before shaking up their entire world again. I use a lot of custom aliases in my .gitconfig and a global .gitexcludes file.

[alias]
    aa = add --all .
    st = status
    br = branch
    cl = clone
    co = checkout
    ci = commit
    sr = svn rebase
    sci = svn dcommit
    fu = reset --hard
    lg = log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --
    pu = !"git fetch origin -v; git fetch upstream -v; git merge upstream/master"
[merge]
    tool = kdiff3
[core]
    autocrlf = true
    excludesfile = "~/.gitexcludes"
[diff]
    guitool = kdiff3
[difftool "kdiff3"]
    path = "C:/Program Files (x86)/KDiff3/kdiff3.exe"
[mergetool "kdiff3"]
    path = "C:/Program Files (x86)/KDiff3/kdiff3.exe"
[svn]
    rmdir = true

Most of the aliases are self-explanatory and are pretty common in .gitconfig files. The two most useful aliases I have in that file are the “pu” and the “fu” aliases.

git fu – Does a hard reset, effectively dumping all of my changes and putting me back at HEAD.
git pu – Fetches from my origin repo, then from the upstream remote, and finally merges upstream/master in. It’s a handy shortcut for bringing my forks up to date.

*.DS_Store
*.idea
*.dbmdl
*.user
*.suo
*.cache
*.log
*.log.*
[Oo]bj
[Bb]in
ErrorLogs
*~
*.swp
_ReSharper*
*.db
*.orig
*.rej
*.vs10x
CI
*.docstates

That is the contents of my .gitexcludes file. It covers most of the usual suspects I’ve run into. The “CI” entry is custom to our work environment; that’s where our Continuous Integration builds end up when you run our build script. And .DS_Store is a blight on humanity that Apple needs to eradicate. There is a work-around for network drives at least, but if you plug in a USB drive, OS X will still create .DS_Store files on it, and the work-around will not erase existing .DS_Store files on network shares.

I haven’t set up any aliases in my Mercurial config; I haven’t really found the need to. I find the Mercurial commands much more intuitive and easier to remember than the Git commands. Mostly because “Git hates developers”.

* I’ve hesitated to write this for a while because I may seem negative about a very popular Powershell module for Git called posh-git. I understand that it works well for some people, and it has gotten faster since I first used it, but it is still too slow for my daily usage at work. I used it at home for a while, but I started to notice the large pauses more and more when I would enter a Git directory. So I switched to the module I talked about above. There is a way around the prompt performance issues on a per-repo basis, but if I have to turn off the custom prompt completely, posh-git loses some of its appeal to me. We had Keith, and some other Git folks, on the podcast a while back to discuss the entire Git-on-Windows situation, including posh-git.

A third option for using jQuery templates

Dave Ward has a great post about defining jQuery templates. There’s a third method that he doesn’t mention in his post: the “embed-and-grab/clone” method. I’ve used this method before for simple element cloning of templates.

<div id="templates">
    <div id="hello">
        <p>Hello, ${name}.</p>
    </div>
</div>

We can create a div, or really any element you want, to hold our templates. What does this gain us? Well, if we are using a design tool, we can see what the template will look like before we have to render it, which may make things easier for a designer on your project. We also don’t have to make an AJAX call to retrieve an external template, although Dave talks about how this really isn’t an issue if you have caching set up correctly on your server. And frankly, for the number of bytes in a typical template, I can’t imagine any successful AJAX request taking very long.

To use the templates, you simply grab the templates div and detach it from the DOM. If you assign the detached elements to a var, you can use jQuery selectors to find the one you want, because, remember, most jQuery methods return the jQuery object itself.

var person = { name: "Dave" }; // example data for the template

var templates = $("#templates").detach();
$.tmpl(templates.find("#hello").text(), person);

Why do you want to use the detach method rather than the remove method? The detach method removes the elements from the DOM but keeps any jQuery data associated with them intact, meaning you can use the $.data() method to attach data to your templates and still access it before you compile them.
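A tiny sketch of what that buys you (the data key is made up):

// Data attached with .data() survives detach(), but would be lost with remove()
$("#hello").data("author", "Dave");

var templates = $("#templates").detach();

// The data is still readable before the template is compiled
templates.find("#hello").data("author"); // "Dave"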
Update: Dave Ward points out that I need to use “templates.find()” rather than the default selector method on the jQuery object. Noted and updated.