Tuesday, June 24, 2008

Crap4J, Hudson and Windows


There is now a Crap4J plugin for the Hudson continuous integration server, thanks to Daniel Lindner.

For those of you unfamiliar with Crap4J, it is a metric that "combines cyclomatic complexity and code coverage from automated tests to help you identify code that might be particularly difficult to understand, test, or maintain".
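
For the record, the formula behind the score is CRAP(m) = comp(m)^2 x (1 - cov(m)/100)^3 + comp(m), where comp(m) is the method's cyclomatic complexity and cov(m) is its test coverage percentage. Here's a tiny sketch of the calculation in Java (my own illustration, not code from Crap4J itself):

// Illustrative only -- not part of Crap4J.
// Computes CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)
public class CrapScore {

    public static double crap(int complexity, double coveragePercent) {
        double uncovered = 1.0 - (coveragePercent / 100.0);
        return complexity * complexity * Math.pow(uncovered, 3) + complexity;
    }

    public static void main(String[] args) {
        System.out.println(crap(5, 0));    // 30.0 - no tests at all
        System.out.println(crap(5, 100));  // 5.0 - full coverage leaves just the complexity
    }
}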

The Hudson Crap4J plugin maintains "Crappyness Trends", which is a feature missing from the standard Crap4J reports. It also shows details of any methods that exceed the crap threshold.

In order to use the plugin, you must first download the Crap4J Ant task and integrate it into your Ant build file. If you're running Windows, this is where you're likely to run into a roadblock: a bug stops the Ant task from working out the Crap4J home. The workaround is to set the ANT_OPTS environment variable:
set ANT_OPTS="-DCRAP4J_HOME=c:\java\tools\crap4j-ant"
where c:\java\tools\crap4j-ant is the location of your Crap4J Ant tasks.

Once your Ant build is producing the Crap4J reports, you're ready to integrate them into Hudson.

After downloading and installing the plugin, you'll need to add the Ant target that runs the Crap4J task to your job's build step. In the example, I've set up a crap4j target.

Then, on Windows, you'll need to add the Java option that lets the Ant task find the Crap4J home. Click the "Advanced..." button in the Build section and, in the Java Options field, enter
-DCRAP4J_HOME=c:\java\tools\crap4j-ant
where c:\java\tools\crap4j-ant is the location of your Crap4J Ant tasks.

Then, in the Post-build Actions section, specify the location of the output from your Crap4J Ant task.


The next time Hudson runs your build, look for the toilet-roll icon on the left of the dashboard for your Crap details.

A few comments on the plugin (currently at v0.2):
  1. It would be great to be able to adjust the CRAP threshold. The default threshold of 30 is really too high for new code, and I typically set it to 15 (see the arithmetic after this list).
  2. I'd also love to be able to view the complexity, coverage and CRAP scores for all methods, not just the scores for the methods over the CRAP threshold.
  3. When the crap method percentage is less than 1, the Crappyness Trend chart does not show any percentage figures on the scale.
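
To put those thresholds in perspective using the formula above: a completely untested method with cyclomatic complexity 5 scores 5 x 5 x 1 + 5 = 30, right on the default threshold, whereas a threshold of 15 also catches untested methods of complexity 4 (4 x 4 + 4 = 20); only untested methods of complexity 3 or less (3 x 3 + 3 = 12) slip under it.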

Monday, June 23, 2008

Static imports for Hamcrest and Theories

The Hamcrest and Theories features of JUnit 4.4 rely on a number of static imports.

Code completion for static imports is tricky. For example, if I want to use the both Matcher, I first have to remember that it lives in the JUnitMatchers class, type JUnitMatchers (or JUM), hit ctrl-space to get code completion to fill in the import, then type .both and finally remove the JUnitMatchers prefix.

As mentioned in the JUnit 4.4 release notes, Eclipse provides a Favorites preference that automatically includes your favourite classes in code assist. For example, after setting up JUnitMatchers in your Favorites, you can type both, hit ctrl-space, and Eclipse will add the static import for JUnitMatchers.both.

Adding the static types shown here to your Eclipse Favorites will make your JUnit 4.4 journey a lot smoother.
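
To make that payoff concrete, here's a rough sketch of a test class built around those static imports. The GreetingTest class and its greet method are made up purely for illustration; the matchers and annotations are the ones that ship with JUnit 4.4 (JUnitMatchers, CoreMatchers, Assume and the experimental Theories runner):

import static org.hamcrest.CoreMatchers.allOf;
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;
import static org.junit.Assume.assumeThat;
import static org.junit.matchers.JUnitMatchers.both;
import static org.junit.matchers.JUnitMatchers.containsString;

import org.junit.Test;
import org.junit.experimental.theories.DataPoint;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class GreetingTest {

    @DataPoint public static String ALICE = "Alice";
    @DataPoint public static String EMPTY = "";

    // A plain Hamcrest-style assertion using the static imports above.
    @Test
    public void greetingMentionsTheName() {
        assertThat(greet("Alice"), both(containsString("Hello")).and(containsString("Alice")));
    }

    // A theory: assumeThat (another static import) skips data points that don't apply.
    @Theory
    public void greetingMentionsAnyNonEmptyName(String name) {
        assumeThat(name.length() > 0, is(true));
        assertThat(greet(name), allOf(containsString("Hello"), containsString(name)));
    }

    // Hypothetical method under test, just to keep the sketch self-contained.
    private static String greet(String name) {
        return "Hello, " + name + "!";
    }
}

With org.junit.Assert, org.junit.Assume, org.junit.matchers.JUnitMatchers and org.hamcrest.CoreMatchers in your Favorites, each of those static imports is a ctrl-space away.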



PS. The both Matcher allows you to create assertions such as:
assertThat(e.getMessage(), both(startsWith("Invalid environment")).and(containsString(environmentName)));
as opposed to the more common:
assertThat(e.getMessage(), allOf(startsWith("Invalid environment"), containsString(environmentName)));
It's debatable which is cleaner. The first statement reads closer to plain English, but it is longer, more complex to construct, and can't have additional matchers added in the way that allOf can.


Sunday, June 1, 2008

On the eradication of software defects

I loved Andy Glover's hip comparison of mousetraps to testing tools and mice to defects.

I live in a country that had no mammals, other than a few bats, and no software defects, until man arrived around 1000 years ago. The introduced mammals have had a devastating effect on the native biodiversity. A desire to protect the remaining species has led New Zealand to become world-renowned for the removal of introduced species. I wish I could say the same for software defects.

K.P. Brown and G.H. Sherley, in their paper describing the eradication of possums on Kapiti Island, show the following success rates for different phases of the trapping programme.



The first phase, Commercial Trapping, resulted in a 24% trapping success rate. Once trapping stopped being a commercial proposition, the Intensive Control phase saw four trappers paid to set up to 1,500 traps per night, with a success rate of 0.7%. The final Eradication phase introduced dog teams in addition to the trapping programme. At this stage, the trapping success rate was down to 0.007%, and dogs caught the remaining 32 trap-shy possums.

I suspect that similar success rates would be encountered in finding software defects. A good unit testing program, at a cost of 3-4 lines of test code for each line of code under test, might yield a success rate similar to Commercial Trapping. Some of the remaining bugs would be detected in intensive system testing. Many would escape undetected into production.

Most development shops don't even make it through the Commercial Trapping phase.

The true cost of the $1 mouse trap is in the time, and cheese, taken to set it.

Agitar changed the equation. At the press of a button, Agitator would set the traps for me hundreds if not thousands of times. This would drive my unit testing past the Commercial Trapping phase and into Intensive Control territory. With a bit more tuning, I could even start dreaming of bug eradication.

My hat goes off to those hardy souls who rid Kapiti Island of possums (and mice, rats, stoats etc.) and to the folks at Agitar who wanted to make it easier for us to eradicate those damn electronic vermin.