Saturday, May 13, 2017

Improving a Concordion specification

Katrina Clokie will be running a workshop on three styles of automation at Let's Test. The three styles use different test runners and have different patterns of using WebDriver. One of the styles uses Concordion to drive the tests, and I'd like to suggest a few areas for improvement in the Concordion specification while retaining its original feel. (Katrina has acknowledged that the Concordion suite is using a traditional approach.)

Markdown

Concordion 2.0 introduced Markdown support, in addition to HTML and Excel. Markdown makes it easy to create rich documents using plain text and is easier to read and write than HTML. We can replace the HTML specification:
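As an illustrative sketch (the amounts, method name and wording here are hypothetical, not the actual IRD PAYE spec), a Concordion HTML specification looks something like this:

```html
<html xmlns:concordion="http://www.concordion.org/2007/concordion">
<body>
  <p>
    When the gross fortnightly pay is
    <span concordion:set="#amount">1,024.00</span>,
    the PAYE deducted is
    <span concordion:assertEquals="calculatePaye(#amount)">130.38</span>.
  </p>
</body>
</html>
```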



with:
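The equivalent Markdown is far lighter; again as a sketch with hypothetical names and values, a Concordion Markdown specification sets a variable and asserts a result through ordinary links:

```markdown
When the gross fortnightly pay is [1,024.00](- "#amount"),
the PAYE deducted is [130.38](- "?=calculatePaye(#amount)").
```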



When previewed, for example on GitHub or in an IDE, the Concordion commands are shown as links, with the command displayed when you hover over the link:


Use of assertTrue

I view the assertTrue command in Concordion as a "spec smell" ("a surface indication that usually corresponds to a deeper problem in the system" [1]).

At the very least, assertTrue should be used sparingly, since on failure it only displays true or false, which necessitates additional debugging whenever an issue occurs.
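The contrast is easy to see in Markdown (a sketch; the method names are hypothetical):

```markdown
<!-- assertTrue: on failure, Concordion can only report that false was returned -->
[true](- "c:assertTrue=isPayeCorrect(#amount)")

<!-- assert-equals: on failure, both the expected and actual values are shown -->
The PAYE deducted is [130.38](- "?=calculatePaye(#amount)").
```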

Acceptance Criteria and Examples

As described in Seeing the wood and some trees, including both the acceptance criteria and some examples helps us discover gaps and gain a deeper understanding of the requirements.

The IRD PAYE specification above documents the acceptance criterion as an example which, in my view, is too high-level for an example. By omitting the concrete details of the example, we fail to ask the "what-if" questions, which limits our discovery of additional interesting examples.

Writing the specification as follows pulls the test up to an acceptance criterion and shows the concrete example.
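A sketch of the shape this takes (the headings, amounts and method names are hypothetical, and the example command syntax assumes Concordion 2.0 Markdown):

```markdown
## Acceptance Criterion

Commas in the amount do not affect the PAYE calculated.

## [Example](- "c:example")

When the gross fortnightly pay is [1,024.00](- "#amount"),
the PAYE deducted is [130.38](- "?=calculatePaye(#amount)").
```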



This also removes the assertTrue spec smell. Note that we still show only the relevant details; e.g. the user type and tax year are not relevant, so they are handled in the fixture.

Seeing this example, I start asking "what-if" questions about the comma in the input amount field. Do we need it? If used, should it be restricted to separating groups of three digits? Or might our Indian customers want to express amounts in crores and lakhs, for example "1,00,00,000"?
  • Would "10,24.00" be a valid amount? (Yes currently)
  • What about "1024.0,0"? (No)
  • Or "1,2,,0,4.00"? (Yes currently)

These are all valid questions that result in an additional acceptance criterion about comma placement. We tend to only need to ask these questions once per system, and can then reuse any decision across the system. (This is also what tends to make those first few spec workshops / 3 amigo sessions / example mapping sessions slow. Once we have those points straightened out and part of our team's shared language, they don't need to be discussed in detail each time.)
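If the team decided to restrict commas to separating groups of three digits, that decision could be captured as fixture logic with a regular expression. A minimal sketch in Java (the class and method names are hypothetical, and the pattern is one possible interpretation of the rule):

```java
import java.util.regex.Pattern;

public class AmountValidator {

    // One possible rule: one to three leading digits, then comma-separated
    // groups of exactly three digits, with optional two-digit cents.
    private static final Pattern STRICT_AMOUNT =
            Pattern.compile("\\d{1,3}(,\\d{3})*(\\.\\d{2})?");

    public static boolean isValidAmount(String amount) {
        // matches() requires the whole string to fit the pattern
        return STRICT_AMOUNT.matcher(amount).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidAmount("1,024.00"));    // well-placed comma
        System.out.println(isValidAmount("10,24.00"));    // comma in wrong place
        System.out.println(isValidAmount("1,2,,0,4.00")); // scattered commas
    }
}
```

Under this stricter rule the three bulleted inputs above would all be rejected except correctly grouped amounts; the current behaviour, by contrast, accepts "10,24.00" and "1,2,,0,4.00".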

If we were to automate these examples, we would focus the example to only contain the context for the behaviour we want to show:
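Each focused example might then read like this (a sketch; the `validity` method name is hypothetical):

```markdown
The amount [10,24.00](- "#amount") is currently
[valid](- "?=validity(#amount)").
```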



Conclusion

The refactored specification is smaller and easier to read and write, with less chance of errors. Any test failures now clearly display both the expected and the actual values.

By cleaning up the specification, we have discovered other acceptance criteria that may be worth exploring, such as the placement of the comma in the input amount.

Other improvements could still be made to the specification, such as rewording the Given-When-Then to make it less clunky.

The updated code is available on GitHub.




