Sunday, September 12, 2010

What's happening in my acceptance tests?

Agile Acceptance Testing allows us to describe desired behaviour using examples that capture the business intent. Good acceptance tests are written as plain-language specifications, not scripts. Implementation details are coded in a separate test "fixture" class.

One downside of this approach is a loss of transparency about what the tests are actually doing. The "fixtures" are often written by developers, which can require a leap of faith for testers to trust them. On a recent project, this trust was dented when the tests didn't do what they were supposed to be doing.

With this in mind, I set out to provide more insight into what our tests are actually doing, without undermining the principles of acceptance testing.

The result is a logging extension for Concordion. This adds a "tooltip" to the Concordion output HTML that shows log output when hovered over:


This tooltip is proving useful not only to testers, but also to developers wanting insight into what is happening in their tests and looking for performance improvements. In the example above, for instance, we were surprised to see the web page being loaded twice, and a number of element lookups being duplicated.

This approach could also be used for user documentation of the steps required to complete an action, potentially with screenshots, or even as an embedded screencast.

Implementation Details
We've added a new extension mechanism to Concordion 1.4.1 to make it easy to add features such as this.
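
For the curious, a custom extension boils down to implementing a single interface. Here's a minimal sketch - the class name and embedded CSS are made up for illustration, but ConcordionExtension and ConcordionExtender are the actual extension API:

import org.concordion.api.extension.ConcordionExtender;
import org.concordion.api.extension.ConcordionExtension;

// Concordion instantiates the extension class and calls addTo(), which is
// where listeners, CSS, JavaScript and other resources get registered.
public class MyTooltipExtension implements ConcordionExtension {

    @Override
    public void addTo(ConcordionExtender extender) {
        // For example, embed some (made-up) styling in the output HTML.
        extender.withEmbeddedCSS(".tooltip { display: none; }");
    }
}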

This extension is available on GitHub. It captures java.util.logging output and has a number of configuration options. You'll need to set the concordion.extensions system property to use it - see the README for details.
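
As a rough sketch, the property just needs to be set before the Concordion runner starts - for instance in a static initialiser in the fixture, or via -D on the command line. The extension class name below is an assumption on my part; check the README for the current one:

// Set before Concordion starts, e.g. in the fixture or a shared base class.
// The class name shown here is illustrative - see the README for details.
static {
    System.setProperty("concordion.extensions",
            "org.concordion.ext.LoggingTooltipExtension");
}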

For an example using this Concordion extension with WebDriver, see the demo project.
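
To give a feel for how the pieces fit together, here's a hypothetical WebDriver fixture (this is not the demo project's actual code - the URL and element names are invented). Any java.util.logging output written while a command executes is captured and shown in that command's tooltip:

import java.util.logging.Logger;

import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

@RunWith(ConcordionRunner.class)
public class LoginFixture {

    private static final Logger LOGGER = Logger.getLogger(LoginFixture.class.getName());

    private final WebDriver driver = new FirefoxDriver(); // cleanup omitted for brevity

    public String loginAs(String username) {
        LOGGER.info("Opening login page");                   // appears in the tooltip
        driver.get("http://localhost:8080/login");           // hypothetical URL
        LOGGER.info("Logging in as " + username);
        driver.findElement(By.name("username")).sendKeys(username);
        driver.findElement(By.name("login")).click();
        return driver.getTitle();
    }
}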

UPDATES
Oct 4  2010. Changed github link to new concordion-extensions project
Oct 6  2010. Source code moved to trunk of Concordion
Oct 24 2010. Updated to reference Concordion 1.4.1 and concordion-extension-demo project
Jan 03 2011. Project moved to Google Code and docs to Concordion site.
Oct 26 2014. Updated links to new Github projects.

3 comments:

Unknown said...

Nigel, nice work on the screenshot extensions. Something we found in our implementation is that sometimes screenshots are not enough ;) If you are GUI testing an application that has a lengthy page, then the screenshot only captures the current state of the page. It is also static. Our implementation has the ability to toggle between PNG or HTML, which uses two different mechanisms to save the current state of the application. The upside of HTML is that it allows the user to interact with the report - which is important for us as errors are thrown at the top of our page, which may not be visible in the browser window depending on the flow of the test.

Nigel said...

Thanks for the feedback.

One of the nice things about the WebDriver implementation is that it captures the full page, even if it is lengthy and portions are not currently on the screen. (I've truncated the 3rd image above, but you do get the full-page image).

Is this what you were meaning? Initially I read "interact" to mean mouse-over etc, but this would mean capturing JavaScript, CSS etc.

Unknown said...

It's a bit of both. The full page is definitely a plus! In our implementation, we find it easier to deal with the HTML to see what values drop-downs have etc... and you're right - you do need to capture the CSS/images/JS. For the time being we just grab the HTML, which links back to the server / application under test for additional resources. This is equivalent to the behaviour of the Canoo WebTest framework, which we are migrating away from. I've raised a bunch of story cards to upgrade our framework and to spike bringing in the concordion-extensions, with the idea of pushing back to the project any enhancements we make :)