By Brad Schoening, Doran Jones Managing Director
A new development team was formed and tasked with rebuilding a risk management system on the bank's strategic platform. The project involved feed processing: taking in various data feeds, then aggregating and enriching the data. Goals of the rewrite included new behavioral features and improved performance.
Development work was already underway in Python with a team of four developers and one lead. A Doran Jones Agile consultant was brought in to help facilitate developer-facing testing. During an initial review of the application with the development team, we learned that the team had already built an integration test suite consisting of ten test scenarios. These existing scenarios were all written as Python unit tests.
However, there were some problems with building upon and expanding this test suite:
- Single tests: Since each test case was implemented as an individual unit test in Python code, test cases could not easily be separated from the tests themselves. Adding N new test cases required adding N new unit tests.
- Ad-hoc context: Each unit test set up its own test context, and it was difficult to tell from the unit test code which part of the setup was test data and which was test context configuration. Creating new test scenarios required a good deal of knowledge of the application code.
- Readability: The tests themselves were not transparent about which business rules were being tested. Only a developer familiar with the code base could interpret and explain the relevance and details of each test.
The code could obviously be refactored to support multiple test cases per scenario, separate test fixtures from unit test cases, and separate test data from the code, but this would require building more test infrastructure and then retrofitting it to each of the existing ten scenarios.
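The kind of refactoring described above can be sketched as a data-driven unit test: test cases become rows of data, so adding a case means adding a row rather than a new test method. This is a hypothetical illustration, not the bank's actual code; the `aggregate` function and the case data are invented for the example.

```python
import unittest

def aggregate(feed):
    """Toy stand-in for the feed-aggregation logic under test."""
    return sum(feed)

# Test cases separated out as data: (description, input feed, expected total).
# Adding a new case is one new row here, not a new unit test.
CASES = [
    ("single trade", [100], 100),
    ("several trades", [100, 200, 300], 600),
    ("empty feed", [], 0),
]

class AggregationTest(unittest.TestCase):
    def test_aggregation_cases(self):
        # One test method covers all N cases via subTest,
        # instead of N separate test methods.
        for description, feed, expected in CASES:
            with self.subTest(case=description):
                self.assertEqual(aggregate(feed), expected)
```

Even so, this approach still keeps the test data inside Python source files, so it addresses only the first of the three problems above.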
Adopting Behavior-Driven Development
It was thus apparent that a new integration test infrastructure was needed. One logical choice would be to refactor and extend the existing test infrastructure. We also wished to consider open source tools that would provide a framework we could leverage. The BDD tools Lettuce and FitNesse made the short list to evaluate.
We next investigated in-house support for either of these tools, and found significant existing support for Lettuce. In the customer’s environment, Lettuce had the following benefits:
- Zero install: the firm's platform already integrated the Lettuce open source library
- Usability: an in-house development environment provided support for Lettuce test files and allowed developers to execute tests with a quick key (F9)
- QA dashboard: the firm's existing code-coverage quality metrics were already configured to support Lettuce and include these tests
- UI: an in-house application had been built to display Lettuce feature files to business analysts and software testers. It provided users with the ability to sign off on features, review test data, and see the success of test scenarios.
To complete our evaluation, we set up conference calls with two separate development teams in London who had experience using Lettuce. We asked them about their experience with the tool, its benefits, and any potential issues or risks. The feedback was overwhelmingly positive. We also developed a small proof of concept to understand how to build and run Lettuce tests in the bank's environment.
Next, we formulated plans for a BDD pilot project. The goal was to rewrite the existing ten unit-test scenarios in Lettuce, integrate the coverage with the QA dashboard, and set up a UI for use by non-programmer staff.
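The shift from unit-test code to feature files can be sketched as follows. This is a hypothetical example, not one of the project's actual scenarios; the `step` decorator and `world` object below are tiny stand-ins for `lettuce.step` and `lettuce.world` so the sketch runs without the Lettuce package installed.

```python
# A Lettuce feature file (e.g. risk_feed.feature) reads like plain English:
#
#   Feature: Feed aggregation
#     Scenario: Aggregate trades from a single feed
#       Given a feed containing 3 trades worth 100 each
#       When the feed is aggregated
#       Then the total exposure is 300
#
# Each sentence is matched by regex to a Python step definition.
import re

STEP_REGISTRY = {}  # regex pattern -> step function

def step(pattern):
    """Stand-in for lettuce.step: register a function to match a sentence."""
    def register(func):
        STEP_REGISTRY[pattern] = func
        return func
    return register

class _World(object):
    """Stand-in for lettuce.world: context shared across a scenario's steps."""

world = _World()

@step(r'a feed containing (\d+) trades worth (\d+) each')
def given_a_feed(step_obj, num_trades, value):
    world.trades = [int(value)] * int(num_trades)

@step(r'the feed is aggregated')
def when_aggregated(step_obj):
    world.total = sum(world.trades)

@step(r'the total exposure is (\d+)')
def then_total_is(step_obj, expected):
    assert world.total == int(expected), world.total

def run_scenario(sentences):
    """Match each plain-English sentence to a registered step and run it."""
    for sentence in sentences:
        for pattern, func in STEP_REGISTRY.items():
            match = re.match(pattern, sentence)
            if match:
                func(None, *match.groups())
                break
        else:
            raise AssertionError("no step matches: " + sentence)

run_scenario([
    "a feed containing 3 trades worth 100 each",
    "the feed is aggregated",
    "the total exposure is 300",
])
```

With this structure, adding a new test case means adding a new scenario to the feature file; no new Python code is required as long as the sentences match existing step definitions.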
The entire development team is pleased with the Lettuce integration tests and plans to enhance and extend them going forward. Future work will also include engagement with business analysts and software testers on the project to leverage their ability to create tests.
Key benefits of the project were:
- Separation of test cases from code
- A 50% reduction in Python test code
- Nearly identical test coverage (<0.16% variance)
From initial engagement to completed execution of the BDD tests took 11 weeks of elapsed time, which broke down as follows: evaluation, investigation and planning (5 weeks); development (2 weeks); and integration with the QA dashboard and UI configuration (4 weeks). One agile testing coach worked 2 days per week throughout the 11 weeks, for a total of approximately 70 man-days.
BDD with Lettuce is an effective tool for designing integration-level tests. While the development team benefits from the clarity of separating test cases from test fixtures, we expect even more benefit to come from collaboration with business analysts and software testers using the English-language syntax of BDD test cases.