In a lot of ways, developer-time testing is a solved problem. That’s not to say test-driven development is always easy – there are still plenty of people issues and technical issues to sort through when you decide to start testing your application aggressively. That said, I think that bringing testing into every aspect of your software development lifecycle presents a much greater challenge, and is therefore more fun to tackle.

I get asked frequently how to automate testing for every team member. The reasoning usually goes something like this: “If test-driven development is good for programmers, why wouldn’t it be good for business analysts?” Feel free to replace “business analysts” with anyone on the project team – customer, project manager, quality assurance engineer, etc. It may seem a little radical, but I agree with them in principle.

In fact, I like to say that tests are the primary artifacts that drive the communication from one team member to another. More precisely, I mean an executable test that gets run with every build and fails when the application no longer does what we want. In this post, I describe how my team and I got this working on a very successful project.

I’ll set the stage by describing the application, the team makeup and the development process. The application was a “back-end” system, meaning that it had no graphical user interface. Messages flowed into the system, which in turn updated the state of the database and orchestrated various interactions with other external systems. After much processing, the application spat out a message to indicate success, failure or some variation of the two. The project ran full time for a little more than a year. The team consistently numbered about 15 people, sometimes growing to as many as 25. We had a fantastic war room, large enough for everyone on the team. And we had a very well-defined workflow for our Stories, with a heavy focus on testing at each step.

[Diagram: the Story workflow – customer prioritization → analysis → QA → development → QA → customer sign-off]

Yep, that’s right, two passes through Quality Assurance. I’ll come back to why. And just in case you think this sounds a little heavyweight, a Story would flow through this process in less than two weeks, with lots of opportunity for the all-important feedback loop. We usually had about six Stories flowing in parallel to keep everyone on the team fully utilized.

Step one was prioritization with the customer. As the business customers reviewed and prioritized Stories, they would conduct meetings to determine what each Story really meant. Inevitably, interesting edge cases would arise during these meetings. If an edge case was in scope for the Story, it would be documented in the body of the Story card. Those edge cases were the first stab at testing. Each one was essentially a comment from the customers saying, “…and don’t forget to test that the application will handle this scenario.” Over time, the customers learned that by documenting these interesting variations right alongside the Story, they were helping to make the application better.

Next, the Story would get a thorough analysis. That meant digging into the guts of the functionality to make sure every aspect was written down for the developer. As part of that, the analysts would create tables of test scenarios. In fact, some Stories consisted mostly of test cases. It’s surprising how well one can communicate complex functionality by giving examples. The amount of verbiage in the Story decreased as the analysts started to think more about how to test the functionality rather than how to describe it.

The analysts would also incorporate the edge cases identified by the business customers into the test cases. Frequently, the analysts would uncover even more variants, and those, too, would be documented in the Story as example test cases.

Ok, so far this doesn’t sound much different from what everyone does. The difference was that those tables of test cases identified by the analysts were executable! Ok…you caught me…they were nearly executable. We taught the analysts how to create test cases consumable by the Fit testing framework and FitNesse. The output of the analysis process was a textual description of the desired functionality with embedded, (nearly) executable example test cases.
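To make that concrete, here is a rough sketch of what one of those (nearly) executable tables might have looked like, paired with the kind of Fit column fixture that backs it. The fixture name, the payment-message scenario and the rule inside the method are illustrative assumptions, not the project’s actual code – only fit.ColumnFixture comes from the Fit framework itself.

```java
// A minimal sketch of a Fit column fixture backing an analyst-written table.
// In FitNesse, the table might read:
//
//   |PaymentMessageFixture            |
//   |amount |accountStatus |result()  |
//   |100.00 |active        |ACCEPTED  |
//   |100.00 |closed        |REJECTED  |
//   |-5.00  |active        |REJECTED  |
//
// PaymentMessageFixture and the rule below are hypothetical, used only to
// illustrate the mechanics.
import fit.ColumnFixture;

public class PaymentMessageFixture extends ColumnFixture {

    // Input columns: Fit populates these public fields from each table row.
    public double amount;
    public String accountStatus;

    // Output column: Fit calls methods whose column header ends with "()"
    // and compares the return value against the expected cell.
    public String result() {
        // In the real fixture this would hand the message to the application
        // code under test; the inline rule just keeps the sketch self-contained.
        if (amount <= 0 || !"active".equals(accountStatus)) {
            return "REJECTED";
        }
        return "ACCEPTED";
    }
}
```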

The third step in the process was a stop by the quality assurance folks. They would review the Story and think about how to test the functionality. They frequently discovered even more variations and potential impacts on the rest of the system’s functionality. They also possessed an intimate understanding of the custom FitNesse fixtures defined by the team, so they were able to streamline the tests by using meaningful variables, random number generators to increase the tests’ coverage, and so on.
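One way that kind of streamlining can look in practice is to let a table cell say “random” and have the fixture generate a valid value on the fly. This is a hedged sketch only – the fixture name, the keyword and the nine-digit account format are assumptions for illustration, not details from the project.

```java
// A sketch of a fixture input that accepts either a literal value or the
// keyword "random", so QA can broaden coverage without hand-picking data.
// AccountNumberFixture and the nine-digit format are illustrative only.
import java.util.Random;

import fit.ColumnFixture;

public class AccountNumberFixture extends ColumnFixture {

    private static final Random GENERATOR = new Random();

    // Input column: either an explicit account number or "random".
    public String accountNumber;

    // Output column: is the resolved account number structurally valid?
    public boolean valid() {
        String resolved = "random".equalsIgnoreCase(accountNumber)
                ? String.valueOf(100_000_000 + GENERATOR.nextInt(900_000_000))
                : accountNumber;
        // In the real fixture the resolved value would feed into the
        // message-processing code under test; here we just validate format.
        return resolved.matches("\\d{9}");
    }
}
```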

At this point, the Story was truly executable, meaning we could run it in FitNesse. It would of course fail, because the functionality hadn’t been implemented yet, and we would get a red bar. Red bars make a good starting point for writing code.

And here we are! Developers get to write some code to make that test pass.

We referred to our development approach as “Fit-First Development”. Just like test-driven development, where you start by writing a broken unit test, we would start with a broken Fit test and write just enough code to make it pass. In practice, we would drill down from the Fit test into the code, writing tests all the way down. We would sometimes expand the Fit test, and we would always write unit tests, in this case using JUnit. As we got the unit tests passing, we would pop our way back up to the Fit test. Sometimes we would get the Fit test working one table row at a time until the correct functional implementation emerged.
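The inner loop looked roughly like this: while the Fit test at the top stayed red, we would drive out the underlying classes with JUnit. The sketch below continues the hypothetical payment-message example from earlier; PaymentMessageProcessor and its behavior are assumptions standing in for the project’s actual domain code.

```java
// A minimal JUnit 4 sketch of the drill-down: unit tests written while the
// outer Fit test is still red. PaymentMessageProcessor is a hypothetical
// class standing in for whatever the Fit table actually exercised.
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PaymentMessageProcessorTest {

    // Stand-in for the application class the Fit table drives end to end.
    static class PaymentMessageProcessor {
        String process(double amount, String accountStatus) {
            if (amount <= 0 || !"active".equals(accountStatus)) {
                return "REJECTED";
            }
            return "ACCEPTED";
        }
    }

    @Test
    public void rejectsPaymentsAgainstClosedAccounts() {
        assertEquals("REJECTED", new PaymentMessageProcessor().process(100.00, "closed"));
    }

    @Test
    public void acceptsPositivePaymentsAgainstActiveAccounts() {
        assertEquals("ACCEPTED", new PaymentMessageProcessor().process(100.00, "active"));
    }
}
```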

As the tech lead on the team, pair programming all day long, I found the approach exhilarating. Never before had I felt so assured that the code my pair and I wrote actually did what the business wanted.

After the developers completed the code – i.e. made the Fit test show a green bar – they would pass the Story back to the QA team. One point here: the QA team was sitting in the same room. As mentioned, everyone, including the customer, was sitting in the same war room. Although my description might make it sound like chunks of work were being lobbed over the wall, that wasn’t the case. We talked back and forth continuously. There were times when the developers would encounter a contradiction between the plain-English description and the test scenarios. In those cases, it was a matter of minutes before the analysts and QA people were hunched over the developer workstation determining the correct answer.

So the Story, along with the code, flowed back into the hands of the QA team. They would do a thorough review of the functionality. Sometimes they would add more test scenarios that had become obvious, which might in turn send the Story back into development (the aforementioned feedback loop). Once QA was satisfied, they would call the customer representative over for a final functionality walk-through. Since this was a back-end system with no user interface, the Fit test plan provided a visual mechanism that even a technically challenged person could relate to. When the customer representative was satisfied, the final sign-off was given. In this manner, each and every Story went from start to finish.

A side effect of this approach was that a huge regression test suite was built up step by step over the course of the project. The executable Stories were stored in Subversion, right alongside the code. The Stories, and the tests contained therein, became first-class artifacts that warranted ongoing maintenance to ensure that functionality developed months earlier didn’t inadvertently get broken.

The other aspect of this was that we included the entire FitNesse wiki in Subversion, too. We used lightweight, in-memory containers to simulate the production environment. That meant anyone on the team could check out the project from Subversion and run the entire regression test suite locally. Nobody had to connect to a central server. Nobody needed their own database schema. You could check out from Subversion, double-click a batch/shell script, and off you went. Everything just worked!
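For a sense of what “lightweight, in-memory” can mean in practice, here is a hedged sketch using an embedded HSQLDB database. The post doesn’t name the actual containers or database the team used, so the tool choice, class name and schema here are assumptions, not the project’s setup.

```java
// A sketch of the "no central server, no personal schema" idea: stand up an
// embedded, in-memory database before the suite runs. HSQLDB and the table
// definition are illustrative assumptions.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class InMemoryTestEnvironment {

    public static Connection start() throws Exception {
        // The "mem" protocol keeps the whole database inside the JVM; it
        // disappears when the test run ends, so there is nothing to clean up.
        Connection connection =
                DriverManager.getConnection("jdbc:hsqldb:mem:regression", "sa", "");
        try (Statement statement = connection.createStatement()) {
            statement.execute(
                    "CREATE TABLE inbound_messages (id INTEGER, payload VARCHAR(4000))");
        }
        return connection;
    }
}
```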

Was it easy to make everything just work? Yes. It was easy because we started the project knowing that we wanted to make everything just work. It was the culmination of a couple of years of trying to get better and better at automating the entire process. Now, I would seldom knowingly do it any other way. Automate everything. Check in everything. Make your Stories executable descriptions. And treat those Test Scenarios just like you would your application’s code.

Originally authored by Paul Julius at testearly.com