Ramblings from the Warrior's Den
Monday, April 05, 2004
Is there such a thing as a perfect test case?
OK, I was hoping to make it two months without updating this thing, but it looks like that's not going to happen. I suppose I could post another load of the Lileksian fluff you've come to expect if you read this blog (actually, I'm not sure anyone who reads this expects anything, including any updates ever), but lately I have been reading a lot of the various technical blogs coming out of the MS community, in particular those relevant to the field I'm currently employed in: software testing. Consequently, I will probably move my blog to another location before too long, so stay tuned (or just check back in a couple of months; it probably won't make a whole lot of difference either way). As such, I should probably include the disclaimer that the contents of this article do not necessarily reflect the views of Microsoft and/or my current employer. You may also want to look elsewhere if technobabble gives you headaches.
Administrivia aside, I've been reading the recently started Software Test Engineering @ Microsoft, a blog run by several Microsoft testers. Seeing as how I'm not cool enough to get a blog on MSDN yet, I'll have to stick to posting over here, at least for the time being. In particular, I have a few comments about this posting, and my thoughts on test cases in general. If you aren't all that familiar with software testing as a whole, read through some of the earlier postings on that blog for a decent overview.
As the article states, there really isn't a "perfect" way to write a test case, but that doesn't stop people from trying. Each of the teams I've worked on has had a different test case format, and some of them work better than others. From the information in the article, you can get the basic gist of what a test case is supposed to accomplish: the idea is to create a readily reproducible set of steps to check a behavior in your program. While the semantics vary from team to team, all test cases will generally have at least the following elements:
- Description (Title)
- Expected Result
This format seems straightforward enough, but I have found that test cases are rarely written this way. Sometimes a section of pre-conditions will be added to establish the scenario in which the test case is being written. Other times, teams will decide to write out the entire procedure you would go through, complete with the expected result for each step. To compare these approaches, I will take an example of a case you might come up with if you were testing a toaster. (I have found this to be a very common example used at Microsoft to get an idea of one's testing skills. Other people prefer to use a saltshaker as an example, but oddly enough I've never heard that one :)
Description: Verify that a bagel is toasted appropriately with the dial on the medium-dark setting
Going by the first of those approaches, a case written using the description/pre-conditions/steps/result format would look something like this:
Pre-conditions:
- A sliced bagel is available.
- The toaster is plugged in to a working power outlet.
Steps:
1. Set the darkness setting on the dial for the left side of the toaster to medium-dark.
2. Insert the bagel halves into the two left slots.
3. Push down the lever and wait for the toasting cycle to complete.
Expected Result:
- Verify that the toasted bagel is within the standards for the medium-dark setting.
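To make the structure concrete, here's a minimal sketch of how this format might be represented in code. (Python, which this post doesn't otherwise use; the class and field names here are my own invention, not anything from a real harness.)

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    """One test case in the description/pre-conditions/steps/result format."""
    description: str
    preconditions: list   # scenario setup the tester must establish first
    steps: list           # numbered actions to perform, in order
    expected_result: str  # what to verify once the steps are done

    def render(self) -> str:
        """Format the case as readable text, mirroring the example above."""
        lines = [f"Description: {self.description}", "Pre-conditions:"]
        lines += [f"- {p}" for p in self.preconditions]
        lines += [f"{i}. {s}" for i, s in enumerate(self.steps, 1)]
        lines.append(f"- {self.expected_result}")
        return "\n".join(lines)


bagel_case = TestCase(
    description="Verify that a bagel is toasted appropriately with the dial "
                "on the medium-dark setting",
    preconditions=[
        "A sliced bagel is available.",
        "The toaster is plugged in to a working power outlet.",
    ],
    steps=[
        "Set the darkness setting on the dial for the left side of the "
        "toaster to medium-dark.",
        "Insert the bagel halves into the two left slots.",
        "Push down the lever and wait for the toasting cycle to complete.",
    ],
    expected_result="Verify that the toasted bagel is within the standards "
                    "for the medium-dark setting.",
)
```

Printing `bagel_case.render()` gives back essentially the text above, which is the appeal of this format: one description, a handful of pre-conditions, and a short numbered procedure.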
This particular test case format is one that I like, mainly because it is relatively concise and easy to read. As long as you have a good idea of what you are doing, this format works well. On the other hand, it makes some assumptions about the knowledge of the tester, and although a toaster is about as idiot-proof a device as you can find, a debug build of pre-release software generally isn't. Particularly if you are bringing in a number of extra testers to help out with the final release push of a product (a scenario in which I have worked once), they may not have time to learn all the ins and outs of the product and will need to be able to ramp up quickly. In that situation, a case like that one may end up looking more like this:
Action: Locate a bagel.
Result: A bagel is available.
A: Use a knife to slice the bagel.
R: The bagel is sliced.
A: Locate a working electrical outlet.
R: A working outlet is available.
A: Plug the electrical cord of the toaster into the wall socket.
R: The toaster is plugged in.
A: Set the darkness setting on the dial for the left side of the toaster to medium-dark.
R: The dial is set to medium-dark.
A: Insert the bagel halves into the two left slots.
R: The bagel halves are in the two left slots.
A: Push down the lever.
R: The toasting cycle begins.
A: Wait for the toasting cycle to complete.
R: The bagel will pop up to end the toasting cycle.
A: Remove the bagel, and compare to the known example of medium-dark bagel.
R: Verify that the toasted bagel is within the standards for the medium-dark setting.
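The action/result pairs above boil down to a list of two-part steps plus a loop that walks them in order. Here's a purely illustrative sketch (again in Python; `run_case` and `check` are hypothetical names, not anything a real harness uses), which also shows the one genuine strength of this format: you know exactly which step failed.

```python
# Each step pairs one action with the result that should be observed
# before moving on. A manual tester (or a harness) walks the list in order.
steps = [
    ("Locate a bagel.", "A bagel is available."),
    ("Use a knife to slice the bagel.", "The bagel is sliced."),
    ("Locate a working electrical outlet.", "A working outlet is available."),
    ("Plug the electrical cord of the toaster into the wall socket.",
     "The toaster is plugged in."),
    ("Wait for the toasting cycle to complete.",
     "The bagel will pop up to end the toasting cycle."),
    ("Remove the bagel, and compare to the known example of medium-dark bagel.",
     "The toasted bagel is within the standards for the medium-dark setting."),
]


def run_case(steps, check):
    """Execute each action/result pair; report the first failing step.

    `check` stands in for however a step actually gets verified --
    a tester's judgment for manual cases, an automated probe otherwise.
    """
    for number, (action, expected) in enumerate(steps, 1):
        if not check(action, expected):
            return f"FAIL at step {number}: {action}"
    return "PASS"
```

For example, `run_case(steps, lambda a, r: True)` returns `"PASS"`, while a `check` that rejects everything reports a failure at step 1.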
To be honest, I'm not a big fan of this particular test case format. As you can see, it adds a whole lot of extra steps to the same procedure that is executed by the first test case, yet I see some teams use this format on a regular basis. To exacerbate the problem further, if this test case is part of an area where a number of different settings for the dial are being tested, there is a tendency to put all of the pre-condition steps into each case. If you are working with a complex scenario, this can mean that you are looking at cases with 30 or more steps each. It also means that if a tester sees the same steps show up in fifteen test cases in a row, their brain tends to go into auto-pilot on those steps. They can then easily miss the one step that differs from the others in the area, and may not notice the miss until the case fails and they have to go back and recheck what they were doing.
On the team I currently work on, the format we use is basically the second one, although since the testers on our team are relatively experienced (and because a significant portion of our test cases are automated), we tend to dispense with the pre-conditions for most cases and assume that the testers know how to set the cases up. The primary reason we use this format is that our test harness tool was written that way (both manual and automated cases are handled and logged by the harness). The cases themselves are actually written in an XML format, which the harness parses to get the steps for each case. On one hand, I think that if you can omit a good portion of the pre-conditions from the cases, the action/result format works out reasonably well, as you can more easily determine at which step in the process a failure is occurring. On the other hand, it provides no easy way to put in pre-conditions without adding that extra bunch of steps. Without going into too much detail about how everything works (I tend to prefer avoiding a visit from the legal team if at all possible), this is far from an ideal solution: among other things, it makes the cases difficult to read outside of the test harness. But it works well enough, and from what we have right now, it wouldn't take much more than a moderate amount of work on the user interface to make what I would consider an excellent solution for managing and running test cases.
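As a rough illustration of the general idea only — I'm not going to describe the real schema, so every element and attribute name below is invented — an XML-encoded action/result case, and the few lines needed to pull the steps back out, might look something like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML shape for one action/result test case. The element and
# attribute names here are made up for illustration; they do not reflect
# any actual harness format.
case_xml = """
<testcase id="toaster-042" automated="false">
  <description>Toast a bagel with the dial on medium-dark</description>
  <step>
    <action>Set the darkness dial to medium-dark.</action>
    <result>The dial is set to medium-dark.</result>
  </step>
  <step>
    <action>Toast the sliced bagel in the two left slots.</action>
    <result>The bagel is within the standards for medium-dark.</result>
  </step>
</testcase>
"""

root = ET.fromstring(case_xml)
steps = [(s.findtext("action"), s.findtext("result"))
         for s in root.iter("step")]
# A harness could now walk `steps` in order, present or execute each
# action, and log pass/fail for each expected result.
```

The point is that once the steps live in a structured format like this, the harness gets step-by-step logging for free — and the readability problem comes from the same place, since nobody wants to read raw XML.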
I'll probably write more on this subject later on. Check back in a month or two...