So you have a bunch of test plans and test cases, and you are clicking along testing things out, when someone pops their head in and says that they found an issue with something you just said was good to go. Hmmm, all of your test cases passed, so what happened, and more importantly, how do you avoid it going forward?
One of my favorite things about QA is the chance to break software. This is typically referred to as ad hoc or exploratory testing, but in the end, that is what you should be doing: breaking things. No matter how good your test cases are, you will always have issues that are not covered by a test case, either because something was missed, because users did something in a way that was never intended, or because some other crazy, unexplained phenomenon occurred. Regardless of why this happened, you need to figure out how to identify these things before they happen.
So how do we predict the future? There are several ways you can approach this, but two of them really seem to work best (at least for me).
- Dedicated resource: This is probably one of the most common approaches I have seen, and while having someone spend 100% of their time trying to break your system is good, there is a potential drawback: over time that person starts seeing the same things, and you only have one set of eyeballs looking at the system.
- 20% Rule: This is my preferred method, and it is exactly what it sounds like…each tester dedicates 20% of their time to trying to break the system. I try to dedicate Fridays to this type of testing, with all of the testers joining in on the fun. The advantage here is that you have multiple people trying things, and everyone is trying something different, so you get a much greater level of overall coverage.
In the end, what is important is that you take the time to do this kind of testing on a regular basis, rather than waiting for that dreaded “I found a problem” email.