One of the initiatives we currently have underway is implementing a new test case management tool. While doing the research for it, I’ve started thinking more about what truly needs to go into delivering a quality product. It’s very easy to fall into the habit of building an app, writing test cases, passing said cases, shipping the product, and then feeling like you just won the lottery because you got a release out the door. If only it worked that way, we would all be in a much better place in terms of the products we are responsible for.
One of the most critical things I have found is to make sure everyone involved in a release has a very clear understanding of what the success criteria are, not only for the release as a whole, but also for the individual features. If we take the time up front to make sure we understand what is necessary, it becomes much easier later on to tell whether what we are working on is good to go or not.
So what do we need to know up front?
One of the toughest things for multiple teams to agree upon is the acceptance criteria, and the test cases that go along with them. Because everyone has their own ideas of what constitutes acceptance criteria, test cases are usually disjointed and inconsistent. I have a single rule of thumb: there should only be a single expected result per acceptance criterion. This has a very nice side benefit of making the test cases extremely easy to write, as they are effectively a one-to-one relationship with the acceptance criteria (and I have even used the AC as the test cases). I don’t go crazy making sure every visual element is in the absolute correct place, and I certainly don’t write test cases specifically around the various UI elements. The only exception to this is when a UI element is part of the functionality being tested. This is not to say that you shouldn’t be checking the UI at all, but rather that it should not be the thing you focus on to determine whether functionality works. A lot of people get so sucked into the habit of worrying about whether everything looks right that they forget to make sure it works right. If you can get on the same page on acceptance criteria, then 90% of the upfront work has been taken care of, since this would include information from a number of areas.
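To make the one-to-one idea concrete, here is a minimal sketch in Python. The `ShoppingCart` feature and its two acceptance criteria are hypothetical examples of mine, not anything from a real project; the point is that each criterion becomes exactly one test with exactly one expected result.

```python
# Hypothetical feature: a shopping cart. Toy implementation so the
# tests below can actually run.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


# AC1: "Adding an item increases the item count by one."
def test_adding_item_increases_count():
    cart = ShoppingCart()
    cart.add("widget", 9.99)
    assert len(cart.items) == 1  # one criterion, one expected result


# AC2: "The cart total equals the sum of the item prices."
def test_total_is_sum_of_prices():
    cart = ShoppingCart()
    cart.add("widget", 9.99)
    cart.add("gadget", 5.01)
    assert cart.total() == 15.00  # again, a single expected result


test_adding_item_increases_count()
test_total_is_sum_of_prices()
```

Notice that the test names read almost verbatim like the acceptance criteria, which is exactly why the AC can double as the test cases.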
What about during the construction phase?
Once you get the acceptance criteria defined and test cases created (even if only roughly), development will start. Early on during this phase I have found it best to focus more on trying to break the application than on trying to satisfy test cases. The reason for this is that it will be rare for a feature to be complete enough that it can be fully signed off on. As you move through the construction cycle, you will be able to sign off on more and more areas, but for those times when you are “just waiting”, wait constructively, and try to break things. A lot of people like to focus on negative testing, and I think this is great and all, and you can call it that if you like, but I prefer to refer to it as trying to break the app. The reason is that mentally you will be a lot more creative in trying to break the app than you would if you just tried to enumerate all of the negative scenarios. Will you come up with workflows that are not realistic? Absolutely, but you will also stress the app in ways you can’t predict ahead of time. Trust me, whether you think it is realistic or not, one of your users is more than likely going to try it, so it is better to know ahead of time what happens, and then either fix the issue, or better still, prevent it from happening to begin with. This will result in a much tighter application that does its defined function correctly each and every time.
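One cheap way to “try to break things” while you’re waiting is to hammer a piece of functionality with randomized junk input and check only that an invariant holds. This is a rough sketch of that mindset; `parse_quantity` is a hypothetical function standing in for whatever input-handling code your app has.

```python
import random
import string


def parse_quantity(text):
    """Parse a user-entered quantity; anything invalid becomes 0."""
    try:
        value = int(text.strip())
    except (ValueError, AttributeError):
        return 0
    return value if value >= 0 else 0


random.seed(42)  # reproducible so a break can be investigated
alphabet = string.printable

for _ in range(10_000):
    # Generate unrealistic input on purpose; a real user will find it anyway.
    junk = "".join(random.choice(alphabet) for _ in range(random.randint(0, 20)))
    result = parse_quantity(junk)
    # Invariant: never crash, never hand back a negative quantity.
    assert isinstance(result, int) and result >= 0
```

Most of those 10,000 strings will be nonsense no user would type, but that is the point: you are looking for the crash you couldn’t predict, not confirming the scenarios you already wrote down.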
Ok, construction is over, now what?
So you have made it to the end of the construction phase, all of your acceptance criteria have been met, and you feel like the app is ready to roll to production…what’s next? I like to stand up the app in a production-like environment, just as you would for your end users. Whether this is a beta test or something similar, the end result should be the same…does the app deploy appropriately? By appropriately, I mean that you should make sure it comes up just as you expect it to for your end users. Sometimes this is pretty straightforward, but other times there will be integration points you need to worry about. Making sure all of this is operational is key, and you should treat it the same way you would as an end user. This means no fixes directly in that environment (should something be found), and running the app exactly how it would be run “in the wild”. Your testing should shift from making sure the functionality works as intended to using the app the same way your end users would. You have to look at the app through a slightly different lens, more concerned with workflows and usage patterns than with whether the app’s individual functions work.
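The “does it come up appropriately?” question lends itself to a simple smoke check run against the production-like environment. The sketch below is illustrative only: the endpoints are made up, and `fake_fetch` is a stub standing in for a real HTTP call, so the shape of the check is visible without depending on a live deployment.

```python
def smoke_check(fetch, endpoints):
    """Hit each endpoint and report (path, ok); ok means HTTP 200."""
    results = []
    for path in endpoints:
        status = fetch(path)
        results.append((path, status == 200))
    return results


# Stub standing in for real HTTP requests to the deployed environment.
# In practice this would be an actual GET against the app's base URL.
def fake_fetch(path):
    live = {"/health": 200, "/login": 200, "/reports": 500}
    return live.get(path, 404)


results = smoke_check(fake_fetch, ["/health", "/login", "/reports"])
failures = [path for path, ok in results if not ok]
print(failures)  # the integration points that did not come up cleanly
```

If anything shows up in `failures`, the fix happens back in the pipeline, not in that environment, which keeps the deployment honest.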
These are not hard and fast rules, and should certainly be tailored to your specific environment, but I have found that by following these basic guidelines, I have been able to consistently improve the overall quality of the product I am responsible for. I know that a lot of the things I describe sound like they are specific to a Waterfall environment, but really, if you shrink this down to the scrum level, it works just as well (if not better in some cases) for Agile. The key thing is the overall principles, and if you are following those, chances are you are going to end up in a pretty good place.