In my last post, I referred to QA professionals getting caught in the trap of just following test cases and steps, and I thought I should elaborate on that a little bit more. This is a giant pet peeve of mine, and it drives me nuts when it either goes unnoticed or is willingly encouraged.
The primary reason I have an issue with this approach is that it discourages people from truly learning the application, and instead puts the onus on them (or someone else entirely) to come up with test cases (and steps) that accurately reflect the requested deliverable. Not only that, but it also prevents any discussion around usability during the course of testing. If you have a list of steps to perform, then as long as it was written that way, that is how it will be tested, and rarely is that questioned. What we end up doing is building robots that are quite good at following the directions they are programmed with, but are incapable of actually using the product. I have actually seen QA testers who had no idea how to use the very system they were testing…they could only operate it while following their test cases! I’m sorry, but if you HAVE to have those steps clearly defined for you, whether to know how to use the system or even to ensure that you perform the test at all, then you are probably not the right person for the job.
I understand that QA professionals come in all shapes and sizes, but at some point we need to take a step back and revisit what it is exactly that we are trying to accomplish. When we start building robots, we are most likely headed for a world of hurt once our product gets in front of the end users.
So how do we combat this? One of the things that I am a big advocate of is simplifying the test case process, making it more focused on satisfying the actual acceptance criteria instead of checking that each pixel is in the exact perfect spot. Development should be building the product that the business asked for, and QA’s job is to make sure that the right product got built. Sometimes that is exactly what was designed and spec’d out up front, and sometimes it isn’t…being able to differentiate those two scenarios is the key to great QA. You should have a test case that is based around the acceptance criteria the development team is using to build the product, and it does not have to be very detailed to get the job done. Even industries with a federal guideline or requirement to follow can use this method…if you have a requirement, it should be clearly spelled out up front for the dev team, because if they don’t know about it, how do you expect the QA team to test for it?
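To make the idea concrete, here is a minimal sketch of what an acceptance-criteria-style test might look like, using Python with pytest-style assertions. The feature (`submit_order`) and its behavior are invented purely for illustration; the point is that the test asserts the outcome the business asked for, not a scripted sequence of clicks or pixel positions.

```python
def submit_order(items):
    """Toy stand-in for the feature under test (hypothetical)."""
    if not items:
        raise ValueError("an order needs at least one item")
    return {"status": "confirmed", "item_count": len(items)}


def test_valid_order_is_confirmed():
    # Acceptance criterion: "a valid order is confirmed."
    # This survives UI and layout changes because it checks the
    # outcome, not each individual step along the way.
    result = submit_order(["widget", "gadget"])
    assert result["status"] == "confirmed"
    assert result["item_count"] == 2


def test_empty_order_is_rejected():
    # Acceptance criterion: "an empty order is rejected."
    try:
        submit_order([])
        assert False, "empty order should have been rejected"
    except ValueError:
        pass
```

A step-by-step script covering the same feature might enumerate every field to fill in and every button to press, yet still miss whether the order actually ends up confirmed. A tester working from the criteria above has to understand what the product is supposed to do, which is exactly the point.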