One of the most-touted advantages of automation is the ability to perform regression testing for your releases. After implementing (or attempting to implement) an automation strategy for multiple employers, I have picked up a few guidelines along the way that help in determining what you can use for regression purposes, and what you should pass on entirely. Remember, regression testing shouldn’t be used to determine code quality, or to take the place of unit or smoke tests…it is a different animal altogether.
Trying to determine the scope of regression testing is tough, primarily because it differs for every environment. There are, however, a few guidelines that I like to use:
In Scope
- Tests that validate functionality
- Tests that need to be executed often (the definition of “often” differs for each environment, as we’ll cover soon)
- Tests that have a single expected outcome
- Items that take a long time to execute manually
Out of Scope
- Tests that check UI elements
- Tests that only take a few minutes to execute manually
- Tests that only need to be run once or a few times
- Tests that have multiple results (this is a whole different issue, that I will cover in more detail soon)
Using these guidelines will give you a pretty good bucket of tests to use for regression purposes, all of which should be automatable.
Now that you have a bucket of tests, you can begin scripting them. In general, you should wait to start scripting a given test until all functionality covered by that test case has been completed and marked ready for production. This does not mean that it has gone to production, but rather that everything necessary to do so has occurred, and the feature will not be changing any more. Working only on items that are not changing reduces the amount of churn (maintenance work/unused scripting) you have. There are certainly exceptions to this rule, and it will be slightly different in each unique environment, so be sure to consider all aspects of your situation in making this determination.
One important thing to keep in mind is that the scripts you are working on will probably not be very useful for your current release, but rather for every release following. This is due to the timing of when things are scripted and can be run. Ideally, you will be able to execute your regression suite on a weekly or daily basis (depending on its size), but the number of times you execute should absolutely be taken into account when determining what makes it into your scope. Once you have a regression suite ready to go, make sure you get it on a consistent execution schedule, whether that is nightly, weekly, bi-weekly, etc. In addition to the actual execution, make sure you have a place to record results, whether it is a wiki, a spreadsheet, or your test case management suite. Where isn’t as important as making sure that information is captured and available to everyone.
These guidelines have served me well to this point, and I think you can get some use out of them as well. One of the best things you can do for your company is to make sure automation is actually adding value, and not just becoming a resource sink. You can show what appears to be a tremendous amount of output, but it might have cost far more in the long run than just executing manually. You have to make sure that you are saving the company time and/or money at the end of the day…if you are not doing that, then automation isn’t really going to benefit you.
As I mentioned earlier, each environment is different as to what frequent execution means. Some places may think that once a week is pretty frequent, while others expect daily runs, and still others monthly. It all depends on what you are testing, and how long it takes to execute manually. Take the below scenario…
If a given script takes 15 minutes to execute manually, and takes 8 hours to script out, you are looking at a minimum of 32 runs to justify scripting. As I said, however, this is a minimum, and a few other factors should be considered as well. If your automation team makes more than the manual testers doing the work, that should be factored in, as should expected maintenance for a given test case (maintenance could come at any point in the lifetime of the script). Take the above example with these new factors included. If your automation team makes on average 1.5 times what the manual testers make, then the 8 hours of scripting needs to be adjusted accordingly, and becomes 12 hours. Using this as the input above, we get 48 runs. Now if we factor in some maintenance work, say 50% over the life of the script, then your minimum number of runs jumps up to 72. If you are executing your regression suite weekly, this means it will take a year and a half before you get any payback on that particular script. Granted, these are rough metrics, but the formula is pretty simple, and can be used to give you a pretty good idea of what to expect:
((scripting time * salary difference) / manual execution time) * (1 + expected maintenance %) = # of executions before payoff
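The formula is simple enough to sketch in a few lines of Python. This is just my own translation of it (the function name and default values are mine, not anything standard), using the worked example from above:

```python
def executions_before_payoff(scripting_hours, manual_minutes,
                             salary_ratio=1.0, maintenance_pct=0.0):
    """Estimate how many automated runs are needed before the
    scripting effort pays for itself.

    scripting_hours  -- hours spent writing the script
    manual_minutes   -- minutes to execute the test manually
    salary_ratio     -- automation salary / manual-tester salary
    maintenance_pct  -- expected maintenance as a fraction (0.5 = 50%)
    """
    # Convert scripting time to minutes and adjust for the salary gap.
    adjusted_scripting = scripting_hours * 60 * salary_ratio
    # Divide by the manual run time, then add the maintenance overhead.
    return (adjusted_scripting / manual_minutes) * (1 + maintenance_pct)

# The 15-minute manual test that takes 8 hours to script:
print(executions_before_payoff(8, 15))            # base case -> 32.0
print(executions_before_payoff(8, 15, 1.5))       # salary-adjusted -> 48.0
print(executions_before_payoff(8, 15, 1.5, 0.5))  # plus maintenance -> 72.0
```

The three calls reproduce the 32, 48, and 72 run counts from the walkthrough above.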
Another metric that is useful to know is how much each hour of scripting adds to the number of runs:
1 hour / manual execution time = amount of runs to add per hour of additional scripting
Using this metric on the above scenario, each additional hour of scripting means we need to add 4 runs to the number of executions needed.
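The same metric in Python form (again, the function name is my own shorthand):

```python
def runs_added_per_scripting_hour(manual_minutes):
    """Each extra hour of scripting adds this many runs to the
    break-even count: one hour divided by the manual run time."""
    return 60 / manual_minutes

# A 15-minute manual test: every extra scripting hour costs 4 more runs.
print(runs_added_per_scripting_hour(15))  # -> 4.0
```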
This example actually uses a fairly long manual execution cycle…if you drop the manual execution time down, the numbers go up pretty quickly. If we had a script that takes only 5 minutes to execute manually, and 4 hours of scripting, then applying the same factors as above results in a whopping 108 executions before payoff!
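Plugging that shorter test into the same arithmetic, step by step (using the same assumed 1.5x salary ratio and 50% maintenance as before):

```python
# 5-minute manual run, 4 hours of scripting, 1.5x salary, 50% maintenance.
scripting_min = 4 * 60 * 1.5      # salary-adjusted scripting time: 360 min
runs = (scripting_min / 5) * 1.5  # divide by manual time, add maintenance
print(runs)                       # -> 108.0 executions before payoff
```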