TDD != QA


A common misconception in the Agile world is that Test-Driven Development covers the QA process. Mature Agile shops understand that this is not the case, but organizations that latch on to TDD simply because it is Agile, and they want to be Agile, don’t always get this.

What is TDD?

From Wikipedia:

Test-driven development requires developers to create automated unit tests that define code requirements (immediately) before writing the code itself. The tests contain assertions that are either true or false. Passing the tests confirms correct behavior as developers evolve and refactor the code.
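To make that cycle concrete, here is a minimal sketch of one red-green-refactor pass in Python’s unittest. The slugify function and its expected behavior are invented purely for illustration, not taken from any particular project:

    import unittest

    # Step 1 (red): write the test first. It fails, because slugify() does not exist yet.
    class TestSlugify(unittest.TestCase):
        def test_replaces_spaces_with_hyphens(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

    # Step 2 (green): write just enough code to make the assertion pass.
    def slugify(title):
        return title.strip().lower().replace(" ", "-")

    # Step 3 (refactor): clean up the implementation while the test guards the behavior.

    if __name__ == "__main__":
        unittest.main()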

Also from Wikipedia:

  • Test-driven development is difficult to use in situations where full functional tests are required to determine success or failure. Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations. TDD encourages developers to put the minimum amount of code into such modules and to maximize the logic that is in testable library code, using fakes and mocks to represent the outside world.
  • Management support is essential. Without the entire organization believing that test-driven development is going to improve the product, management may feel that time spent writing tests is wasted.
  • Unit tests created in a test-driven development environment are typically created by the developer who will also write the code that is being tested. The tests may therefore share the same blind spots with the code: If, for example, a developer does not realize that certain input parameters must be checked, most likely neither the test nor the code will verify these input parameters. If the developer misinterprets the requirements specification for the module being developed, both the tests and the code will be wrong.
  • The high number of passing unit tests may bring a false sense of security, resulting in fewer additional software testing activities, such as integration testing and compliance testing.
  • The tests themselves become part of the maintenance overhead of a project. Badly written tests, for example ones that include hard-coded error strings or which are themselves prone to failure, are expensive to maintain. This is especially the case with Fragile Tests. There is a risk that tests that regularly generate false failures will be ignored, so that when a real failure occurs it may not be detected. It is possible to write tests for low and easy maintenance, for example by the reuse of error strings, and this should be a goal during the code refactoring phase described above.
  • The level of coverage and testing detail achieved during repeated TDD cycles cannot easily be re-created at a later date. Therefore these original tests become increasingly precious as time goes by. If a poor architecture, a poor design or a poor testing strategy leads to a late change that makes dozens of existing tests fail, it is important that they are individually fixed. Merely deleting, disabling or rashly altering them can lead to undetectable holes in the test coverage.

As you can see from this list, all but one of these items has serious QA implications. Let’s walk through them on a case-by-case basis.

Full Coverage

Test-driven development is difficult to use in situations where full functional tests are required to determine success or failure. Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations. TDD encourages developers to put the minimum amount of code into such modules and to maximize the logic that is in testable library code, using fakes and mocks to represent the outside world.

This is probably one of the biggest dangers you face when relying solely on TDD. Because TDD does not take the UI into account, significant testing needs to happen around the UI in order to ensure proper implementation and customer/client acceptance. The same goes for database and network testing. These need to be separate testing cycles, completely independent of the TDD-based tests that the developers write. Think of it this way…TDD tests cover things the user will most likely never see, while QA-based tests cover the things that the user will see. That may be an over-simplification, but you get the point. They are completely different, and should be treated as such.
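Here is a minimal sketch of the fakes-and-mocks approach the quote describes, using a hypothetical OrderService and the standard library’s unittest.mock. The comments note exactly what such a test does not cover, which is the gap QA has to fill:

    import unittest
    from unittest.mock import Mock

    # Hypothetical service: business logic lives in testable code, and the
    # database is reached only through an injected gateway object.
    class OrderService:
        def __init__(self, db):
            self.db = db

        def total_due(self, customer_id):
            orders = self.db.fetch_orders(customer_id)
            return sum(o["amount"] for o in orders if not o["paid"])

    class TestOrderService(unittest.TestCase):
        def test_total_due_sums_unpaid_orders(self):
            fake_db = Mock()
            fake_db.fetch_orders.return_value = [
                {"amount": 10.0, "paid": False},
                {"amount": 5.0, "paid": True},
            ]
            service = OrderService(fake_db)
            self.assertEqual(service.total_due(42), 10.0)
            # Note what this does NOT test: the real schema, the real query,
            # connection handling, network failures, or concurrent access.
            # That is exactly the territory a separate QA cycle must cover.

    if __name__ == "__main__":
        unittest.main()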

Developer Written Tests

Unit tests created in a test-driven development environment are typically created by the developer who will also write the code that is being tested. The tests may therefore share the same blind spots with the code: If, for example, a developer does not realize that certain input parameters must be checked, most likely neither the test nor the code will verify these input parameters. If the developer misinterprets the requirements specification for the module being developed, both the tests and the code will be wrong.

I have personally experienced this scenario, and it is one of my biggest pet peeves. When you are writing something and you know it is broken, it is very easy to fall into the trap of ignoring or overlooking the failure. This is incredibly harmful when it involves the writing of the tests themselves, as a critical piece that should be tested may simply never be.
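A short sketch of the blind spot the quote describes, with an invented apply_discount function. The developer never considered out-of-range input, so neither the code nor the test checks for it, and the suite passes anyway:

    import unittest

    # Hypothetical function with a blind spot: negative or >100 percentages
    # were never considered, so the code does not validate them.
    def apply_discount(price, percent):
        return price - price * percent / 100

    class TestApplyDiscount(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(100.0, 10), 90.0)
        # Missing, because the same developer wrote both code and test:
        # apply_discount(100.0, -50) happily RAISES the price to 150.0, and
        # apply_discount(100.0, 500) returns -400.0. The green suite says
        # nothing about either case.

    if __name__ == "__main__":
        unittest.main()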

False Sense of QA Security

The high number of passing unit tests may bring a false sense of security, resulting in fewer additional software testing activities, such as integration testing and compliance testing.

This is a big gotcha, and really goes to the point of this whole post. TDD isn’t the end-all, be-all for testing purposes. It should be used in conjunction with a fully developed QA plan in order to ensure full test coverage across all of the QA needs. Your app may work flawlessly on your local dev box, with a single user accessing it, but does it work in a fully networked environment? Will it work with multiple users hitting it at the same time? Will the database scale appropriately? These are among the myriad questions that should be asked when defining your QA plan for the project.
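As one illustration of the multi-user question, here is a sketch of the kind of concurrency check a unit suite almost never contains. CounterService is invented to stand in for shared server-side state; the final assertion can fail intermittently depending on thread scheduling, and that flakiness is precisely the lost-update race surfacing:

    import sys
    import threading
    import unittest

    sys.setswitchinterval(1e-6)  # encourage frequent thread switches to widen the race window

    # Hypothetical counter standing in for shared server-side state.
    class CounterService:
        def __init__(self):
            self.count = 0

        def increment(self):
            current = self.count      # read...
            self.count = current + 1  # ...modify-write, with no lock in between

    class TestConcurrentUsers(unittest.TestCase):
        def test_many_simultaneous_users(self):
            service = CounterService()

            def hammer():
                for _ in range(10_000):
                    service.increment()

            threads = [threading.Thread(target=hammer) for _ in range(8)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()

            # With one user this always passes; under contention, lost updates
            # can make it fail -- the bug a single-user unit test never sees.
            self.assertEqual(service.count, 80_000)

    if __name__ == "__main__":
        unittest.main()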

Test Maintenance

The tests themselves become part of the maintenance overhead of a project. Badly written tests, for example ones that include hard-coded error strings or which are themselves prone to failure, are expensive to maintain. This is especially the case with Fragile Tests. There is a risk that tests that regularly generate false failures will be ignored, so that when a real failure occurs it may not be detected.

While the maintenance of TDD-based tests is something that is very important to keep in mind and plan for, I think the key statement here is the last one. When you start ignoring known failures, you run a very real risk of ignoring a legitimate error, not to mention the scenario where a failure you assumed was false turns out to be legitimate. You should never make assumptions when it comes to QA. It makes for a very dangerous situation, and leaves you open to releasing software with very real bugs in it. Coming from me, someone who puts serious value on the end user’s experience, that’s a very big no-no.
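The fuller quote above also suggests a concrete mitigation: reuse error strings instead of hard-coding them in tests. Here is a sketch of that idea with an invented reserve function, showing a fragile assertion next to a robust one:

    import unittest

    # Hypothetical module under test, with its message defined once.
    OUT_OF_STOCK_MSG = "Item {sku} is out of stock"

    def reserve(sku, inventory):
        if inventory.get(sku, 0) <= 0:
            raise ValueError(OUT_OF_STOCK_MSG.format(sku=sku))
        inventory[sku] -= 1

    class TestReserve(unittest.TestCase):
        def test_out_of_stock_fragile(self):
            # Fragile: hard-codes the message, so any copy-edit breaks the test
            # even though the behavior is unchanged.
            with self.assertRaises(ValueError) as ctx:
                reserve("A1", {"A1": 0})
            self.assertEqual(str(ctx.exception), "Item A1 is out of stock")

        def test_out_of_stock_robust(self):
            # Robust: reuses the shared constant, per the refactoring advice above.
            with self.assertRaises(ValueError) as ctx:
                reserve("A1", {"A1": 0})
            self.assertEqual(str(ctx.exception), OUT_OF_STOCK_MSG.format(sku="A1"))

    if __name__ == "__main__":
        unittest.main()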

Lifespan of TDD Tests

The level of coverage and testing detail achieved during repeated TDD cycles cannot easily be re-created at a later date. Therefore these original tests become increasingly precious as time goes by. If a poor architecture, a poor design or a poor testing strategy leads to a late change that makes dozens of existing tests fail, it is important that they are individually fixed. Merely deleting, disabling or rashly altering them can lead to undetectable holes in the test coverage.

Every app goes through an evolutionary process. Apps change significantly over time, and unless the TDD-based tests are constantly maintained, they run the risk of becoming less than useful, and in some cases harmful. A fully developed QA test plan should provide coverage for this scenario, and prevent it from becoming a major problem for you. If you have ignored QA, and have instead relied solely on your dev team to produce TDD-based tests, you could be asking for real trouble down the road. What was an incredibly stable and useful app can quickly turn into a problem child as it is developed and evolves.

Conclusion

So how do you avoid this trap? By taking the time to consider the overall QA process, and putting the necessary team in place to ensure complete coverage. As I mentioned previously, QA is more than just testing…it is the commitment to ensuring the quality of the product. Ideally, someone who is not directly involved with the development of the application should head this up. They need to be intimately involved in all aspects of the development lifecycle, and should be a champion for quality at every step along the way. They need to be willing to raise objections at times, even when deadlines are at stake. The business may still decide the risk is worth it, but at least the issues are known, and can be addressed.
