Choosing which test cases to automate

Why to automate

Before a new feature is developed, the testing team is mainly focused on creating test cases for it. These test cases cover the acceptance criteria, so the feature can be confirmed as done. Depending on the tester, different test cases may be required to verify the feature.

Going through the test cases manually to verify the feature is fine, because that is when most of the issues are found. The annoying part is that once the feature is done, we have to run the same test cases again every time a regression cycle arrives.

This is where automation comes in. We automate test cases not because they have become annoying, but because automation is going to save us a lot of time.

What to automate

When choosing which test cases to automate, we need to think in a smart way. Automating test cases takes a lot of time, so that time has to pay off in the future.

Once we have performed the manual tests, we have an initial picture of the difficulty of each test case, so we are going to separate them into three categories:

  • Easy (tests that verify the initial state of the feature, i.e. test cases that do not have many steps)
  • Medium (tests that verify functional and non-functional cases)
  • Hard (tests that require specific configuration)

The perfect candidates are the tests in the easy and medium categories, because we would not have to spend too much time developing them. This does not mean automating 100% of the test cases, because 100% test case coverage does not mean 100% bug-free software.
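The selection above can be sketched in a few lines. This is a minimal illustration, not part of any real framework: the `TestCase` structure, the category names, and the sample titles are all assumptions made up for the example.

```python
# Sketch: filter categorized test cases down to automation candidates.
# TestCase, category names, and sample data are hypothetical.
from dataclasses import dataclass


@dataclass
class TestCase:
    title: str
    category: str  # "easy", "medium", or "hard"


def automation_candidates(cases):
    """Keep easy and medium cases; hard ones stay manual."""
    return [c for c in cases if c.category in ("easy", "medium")]


cases = [
    TestCase("login page loads", "easy"),
    TestCase("search returns results", "medium"),
    TestCase("failover with custom cluster config", "hard"),
]

for case in automation_candidates(cases):
    print(case.title)
```

The hard case is simply left out of the returned list, which mirrors the advice above: it stays in the manual suite.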

The target here is to automate the test cases that have to be verified repeatedly over time, i.e. smoke tests, sanity tests, regression tests, etc. This allows us to verify the old features faster whenever a new feature is developed, validating that the old features are not broken.
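A repeatedly-run suite of this kind might look like the sketch below. The feature functions are stand-ins for real application calls, and all names are hypothetical; the point is only that the same checks can be re-run cheaply on every regression cycle.

```python
# Sketch of a smoke suite that re-verifies old features on every run.
# login() and search() are stand-ins for real application calls.

def login(user, password):
    # Stand-in for the real login call.
    return user == "admin" and password == "secret"


def search(query):
    # Stand-in for the real search call.
    return [item for item in ["apple", "apricot", "banana"] if query in item]


def test_smoke_login():
    assert login("admin", "secret")


def test_smoke_search():
    assert "apple" in search("ap")


# A test runner such as pytest would collect these automatically;
# here we call them directly to keep the sketch self-contained.
if __name__ == "__main__":
    test_smoke_login()
    test_smoke_search()
    print("smoke suite passed")
```

Written once, this suite costs seconds per regression cycle instead of a manual walk-through every time.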

What not to automate

As I mentioned, since 100% test case coverage does not mean 100% bug-free software, we have to avoid automating certain test cases.

Firstly, we are going to avoid test cases that can easily be verified with manual testing, such as tests that verify the UI of the feature. Test cases that only check that a UI element is present / displayed should be avoided, because you may end up with a broken UI design and passing tests: the on-screen location of the element is not considered. This can be achieved with other tools, but that is going to be covered in another topic.
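The pitfall can be shown concretely. In this sketch the element dictionary mimics what a UI driver might report; the field names and values are assumptions for illustration only.

```python
# Sketch: why a presence-only UI check passes on a broken layout.
# The element dict mimics what a UI driver might report (hypothetical).

element = {
    "id": "submit-button",
    "displayed": True,   # the driver still reports it as displayed...
    "x": -500,           # ...even though it is rendered off-screen
    "y": 40,
}


def presence_check(el):
    # The kind of automated check to avoid: it ignores layout entirely.
    return el["displayed"]


def within_viewport(el, width=1280, height=720):
    # The layout question the presence check never asks.
    return 0 <= el["x"] < width and 0 <= el["y"] < height


print(presence_check(element))   # True: the automated test passes...
print(within_viewport(element))  # False: ...but the UI is broken
```

The presence check passes while the element sits 500 pixels off-screen, which is exactly the "broken UI design and passing tests" situation described above.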

You should also avoid test cases that require specific configuration, or that take too many steps to execute. These kinds of tests usually do not need to be executed frequently, so going through them manually is totally fine.

Duplicated test cases should be avoided as well. This is similar to having different implementations of a function that do the same job in programming. In the testing world, it means having two test cases with different titles that verify the same final output.
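One simple way to flag such duplicates is to group cases by their steps: two titles sharing an identical step sequence are performing the same check. The titles and steps below are made up for illustration.

```python
# Sketch: flag test cases whose steps are identical even though
# their titles differ. All titles and steps are hypothetical.
from collections import defaultdict

test_cases = {
    "User can sign in": ("open login page", "enter credentials", "press submit"),
    "Login works":      ("open login page", "enter credentials", "press submit"),
    "User can sign out": ("sign in", "press logout"),
}


def find_duplicates(cases):
    """Return groups of titles that share the exact same step sequence."""
    by_steps = defaultdict(list)
    for title, steps in cases.items():
        by_steps[steps].append(title)
    return [titles for titles in by_steps.values() if len(titles) > 1]


print(find_duplicates(test_cases))
```

In practice steps rarely match character for character, so some normalization would be needed before grouping, but the idea stays the same.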


Before starting with any automation, you should have a plan prepared for covering the cases manually. Once you have covered the cases manually, you can easily determine which test cases are good candidates for automation, and how much time the automation would require. If automating the test cases costs more time than it saves, the investor will avoid the automation and invest in manual testing instead. Think smart 🙂
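That trade-off is simple arithmetic: automation pays off once the manual time it saves exceeds its up-front cost. The sketch below makes the calculation explicit; all numbers are hypothetical.

```python
# Sketch: after how many runs does automation become cheaper than
# repeated manual execution? All figures are hypothetical.
import math


def break_even_runs(automation_hours, manual_hours_per_run,
                    automated_hours_per_run=0.0):
    """Runs needed before automation becomes cheaper than manual testing."""
    saved_per_run = manual_hours_per_run - automated_hours_per_run
    if saved_per_run <= 0:
        return None  # automation never pays off
    return math.ceil(automation_hours / saved_per_run)


# Automating costs 40 hours up front; each manual run takes 2 hours.
print(break_even_runs(40, 2))  # pays off from the 20th run onward
```

If a test will realistically run fewer times than the break-even number before the feature changes, it is a poor automation candidate, which is the investor's argument in plain numbers.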