Test Metrics

This blog post is about how to measure the success of testing.
As a rule of thumb, the more defects (e.g. failed test cases) are found during testing, the fewer defects will make it into production.
Most important, however, is that the requirements for a given application are precise and that the tests derived from them are robust. An application can never be tested to 100%, as otherwise you could never release it. But when is it safe to release? Categorize your tests into critical / medium / nice to have, and define a threshold that must be met before releasing.
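The categorize-and-threshold idea above can be sketched as a small check. The priority names and threshold values here are illustrative assumptions, not prescriptions:

```python
# Sketch of a release-readiness check, assuming each test is tagged with a
# priority (critical / medium / nice-to-have) and we know pass counts per tag.

def release_ready(results, thresholds):
    """results: {priority: (passed, total)}; thresholds: {priority: required pass rate}."""
    for priority, required in thresholds.items():
        passed, total = results.get(priority, (0, 0))
        rate = passed / total if total else 1.0  # no tests in a category counts as met
        if rate < required:
            return False
    return True

# Hypothetical sample data: e.g. critical tests must pass 100%, medium 90%.
results = {"critical": (40, 40), "medium": (18, 20), "nice-to-have": (5, 10)}
thresholds = {"critical": 1.0, "medium": 0.9, "nice-to-have": 0.5}
print(release_ready(results, thresholds))  # True
```

If any category falls below its threshold, the release is blocked until the failing tests are fixed or consciously waived.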

There are several metrics that can be used to measure the testing of an application. These metrics help to evaluate the effectiveness and efficiency of the testing process and can be used to identify areas for improvement. Some of the key metrics that may be relevant include:

  1. Test execution speed: This measures the time it takes to run a test or suite of tests for a given application and release. From this you can estimate how long you will need for the next release and whether you need to run tests in parallel.
  2. Test coverage: This measures the percentage of the application's functionality that has been tested.
  3. Defect detection rate: This measures the number of defects found per application and test run for a release. This rate should decline with every new set of test runs for a release. You can set a threshold for when it is safe to release a new version of the application.
  4. False positive rate: This measures the percentage of test results that are incorrectly identified as defects, and can be used to assess the reliability of the tests. These test cases should be reviewed and possibly reworked to make them more stable.
  5. Maintenance costs: This measures the time and resources required to maintain the tests, infrastructure and scripts, and can be used to compare the ongoing costs of testing.
  6. Ratio of passed to failed tests: This is comparable to the defect detection rate. Over time, the pass rate should increase.
  7. Failure rate of each test case planned for a release: The most frequently failing test cases need to be run more often to ensure the quality of the release. They may also be a good hint as to which parts of the application should be reworked, as they cause tests to fail.
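Several of the metrics above (defect detection rate, pass/fail ratio, per-test failure rate) can be computed from raw run records. The record format below, a list of (test id, outcome) pairs, is an assumption for illustration:

```python
# Minimal sketch: compute a few of the metrics above from raw run records.
from collections import Counter

# Hypothetical run log for one release: (test case id, outcome).
runs = [
    ("login", "passed"), ("login", "failed"), ("login", "failed"),
    ("search", "passed"), ("search", "passed"),
    ("checkout", "failed"),
]

total = len(runs)
failed = sum(1 for _, outcome in runs if outcome == "failed")
pass_rate = (total - failed) / total   # ratio of passed test runs (metric 6)
defect_detection = failed / total      # defects found per test run (metric 3)

# Failure rate per test case (metric 7): candidates for extra runs or rework.
fail_counts = Counter(tid for tid, outcome in runs if outcome == "failed")
run_counts = Counter(tid for tid, _ in runs)
failure_rate = {tid: fail_counts[tid] / run_counts[tid] for tid in run_counts}

print(f"pass rate: {pass_rate:.0%}")   # 50%
print(sorted(failure_rate.items(), key=lambda kv: kv[1], reverse=True))
```

Tracking these numbers per release makes the trends visible: the defect detection rate should fall across successive runs, while the pass rate should rise.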

Derived Metrics

- Test execution speed
- Test coverage
- Defect detection rate
- False positive/negative rate
- Maintenance costs
- Ratio of passed / failed test cases
- Overall time to run all test cases for a release
- Pass / fail rate over time before release
- Which test cases fail most often
- Percentage of failed test cases before release
- Number and run time of test cases overall for a release
- Number of executions per test case
- Number of tests executed vs. planned
- Requirements coverage: which test cases cover which requirements
- Number of bugs found for given requirements
- Number of bugs found by test case / in production
- Bug distribution by cause / feature / severity
- Bug distribution over time
- Bugs reported / bugs fixed
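Requirements coverage, one of the derived metrics above, can be sketched as a mapping from requirements to the test cases that exercise them. The requirement IDs and mapping below are hypothetical sample data:

```python
# Sketch of requirements coverage: the share of requirements exercised by at
# least one test case. Mapping is assumed to come from a test management tool.

req_to_tests = {
    "REQ-1": ["login", "logout"],
    "REQ-2": ["search"],
    "REQ-3": [],          # no test case covers this requirement yet
}

covered = [req for req, tests in req_to_tests.items() if tests]
coverage = len(covered) / len(req_to_tests)
print(f"requirements coverage: {coverage:.0%}")  # 67%
```

Requirements with an empty test list are the gaps to close first, since untested requirements are where defects pass to production unnoticed.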