I’ve always found metrics in QA to be a tricky subject: it’s difficult to identify and attach meaningful numbers to performance in a role built around providing information.

I’m dead against keeping track of statistics like personal bug counts, numbers of tests executed and so on. They encourage pointless bug reports, endless raising of non-issues, and underhanded tactics all over the place, so they don’t give a true measure of an individual’s performance in the QA field. As I alluded to before, the real measure of QA’s effectiveness is the information they provide to their customers, whether those customers are the rest of their development team, the product and business teams they work with, or anyone else with a stake in the work the team carries out.

Even when trying to compare the performance of one person against another, the nature of our field means that, due to pressures such as time constraints and the relative state of the system under test, you will never see different folks running the same test in exactly the same circumstances. So it’s unfair to use that sort of comparison as a measure of performance too.

But I do understand the need to monitor performance, particularly for new hires, and there are a few things I use to measure the performance, throughput and relative value of folks in QA. While this particular set of metrics is geared more towards the performance of new members of a team, they could easily be adapted to track the progress and performance of established team members too.

Bug Quality

The general quality of bugs raised should be spot checked, with closer attention paid to bugs raised for issues missed in testing (indicative of areas where testing and detection methods should be improved) and bugs returned as ‘Will Not Fix’ (indicative of areas where understanding of priorities, requirements, product needs and customer needs should be improved). For new hires, I’d expect the numbers of such issues to decrease over time as the QA ramps up in their new domain. Also, keep an eye open for any bug reports that fail to capture incorrect system behaviour, have been assigned an inappropriately low priority, or otherwise understate the significance of a problem. These will highlight areas where coaching is required to improve understanding of the system under test.
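As a rough illustration, the spot-check categories above can be tallied as simple rates. This is a minimal sketch in Python; the bug record fields (`resolution`, `found_in`) are hypothetical, not taken from any particular tracker:

```python
from collections import Counter

def bug_quality_rates(bugs):
    """Summarise bug spot-check findings as simple rates.

    `bugs` is a list of dicts with two hypothetical fields:
    - 'resolution': e.g. 'Fixed' or 'Will Not Fix'
    - 'found_in': 'testing', or 'missed' for issues missed in testing
    """
    total = len(bugs)
    if total == 0:
        return {"will_not_fix_rate": 0.0, "missed_rate": 0.0}
    resolutions = Counter(b["resolution"] for b in bugs)
    missed = sum(1 for b in bugs if b.get("found_in") == "missed")
    return {
        "will_not_fix_rate": resolutions["Will Not Fix"] / total,
        "missed_rate": missed / total,
    }
```

For a new hire, you’d expect both rates to trend downwards over their first few months in the role.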

Critical Bugs in Test vs Production

Keep an eye on the ratio of critical bugs (P2 and above) raised in Test vs Production. Customer satisfaction is the True North of Quality, and if more than a handful of critical bugs are being identified after the sprint has closed, this could be indicative of a coaching need.
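One way to track that ratio, as a minimal sketch assuming bug records with hypothetical `priority` and `environment` fields:

```python
def critical_escape_rate(bugs, critical=("P1", "P2")):
    """Proportion of critical bugs that escaped to Production.

    `bugs`: list of dicts with hypothetical fields:
    - 'priority': e.g. 'P1', 'P2', 'P3'
    - 'environment': 'Test' or 'Production' (where the bug was found)
    Returns (found_in_test, found_in_production, escape_rate).
    """
    crit = [b for b in bugs if b["priority"] in critical]
    escaped = sum(1 for b in crit if b["environment"] == "Production")
    rate = escaped / len(crit) if crit else 0.0
    return len(crit) - escaped, escaped, rate
```

A rising escape rate, rather than any absolute number, is the signal worth investigating.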

Test Coverage for Applications

Whenever a new hire fills a vacancy, I’d expect to see an increase in test coverage. Establish the current baseline for the areas the team currently covers, and track for increases — but, importantly, in areas where increases are expected. Don’t forget that, particularly with automation, there are upper limits for test coverage, so don’t make the mistake of setting a coverage target without first discussing and identifying the areas it is actually possible to cover, or you risk setting an unachievable target.
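Tracking coverage against a baseline, restricted to the areas where an increase was agreed to be achievable, might look like this (a sketch only; area names and percentages are invented):

```python
def coverage_deltas(baseline, current, achievable_areas):
    """Change in coverage per area, relative to the agreed baseline.

    `baseline` and `current` map area name -> coverage percentage.
    Only areas listed in `achievable_areas` are compared, so areas
    already at their practical upper limit aren't counted against anyone.
    """
    return {
        area: current.get(area, 0.0) - baseline.get(area, 0.0)
        for area in achievable_areas
    }
```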

Load Shift

When a new QA hire comes on board, overall team output should increase as the new member of the team takes on more of the testing load. This one is a bit arbitrary, and not entirely dependent on the new QA hire, but it’s still worth monitoring as an identifier for potential issues and bottlenecks in your workflow.

Overall Increase in Story Turnaround & Completion

Keep track of the team’s commitments for each sprint, and of how many of those commitments were delivered with a high standard of quality. Again, this isn’t always going to be directly in the hands of QA, but where a team has recently filled a vacancy, I’d expect month-on-month increases in the number of stories committed to, percentage of commitments met, and an increase in the speed with which stories are completed. Take the current averages as a baseline, and monitor for the expected increases.
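The baseline-and-trend idea can be sketched as follows (illustrative only; the sprint figures are invented):

```python
def commitments_met(sprints):
    """Percentage of committed stories delivered per sprint.

    `sprints` is a list of (committed, delivered) tuples, oldest first.
    The earliest sprints form the baseline; later sprints should
    trend above it once a new hire has settled in.
    """
    return [
        100.0 * delivered / committed if committed else 0.0
        for committed, delivered in sprints
    ]
```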


Engagement

Not a ‘numbers’ metric, but arguably the most important one. Are the QA team contributing to Retrospectives? To Planning & Estimation? How are they communicating the information they find during the course of their work? For new hires, I’d look for engagement to increase as they ramp up in their new domain and adapt to the team and company culture, but as QA professionals bringing a fresh pair of eyes to the team, I’d expect some level of insight and engagement from the very beginning. I’d also expect the QAs to be actively involved in the solutions to any bugs or issues they raise — conferring with the developers working on fixes, discussing how each fix should be retested, and so on.

Ultimately, regardless of how you decide to measure performance in QA, it is worth considering that any metric should be used as an informational tool rather than any kind of absolute measure. There is no real substitute for getting to know what your QA folks are doing, how they’re communicating with the people around them, the problems they encounter, and how they handle those problems.