The Metrics: Number of Ignored Tests
We're all guilty of having some skipped tests in our test suites. In this concise statement, Gil Zilberfeld clearly says what you should do with such tests!
Background Coding Agents: Predictable Results Through Strong Feedback Loops (Part 3)
Two weeks ago, I highlighted Spotify's story of creating AI coding agents that autonomously support their development. This is the third (and the most interesting to me) part outlining the challenges in testing that solution by Max Charas and Marc Bruggmann.
Furthermore, Irfan Mujagić shares The Complete Guide to RAG Quality Assurance: Metrics, Testing, and Automation.
Feel like your executives don't get what testing actually takes?
You're not alone. New research from Sauce Labs reveals that 61% of testers agree that leadership doesn't understand what it takes to test software successfully. Even worse? When AI makes mistakes (and it does—a lot), 60% of leaders say employees bear the blame, not the AI itself.
Sauce Labs' latest blog breaks down concrete strategies you can use to help leadership actually get it — from reframing metrics in business language they care about to getting a seat at the table earlier in the process. Whether you're trying to justify automation investments or just want your executives to stop asking "why testing takes so long," this one's worth your time. Get the tips now.
Our Bug Reports Are Ignored… Until a Customer Says the Same Thing
This post drew some attention from software testers on Reddit, and rightly so. I mean, who hasn't faced such a problem at least once?
In a related thread, someone asked: How do you handle "won't fix" / known issues in your team?, while others are discussing Building a QA Dashboard in Jira to show the metrics.
Why Your 97% Test Coverage Is a Lie
Ran Algawi shares a good reminder that test coverage metrics can be misleading, and explains why a culture of thinking about the system and questioning it may be more effective at mitigating risk.
Moreover, Lakindu De Silva gives a few tips for tackling flaky tests — When "Failed" Doesn't Mean Broken.
If It Cannot Be Measured, It Cannot Be Improved
Alex Shurov shares a few stories and examples of how metrics can help improve the quality of software products.
Additionally, Neil Matillano explains The Power of Metrics: How Data-driven Insights Can Supercharge Your Testing.
How to Determine OKRs for Software Quality Assurance
If you want to set objectives for your QA team, Putra Agung Pratama advises how to do that, using metrics and examples.
Speaking of metrics, Melissa Fisher came up with an interesting one: Test coverage — how about quality coverage?
Test coverage is one of the most popular metrics. But it's not the only one, and Thomas Shipley gives five great examples of other metrics that can help you assess how valuable your tests are.
Quantitative Software Quality Management
In this interesting read, Peter Lupo explains how to use statistics for quality metrics. There's also the second part about metrics of features and releases.
What are the good KPI for QA Engineers in Agile Software Development?
Looking for quality metrics and KPIs? Someone asked about it on Reddit, and people responded with plenty of examples. Interestingly, most of them are about what not to measure.
Additionally, Robert Cui wrote about Value-Driven Software Performance Testing and suggests some metrics, too.