Test coverage is one of the most popular metrics. But it's not the only one, and Thomas Shipley gives five great examples of other metrics that can help you assess how valuable your tests are.
Welcome to the 68th issue!
Last week, someone reached out to me on LinkedIn asking if I could share some insights about quality metrics.
I remembered there were a few great articles included in previous issues of Software Testing Weekly. I typed "metrics" into the search bar, quickly found them, and sent them over.
So here's a tip: if you want to learn more about a certain technology, tool, or approach, try the search bar. With nearly 1,500 links included since the beginning of this project, you'll likely find something useful.
Hope that helps. 😊
Happy testing everyone!
How to Determine OKRs for Software Quality Assurance
If you want to set objectives for your QA team, Putra Agung Pratama shows how, using metrics and examples.
Speaking of metrics, Melissa Fisher came up with an interesting one: Test coverage — how about quality coverage?
How to write good Test Cases with examples
If you write test cases, why not do it right? Roselyne Makena shares a few solid tips and shows examples of well-written and badly-written test cases.
And once you have them, do you think it's a good idea to use the number of test cases created and executed as the main metric for measuring QA work?
Breaking the Test Case Addiction (Part 11)
Consider thinking in terms of testing, rather than test cases. And if you are applying test cases, please don’t count them. And if you count them, please don’t believe that the count means anything.
In another brilliant post of the series, Michael Bolton continues to explain why the number of test cases is not a good metric for measuring quality.
Hungry for more? There's already the next part on why it's so hard to tell when testing will be done.
How Do Employers Measure a QA Tester/Engineer's Proficiency?
Both managers and testers may ask themselves this question. It's hard, and there's probably no one straightforward answer, but it's interesting to see what people suggest.
This discussion continues in another thread about Metrics to assess testers.
How to get started with Performance Testing
This is a great guide by Johanna South that can help you get more familiar with performance testing. It covers definitions, tools, questions, metrics, and best practices to learn from.
Easy measures to observe our software quality efficiency
Julien Barlet describes the famous DORA metrics and suggests two more, focused on defects, that helped his team track quality improvements.
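Two of the four DORA metrics — deployment frequency and change failure rate — can be computed straight from a deploy log. Here's a minimal sketch, assuming a hypothetical log of (date, caused-a-failure) records; the data and function names are illustrative, not from the article:

```python
from datetime import date

# Hypothetical deploy log: (deploy date, did it cause a production failure?)
deploys = [
    (date(2024, 9, 2), False),
    (date(2024, 9, 5), True),
    (date(2024, 9, 9), False),
    (date(2024, 9, 12), False),
]

def deployment_frequency(deploys):
    """Average deploys per week over the span of the log."""
    days = (max(d for d, _ in deploys) - min(d for d, _ in deploys)).days + 1
    return len(deploys) / (days / 7)

def change_failure_rate(deploys):
    """Fraction of deploys that led to a failure in production."""
    failures = sum(1 for _, failed in deploys if failed)
    return failures / len(deploys)

print(round(deployment_frequency(deploys), 2))  # deploys per week
print(change_failure_rate(deploys))             # 0.25
```

The other two DORA metrics (lead time for changes and time to restore service) need richer data — commit timestamps and incident records — but follow the same pattern of aggregating over a log.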
Welcome to the 246th issue!
It's hard to believe that DORA metrics have been around for a decade now.
They're considered an industry-standard way of tracking software delivery throughput and stability.
So it's great to see Google's Highlights from the 10th DORA report — compiled by Nathen Harvey and Derek DeBellis.
What got my attention is that nearly 75% of respondents use AI for writing code and 60% for test automation.
However, as the report shows, AI adoption has had a negative impact on both software delivery throughput and stability.
No wonder 39% of respondents don't fully trust AI-generated code.
So what is there to learn for us?
Let me answer with this megathread on Reddit — Manual testers are ABSOLUTELY needed.
Happy testing! 🙂
Cypress 14 with Lighthouse — Part 2
Kishor Munot demonstrates how to automatically measure website performance metrics using two open-source tools — Cypress and Lighthouse from Google. You can also read the first part, which explains how Lighthouse works.