Sponsored Link
Breakpoint 2026: Where QA Teams Figure Out AI Together
Most teams are navigating AI-driven testing without a playbook. Breakpoint 2026 is where QA leaders from NVIDIA, Mastercard, KPMG, Deloitte, and Microsoft share what's actually working — live sessions, real Q&A, and hands-on workshops. Join from wherever you are on 12–14 May 2026. Save your spot!
NEWS
Drop the test column. Now what?
Ah, the famous "In Testing" column! There are pros and cons to it, and if you're thinking about removing it, Jitesh Gosai suggests which signals to put in place to keep work visible.
Relatedly, Shawn Vernier wrote a good article on Why software testers should write documentation even if no one else reads it.
Should QA exist?
Controversial! Jade Rubick wrote on whether engineering orgs should have QA at all, sparking an intriguing discussion among the testing community.
On that note, someone also asked: With AI advancing so fast… is Developer or QA the safer career now?
Testing the "Yes-Man" in Your Pocket
A sobering read from Jeff Nyman on why AI is biased towards agreeing with you, and why evaluation should look beyond user satisfaction scores.
That's why Huib Schoots says we should value Critical Thinking and Bram van den Reijen explains why Being right is not enough: testing, truth and responsibility.
The Code Tsunami
Vernon Richards reflects on how being a tester is becoming less about providing information and more about designing the decision framework that enables others to release safely.
Similarly, Andrii Cheparskyi talks about The Hidden Cost of AI-Generated Test Code, while Gil Zilberfeld explains why The Knowledge Void Eats Quality for Breakfast.
You're Not Meta: Build Fast Teams with Fast Tests
An interesting read from Sayo Oladeji who advises investing in faster local test execution, fewer flaky tests and a balanced test pyramid, based on lessons learned at Google and Meta.
Moreover, Sanjeev Kumar tells us How I measure quality without looking at bug counts.
AUTOMATION
AI in Test Automation: Real Limitations vs. User Error
David Mello reviews the most common AI testing complaints and confirms or debunks them, noting whether they're caused by AI or by people.
Moreover, Pramod Dutta explains Why AI Made QA Engineers Write MORE Code, Not Less.
AI-Powered API Testing at Scale
Wondering how to develop and run API tests across several repositories? Nikhil Gupta shows how to achieve that with an AI-powered approach.
Similarly, Olamide Adebayo describes how We Replaced Hours of Manual API Testing With an AI Agent Running Integration Tests in Real Time.
The Bottom-Up Secret to 100% Test Coverage at Enterprise Scale
Is 100% coverage even possible? Alexandre Mendes doubted it too: it feels impossible when you think top-down, but becomes much more achievable when you compose upward from smaller pieces.
On top of that, Doğukan Aydoğdu shares insights into How We Build Bulletproof Integration Tests.
Writing More Readable Tests
Solid post from Ürgo Ringo on highlighting what matters in test code and hiding what doesn't. Especially handy now that LLMs benefit from readable tests too.
Speaking of that, someone asked the testing community: Performing Assertions within the Page Object itself — is it a good idea?
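To make the question concrete, here's a minimal sketch of the two styles being debated, using a hypothetical LoginPage and a stubbed page handle (no real browser or framework involved):

```javascript
// Style A: assertion lives inside the Page Object. Convenient, but it
// couples page classes to the test framework and hides intent from the test.
class LoginPageAsserting {
  constructor(page) { this.page = page; }
  assertErrorShown(expected) {
    const actual = this.page.textOf('#error');
    if (actual !== expected) throw new Error(`expected "${expected}", got "${actual}"`);
  }
}

// Style B: the Page Object only exposes state; the test does the asserting.
// Tests stay readable and the page class stays framework-agnostic.
class LoginPage {
  constructor(page) { this.page = page; }
  errorMessage() { return this.page.textOf('#error'); }
}

// Minimal stub standing in for a Playwright/Selenium page handle.
const fakePage = { textOf: (sel) => (sel === '#error' ? 'Invalid password' : '') };

const loginPage = new LoginPage(fakePage);
console.log(loginPage.errorMessage()); // "Invalid password"
```

A common middle ground is Style B for queries plus a thin assertion layer in test helpers, so reuse doesn't drag the test framework into page classes.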
TOOLS
Designing Realistic Load Test Scenarios with k6
Indra Aristya shows how to design k6 load tests that mimic actual user behaviour with gradual ramp-up, sustained peak, ramp-down and think time between actions. Much more meaningful than a flat 100 VUs.
What's more, Bob Chen describes in three parts why K6.js API Health Tests Are Not a Testing Problem.
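For a feel of the shape Indra describes, here's a minimal load-profile sketch. The stage durations, targets and the thinkTime helper are illustrative assumptions, not taken from the article; in a real k6 script the options object would be exported and the VU function would call k6's http.get() and sleep().

```javascript
// Hypothetical k6 load profile: gradual ramp-up, sustained peak, ramp-down.
const options = {
  stages: [
    { duration: '2m', target: 50 }, // ramp up to 50 virtual users
    { duration: '5m', target: 50 }, // hold the peak
    { duration: '2m', target: 0 },  // ramp back down
  ],
};

// Think time: pause 1-3 s between actions, as a real user would,
// instead of hammering the endpoint in a tight loop.
function thinkTime() {
  return 1 + Math.random() * 2; // seconds to pass to k6's sleep()
}

console.log(`${options.stages.length} stages; sample think time: ${thinkTime().toFixed(1)}s`);
```

The ramp/hold/ramp shape also gives you three distinct windows to compare latencies in, which a flat VU count can't.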
I Built a QA Quality Gate System With Claude Code Hooks. Every AI-Generated Test Now Passes My Standards Before It Exists.
Wondering how to stop Claude from writing tests with hardcoded credentials and CSS selectors? Pramod Dutta demonstrates how he set up four hooks as predictable quality gates.
Playwright Fixture Scopes: Know When to Use What
I wonder if you're aware of this handy feature in Playwright fixtures. Gurudatt S A covers how and when to use the test, worker, and file scopes.
Moreover, Viatsheslav Pashanin advises on Eliminating Authentication Overhead in Playwright Automation.
Testing LLM Outputs: A Hands-On Guide to DeepEval Metrics
Serhii Smetanskyi shares what each DeepEval metric actually does, what surprised him, and what he wishes someone had told him before starting. Great reference if you're testing LLM features.
Similarly, Katja Obring points out that The Hard Part of AI Evals Isn't the Tooling.
WICK-DOM-OBSERVER: The Deterministic Cypress Plugin for Fast Spinners, Blinking Toasts, Optional Overlays
Tired of flaky Cypress tests caused by quickly changing web elements? Sebastian Clavijo Suero built a plugin that uses MutationObserver to catch those tricky UI elements in a deterministic way.
At the same time, Vitaly Skadorva describes how to do Accessible web testing with Cypress and Axe Core.
BOOKS
Essential reading for people who are still responsible and serious in IT
Maaike Brinkhof shares five books that helped her think more critically about IT, AI hype, and productivity culture. From cognitive biases to the history of the Luddites, this is an interesting and somewhat unconventional reading list.
Put quality at the centre of what you do — A Review of 'Out of the Crisis' by W. Edwards Deming
A short, useful review from Mike Harris of Out of the Crisis, a classic quality management book that's still relevant decades later.
VIDEOS
QA Job Market Reality Check
Alex Khvastovich is back with his regular overview of the QA job market, sharing insights and advice.
Additionally, Karthik KK shares a bit more about The AI Layoff Lie: What Big Tech Isn't Telling You.
Welcome to the 308th issue!
How should we approach testing in the age of AI?
It's an increasingly important question we should be asking ourselves.
So today, I want to highlight this great, thought-provoking article by Carlos Arguelles, Senior Principal Engineer at Amazon:
The Outer Loop is Dead: Rethinking how we test.
Happy testing!
Dawid Dylowicz