Sponsored Link
AI in QA: what's working, what's not, what's next
With one day to go until Breakpoint 2026, most QA teams have moved past the question of whether to use AI and are sitting with the harder one of how to use it well. Avinash Ahuja from NVIDIA, Keith Klain, and Ashley Hunsberger will be on stage talking about scaling AI, where agentic workflows actually deliver, and what it takes to move past the pilot stage. The conference runs free and virtual from May 12 to 14. Save your spot!
NEWS
Adventures in 30 Years in Engineering Productivity
Carlos Arguelles looks back on three decades of working on developer tools, testing infrastructure, canary monitoring and bug deduplication at Microsoft, Amazon and Google. A great read packed with stories, lessons learned and practical advice.
On top of that, Stuart Thomas describes The forgotten part of quality: paying attention to production.
How Junior Engineers Can Build Real Skills While Using AI
AI is a huge enabler for all engineers, including juniors. Dennis Martinez explains how they can leverage it, sharing simple advice like slowing down and writing code by hand. Applicable to testers, too.
The good news is that "Silly Questions" Are Almost Never Silly in the age of AI, as Emna Ayadi correctly points out.
Shift Left Did Not Fix It
A thought-provoking article by Brijesh Deb about the actual benefits of shifting testing left and why getting testers involved earlier doesn't always reduce risk later in the process.
Moreover, Lisa Crispin and Janet Gregory share some good thoughts on DORA and AI Capabilities.
Testing from first principles: when there is no guide to follow
Peter Wilson shares lessons from testing ML, mobile and CI/CD pipelines, with advice on falling back on fundamentals when there is no established guide to follow.
Also, Alan Page tells us to beware of making The Five Reasonable Mistakes — How leaders build low-quality systems without trying.
The Human in the Loop Isn't Going Anywhere: They're Just Moving Up
While AI can write the test code, humans should still decide what to test and why. Simon Prior explains why business context, risk judgment and independent validation are best done by people.
At the same time, Faris Kurnia shares a reflection about Rethinking QA: Are We Still Testing, or Just Tracking?
AUTOMATION
Anyone Can Build. Almost No One Can Maintain: The Real Cost of AI Coding
While it's trivial to generate code nowadays, Maksim Laptev points out that the challenges of maintaining it remain, drawing on a story of vibe coding a trading bot.
Also, someone asked on Reddit: Are you actually using AI in test automation?
Claude Code For QA — The Agentic Workflow That Will Save You 100+ Hours
Art Krylov demonstrates a Claude Code setup with sub-agents for creating test cases, manual runs and API automation, using MCP for Jira, Postman and more.
Furthermore, Swati Seela shares valuable Lessons Learned from Building an AI-Enabled Test Automation Repo.
Common Mistakes in Performance Testing (And How to Fix Them)
Doing load tests? Oleh Koren lists seven mistakes that can impact your results, with checklists and fixes you can apply on your next run.
Moreover, Tito Irfan Wibisono describes Chaos Engineering: How Software QA Engineers Test Resilience in Distributed Microservices.
Testing LLM Based Products: A Practical Guide for Delivery and Quality Teams
If you test LLM-powered systems, Alejandro Sierra gives advice on using a four-layer evaluation stack with code examples in PromptFoo, DeepEval, Ragas, LangSmith and Braintrust.
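The layered-evaluation idea can be sketched framework-agnostically. Below is a minimal TypeScript illustration of the pattern, not the article's actual PromptFoo/DeepEval setup: cheap deterministic checks run first, and a heuristic relevance score (a stand-in for the semantic-similarity or LLM-as-judge layers a real stack would use) runs only if they pass. All function names and thresholds here are hypothetical.

```typescript
// Minimal sketch of layered LLM-output evaluation. Names and the 0.5
// threshold are assumptions for illustration, not from the article.

type Check = (output: string) => { pass: boolean; reason: string };

// Layer 1: deterministic assertions (format, forbidden content).
const notEmpty: Check = (o) => ({ pass: o.trim().length > 0, reason: "non-empty" });
const noBoilerplate: Check = (o) => ({
  pass: !/as an ai language model/i.test(o),
  reason: "no boilerplate apology",
});

// Layer 2: a heuristic relevance score via keyword overlap.
function relevanceScore(output: string, expectedKeywords: string[]): number {
  const text = output.toLowerCase();
  const hits = expectedKeywords.filter((k) => text.includes(k.toLowerCase()));
  return hits.length / expectedKeywords.length;
}

function evaluate(output: string, expectedKeywords: string[]) {
  const layer1 = [notEmpty, noBoilerplate].map((c) => c(output));
  const failed = layer1.filter((r) => !r.pass);
  if (failed.length > 0) return { pass: false, score: 0, failed };
  const score = relevanceScore(output, expectedKeywords);
  return { pass: score >= 0.5, score, failed: [] as typeof failed };
}

const result = evaluate(
  "Retry with exponential backoff and add jitter to avoid a thundering herd.",
  ["backoff", "jitter"],
);
console.log(result.pass, result.score.toFixed(2));
```

The point of the ordering is cost: deterministic checks are free and catch gross failures before any expensive model-based judging runs.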
Similarly, Max Vornovskykh shares an insightful piece about Building a Robust Evaluation System for 32 AI Agents Across 4 Platforms.
TOOLS
Confessions of a Recovering Selenium Developer
Wondering why Kevin Roe moved from Selenium to Playwright? He walks through the frustrations that piled up over the years and ultimately led to the decision.
On the other hand, Puja Jagani describes how to use WebDriver BiDi for Test Automation: Preload Scripts and Test Efficiency.
Creating a Playwright framework with AI
Callum Akehurst-Ryan describes building a Playwright end-to-end framework with Claude Code, achieving broad coverage of high-priority workflows, and explains why keeping a human in the loop was key to making it work.
Also, Ekki Syam Sugiardi demonstrates an example of How I Built an AI Agent That Writes Playwright Tests From a GitHub Issue.
Modeling UI Flows as JSON: A Data-Driven Approach to Cypress Test Architecture
Want to make your Cypress tests easy to maintain? John Ringler shares a three-layer setup where each UI flow lives as JSON and a single dispatcher runs every step type.
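The pattern of flows-as-data with one dispatcher can be sketched in a few lines. This is a generic TypeScript illustration, not John Ringler's actual schema: step shapes are assumptions, and the handlers return strings so the sketch runs standalone. In real Cypress code, each handler would wrap cy.* commands (cy.visit, cy.get().type(), cy.get().click()).

```typescript
// Generic sketch of the "UI flow as JSON + single dispatcher" idea.
// Step shapes are illustrative assumptions, not the article's schema.

type Step =
  | { type: "visit"; url: string }
  | { type: "fill"; selector: string; value: string }
  | { type: "click"; selector: string };

// One handler per step type; the dispatcher never branches anywhere else,
// so adding a new step type means adding one JSON shape and one handler.
const handlers: Record<Step["type"], (step: any) => string> = {
  visit: (s) => `visit ${s.url}`,
  fill: (s) => `fill ${s.selector} with "${s.value}"`,
  click: (s) => `click ${s.selector}`,
};

// The dispatcher: walk a flow (parsed from JSON) and run each step's handler.
function runFlow(flow: Step[]): string[] {
  return flow.map((step) => handlers[step.type](step));
}

const loginFlow: Step[] = JSON.parse(`[
  { "type": "visit", "url": "/login" },
  { "type": "fill", "selector": "#user", "value": "alice" },
  { "type": "click", "selector": "#submit" }
]`);

console.log(runFlow(loginFlow));
```

The maintenance win is that test scenarios become plain data: non-engineers can edit the JSON, and selector or behavior changes are fixed once, in the handler.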
On top of that, Gleb Bahmutov compares Cypress cy.prompt Vs Recording Vs Coding.
@playwright-labs/reporter-slack: Rich Slack Notifications for Playwright Test Runs
If you want to get your Playwright test results in Slack, Vitali Haradkou shares how to achieve that with the @playwright-labs/reporter-slack extension.
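Third-party reporters plug into Playwright via the config file. The tuple shape `['package-name', options]` below is standard Playwright configuration; the option names are illustrative assumptions, so check the @playwright-labs/reporter-slack README for the real ones.

```typescript
// playwright.config.ts — registering a third-party reporter alongside
// the built-in list reporter. The reporter-tuple shape is standard
// Playwright; the options passed to the Slack reporter are hypothetical
// placeholders — verify them against the package's own documentation.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['list'],
    [
      '@playwright-labs/reporter-slack',
      {
        // Hypothetical options — names may differ in the real package.
        webhookUrl: process.env.SLACK_WEBHOOK_URL,
        channel: '#qa-results',
        notifyOnlyOnFailure: true,
      },
    ],
  ],
});
```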
Your First Appium Test: A Complete End-to-End Guide for Android and iOS
Mayvin Ramasawmy shows what it takes to run the same Appium tests across Android and iOS: login, swipes, scrolls and drag-and-drop, plus a side-by-side look at every spot where the two platforms diverge.
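Much of that platform split lives in the session capabilities. As a sketch, the `platformName`, `appium:automationName` (UiAutomator2 vs XCUITest) and vendor-prefixed `appium:` keys below are real Appium W3C capability names, while the device names and app paths are placeholders; the test bodies themselves can then be shared across both platforms.

```typescript
// Sketch of where Android and iOS Appium sessions split: the capabilities.
// Capability key names are real Appium W3C capabilities; device names and
// app paths are placeholders you would replace with your own.

const common = {
  "appium:newCommandTimeout": 120,
};

const androidCaps = {
  ...common,
  platformName: "Android",
  "appium:automationName": "UiAutomator2", // Android driver
  "appium:deviceName": "Pixel_7_Emulator", // placeholder
  "appium:app": "/path/to/app.apk",        // placeholder
};

const iosCaps = {
  ...common,
  platformName: "iOS",
  "appium:automationName": "XCUITest",     // iOS driver
  "appium:deviceName": "iPhone 15",        // placeholder
  "appium:app": "/path/to/app.app",        // placeholder
};

// A suite can pick the capability set from an env var and keep every
// test body identical across the two platforms.
const caps = process.env.PLATFORM === "ios" ? iosCaps : androidCaps;
console.log(caps.platformName, caps["appium:automationName"]);
```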
You can also learn more about using the Appium Inspector and Appium Execute Methods.
Welcome to the 312th issue!
While I was browsing Reddit, this discussion caught my eye:
What's one QA career move you made that gave the biggest ROI?
There are a lot of different points of view, examples and lessons on investing in specific skills, changing companies and going into management.
Hope you'll find some inspiration there, as well as in the rest of the news I'm sharing with you today.
Happy testing! 🙂
Dawid Dylowicz