Welcome to the 290th issue!
How is AI changing testing?
Some say it's only a helpful addition to our work, while others claim it may replace the majority of our responsibilities.
I enjoy learning different points of view, and today I want to share with you several opinions that I've found recently:
Happy testing! 🙂
Dawid Dylowicz
Sponsored Link
Feel like your executives don't get what testing actually takes?
You're not alone. New research from Sauce Labs reveals that 61% of testers agree that leadership doesn't understand what it takes to test software successfully. Even worse? When AI makes mistakes (and it does—a lot), 60% of leaders say employees bear the blame, not the AI itself.
Sauce Labs' latest blog breaks down concrete strategies you can use to help leadership actually get it — from reframing metrics in business language they care about to getting a seat at the table earlier in the process. Whether you're trying to justify automation investments or just want your executives to stop asking "why testing takes so long," this one's worth your time. Get the tips now.
NEWS
How We Boosted Quality and Team Velocity by Automating Tests Directly From Jira
Tracking testing across releases isn't easy, especially when information is spread across different tools and channels. Marina Jordão describes how they automated that process based on Jira statuses and suggests how AI can help further.
Similarly, Sudeep Patra considers using The QA Assistant Trained on Bug Backlog to help with triaging.
Rethinking Metrics
This is a great, thought-provoking article by Vernon Richards about why DORA metrics might not be perfect and an alternative approach to metrics.
Furthermore, Veronika Moran shares how they implemented QA metrics from scratch and Abhishek Verma gives practical advice on How I Use Data Analytics to Find Gaps in Test Coverage.
The Evolution of Quality: From Testing to Quality Engineering
Mona M. Abd El-Rahman takes us on an interesting journey, describing in detail how Quality Engineering was born and what practices it involves.
Furthermore, Pritanshu Dwivedi says that Quality Is Everyone's Job — Thanks to the Platform Team that helps put the structure in place.
When Tests Start Drawing the Map
When it comes to testing, we often find ourselves in imperfect environments, with some information missing here and there. Charlie Kingston comes up with an analogy to explain how good testing practices can help.
Similarly, Mark Nicoll offers guidance on Working With No Requirements.
AUTOMATION
Building a Solid Foundation for Performance Testing
Interested in running performance tests? Yanming Zhai gives a few helpful tips on how to prepare well for that, regardless of the tools you're planning to use.
Similarly, Alireza Ghorbani explains why it's good to take into account The Truth About "Simultaneous" Requests in Load Testing.
Debug like a boss: 10 debugging hacks for developers, quality engineers, and testers
Whether you're a tester or developer, debugging will inevitably be part of your job. Hanisha Arora gives several concise tips for doing it well.
In relation to that, Maaret Pyhäjärvi explains why it's important to focus on Feedback and Actions as testers.
Selenium tests breaking constantly after every UI change. Is test maintenance really supposed to take this much time?
Someone shared their problem with flaky automated tests and asked for advice. There have been many insightful responses from the community on how to address this specific problem, as well as test flakiness in general.
And, in another discussion, someone points out why Structural XPath locators are killing your test stability (and what to do about it).
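As a rough, generic illustration of that point (this is my own sketch, not an example from either thread; the URL and the data-testid attribute are assumptions), a locator that encodes the exact DOM hierarchy breaks whenever the layout shifts, while targeting a stable attribute and waiting explicitly for it tends to survive UI changes:

```typescript
import { Builder, By, until } from 'selenium-webdriver';

async function run(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/login'); // assumed URL

    // Brittle: a structural XPath that encodes the exact DOM hierarchy.
    // Any extra wrapper div or reordered sibling breaks it.
    // const submit = await driver.findElement(
    //   By.xpath('/html/body/div[2]/div/form/div[3]/button[1]')
    // );

    // More stable: target an attribute that developers control, and wait
    // explicitly for the element instead of relying on fixed sleeps.
    const submit = await driver.wait(
      until.elementLocated(By.css('[data-testid="login-submit"]')),
      10_000
    );
    await submit.click();
  } finally {
    await driver.quit();
  }
}

run();
```

Agreeing on test IDs (or similarly stable attributes) with developers means most cosmetic UI changes no longer ripple through the whole suite.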
Testing Application in Next.js — Part 4: E2E tests & CI environment
Continuing the series on setting up a test framework for a web application, this time Dmitry Ivanov demonstrates how to implement Cypress end-to-end tests running in a GitHub Actions pipeline.
Furthermore, Mrinal Maheshwari gives an overview of How to Write Unit Test Cases for Any Application.
Testing in production?
It's a good practice, but only when done right. It requires a strong culture and preparation, as outlined by Deepesh Mohan.
Similarly, Kiryl Dubarenka describes Why Good Engineers Skip QA when confidence is high, which goes beyond just testing in production.
TOOLS
Authentication in Playwright: You Might Not Need Project Dependencies
Nearly every web application requires some form of authentication before you can test most of its flows. Vitaliy Potapov shows some smart Playwright features for handling that.
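One widely used pattern in this space (a minimal sketch under my own assumptions, not necessarily the approach Vitaliy describes; the file names, selectors, URLs and environment variables are made up) is to sign in once, persist the browser's storage state to a file, and have tests reuse it:

```typescript
// auth.setup.ts (hypothetical): sign in once and persist the session.
import { chromium } from '@playwright/test';

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/login'); // assumed URL
  await page.getByLabel('Email').fill(process.env.TEST_USER_EMAIL ?? '');
  await page.getByLabel('Password').fill(process.env.TEST_USER_PASSWORD ?? '');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await page.waitForURL('**/dashboard');
  // Write cookies and local storage to a file that tests can reuse.
  await page.context().storageState({ path: 'playwright/.auth/user.json' });
  await browser.close();
})();

// dashboard.spec.ts (hypothetical): every test starts already authenticated.
import { test, expect } from '@playwright/test';

test.use({ storageState: 'playwright/.auth/user.json' });

test('dashboard loads for a signed-in user', async ({ page }) => {
  await page.goto('/dashboard'); // assumes baseURL is set in playwright.config
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```

The main trade-off is keeping the saved state fresh, for example by regenerating it at the start of a CI run.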
FunnelPeek: A Modern Tool for Exploring Android UI Elements
Anyone who has tested mobile apps knows that getting the right locators can be tricky at times. Saeed Roshan demonstrates FunnelPeek, an open-source UI inspector for Android that makes finding element paths easier.
What's more, Josphine Job shares a story of How We Fixed Android WebView Automation in Appium 2.
How Playwright Test Agents Are Changing the Game in E2E Automation
Playwright definitely stirred up excitement with the recent release of natively supported test agents. Kostiantyn Teltov gives a detailed overview.
Moreover, Eleonora Belova shows an in-depth example of applying Model-Based Testing with Playwright.
Optimising Cypress Video Artifacts
Cypress has a helpful feature that records video of test executions, helping you debug failures. However, these recordings can quickly add up to excessive storage consumption, so Shubham Sharma advises on how to optimise them.
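To give a flavour of the knobs involved (a sketch of one common approach, not necessarily the article's exact recipe), Cypress lets you tune the video compression level and drop recordings for specs whose tests all passed:

```typescript
// cypress.config.ts
import { defineConfig } from 'cypress';
import * as fs from 'node:fs';

export default defineConfig({
  video: true,
  // ffmpeg CRF value: higher means smaller files at lower quality;
  // set to false to skip re-encoding altogether.
  videoCompression: 32,
  e2e: {
    setupNodeEvents(on) {
      // After each spec, delete the video unless some attempt failed,
      // so only recordings that actually help debugging are kept as artifacts.
      on('after:spec', (spec, results) => {
        if (results && results.video) {
          const hadFailure = (results.tests ?? []).some((test) =>
            test.attempts.some((attempt) => attempt.state === 'failed')
          );
          if (!hadFailure) {
            fs.unlinkSync(results.video);
          }
        }
      });
    },
  },
});
```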
Running Lighthouse CI in a Lightweight Docker Container
Google's Lighthouse is a helpful and easy-to-use tool for assessing web app performance. Pradap Pandiyan demonstrates how to set up regular runs in CI.
VIDEOS
What is Context Engineering?
You may have already heard of prompt engineering, but what about context engineering? In a 16-minute recording, Daniel Knott explains what it is and how it relates to Context-Driven Testing.
Speaking of the importance of context, Gil Zilberfeld points out why Your AI's Answers Are Subjective. Your Tests Don't Have to Be.