Day 4: Watch the AMA on Artificial Intelligence in Testing and share your key takeaway

On Day 4 of the 30 Days of AI in Testing challenge, we’d like you to watch this Ask Me Anything on Artificial Intelligence in Testing with the incredibly knowledgeable Carlos Kidman, a seasoned expert in AI and testing.

During this AMA, Carlos shares his experiences and insights on applying Machine Learning to solve complex testing challenges, his transition to leading AI initiatives in testing, the future of AI in testing and much more!

Task Steps

  • Watch the “Ask Me Anything on Artificial Intelligence in Testing” with Carlos. You can watch the whole thing (highly recommended!) or pick questions of interest using the chapters icon on the player, or by clicking the chapters indicated as small dots on the playbar. Take notes as you go.

  • After watching, reflect on the session and share the takeaway that had the biggest impact on you by clicking the ‘Take Part’ button and replying to The Club topic. For example, this could be a new understanding of AI’s potential in testing or an ethical consideration that stood out to you.

Why Take Part

  • Deepen Your AI Knowledge: Carlos’s experiences and the many topics covered in the AMA provide a great source of information to quickly increase your understanding of the vast role AI can play in testing.

  • Engage with Your Peers: Post your key insights from the AMA and see what others took away. It’s a great way to gather different perspectives.

  • Free Access: Here’s an extra incentive to watch! The AMA recording, previously pro-exclusive content, is now freely available to all members throughout March 24. Seize this chance to watch it for free.

My Day 4 Task

Watching the whole video, I found it covered topics such as how to test for AI biases, how to build user confidence in AI-powered software, how to use AI to help with day-to-day testing, how to use machine learning for testing, how to ensure data security and confidentiality, the role of AI in usability and UX testing, and the role of the software tester in the next decade.

Carlos also shared his thoughts on AI’s role in the future of software development and testing, suggesting that AI will play an important part in automated testing and that the software tester’s role will shift toward analyzing and evaluating AI-generated test results. He also touched on ethical and compliance issues when using AI and emphasized the importance of monitoring AI performance and data drift.

Finally, Carlos mentioned the potential of AI to help junior testers improve their testing capabilities. The entire interview touched on the use of AI and machine learning in software testing, the biases and limitations of testing AI, and how AI can help improve testing efficiency and quality.

The following topics were of particular interest to me:

  • Can you test for biases in AI?

  • How can you assess the confidence your users have in your AI-powered software?

  • What tools are you using for AI testing?

  • How can we use AI in day-to-day testing?

  • How to get into AI testing?

  • How do you safeguard the quality of an AI whose behavior changes in production?

Regarding testing AI biases, Carlos Kidman mentioned that it is possible to test AI bias using the invariant testing technique. This technique involves replacing words to see how the AI reacts. For example, he mentioned replacing “Chicago” with “Dallas” in a sentence and observing the AI’s change in sentiment analysis. In this way, biases in AI models can be identified and corrected.
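The invariance technique Carlos describes can be sketched in a few lines of Python. This is a hedged illustration, not his implementation: `analyze_sentiment` is a deliberately biased toy stand-in for a real model, used to show how swapping one term for an equivalent one can expose bias.

```python
# Invariance (metamorphic) testing sketch: replacing one city with another
# should NOT change the model's sentiment prediction.

def analyze_sentiment(text: str) -> str:
    """Toy stand-in for a real sentiment model, deliberately biased:
    it reacts to the city name rather than the content."""
    return "negative" if "Chicago" in text else "positive"

def check_invariance(template: str, original: str, replacement: str) -> bool:
    """Return True if swapping the term leaves the prediction unchanged."""
    before = analyze_sentiment(template.format(original))
    after = analyze_sentiment(template.format(replacement))
    return before == after

template = "The team in {} shipped the release on time."
# A biased model fails this invariance check.
print(check_invariance(template, "Chicago", "Dallas"))  # False: bias exposed
```

With a real model you would run this check across many templates and term pairs (cities, names, genders) and flag any pair where the prediction flips.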

Regarding assessing user confidence in AI software, Carlos mentioned the use of observability techniques. He gave an example of how data can be collected through user feedback (e.g., likes or taps) and analyzed to assess user confidence and satisfaction with AI output.
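One simple way to turn that feedback into an observable signal is to aggregate reactions into a score. This is a minimal sketch of the idea, with the event names and scoring rule being my own illustrative assumptions:

```python
# Sketch: aggregating explicit user feedback (likes/dislikes on AI output)
# into a simple confidence signal, as one observability input.

from collections import Counter

def confidence_score(feedback: list[str]) -> float:
    """Fraction of positive reactions among all recorded reactions."""
    counts = Counter(feedback)
    total = counts["like"] + counts["dislike"]
    return counts["like"] / total if total else 0.0

events = ["like", "like", "dislike", "like"]
print(confidence_score(events))  # 0.75
```

In practice this score would be tracked over time per feature or model version, so a drop after a model update is immediately visible.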

In terms of AI testing tools, Carlos mentioned that they use LangSmith, part of the LangChain ecosystem, to observe the performance of AI systems. He also mentioned using pytest to automate some test cases.
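A pytest test for AI output might look like the sketch below. Because model output is non-deterministic, the tests assert on properties (length, presence of key terms) rather than exact strings; `summarize` here is a hypothetical wrapper around a model call, not anything Carlos showed.

```python
# Hedged pytest sketch: assert properties of AI output, not exact strings.

def summarize(text: str, max_words: int = 20) -> str:
    """Hypothetical model wrapper; here a trivial stand-in that truncates."""
    return " ".join(text.split()[:max_words])

def test_summary_is_shorter_than_input():
    text = "word " * 100
    summary = summarize(text)
    assert len(summary.split()) <= 20

def test_summary_keeps_key_term():
    summary = summarize("The login service failed under load testing")
    assert "login" in summary
```

Run with `pytest` as usual; property-style assertions stay stable even when the underlying model changes its wording.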

Regarding the use of AI in day-to-day testing, Carlos suggested trying to use tools like ChatGPT and Bard to inspire creativity and solve testing problems. He emphasized the need for tools to have enough context to be effectively applied to testing.

For how to get into AI testing, Carlos suggested that beginners use tools like ChatGPT and Bard to start exploring, which will help them discover the potential uses of AI in testing.

Finally, on how to safeguard the quality of AI performance in production environments as data changes, Carlos emphasized the importance of monitoring AI performance, referring to the concept of “data drift” and sharing a story about a real estate company that lost money by failing to monitor AI performance. He cautioned that as the environment changes, AI needs to be updated and adapted to maintain its performance and effectiveness.
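The data-drift idea can be made concrete with a small statistical check. This is a minimal sketch, assuming a numeric feature (house prices, echoing the real-estate story) and a z-score threshold I chose for illustration; production systems typically use richer distribution tests.

```python
# Sketch of data-drift monitoring: compare a live feature's distribution
# against the training baseline and alert when it diverges.

from statistics import mean, stdev

def drift_detected(baseline: list[float], live: list[float],
                   z_threshold: float = 3.0) -> bool:
    """Flag drift when the live mean moves more than z_threshold
    standard errors away from the baseline mean."""
    std_err = stdev(baseline) / len(baseline) ** 0.5
    return abs(mean(live) - mean(baseline)) > z_threshold * std_err

baseline_prices = [300.0, 310.0, 295.0, 305.0, 300.0]  # training-time data
live_prices = [420.0, 430.0, 415.0, 425.0, 410.0]      # market has shifted
print(drift_detected(baseline_prices, live_prices))  # True
```

Wired into a monitoring job, a `True` result would trigger an alert to retrain or re-evaluate the model before its predictions quietly degrade.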

The most impactful point for me: how to make better use of AI’s capabilities rather than simply using it.

Using AI in our testing work is about improving both efficiency and quality.

How to make greater use of AI’s capabilities, by providing well-crafted prompts and sufficient context, so that it helps us complete our work more efficiently and to a higher standard, may be the direction we need to think about going forward.

About Event

The “30 Days of AI in Testing Challenge” is an initiative by the Ministry of Testing community. The last time I came across this community was during their “30 Days of Agile Testing” event.

Community Website:

Event Link: