Day 18: Share your greatest frustration with AI in Testing

It’s Day 18! Throughout our 30 Days of AI in Testing journey, we’ve explored various applications of AI across different testing activities. While AI’s potential is undoubtedly exciting, we cannot ignore the personal frustrations that may have arisen as you experimented with these new technologies.

Today’s task provides an opportunity to share the personal frustrations or concerns you’ve encountered while working with AI during this challenge. By openly discussing these individual experiences, we can gain a deeper understanding of the potential pitfalls and identify areas where AI technologies need to improve.

Task Steps

  • Identify Your Frustration: Think back to your experiences throughout the challenge. What aspect of AI in testing caused you the most frustration or concern? Here are some prompts to get you started:

    • Limited Functionality: Did you find that the AI tools lacked the capabilities you were hoping for in specific testing areas (e.g., usability testing, security testing)?
    • The Black Box Conundrum: Were you frustrated by the lack of transparency in some AI tools? Did it make it difficult to trust their results or learn from them?
    • The Learning Curve Struggle: Did the complexity of some AI tools or the rapid pace of AI development leave you feeling overwhelmed?
    • Bias in the Machine: Did you have concerns about potential bias in AI algorithms impacting the testing process (e.g., missing bugs affecting certain user demographics)?
    • Data Privacy Worries: Are you uncomfortable with how AI tools might use or store your testing data? Do you have concerns about data security or anonymisation practices?
    • The Job Security Conundrum: Do you worry that AI might automate testing tasks and make your job redundant?

Feel free to add your own frustration if the above prompts don’t resonate with you!

  • Explain Your Perspective: Once you’ve identified your frustration, elaborate on why it’s a significant issue for you in a reply to this post. Does it relate to your experience working with AI in testing?
  • Bonus - Learn from Shared Experiences: Engaging with the personal experiences shared by others can provide valuable insights and shed light on challenges or frustrations you may not have considered. Like or reply to those who have broadened your perspective.

Why Take Part

  • Identify Areas for Improvement: By openly discussing our frustrations with AI in testing, we can foster open communication and a more balanced approach to its implementation and development, as well as identify areas where AI tools, techniques, or practices need further refinement.

https://club.ministryoftesting.com/t/day-18-share-your-greatest-frustration-with-ai-in-testing/75215

My Day 18 Task

My Concerns and Challenges Using AI Tools for Testing Activities

Data Privacy and Security Concerns with AI Tools

In the challenges of the past several days, I’ve mentioned my concerns about data privacy and security when using AI tools. Because of these concerns, I’ve been cautious about applying AI tools to testing activities, carefully stripping out any project-related context before sharing anything with them. This caution makes the process harder and introduces discrepancies between what the AI tools produce and what the project actually needs, so it’s difficult to apply their output directly to current testing work, which limits any real gain in testing efficiency.
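To make that filtering step concrete, here is a minimal Python sketch of the kind of sanitising I mean before pasting test artefacts into an external AI tool. The patterns and the product name "AcmePay" are purely hypothetical placeholders, not taken from any real project; in practice you would tailor the list to whatever identifies your own system.

```python
import re

# Hypothetical project-identifying details to strip before sharing text
# with an external AI tool; adjust the patterns to your own project.
SENSITIVE_PATTERNS = [
    (re.compile(r"https?://[\w./-]+"), "<URL>"),               # internal URLs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),   # email addresses
    (re.compile(r"\bAcmePay\b"), "<PRODUCT>"),                 # placeholder product code name
]

def sanitise(text: str) -> str:
    """Replace project-identifying details with neutral placeholders."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    raw = "Login at https://staging.acmepay.internal fails for qa.lead@acmepay.com on AcmePay v2.3"
    print(sanitise(raw))
    # -> "Login at <URL> fails for <EMAIL> on <PRODUCT> v2.3"
```

Of course, every placeholder removes context the model could have used, which is exactly why the answers I get back often miss the mark for the real project.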

Functional Limitations of AI Tools

During the recent days of the AI testing challenge, I’ve experimented with various AI testing tools, including Applitools Eyes, Katalon, Testim, and Postman’s API testing assistant, Postbot. While the AI features of most of these tools do improve testing efficiency, the gains are still limited, and there is a significant gap between what the AI functionality actually delivers and what the official promotional materials describe. The hype feels greater than the actual performance.

Learning Curve Challenges with AI Tools

Here, I’d like to discuss the comprehension capabilities of different large AI models, such as GPT-3.5, GPT-4, Gemini Pro, and Claude 3. These models can produce quite different results for the same prompt, so applying them to daily testing activities takes time to adapt: you end up comparing the models against each other to learn which testing activities each one handles best.
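As an illustration of that comparison workflow, the Python sketch below sends one test-design prompt to several models and collects the answers side by side. The callables are stubs and the model names and prompt are only examples; you would wire in each vendor’s real SDK (OpenAI, Google, Anthropic) for the models you actually have access to.

```python
from typing import Callable, Dict

# Example prompt; any test-design question would do.
PROMPT = (
    "Suggest boundary-value test cases for a login form whose "
    "username field must be 3-20 characters long."
)

def compare_models(clients: Dict[str, Callable[[str], str]], prompt: str) -> Dict[str, str]:
    """Send the same prompt to every model and return the raw answers for side-by-side review."""
    return {name: ask(prompt) for name, ask in clients.items()}

if __name__ == "__main__":
    # Stubbed callables -- replace each lambda with a real SDK call.
    clients = {
        "gpt-3.5": lambda p: "stubbed answer from GPT-3.5",
        "gpt-4": lambda p: "stubbed answer from GPT-4",
        "gemini-pro": lambda p: "stubbed answer from Gemini Pro",
        "claude-3": lambda p: "stubbed answer from Claude 3",
    }
    for model, answer in compare_models(clients, PROMPT).items():
        print(f"--- {model} ---\n{answer}\n")
```

Even a crude side-by-side like this makes it easier to judge which model is worth the effort for a given testing activity, rather than adapting to each one by trial and error.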

Difficulty in Accessing AI Tools

For many IT professionals outside China, accessing the latest AI testing tools and large AI models is relatively straightforward. For IT personnel in mainland China, however, it is exceptionally difficult. The first hurdle is usually registering an account; the next is paying for the service.

About Event

The “30 Days of AI in Testing Challenge” is an initiative by the Ministry of Testing community. The last time I came across this community was during their “30 Days of Agile Testing” event.

Community Website: https://www.ministryoftesting.com

Event Link: https://www.ministryoftesting.com/events/30-days-of-ai-in-testing

Challenges: