Fri. Sep 29th, 2023

The introduction of generative AI to software testing has made a significant difference in the approach, quality and productivity of software testing.

Before generative AI, software testing was largely manual work that took time, hindered productivity, and didn’t always deliver the optimal result.

Take test cases written manually, for example: they leave open the risk of unidentified scenarios, because testers can only verify the outcomes they thought to expect. What you expect is what you get.

Generative AI has automated multiple actions in software testing, including test case generation, code mutation (intentionally manipulating code), and automated bug detection with speed and accuracy, freeing software testers to focus on more critical and creative assignments.

However, generative AI has certain limitations. For example, ChatGPT cannot run tests on mobile devices; it does not have access to all devices or emulators to run tests on.

Generative AI has evolved rapidly and may soon be entering unexplored areas of software testing. However, it raises a question: what happens to the software testers? Where do they go from here?

Software testing before generative AI

Before Generative AI, software testing was a manual process, which included:

  • Test case creation

Testers would create test cases based on their understanding of the requirements and scenarios. The test cases provide the steps to test the software and the expected outcome.

  • Test execution

Based on the test cases, the software tester would exercise the software and compare the actual outcome with the expected one in the test case.

  • Regression testing

Testers also wrote test cases for regression testing. When a feature is added, updated, or removed, regression testing discovers the effects of the change on the software, helping to identify any issues resulting from it.

  • Exploratory testing

Testers put themselves in the shoes of the software’s target audience and explored the software. The process surfaces flaws from a customer’s perspective.

  • Performance testing

The software would be put under different levels of user load to measure behavior such as page or screen load time and resource consumption under each scenario.

  • Security testing

The tester attempted to test the security of the software in various ways, such as entering the wrong credentials and trying other routes to access the software.
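As an illustration of the manual workflow above, a hand-written test case pairs explicit steps with an expected outcome. The sketch below uses a hypothetical `authenticate` function invented for illustration; it is not from any real system discussed here:

```python
# A hand-written test case: the tester enumerates the steps and the
# expected outcome for one scenario they thought of.
# `authenticate` is a toy stand-in for the system under test.

def authenticate(username, password):
    """Hypothetical login check, invented for illustration."""
    return username == "alice" and password == "s3cret"

def test_valid_login():
    # Step: submit known-good credentials.
    # Expected outcome: access is granted.
    assert authenticate("alice", "s3cret") is True

def test_wrong_password():
    # Step: submit a bad password.
    # Expected outcome: access is denied.
    assert authenticate("alice", "guess") is False

test_valid_login()
test_wrong_password()
```

Scenarios nobody wrote down (empty input, unusual characters, injection attempts) simply never get tested, which is the "what you expect is what you get" risk described earlier.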


Manual testing is a time-consuming process that is also prone to errors.

Software testing after generative AI

Generative AI has automated multiple areas of software testing and can significantly increase speed and productivity:

  • Test data generation

The availability of test data was a problem before generative AI, and the data took a while to generate. Generative AI can quickly generate quality test data for different scenarios and create edge cases and input combinations that are difficult or time-consuming to produce manually.

  • Test case generation

Scenarios can be missed when creating test cases manually. With generative AI, you can supply the software details, such as requirements, specs, and design, along with appropriate directions, and it will generate an extensive set of test cases.

  • Mutation testing

Mutation testing introduces small, deliberate changes into the code and checks whether the tests detect the difference between the original and the mutated version. Manually modifying code to create mutated versions is a laborious process, but generative AI can create multiple mutated versions of the code, with each test a step toward ensuring code quality.

  • Code review and analysis

Code review, analysis, and refactoring are substantial but necessary tasks, and doing them manually requires more manpower and time. Generative AI can review code, provide suggestions and examples of improved code, and even refactor code.

Case study

This case study is about how generative AI enabled superior testing of Meta’s Messenger app.

Messenger is a popular app that needs no introduction. However, manual testing was challenging.

For example, users access the app in different ways and on different devices such as mobile, tablet and laptops, creating different scenarios for app access and use. Identifying so many scenarios and creating test cases manually is a big challenge, and failures impact the quality of the app and the user experience.

Generative AI simulated user interactions in real time and captured the data. This gave testers insight into how users actually behave, rather than relying on what they assume when writing test cases. Based on the user interaction data, generative AI generated test cases and test data, then automated testing of the app.


The main benefits of using generative AI were a better understanding of user behavior, a superior testing process and automated testing. Meta saw improved productivity, higher user satisfaction, fewer bugs, and a better testing workflow.

It reinforced the idea that manual testing can capture user intentions to some degree, but that generative AI can continually learn from user behaviors and idiosyncrasies.

Limitations of generative AI

Despite all the benefits it brings and the excitement it generates, generative AI is still evolving in the niche of software testing because it still has limitations.

For example, ChatGPT has a hard time testing apps on mobile devices: it cannot access different physical devices or emulators, nor parse device-specific details to write device-aware test cases. It cannot, for instance, load an app on a particular Samsung phone and test the app’s login screen.

Because the field is still evolving, we can’t yet say for sure that generative AI is the answer to the problems of manual software testing.

What it comes down to

Generative AI is a powerful force, and it is here to stay. It operates on a continuous learning model, and it is reasonable to assume that it will overcome its limitations one by one.

But will it replace manual testers? The answer depends on multiple factors, such as whether generative AI can run tests in extremely complex, high-stakes applications.

Will stakeholders trust generative AI enough to test extremely complex applications?

The answer may be a mix of automated and manual testing, as complex applications require human analysis and finesse.

Overall, the future is not set yet, but it certainly looks like manual engagement and automated testing will coexist.

By Admin