Generative AI is starting to help software engineers solve problems in their code. The impact of this on quality engineers is already being felt.
According to Stack Overflow’s 2023 Developer Survey, 70% of all respondents are using or planning to use AI tools in their development process. Further, the survey of roughly 90,000 developers found that 86% of professional developers want to use AI to help them write code.
The next largest use for AI, cited by about 54% of professional developers, is debugging code. Another 40% of that cohort said they’d use AI for documenting code, and a fourth group, 32%, said they’d use it to learn about a codebase.
Each of these use cases creates significant opportunities to speed the creation and delivery of code, but according to Gevorg Hovsepyan, head of product at low-code test automation platform mabl, each also creates significant quality risk. The impact of AI on software quality is only just being assessed, even as consumer expectations continue to rise.
Though AI can quickly produce large quantities of information, the quality of that output is often lacking. One Purdue University study found, for example, that ChatGPT answered 52% of software engineering questions incorrectly. Accuracy varies across models and tools, and will likely improve as the market matures, but software teams still need to ensure that quality is maintained as AI becomes an integral part of development cycles.
Hovsepyan explained that engineering leaders should consider not just how AI is affecting their development pipelines, but who it is affecting. AI tools can boost developer productivity, but unless QA also embraces AI support, those gains will be lost to testing delays, bugs in production, or longer mean time to resolution (MTTR).
“We saw this trend with DevOps transformation: companies invest in developer tools, then wonder why their entire organization hasn’t seen improvements. AI will have the same impact unless we look at how everyone in the ecosystem is affected. Otherwise, we’ll have the same frustrations and slower transformation,” Hovsepyan said.
AI can also further lower the barrier to entry for non-technical people, breaking down long-standing silos across DevOps teams and empowering more people to contribute to software development. For software companies, that broader participation can help reduce the risk of AI experimentation. Hovsepyan shared:
“No one knows your customers better than manual testers and QA teams, because they live in the product and spend much of their time thinking about how to better account for customer behavior. If you give those people AI tools and the resources to learn new technologies, you reduce the risk of AI-generated code breaking the product and upsetting your users.”
So if AI is not yet at the point where it can be fully trusted, what can quality engineers do to mitigate the risks? Hovsepyan said you can’t address them all, but you can position yourself in the best possible way to handle them.
By that, he means learning about AI, its capabilities and flaws. First, he said, it’s “incredibly important for quality engineers to figure out a way to get out of the day-to-day tactical, and start thinking about some of these major risks that are coming our way.”
He went on to say that intelligent testing can help organizations win back time to focus on bigger-picture questions. “If you do test planning, you can do it with intelligent testing solutions. If you do maintenance, you remove some of that burden, and win the time back. In my mind, that’s number one. Make sure you get out of the tactical day-to-day work that can be done by the same tool itself.”
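To make that maintenance burden concrete: one common capability of intelligent testing tools is a self-healing locator, which keeps a test running when the UI changes instead of waiting for a human to update selectors. The sketch below is a minimal, hypothetical illustration of that pattern in Python; every name in it (Element, SelfHealingLocator, find) is invented for this example and does not reflect mabl’s actual implementation or API.

```python
# Hypothetical sketch of the "self-healing locator" pattern; illustrative
# names only, not any vendor's real API.
from dataclasses import dataclass, field


@dataclass
class Element:
    """A stand-in for a rendered UI element on a page."""
    selector: str
    text: str


@dataclass
class SelfHealingLocator:
    """Tries a primary selector first, then ranked fallbacks.

    When the primary selector stops matching (say, a developer renamed an
    element id), the locator "heals" by promoting the first fallback that
    still matches, and logs the change instead of failing the test.
    """
    primary: str
    fallbacks: list[str] = field(default_factory=list)
    heals: list[tuple[str, str]] = field(default_factory=list)  # (old, new) audit trail

    def find(self, page: list[Element]) -> Element:
        for candidate in [self.primary, *self.fallbacks]:
            matches = [el for el in page if el.selector == candidate]
            if matches:
                if candidate != self.primary:
                    self.heals.append((self.primary, candidate))
                    self.primary = candidate  # promote the selector that works
                return matches[0]
        raise LookupError("no selector matched; this test needs human attention")


if __name__ == "__main__":
    # The checkout button's id changed between releases; the test keeps
    # running, and the heal is queued for review instead of breaking the build.
    page = [Element("#checkout-v2", "Checkout")]
    locator = SelfHealingLocator("#checkout", ["#checkout-v2", "text=Checkout"])
    print(locator.find(page).text)  # Checkout
    print(locator.heals)            # [('#checkout', '#checkout-v2')]
```

In this kind of design, the audit trail matters as much as the healing itself: each promoted selector becomes a review item for the quality engineer rather than a failed build, which is exactly the tactical work Hovsepyan argues should be handed off to tooling.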
His second point is that quality engineers need to start understanding AI tools. “Educate, educate, educate,” he said. “I know it’s not necessarily a solution for today’s risks. But if those risks are realized and become an issue tomorrow, and our quality engineers aren’t educated on the subject, we’re in trouble.”