Over the past year, software companies have raced to incorporate generative AI into their products, doing whatever it takes to adopt the latest technology and stay competitive.
One software category particularly well suited to an AI boost is low code, a market whose goal is already to make development easier.
Just as low code lowered the barrier to entry for development, generative AI will have a similar impact through features such as code completion and workflow automation. But Kyle Davis, VP analyst at Gartner, believes the two technologies will interact collaboratively rather than competitively, at least for citizen developers. “Even though you could use generative AI to generate code, if you don’t understand what the code is doing, there’s no way to validate that it’s correct,” he said. “Using low code, it’s declarative, so you can look at what’s there on the screen and say, ‘does that make sense?’”
RELATED CONTENT: A guide to low-code vendors that incorporate generative AI capabilities
However, Davis also says it’s really too new of a market to make any real predictions. “We’ve seen a lot of failure, we’ve seen a lot of success, because it’s so early days that, at best, you’re kind of experimenting with this now. But the hope is that it can offer a lot of potential,” he explained.
According to Davis, there are three main ways AI is being incorporated into low-code platforms.
First, there are generative AI capabilities that are designed to improve the developer experience.
Second, there are generative AI capabilities targeting the end users of the application created using low code. “So embedding like a Copilot or ChatGPT type control within the application. That way the user of the application can ask questions about the app’s data, as an example,” Davis said.
Third, there are features related to process improvement. “When you’re creating workflows or automation, there’s usually a lot of steps that are very human-centric, when it comes to generating data or categorizing data or whatnot,” Davis said. “And so we’ve seen a lot of those steps being not displaced by a generative AI step, but rather kind of preceded by a generative AI step.”
He gave the example of a workflow that is designed to help hiring managers create requirements for a job position. Usually the hiring manager has to go in and manually add information, like the name of the position, the description, and other requirements. But, Davis said, “If generative AI were to step in first and do a draft of that, it allows the hiring manager to come in and just make refinements.”
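The draft-then-refine pattern Davis describes can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: `generate_draft` is a hypothetical stand-in for a real LLM call, and all field names are invented for the example.

```python
# Sketch of a workflow where a generative AI step precedes the human step,
# as in the hiring-manager example. `generate_draft` is a hypothetical
# stand-in for an LLM call; here it just fills in template text.

def generate_draft(position: str) -> dict:
    """Hypothetical LLM step: produce a first draft of a job requisition."""
    return {
        "title": position,
        "description": f"We are seeking a {position} to join our team.",
        "requirements": ["3+ years of relevant experience"],
    }

def refine_draft(draft: dict, edits: dict) -> dict:
    """The hiring manager's step: make refinements instead of starting from scratch."""
    refined = dict(draft)
    refined.update(edits)
    return refined

draft = generate_draft("Backend Engineer")
final = refine_draft(draft, {"requirements": ["3+ years of Python", "SQL"]})
print(final["title"])         # Backend Engineer
print(final["requirements"])  # the manager's refinements, not the draft's
```

The point of the structure is that the human never faces a blank form: the model supplies a complete draft, and the person's job shrinks to review and correction.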
Davis believes that a major challenge for these low-code vendors is the added work required to make this integration function. Low code is highly declarative and abstracted, and the constructs that make up a low-code application are proprietary to the platform they belong to. That requires vendors to either have their own LLM or be able to take user prompts and create all the constructs within their platform to represent what was asked.
“There’s a lot they can leverage from existing LLMs and generative AI vendors, but there’s still pieces that they have to do themselves,” he said.
Using generative AI in testing is another promising area
Combining generative AI and testing is also a promising mashup, according to Arthur Hicken, chief evangelist at testing software company Parasoft. “We’re still at a relatively early stage, so it’ll be interesting to see how much of it is real and how much of it pans out,” he said. “It certainly shows a lot of promise in the ability to generate code, but perhaps more so in the ability to generate tests … I don’t believe we’re there yet, but we are seeing some pretty interesting capabilities that, you know, didn’t exist a year or two ago.”
The field of prompt engineering — phrasing generative AI requests in a way that will produce optimal results — is also an emerging practice, and it will be crucial to getting good results from combining things like testing or low code with AI, Hicken said.
He explained that those who have been working with tests for years will probably have a good chance of being a good prompt engineer. “That ability to look at something and break it into small component steps is what’s going to let the AI be most effective for you … You can’t go to one of these systems and say, ‘Hey, give me a bunch of tests for my application.’ It’s not going to work. You’ve got to be very, very detailed, and like working with a djinn or a genie, you can mess yourself up if you’re not very careful about what you ask for,” he said.
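Hicken's "break it into small component steps" advice can be made concrete. The sketch below contrasts the vague request he warns against with a prompt assembled from explicit steps; all of the prompt text and the `build_test_prompt` helper are illustrative, not from any real tool.

```python
# Illustration of decomposing a vague test-generation request into
# explicit component steps before handing it to a model.

VAGUE = "Hey, give me a bunch of tests for my application."  # what NOT to ask

def build_test_prompt(function_name: str, behavior: str, cases: list) -> str:
    """Compose a detailed prompt from explicit, human-chosen component steps."""
    steps = "\n".join(f"- {c}" for c in cases)
    return (
        f"Write unit tests for `{function_name}`.\n"
        f"Expected behavior: {behavior}\n"
        f"Cover at least these cases:\n{steps}\n"
        f"Use plain assert statements and no external fixtures."
    )

prompt = build_test_prompt(
    "parse_date",
    "parses ISO-8601 strings and raises ValueError on anything else",
    ["a valid date", "an empty string", "a date with an invalid month"],
)
print(prompt)
```

Nothing about the model changed between the two prompts; what changed is that the engineer did the decomposition up front, which is exactly the skill Hicken says experienced testers already have.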
He likened this to how people interact with search engines today. Some people claim they can find whatever they want in a search engine, because they know which queries to ask, while others will say they looked all over and couldn’t find what they were looking for.
“It’s that ability to speak in a way that the AI can understand you, and the better you are at that the better answer you get back … The fact that you can just talk and ask for what you want is cool, but at the moment you better be pretty smart about what you’re asking because with these AIs the emphasis is on the A – the intelligence is very artificial,” said Hicken.
This is why testing the outputs of these systems is crucial. Hicken said that he has spoken with folks who say they are going to use generative AI to generate both code and tests. “That’s really scary, right? Now we’ve got code a human didn’t review being checked by tests that weren’t reviewed by humans, like, are we going to compound the error?”
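The compounding-error risk Hicken describes is easy to demonstrate. In this contrived sketch, an "AI-generated" helper carries a subtle bug, a generated test that shares the same misunderstanding passes anyway, and only a human-reviewed test stating the intended contract catches it. The function and its bug are invented for illustration, not real model output.

```python
# Why unreviewed code checked by unreviewed tests can compound an error.
# Suppose a model generated this helper with a subtle bug (illustrative):

def generated_clamp(value, low, high):
    """'AI-generated' clamp — buggy: returns high - 1 instead of high."""
    if value < low:
        return low
    if value > high:
        return high - 1   # bug: should return high
    return value

# A generated test that mirrors the same misunderstanding stays green:
assert generated_clamp(99, 0, 10) == 9   # wrong expectation, but it passes

# A human-reviewed test states the intended contract and exposes the bug:
def reviewed_check():
    try:
        assert generated_clamp(99, 0, 10) == 10  # the actual requirement
        return "pass"
    except AssertionError:
        return "fail"

print(reviewed_check())  # fail — the reviewed test catches what the generated one missed
```

The generated test and the generated code agree with each other, so the suite passes while the behavior is wrong; only the independently written expectation breaks the loop.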
He advises against putting too much trust in these systems just yet. “We’re already starting to see people jump back, they’re being bitten, because they’re trusting the system too early,” he said. “So I would encourage people not to blindly trust the system. It’s like hiring somebody and just letting them write your most important code without seeing first what they’re doing.”