Ever since ChatGPT launched in 2022, developers have been bombarded with countless blog posts, news articles, podcast episodes, and YouTube videos about how powerful AI is and how it has the potential to do the work of developers.

Anthropic’s CEO and co-founder Dario Amodei made headlines a few months back when he claimed that “I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code.” 

Those three to six months have since passed, and it would be hard to claim that AI is now writing 90% of code. It’s not just Anthropic; leaders at other AI companies have made similar claims, and while there may be a day in the future when these claims come true, we’re nowhere near it currently.


Srini Iragavarapu, director of generative AI applications and developer experiences at AWS, told SD Times at POST/CON that AI is sort of in a messy middle right now, comparing it to the teenage experience. 

“There is a hormone rage that is happening. There’s a lot of potential. There is so much energy, but you have no clue where to channel it, and you’re trying to figure it out,” he said. Iragavarapu doesn’t have a teenager yet (his son is nine), but he sees this playing out with his nieces and nephews: kids who will go out into the world and solve real problems one day, but who right now have a lot of energy and feelings and no idea where or how to channel them.

He believes we’re in those messy teenage years right now with AI. Enterprises know there is a lot to be gained from AI, but the question is how do we get there? 

Iragavarapu was part of a panel discussion at POST/CON talking about this “messy middle” era of AI, along with Rangaprabhu Parthasarathy, director of product for generative AI at Meta, and Sambhav Jain, agent product manager at Decagon, a company that creates AI agents for customer service. 

“When I think about the messy middle, I think about the space between the powerful capability of the models and their real utility and the real impact they can have on customers,” said Jain. “You have to trade off between speed, safety, the capability of the model, and the impact it’s going to have with customers.”

AI adoption gap correlates to company type

Parthasarathy said that digital native companies have engaged with AI rather quickly because they have the infrastructure needed to adapt to the technology. More traditional enterprises, however, are taking longer to figure out where AI can add value. 

He likened the current state of things to the early days of cloud. It took years for businesses to understand how to leverage the cloud, including where compute and storage fit in, but once they figured that out, they saw tremendous gains.

“I think this is the age we are in today, where digital natives have quick turnaround, fast impact, and slightly larger, more established businesses are still in the experiment plus plus phase, where they’ve gotten past experimentation, but they’re still in a place where they’re not ready to deploy very large AI systems in the enterprise,” he said. 

Avoiding AI experimentation will lead to regret

Parthasarathy pointed out that everyone now has some sort of AI on their phone — something that did not exist two years ago.

How much a company should invest into this experimentation depends on their specific use case, but everyone should be actively experimenting in some way, he believes.

For example, although Parthasarathy is a product manager who hasn’t written code in over a decade, he said he is vibe coding basically every weekend on some project. 

“It just feels like a moment in time that we’re gonna look back and say ‘I was there’ or ‘I missed it.’ You definitely want to be the ‘I was there’ person,” he said.

MCP is still a baby

If you haven’t heard of Anthropic’s Model Context Protocol (MCP), you’re not alone. While the people who are engaging with MCP are all in on it, they still represent a small minority of developers as a whole.

Sterling Chin, senior developer advocate at Postman, told SD Times that he was speaking at a conference in London in front of around 200 developers and asked the audience to raise their hands if they’d heard of MCP. Fewer than 50 did. He then asked those people how many had actually built an MCP server, and only about six or seven raised their hands.

“I really think those of us who are working in it and building with it are in a bubble within a bubble,” he said. 

He believes that MCP is still in its infancy. “It seems like we’re moving so fast on it, and if you’re in Silicon Valley, if you’re in San Francisco, it’s all everyone’s talking about … In an enterprise setting, no one’s adopting it.”

Anthropic only released MCP last November — just seven months ago. As such, there are still parts of the specification that need to be figured out, and it continues to evolve.

It won’t always be this way, however. Chin did emphasize that he predicts adoption to grow in the enterprise. One of the big reasons why larger businesses are hesitant to adopt AI is that they don’t want their proprietary information going out to an AI company like OpenAI or Google. 

“The moment the enterprises realize that not only can they put the LLM on prem, but now they can connect all of their internal services to an MCP server, I think we’re gonna see a faster adoption of MCP in the enterprise,” said Chin. 

Rodric Rabbah, head of product at Postman, said the company has been tracking MCP since it came out. “Sometimes you see something and it’s like, ‘oh my God, everything is changed because of it,’” he said.

He also admitted that there’s this echo chamber that Postman and a lot of other people are in when it comes to MCP. “If you peek outside that echo chamber, people don’t even know what this is yet,” he said. “It’s very exciting for us because of the transformational power this has. Fundamentally what it’s doing is connect your API to your AI, and that’s why Postman really jumped on it.”

He said that it really unlocks a lot of power for AI because it not only allows you to interact with an API, but also compose multiple APIs together into a new application.

“Once you start doing it, it’s like how many more APIs can I feed into this? What other things can I do?”
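To make the “connect your API to your AI” idea concrete: MCP is a JSON-RPC-based specification in which a server advertises tools (often wrappers around existing APIs) that an AI agent can list and call. The sketch below is a loose, dependency-free illustration of that request/response shape, not the actual protocol; the `get_order_status` tool and the request payloads are hypothetical, and a real server would use an official MCP SDK.

```python
import json

# Hypothetical internal API we want to expose to an AI agent.
def get_order_status(order_id: str) -> dict:
    # A real MCP server would call the actual internal service here.
    return {"order_id": order_id, "status": "shipped"}

# Registry of tools the server advertises to the agent.
TOOLS = {
    "get_order_status": {
        "description": "Look up the shipping status of an order.",
        "handler": get_order_status,
    }
}

def handle_request(request: dict) -> dict:
    """Dispatch a JSON-RPC-style request the way an MCP server roughly would."""
    if request["method"] == "tools/list":
        # The agent discovers which tools exist before calling any of them.
        result = {"tools": [{"name": name, "description": tool["description"]}
                            for name, tool in TOOLS.items()]}
    elif request["method"] == "tools/call":
        # The agent invokes a named tool with structured arguments.
        tool = TOOLS[request["params"]["name"]]
        result = tool["handler"](**request["params"]["arguments"])
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# Simulate an agent calling the tool.
print(json.dumps(handle_request(
    {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
     "params": {"name": "get_order_status",
                "arguments": {"order_id": "A-42"}}})))
```

Composing multiple APIs, as Rabbah describes, amounts to registering more entries in the tool registry so the agent can chain calls across them.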

Vibe coding is another iteration of the attempt to bring coding to non-developers

Just as the low-code/no-code movement attempted to bring the power of software development to non-developers, AI has the potential to do the same. 

Rabbah is head of product for Postman Flows, which is essentially a visual interface for building workflows, integrations, and automations from APIs. He said it opens up access to people who aren’t developers, but who are experts in their own domain, to express a particular workflow or automation.

“We’re seeing increasingly in the world of vibe coding, people generating software without actually writing the software,” he said.

Speaking about the term “vibe coding,” he said that’s basically what coding has always been. “I’ve been vibe coding for decades … You have an idea, you get it down, you look at it, and then you change stuff. The way people are interacting with AI and orchestrating the code generation — when you’re doing it with things that are visual, like a UI, you can see: is the button in the right place? Is it the correct color? Is the layout what I expected? If not, I re-prompt the LLM to fix it.”

Where this has the potential to break down is with something much more complex, like backend work, where not everyone will be able to vibe code their way through. “Code is a liability and understanding the semantics of a program requires me to understand Python or JavaScript or Go or some other language. And not only that, there’s things I need to understand like is the program thread safe? Is it concurrent? Is it satisfying data race conditions?”

Rabbah says that Flows hides this complexity and allows users to visually validate their architecture. He says this visual validation is what is different this time around compared to other visual programming languages that have been around for a while, like Scratch or Simulink.

“We’re in a world of vibe coders where you want to be able to visually validate,” he said. “That’s the beauty of the revolution we’re in. More access, more people, and are they building the right stuff?”


Disclosure: The reporter’s trip to POST/CON, including flights, hotel, and meals, was covered by Postman. The reporter also received a bag of conference merchandise.