
Google Cloud Next 25 is taking place today, and Google has made a number of announcements, mainly related to enhancing its Gemini offerings and making it easier to build and adopt AI agents.
Here are some of the highlights from the event:
New tools for building AI agents
The company announced the Agent Development Kit (ADK), an open-source framework that covers the end-to-end process of building and deploying agents and multi-agent systems.
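The announcement doesn't detail ADK's API, but the coordinator/sub-agent pattern such frameworks are built around can be sketched in plain Python. Everything below (the `Agent` and `Coordinator` classes, the keyword routing) is hypothetical and illustrative, not ADK's actual interface:

```python
# Minimal illustration of a coordinator/sub-agent pattern, the kind of
# multi-agent system a framework like ADK is meant to help build.
# All names here are hypothetical; this is NOT the ADK API.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Agent:
    name: str
    instruction: str
    handle: Callable[[str], str]  # stand-in for a model call

@dataclass
class Coordinator:
    agents: Dict[str, Agent] = field(default_factory=dict)

    def register(self, agent: Agent, *keywords: str) -> None:
        for kw in keywords:
            self.agents[kw] = agent

    def route(self, request: str) -> str:
        # Naive keyword routing; a real framework would let a model decide
        # which sub-agent should handle the request.
        for kw, agent in self.agents.items():
            if kw in request.lower():
                return agent.handle(request)
        return "No agent available for this request."

coordinator = Coordinator()
coordinator.register(
    Agent("billing", "Answer billing questions.", lambda r: f"[billing] {r}"),
    "invoice", "billing",
)
print(coordinator.route("Where is my invoice?"))  # → [billing] Where is my invoice?
```

A real deployment would swap the lambda for a Gemini call and the keyword match for model-driven delegation; the structure of the system stays the same.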
Google also announced the Agent2Agent (A2A) protocol, an open protocol that allows AI agents to communicate with each other, securely exchange information, and coordinate actions on top of enterprise applications. It was developed with support and contributions from over 50 companies, including Atlassian, Box, MongoDB, Salesforce, and ServiceNow.
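The announcement doesn't cover the wire format, but as a hedged sketch, assuming a JSON-RPC 2.0 style of task exchange between agents (the method name and field layout below are illustrative assumptions, not taken from the spec), an agent-to-agent request might look like this:

```python
import json

# Illustrative agent-to-agent task request in JSON-RPC 2.0 form.
# The method name and params structure are assumptions for illustration,
# not the published A2A schema.
request = {
    "jsonrpc": "2.0",
    "id": "task-001",
    "method": "tasks/send",  # hypothetical method name
    "params": {
        "task": {
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": "Summarize the Q3 pipeline"}],
            }
        }
    },
}

# Serialize for transport and decode on the receiving agent's side.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["method"])
```

The point of an open protocol like this is that the receiving agent can be built on any framework or vendor stack, as long as both sides agree on the message envelope.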
Agentspace updated with new features for easier adoption of agents
Google Agentspace is a platform where developers can build agents combining Gemini models, Google-quality search functionality, and enterprise data.
The company announced that organizations will now be able to give their employees access to the platform’s unified enterprise search, analysis, and synthesis capabilities from within Chrome’s search box.
Other new capabilities include a no-code Agent Designer for building custom agents, and access to two new agents built by Google: Deep Research and Idea Generation.
7th-generation TPU Ironwood announced for late 2025 release
Ironwood is a Tensor Processing Unit (TPU) designed specifically for inference. It will be available in two configurations: 256 chips or 9,216 chips.
Compared to Trillium, Google's 6th-generation TPU, Ironwood achieves a 2x improvement in performance per watt. It also offers greater High Bandwidth Memory (HBM) capacity and bandwidth, as well as higher Inter-Chip Interconnect (ICI) bandwidth.
“Ironwood represents a significant shift in the development of AI and the infrastructure that powers its progress. It’s a move from responsive AI models that provide real-time information for people to interpret, to models that provide the proactive generation of insights and interpretation. This is what we call the ‘age of inference’ where AI agents will proactively retrieve and generate data to collaboratively deliver insights and answers, not just data,” Amin Vahdat, VP and GM of ML, Systems, and Cloud AI at Google, wrote in a blog post.
New capabilities in Vertex AI
Lyria, Google’s text-to-music model, has been added to Vertex AI in private preview.
Other updates include:
- New editing and camera control features in Veo 2 (preview)
- Instant Custom Voice in Chirp 3 to enable users to create custom voices from 10 seconds of audio
- Improved image generation and inpainting capabilities in Imagen 3 for fixing missing portions of an image or removing objects
New Gemini capabilities in Google Workspace
The company is introducing Google Workspace Flows, enabling users to create agentic workflows to streamline processes and automate repetitive tasks across Workspace.
Audio capabilities are also coming to Google Docs, enabling users to create audio versions of their documents, including podcast-style overviews.
Other new Workspace AI capabilities include an AI writing tool in Google Docs, Veo 2 image generation in Google Vids, new Gemini capabilities in Google Chat, and AI-powered data analysis and surfacing of key insights in Google Sheets.