Gemini 2.5 Flash-Lite is now generally available

Gemini 2.5 Flash-Lite is Google’s fastest and cheapest model, costing $0.10/1M tokens for input and $0.40/1M tokens for output (compared to $1.25/1M tokens for input and $10/1M tokens for output for Gemini 2.5 Pro).
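The per-token rates above translate directly into per-request costs. A minimal sketch of that arithmetic, using only the published prices (model names here are just dictionary keys, not API identifiers):

```python
# Cost comparison at the published rates: USD per 1M tokens.
PRICES = {
    "gemini-2.5-flash-lite": {"input": 0.10, "output": 0.40},
    "gemini-2.5-pro": {"input": 1.25, "output": 10.00},
}

def request_cost(model, input_tokens, output_tokens):
    """Return the USD cost of one workload at the listed rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example workload: 10M input tokens, 2M output tokens.
lite = request_cost("gemini-2.5-flash-lite", 10_000_000, 2_000_000)
pro = request_cost("gemini-2.5-pro", 10_000_000, 2_000_000)
print(f"Flash-Lite: ${lite:.2f}, Pro: ${pro:.2f}")  # Flash-Lite: $1.80, Pro: $32.50
```

At these rates, the same workload costs roughly 18x less on Flash-Lite than on Pro.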

“We built 2.5 Flash-Lite to push the frontier of intelligence per dollar, with native reasoning capabilities that can be optionally toggled on for more demanding use cases. Building on the momentum of 2.5 Pro and 2.5 Flash, this model rounds out our set of 2.5 models that are ready for scaled production use,” Google wrote in a blog post.

GitLab Duo Agent Platform enters beta

GitLab Duo Agent Platform is an orchestration platform for AI agents that work across DevSecOps in parallel. For instance, a user could delegate a refactoring task to a Software Developer Agent, have a Security Analyst Agent scan for vulnerabilities, and have a Deep Research Agent analyze progress across the repository. 
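The scenario above is essentially fan-out orchestration: independent tasks handed to specialized agents that run at the same time. A conceptual sketch, where the agent names come from GitLab's description but the stub implementations are purely illustrative:

```python
# Conceptual sketch of delegating independent DevSecOps tasks to specialized
# agents in parallel. The agent roles mirror GitLab's description; the bodies
# are stand-ins, not GitLab's implementation.
from concurrent.futures import ThreadPoolExecutor

def software_developer_agent(task):
    return f"refactoring done: {task}"

def security_analyst_agent(task):
    return f"scan complete: {task}"

def deep_research_agent(task):
    return f"analysis ready: {task}"

def run_in_parallel(assignments):
    """assignments: list of (agent_function, task) pairs, run concurrently."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(agent, task) for agent, task in assignments]
        return [f.result() for f in futures]  # results in submission order

results = run_in_parallel([
    (software_developer_agent, "extract billing module"),
    (security_analyst_agent, "dependency vulnerabilities"),
    (deep_research_agent, "repository progress"),
])
```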

Some of the other agents that GitLab is building as part of this include a Chat Agent, Product Planning Agent, Software Test Engineer Agent, Code Reviewer Agent, Platform Engineer Agent, and Deployment Engineer Agent. 

The first beta is available for GitLab.com and self-managed GitLab Premium and Ultimate customers. It includes a VS Code extension and plugins for JetBrains IDEs, and next month the company plans to add it to GitLab and expand IDE support. 

Google adds updated workspace templates in Firebase Studio that leverage new Agent mode

Google is adding several new features to its cloud-based AI workspace Firebase Studio, following its update a few weeks ago when it added new Agent modes, support for MCP, and integration with the Gemini CLI.

Now it is announcing updated workspace templates for Flutter, Angular, React, Next.js, and general Web that use the Agent mode by default. Users will still be able to toggle between “Ask” and Agent mode, depending on what the task at hand calls for.

The templates now have an airules.md file to provide Gemini with instructions for code generation, like specific coding standards, handling methods, dependencies, and development best practices.
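To make the idea concrete, here is a sketch of what such a file might contain. The filename comes from the announcement; the contents below are entirely hypothetical, invented to illustrate the kinds of instructions described:

```markdown
# airules.md — project instructions for Gemini (illustrative contents only)

## Coding standards
- Use TypeScript strict mode; prefer named exports.

## Error handling
- Wrap async calls in try/catch and surface failures through the app's logger.

## Dependencies
- Do not add new dependencies without flagging them in the response.
```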

Google says it will be updating templates for frameworks like Go, Node.js, and .NET over the next few weeks as well.

ChatGPT now has an agent mode

OpenAI is bringing the power of agentic AI to ChatGPT so that it can handle complex requests from users autonomously.

It leverages two of OpenAI’s existing capabilities: Operator, which can interact with websites, and deep research, which can synthesize information. According to OpenAI, these capabilities were best suited for different situations, with Operator struggling with complex analysis and deep research being unable to interact with websites to refine results or access content that required authentication.

“By integrating these complementary strengths in ChatGPT and introducing additional tools, we’ve unlocked entirely new capabilities within one model. It can now actively engage websites—clicking, filtering, and gathering more precise, efficient results. You can also naturally transition from a simple conversation to requesting actions directly within the same chat,” the company wrote in a blog post.
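The architecture described above amounts to one loop that routes each step to either a browsing-style tool or a research-style tool. A toy sketch of that routing, with stand-in tools that are not OpenAI's actual implementation:

```python
# Toy sketch of the agent loop described above: one controller that can send
# a step to a browsing tool or a research tool. Both tools are stand-ins.
def browse(query):
    """Stand-in for a tool that interacts with websites."""
    return f"page content for '{query}'"

def research(notes):
    """Stand-in for a tool that synthesizes gathered material."""
    return "synthesis of " + ", ".join(notes)

def agent(steps):
    """steps: list of ('browse', query) or ('research', None) actions,
    standing in for the model's own action selection."""
    notes = []
    for action, arg in steps:
        if action == "browse":
            notes.append(browse(arg))   # gather material from a website
        elif action == "research":
            break                       # stop gathering, synthesize
    return research(notes)

answer = agent([("browse", "flight prices"),
                ("browse", "hotel reviews"),
                ("research", None)])
```

The point of combining the two in one model is that the loop can interleave gathering and synthesis within a single conversation, rather than handing off between separate products.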

YugabyteDB adds new capabilities for AI developers

The company added new vector search capabilities, an MCP Server, and built-in Connection Pooling to support tens of thousands of connections per node.
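The announcement doesn't detail the query syntax, but the core operation behind vector search is ranking stored embeddings by similarity to a query embedding. A database does this at scale with indexes; this pure-Python miniature just shows the ranking logic:

```python
# What "vector search" does, in miniature: rank stored embeddings by cosine
# similarity to a query embedding. The data and ids are made up.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query, rows, k=2):
    """rows: list of (id, embedding); returns ids of the k most similar."""
    ranked = sorted(rows, key=lambda r: cosine(query, r[1]), reverse=True)
    return [rid for rid, _ in ranked[:k]]

docs = [("a", [1.0, 0.0]), ("b", [0.9, 0.1]), ("c", [0.0, 1.0])]
print(nearest([1.0, 0.05], docs))  # ['a', 'b']
```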

Additionally, it announced support for LangChain, Ollama, LlamaIndex, AWS Bedrock, and Google Vertex AI. Finally, YugabyteDB now has multi-modal API support with the addition of support for the MongoDB API.

“Today’s launch is another key step in our quest to deliver the database of choice for developers building mission-critical AI-powered applications,” said Karthik Ranganathan, co-founder and CEO, Yugabyte. “As we continuously enhance YugabyteDB’s compatibility with PostgreSQL, the expanded multi-modal support, a new YugabyteDB MCP server, and wider integration with the AI ecosystem provide AI app developers with the tools and flexibility they need for future success.”

Composio raises $29 million in Series A funding

The company is trying to build a shared learning layer for AI agents so that they can learn from experience. “You can spend hundreds of hours building LLM tools, tweaking prompts, and refining instructions, but you hit a wall,” said Soham Ganatra, CEO of Composio. “These models don’t get better at their jobs the way a human employee would. They can’t build context, learn from mistakes, or develop the subtle understanding that makes human workers invaluable. We’re solving this at the infrastructure level.”

This funding round will be used to accelerate the development of Composio’s learning infrastructure. The round was led by Lightspeed Venture Partners, with participation from Vercel’s CEO Guillermo Rauch, HubSpot’s CTO and founder Dharmesh Shah, investor Gokul Rajaram, Rubrik’s co-founder Soham Mazumdar, V Angel, Blitzscaling Ventures, Operator Partners, and Agent Fund by Yohei Nakajima, in addition to existing investors Elevation Capital and Together Fund.

Parasoft brings agentic AI to service virtualization in latest release

The company added an agentic AI assistant to Virtualize, its service virtualization solution, allowing customers to create virtual services using natural language prompts.

For example, a user could write the prompt: “Create a virtual service for a payment processing API. There should be a POST and a GET operation. The operations should require an account id along with other data related to payment.”

The platform will then draw from the provided API service definitions, sample requests/responses, and written descriptions of a service to generate a virtual service with dynamic behavior, parameterized responses, and the correct default values.
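As a rough illustration of what a generated virtual service behaves like, here is a minimal hand-written stub for the payment example above. Every detail (routes, fields, defaults) is hypothetical, not Parasoft's output:

```python
# Illustrative stand-in for the virtual service described in the prompt above:
# a payment API stub with a POST and a GET operation keyed on an account ID.
_payments = {}  # in-memory store standing in for the virtual service's state

def handle(method, account_id, payload=None):
    """Dispatch a request the way a generated virtual service might."""
    if method == "POST":
        record = {
            "account_id": account_id,
            "amount": (payload or {}).get("amount", 0.0),  # parameterized
            "status": "processed",
        }
        _payments[account_id] = record
        return record
    if method == "GET":
        # Sensible default when no payment has been recorded yet.
        return _payments.get(account_id,
                             {"account_id": account_id, "status": "not_found"})
    return {"error": f"unsupported method {method}"}
```

The "dynamic behavior" the release describes corresponds to the stub responding differently based on request data and prior state, rather than returning one canned payload.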


Read last week’s updates here