
Google’s new Opal tool allows users to create mini AI apps with no coding required
Google has launched a new experimental AI tool designed for users who want to build apps entirely using AI prompts, with no coding needed at all.
Opal allows users to create mini AI apps by chaining together AI prompts, models, and tools, using natural language and visual editing.
“Opal is a great tool to accelerate prototyping AI ideas and workflows, demonstrate a proof of concept with a functional app, build custom AI apps to boost your productivity at work, and more,” Google wrote in a blog post.
The tool provides a visual editor that helps creators see the workflows in their apps and connect different prompts together into multi-step apps. Users can describe the logic they want in plain language and have Opal build the workflow for them, then edit the generated workflow either in the visual editor or through additional prompts.
Gemini 2.5 Flash-Lite is now generally available
Gemini 2.5 Flash-Lite is Google’s fastest and cheapest model, costing $0.10 per million input tokens and $0.40 per million output tokens (compared to $1.25 and $10, respectively, for Gemini 2.5 Pro).
“We built 2.5 Flash-Lite to push the frontier of intelligence per dollar, with native reasoning capabilities that can be optionally toggled on for more demanding use cases. Building on the momentum of 2.5 Pro and 2.5 Flash, this model rounds out our set of 2.5 models that are ready for scaled production use,” Google wrote in a blog post.
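For developers trying it out, here is a minimal sketch of calling the model through the google-genai Python SDK with the optional reasoning toggled on (the prompt and thinking budget are illustrative):

```python
# Minimal sketch: calling Gemini 2.5 Flash-Lite via the google-genai SDK.
# Assumes a GEMINI_API_KEY environment variable is set.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-lite",
    contents="Classify this support ticket: 'My invoice total looks wrong.'",
    config=types.GenerateContentConfig(
        # Reasoning is off by default on Flash-Lite; a nonzero budget opts in.
        thinking_config=types.ThinkingConfig(thinking_budget=512),
    ),
)
print(response.text)
```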
GitLab Duo Agent Platform enters beta
GitLab Duo Agent Platform is an orchestration platform for AI agents that work in parallel across the DevSecOps lifecycle. For instance, a user could delegate a refactoring task to a Software Developer Agent, have a Security Analyst Agent scan for vulnerabilities, and have a Deep Research Agent analyze progress across the repository.
Some of the other agents that GitLab is building as part of this include a Chat Agent, Product Planning Agent, Software Test Engineer Agent, Code Reviewer Agent, Platform Engineer Agent, and Deployment Engineer Agent.
The first beta is available for GitLab.com and self-managed GitLab Premium and Ultimate customers. It includes a VS Code extension and plugins for JetBrains IDEs; next month, the company plans to bring the platform into GitLab itself and expand IDE support.
Google adds updated workspace templates in Firebase Studio that leverage new Agent mode
Google is adding several new features to its cloud-based AI workspace Firebase Studio, following its update a few weeks ago when it added new Agent modes, support for MCP, and integration with the Gemini CLI.
Now it is announcing updated workspace templates for Flutter, Angular, React, Next.js, and general Web that use Agent mode by default. Users will still be able to toggle between “Ask” and “Agent” modes, depending on what the task at hand calls for.
The templates now include an airules.md file that gives Gemini instructions for code generation, such as specific coding standards, error-handling methods, dependencies, and development best practices.
Google says it will be updating templates for frameworks like Go, Node.js, and .NET over the next few weeks as well.
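The airules.md file itself is plain markdown; a hypothetical example for a React template might look like the following (the rules are invented for illustration):

```markdown
# Gemini rules for this workspace
- Use functional React components with hooks; avoid class components.
- Write all new code in TypeScript with strict mode enabled.
- Route errors through the shared ErrorBoundary component in src/components.
- Pin new dependencies to exact versions in package.json.
```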
ChatGPT now has an agent mode
OpenAI is bringing the power of agentic AI to ChatGPT so that it can handle complex requests from users autonomously.
It leverages two of OpenAI’s existing capabilities: Operator, which can interact with websites, and deep research, which can synthesize information. According to OpenAI, these capabilities were best suited for different situations, with Operator struggling with complex analysis and deep research being unable to interact with websites to refine results or access content that required authentication.
“By integrating these complementary strengths in ChatGPT and introducing additional tools, we’ve unlocked entirely new capabilities within one model. It can now actively engage websites—clicking, filtering, and gathering more precise, efficient results. You can also naturally transition from a simple conversation to requesting actions directly within the same chat,” the company wrote in a blog post.
YugabyteDB adds new capabilities for AI developers
The company added new vector search capabilities, an MCP Server, and built-in Connection Pooling to support tens of thousands of connections per node.
Additionally, it announced support for LangChain, Ollama, LlamaIndex, AWS Bedrock, and Google Vertex AI. Finally, YugabyteDB now offers multi-modal API support with the addition of the MongoDB API.
“Today’s launch is another key step in our quest to deliver the database of choice for developers building mission-critical AI-powered applications,” said Karthik Ranganathan, co-founder and CEO, Yugabyte. “As we continuously enhance YugabyteDB’s compatibility with PostgreSQL, the expanded multi-modal support, a new YugabyteDB MCP server, and wider integration with the AI ecosystem provide AI app developers with the tools and flexibility they need for future success.”
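Because YugabyteDB is PostgreSQL-compatible, its vector search follows the familiar pgvector pattern. Here is a sketch in Python; the table, vector size, and connection details are invented for illustration:

```python
# Sketch: pgvector-style similarity search against YugabyteDB, which
# listens on the PostgreSQL-compatible port 5433 by default.
import psycopg2

conn = psycopg2.connect(host="localhost", port=5433,
                        user="yugabyte", dbname="yugabyte")
cur = conn.cursor()
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id BIGSERIAL PRIMARY KEY,
        content TEXT,
        embedding vector(3)
    );
""")
cur.execute("INSERT INTO docs (content, embedding) VALUES (%s, %s);",
            ("hello world", "[0.1, 0.2, 0.3]"))
conn.commit()

# Nearest-neighbor search by L2 distance.
cur.execute("SELECT content FROM docs ORDER BY embedding <-> %s LIMIT 5;",
            ("[0.1, 0.2, 0.25]",))
print(cur.fetchall())
```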
Composio raises $29 million in Series A funding
The company is trying to build a shared learning layer for AI agents so that they can learn from experience. “You can spend hundreds of hours building LLM tools, tweaking prompts, and refining instructions, but you hit a wall,” said Soham Ganatra, CEO of Composio. “These models don’t get better at their jobs the way a human employee would. They can’t build context, learn from mistakes, or develop the subtle understanding that makes human workers invaluable. We’re solving this at the infrastructure level.”
This funding round will be used to accelerate the development of Composio’s learning infrastructure. The round was led by Lightspeed Venture Partners, with participation from Vercel’s CEO Guillermo Rauch, HubSpot’s CTO and founder Dharmesh Shah, investor Gokul Rajaram, Rubrik’s co-founder Soham Mazumdar, V Angel, Blitzscaling Ventures, Operator Partners, and Agent Fund by Yohei Nakajima, in addition to existing investors Elevation Capital and Together Fund.
Parasoft brings agentic AI to service virtualization in latest release
The company added an agentic AI assistant to its virtual testing simulation solution Virtualize, allowing customers to create virtual services using natural language prompts.
For example, a user could write the prompt: “Create a virtual service for a payment processing API. There should be a POST and a GET operation. The operations should require an account id along with other data related to payment.”
The platform will then draw from the provided API service definitions, sample requests/responses, and written descriptions of a service to generate a virtual service with dynamic behavior, parameterized responses, and the correct default values.
Slack’s AI search now works across an organization’s entire knowledge base
Slack is introducing a number of new AI-powered tools to make team collaboration easier and more intuitive.
“Today, 60% of organizations are using generative AI. But most still fall short of its productivity promise. We’re changing that by putting AI where work already happens — in your messages, your docs, your search — all designed to be intuitive, secure, and built for the way teams actually work,” Slack wrote in a blog post.
The new enterprise search capability will enable users to search not just in Slack, but any app that is connected to Slack. It can search across systems of record like Salesforce or Confluence, file repositories like Google Drive or OneDrive, developer tools like GitHub or Jira, and project management tools like Asana.
“Enterprise search is about turning fragmented information into actionable insights, helping you make quicker, more informed decisions, without leaving Slack,” the company explained.
The platform is also getting AI-generated channel recaps and thread summaries, helping users catch up on conversations quickly. It is introducing AI-powered translations as well to enable users to read and respond in their preferred language.
Anthropic’s Claude Code gets new analytics dashboard to provide insights into how teams are using AI tooling
Anthropic has announced the launch of a new analytics dashboard in Claude Code to give development teams insights into how they are using the tool.
It tracks metrics such as lines of code accepted, suggestion acceptance rate, total user activity over time, total spend over time, average daily spend for each user, and average daily lines of code accepted for each user.
These metrics can help organizations understand developer satisfaction with Claude Code suggestions, track code generation effectiveness, and identify opportunities for process improvements.
Mistral launches first voice model
Voxtral is an open-weight model for speech understanding that Mistral says offers “state-of-the-art accuracy and native semantic understanding in the open, at less than half the price of comparable APIs. This makes high-quality speech intelligence accessible and controllable at scale.”
It comes in two model sizes: a 24B version for production-scale applications and a 3B version for local deployments. Both sizes are available under the Apache 2.0 license and can be accessed via Mistral’s API.
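As a sketch of what calling it might look like over HTTP (the endpoint path mirrors the common audio-transcription convention and the model name is an assumption, both worth checking against Mistral’s docs):

```python
# Sketch: transcribing audio with Voxtral over Mistral's API.
# Endpoint path and model name are assumptions; verify against the docs.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/audio/transcriptions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    files={"file": open("meeting.mp3", "rb")},
    data={"model": "voxtral-mini-latest"},
)
resp.raise_for_status()
print(resp.json()["text"])
```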
JFrog releases MCP server
The MCP server will allow users to create and view projects and repositories, get detailed vulnerability information from JFrog, and review the components in use at an organization.
“The JFrog Platform delivers DevOps, Security, MLOps, and IoT services across your software supply chain. Our new MCP Server enhances its accessibility, making it even easier to integrate into your workflows and the daily work of developers,” JFrog wrote in a blog post.
JetBrains announces updates to its coding agent Junie
Junie is now fully integrated into GitHub, enabling asynchronous development with features such as the ability to delegate multiple tasks simultaneously, the ability to make quick fixes without opening the IDE, team collaboration directly in GitHub, and seamless switching between the IDE and GitHub. Junie on GitHub is currently in an early access program and only supports JVM and PHP.
JetBrains also added support for MCP to enable Junie to connect to external sources. Other new features include 30% faster task completion speed and support for remote development on macOS and Linux.
Gemini API gets first embedding model
Embedding models generate embeddings for words, phrases, sentences, and code to provide context-aware results that are more accurate than keyword-based approaches. “They efficiently retrieve relevant information from knowledge bases, represented by embeddings, which are then passed as additional context in the input prompt to language models, guiding it to generate more informed and accurate responses,” the Gemini docs say.
The embedding model in the Gemini API supports over 100 languages and a 2,048-token input length. It is offered in both free and paid tiers so that developers can experiment for free and then scale up as needed.
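A minimal sketch of generating an embedding through the google-genai Python SDK (the model id is an assumption to check against the docs):

```python
# Sketch: generating an embedding via the Gemini API.
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment
result = client.models.embed_content(
    model="gemini-embedding-001",
    contents="How do hash tables handle collisions?",
)
print(len(result.embeddings[0].values))  # dimensionality of the returned vector
```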
Kong AI Gateway 3.11 introduces new method for reducing token costs
Kong has introduced the latest update to Kong AI Gateway, a solution for securing, governing, and controlling LLM consumption from popular third-party providers.
Kong AI Gateway 3.11 introduces a new plugin that reduces token costs, several new generative AI capabilities, and support for AWS Bedrock Guardrails.
The new prompt compression plugin removes padding and redundant words or phrases. This approach preserves 80% of the prompt’s intended semantic meaning, while the removal of unnecessary words can lead to up to a 5x reduction in cost.
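Kong plugins are typically attached through declarative configuration; a hypothetical decK-style sketch of enabling compression on a route follows (the plugin name is an assumption, not confirmed syntax):

```yaml
# Hypothetical declarative config. The plugin name is an assumption;
# check the Kong AI Gateway 3.11 docs for the real name and config keys.
_format_version: "3.0"
routes:
  - name: llm-chat
    paths:
      - /chat
    plugins:
      - name: ai-prompt-compressor   # assumed plugin name
```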
OutSystems launches Agent Workbench
Agent Workbench, now in early access, allows companies to create agents that have enterprise-grade security and controls.
Agents can integrate with custom AI models or third-party ones like Azure OpenAI or AWS Bedrock. It contains a unified data fabric for connecting to enterprise data sources, including existing OutSystems 11 data and actions, relational databases, data lakes, and knowledge retrieval systems like Azure AI Search.
It comes with built-in monitoring features, error tracing, and guardrails, providing insights into how AI agents are behaving throughout their lifecycle.
Perforce launches Perfecto AI
Perfecto AI is a testing model within Perfecto’s mobile testing platform that can generate tests and adapt in real time to UI changes, failures, and changing user flows.
According to Perforce, Perfecto AI’s early testing has shown 50-70% efficiency gains in test creation, stabilization, and triage.
“With this release, you can create a test before any code is written—true Test-Driven Development (TDD)—contextual validation of dynamic content like charts and images, and triage failures in real time—without the legacy baggage of scripts and frameworks,” said Stephen Feloney, VP of product management at Perforce. “Unlike AI copilots that simply generate scripts tied to fragile frameworks, Perforce Intelligence eliminates scripts entirely and executes complete tests with zero upkeep—eliminating rework, review, and risk.”
Amazon launches spec-driven AI IDE, Kiro
Amazon is releasing a new AI IDE to rival platforms like Cursor or Windsurf. Kiro is an agentic editor that utilizes spec-driven development to combine “the flow of vibe coding” with “the clarity of specs.”
According to Amazon, developers use specs for planning and clarity, and they can benefit agents in the same way.
Specs in Kiro are artifacts that can be used whenever a feature needs to be thought through in depth, when refactoring work requires upfront planning, or when a developer wants to understand the behavior of a system.
Kiro also features hooks, which the company describes as event-driven automations that trigger an agent to execute a task in the background. According to Amazon, Kiro hooks act like an experienced developer catching the things you’ve missed or completing boilerplate tasks as you work.
Akka introduces platform for distributed agentic AI
Akka, a company that provides solutions for building distributed applications, is introducing a new platform for scaling AI agents across distributed systems. Akka Agentic Platform consists of four integrated offerings: Akka Orchestration, Akka Agents, Akka Memory, and Akka Streaming.
Akka Orchestration allows developers to guide, moderate, and control multi-agent systems. It offers fault-tolerant execution, enabling agents to reliably complete their tasks even if there are crashes, delays, or infrastructure failures.
The second offering, Akka Agents, provides a design model and runtime for agentic systems, allowing creators to define how agents gather context, reason, and act, while Akka handles everything else needed to run them.
Akka Memory is durable, in-memory, sharded data that can be used to provide agents context, retain history, and personalize behavior. Data stays within an organization’s infrastructure, and is replicated, shared, and rebalanced across Akka clusters.
Finally, Akka Streaming offers continuous stream processing, aggregation, and augmentation of live data, metrics, audio, and video. Streams can be ingested from any source and they can stream between agents, Akka services, and external systems. Streamed inputs can trigger actions, update memory, or feed other Akka agents.
Clarifai announces MCP server hosting, OpenAI compatibility
Users will be able to upload and host their own tools, functions, and APIs as an MCP server that is fully hosted and managed by Clarifai.
The company is also introducing OpenAI-compatible APIs, which will allow users to integrate with more than 100 open-source and third-party models.
“Our MCP server hosting unleashes a new level of agent intelligence, allowing them to directly interact with an organization’s unique operational DNA and proprietary data sources. Paired with our OpenAI-compatible APIs, we’re not just accelerating deployment; we’re breaking down barriers, enabling developers to integrate these highly capable agents into their existing infrastructure almost instantly, driving rapid, impactful innovation,” said Artjom Shestajev, senior product manager at Clarifai.
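In practice, OpenAI compatibility means existing OpenAI SDK code can be repointed at Clarifai by swapping the base URL; a sketch, with the base URL and model id as assumptions to verify against Clarifai’s docs:

```python
# Sketch: using the OpenAI SDK against Clarifai's OpenAI-compatible API.
# Base URL and model id are assumptions; verify against Clarifai's docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.clarifai.com/v2/ext/openai/v1",
    api_key=os.environ["CLARIFAI_PAT"],  # a Clarifai personal access token
)
resp = client.chat.completions.create(
    model="https://clarifai.com/openai/chat-completion/models/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```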
Gemini API gets Batch Mode
Batch Mode allows large jobs to be submitted through the Gemini API. Results are returned within 24 hours, and the delayed processing offers benefits like a 50% reduction in cost and higher rate limits.
“Batch Mode is the perfect tool for any task where you have your data ready upfront and don’t need an immediate response,” Google wrote in a blog post.
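A sketch of submitting an inline batch with the google-genai Python SDK and polling until it finishes (request shapes follow the SDK’s batches interface and are worth double-checking against the docs):

```python
# Sketch: submitting a batch job to the Gemini API and polling for completion.
import time
from google import genai

client = genai.Client()
job = client.batches.create(
    model="models/gemini-2.5-flash",
    src=[
        {"contents": [{"parts": [{"text": "Summarize the plot of Dune."}],
                       "role": "user"}]},
        {"contents": [{"parts": [{"text": "Translate 'hello' into French."}],
                       "role": "user"}]},
    ],
    config={"display_name": "nightly-summaries"},
)

# Results are returned within 24 hours; poll at a relaxed interval.
while client.batches.get(name=job.name).state.name not in (
        "JOB_STATE_SUCCEEDED", "JOB_STATE_FAILED"):
    time.sleep(60)
```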
AWS announces new features in SageMaker AI
SageMaker HyperPod—which allows scaling of genAI model development across thousands of accelerators—was updated with a new CLI and SDK. It also received a new observability dashboard that shows performance metrics, resource utilization, and cluster health, as well as the ability to deploy open-weight models from Amazon SageMaker JumpStart on SageMaker HyperPod.
New remote connections were also added to SageMaker AI, allowing developers to connect to it from a local VS Code instance.
Finally, SageMaker AI now has access to fully managed MLflow 3.0, which provides a straightforward experience for tracking experiments, monitoring training progress, and gaining deeper insights into model behavior.
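The managed MLflow integration uses the standard MLflow client, with the tracking URI pointed at the SageMaker-hosted server; a sketch follows (the tracking-server ARN is a placeholder, and the sagemaker-mlflow plugin is what lets an ARN serve as a tracking URI):

```python
# Sketch: logging to SageMaker's managed MLflow with the standard client.
# Requires the mlflow and sagemaker-mlflow packages; the ARN is a placeholder.
import mlflow

mlflow.set_tracking_uri(
    "arn:aws:sagemaker:us-east-1:123456789012:mlflow-tracking-server/demo"
)
mlflow.set_experiment("training-demo")
with mlflow.start_run():
    mlflow.log_param("learning_rate", 1e-4)
    mlflow.log_metric("loss", 0.42, step=1)
```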
Anthropic proposes transparency framework for frontier AI development
Anthropic is calling for the creation of an AI transparency framework that can be applied to large AI developers to ensure accountability and safety.
“As models advance, we have an unprecedented opportunity to accelerate scientific discovery, healthcare, and economic growth. Without safe and responsible development, a single catastrophic failure could halt progress for decades. Our proposed transparency framework offers a practical first step: public visibility into safety practices while preserving private sector agility to deliver AI’s transformative potential,” Anthropic wrote in a post.
As such, it is proposing its framework in the hope that it could be applied at the federal, state, or international level. The initial version of the framework includes six core tenets to be followed, including restricting the framework to large AI developers only, requirements for system cards and documentation, and the flexibility to evolve as AI evolves.
Docker Compose gets new features for building and running agents
Docker has updated Compose with new features that will make it easier for developers to build, ship, and run AI agents.
Developers can define open models, agents, and MCP-compatible tools in a compose.yaml file and then spin up an agentic stack with a single command: docker compose up.
Compose integrates with several agentic frameworks, including LangGraph, Embabel, Vercel AI SDK, Spring AI, CrewAI, Google’s ADK, and Agno.
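A minimal sketch of what such a compose.yaml can look like, using Compose’s top-level models element (service names and the model id are illustrative):

```yaml
# Sketch: an agentic stack defined in compose.yaml, started with
# `docker compose up`. Service names and the model id are illustrative.
services:
  agent:
    build: .        # your agent code, e.g. a LangGraph app
    models:
      - llm         # wires the model's endpoint into this service
models:
  llm:
    model: ai/qwen3 # an open model pulled and served locally
```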
Coder reimagines development environments to make them more ideal for AI agents
Coder is announcing the launch of its AI cloud development environments (CDEs), bringing together IDEs, dynamic policy governance, and agent orchestration into a single platform.
According to Coder, current development infrastructure was built for humans, not agents, and agents have different requirements to be successful. “Agents need secure environments, granular permissions, fast boot times, and full toolchain access — all while maintaining governance and compliance,” the company wrote in an announcement.
Coder’s new CDE attempts to solve this problem by introducing features designed for both humans and agents.
Some capabilities include fully isolated environments where AI agents and developers work alongside each other, a dual-firewall model to scope agent access, and an interface for running and managing AI agents.
DigitalOcean unifies AI offerings under GradientAI
GradientAI is an umbrella for all of the company’s AI offerings, split into three categories: Infrastructure, Platform, and Applications.
GradientAI Infrastructure features building blocks such as GPU Droplets, Bare Metal GPUs, vector databases, and optimized software for improving model performance; GradientAI Platform includes capabilities for building and monitoring agents, such as model integration, function calling, RAG, external data, and built-in evaluation tools; and GradientAI Applications includes prebuilt agents.
“If you’re already building with our AI tools, there’s nothing you need to change. All of your existing projects and APIs will continue to work as expected. What’s changing is how we bring it all together, with clearer organization, unified documentation, and a product experience that reflects the full potential of our AI platform,” DigitalOcean wrote in a blog post.
Newest LF Decentralized Trust Lab HOPrS identifies if photos have been altered
OpenOrigins has announced that its Human-Oriented Proof System (HOPrS) has been accepted as a new Lab by LF Decentralized Trust, a Linux Foundation organization. HOPrS is an open-source framework for determining whether an image has been altered.
It utilizes techniques like perceptual hashes and quadtree segmentation, combined with blockchain technology, to determine how images have been changed.
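The core idea is easy to demonstrate: perceptual hashes stay stable under benign re-encoding but drift when an image is edited, so a small Hamming distance between hashes suggests a match while a large one suggests alteration. Here is a sketch using the imagehash library (this illustrates the technique, not HOPrS itself, which adds quadtree segmentation to localize changes and a blockchain anchor for provenance):

```python
# Sketch: comparing two images by perceptual hash (illustrative, not HOPrS).
# Requires the Pillow and imagehash packages.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original.jpg"))
candidate = imagehash.phash(Image.open("candidate.jpg"))

distance = original - candidate  # Hamming distance between 64-bit hashes
print(f"Hamming distance: {distance}")
if distance > 10:  # threshold is application-specific
    print("Candidate likely differs from the original")
```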
According to OpenOrigins, HOPrS can be used to identify whether content is AI-generated, a capability that is becoming increasingly important as it gets harder to distinguish between AI-generated and human-created content.
“The addition of HOPrS to the LF Decentralized Trust labs enables our community to access and collaborate on crucial tools for verifying content in the age of generative AI,” said Daniela Barbosa, executive director of LF Decentralized Trust.
Denodo announces DeepQuery
DeepQuery leverages governed enterprise data across multiple systems, departments, and formats to provide answers that are rooted in real-time information. It is currently available as a private preview.
The company also announced its support for MCP, and the latest version of Denodo AI SDK includes an MCP Server implementation.
Cloudflare now blocks AI crawlers by default, introduces pay per crawl model
Last year, Cloudflare introduced a setting that allowed website owners to block AI crawlers. Now the company is making that setting the default, rather than requiring users to switch it on.
The company explained that by switching to a permission-based model, it is eliminating the need for content owners to manually configure settings in order to opt out.
Additionally, Cloudflare is experimenting with a pay per crawl model (in private beta) for content owners to monetize content that is being used to train AI.
When an AI crawler requests content, it will be prompted to pay or will receive an HTTP 402 (Payment Required) response.
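On the crawler’s side the exchange is ordinary HTTP; a sketch of detecting the payment signal (the URL and user-agent string are illustrative):

```python
# Sketch: a crawler detecting Cloudflare's pay-per-crawl response.
import requests

resp = requests.get("https://example.com/article",
                    headers={"User-Agent": "ExampleCrawler/1.0"})
if resp.status_code == 402:
    # Payment Required: arrange payment through the pay-per-crawl
    # program, or skip this content.
    print("Content requires payment to crawl")
```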
Perplexity launches Max subscription
Perplexity Max costs $200/month and includes everything that the $20/month Pro subscription comes with, in addition to an unlimited number of Labs, early access to new features, advanced model options like OpenAI o3-pro and Claude Opus 4, and priority support.
“Perplexity Max is our most advanced subscription tier yet, built for those who demand limitless AI productivity and immediate access to the newest products and features from Perplexity. With Perplexity Max, you can reach the maximum power of your curiosity,” the company wrote in a post.
Microsoft launches Awesome GitHub Copilot Customizations repo
The repository includes custom instructions, reusable prompts, and custom chat modes created by the community.
Already, the repo includes custom instructions for Angular best practices, .NET MAUI components and application patterns, Python coding conventions, and more. Some examples of reusable prompts include transforming Python scripts into beginner-friendly projects, creating GitHub Issues for feature requests, and analyzing Azure resources an app is using. And finally, some of the chat modes include a debug mode, planning mode, and database admin mode.
The repo is constantly being updated with new additions from the community, so Microsoft is encouraging further contributions.
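As a flavor of what these artifacts look like, here is a hypothetical reusable prompt in the .prompt.md format VS Code uses for Copilot (the frontmatter fields are the common ones; the contents are invented):

```markdown
---
mode: 'agent'
description: 'Create a GitHub Issue from a feature request'
---
Turn the feature request below into a well-formed GitHub Issue.
Include a one-paragraph summary, acceptance criteria as a checklist,
and suggested labels.
```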
Gartner: More than 40% of agentic AI projects will be canceled in the next few years
Gartner recently revealed a new report where it predicted that by the end of 2027, over 40% of agentic AI projects will be canceled. Factors contributing to this decline include escalating costs, unclear business value, and inadequate risk controls.
According to the analyst firm, one trend it is seeing is that vendors are hyping up AI agents by “agent washing,” or rebranding existing products like AI assistants, RPA, or chatbots without actually adding substantial agentic capabilities.
The company estimates that of the thousands of agentic AI vendors out there, only about 130 of them are real and delivering on their promises.
BrowserStack launches suite of AI agents
BrowserStack AI agents are integrated throughout the testing life cycle, helping teams accelerate release cycles, improve test coverage, and boost productivity.
The initial release includes five agents: Test Case Generator Agent, Low-Code Authoring Agent, Self-Healing Agent, A11y Issue Detection Agent, and Visual Review Agent. There are over 20 other agents in development as well.
“We mapped the testing journey to identify where teams spend the most time and manual effort, and reimagined it with AI at the core,” said Ritesh Arora, CEO and co-founder of BrowserStack. “Early results are game-changing: our Test Case Generator delivers 90% faster test creation with 91% accuracy and 92% coverage, results that generic LLMs can’t match.”
Microsoft creates small language model for changing Windows settings
The model, called Mu, powers Microsoft’s Settings agent by mapping natural language inputs to Settings function calls.
It runs directly on device, using the Neural Processing Unit (NPU) on Copilot+ PCs, and was designed with NPU constraints in mind, such as parallelism and memory limits.
The agent that Mu powers is available currently to Windows Insiders in the Dev Channel using Copilot+ PCs.
Mirantis reveals Lens Prism, an AI copilot for operating Kubernetes clusters
With Lens Prism, developers will be able to use natural language to troubleshoot and operate their Kubernetes clusters.
Developers can ask questions like “What’s wrong with my pod?”, “How much CPU is this namespace using?” or “Is anything failing in my cluster?”
Lens Prism will then respond with insights gathered from kubectl output, metrics, logs, and the current view in Lens Desktop, and will generate commands that are ready to be run.
Amazon creates new generative AI model for its robots
DeepFleet was designed to act as an intelligent traffic management system for Amazon’s fleet of robots. According to the company, the model coordinates all of the robots’ movements and optimizes how they navigate through fulfillment centers.
The model was able to improve the travel time of Amazon’s robotic fleet by 10%, translating to faster delivery times and lower costs for customers.