
Mabl announces updates to Agentic Testing Teammate
The Agentic Testing Teammate works alongside human testers to make the process more efficient. New updates include AI vectorizations and semantic test search, improvements to test coverage, and enhancements to the MCP Server that enable testers to perform tasks directly within their IDE, including Test Impact Analysis, intelligent test creation, and failure recommendations.
“This new work is built on the idea that an agent can become an integral part of your testing team,” said Dan Belcher, co-founder of mabl. “Unlike scripting frameworks and general-purpose large language models, mabl builds deep knowledge about your application over time and uses that knowledge to make it, and your team, more effective.”
Couchbase 8.0 adds three new vector indexing and retrieval capabilities
These new capabilities are designed to support diverse vector workloads for real-time AI applications.
Hyperscale Vector Index is based on the DiskANN nearest-neighbor search algorithm and enables operation across partitioned disks for distributed processing. Composite Vector Index supports pre-filtered queries that scope the search to a specific subset of vectors. Search Vector Index supports hybrid searches combining vectors, lexical search, and structured query criteria in a single SQL++ request.
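To illustrate the pre-filtering idea behind a composite vector index, here is a minimal conceptual sketch in plain Python with NumPy. It is not Couchbase's API or implementation; the function name, the metadata shape, and the brute-force search are all assumptions chosen to show how a scalar predicate narrows the candidate set before the nearest-neighbor comparison runs.

```python
import numpy as np

def prefiltered_vector_search(vectors, metadata, query, predicate, k=3):
    """Conceptual sketch of a pre-filtered ("composite") vector query:
    the scalar predicate narrows the candidate set *before* any
    distance is computed, so the vector comparison only runs over
    rows that match the filter."""
    # Step 1: apply the scalar filter (the pre-filtering part).
    candidates = [i for i, m in enumerate(metadata) if predicate(m)]
    # Step 2: brute-force nearest neighbors over the filtered subset.
    dists = [(np.linalg.norm(vectors[i] - query), i) for i in candidates]
    return [i for _, i in sorted(dists)[:k]]

# Toy data: four embeddings, each tagged with a category.
vecs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
meta = [{"cat": "a"}, {"cat": "b"}, {"cat": "a"}, {"cat": "a"}]
hits = prefiltered_vector_search(vecs, meta, np.array([0.1, 0.1]),
                                 lambda m: m["cat"] == "a", k=2)
print(hits)  # -> [0, 2]: the two nearest rows in category "a"
```

A real composite index would use index structures rather than a linear scan, but the ordering of operations (filter first, then compare vectors) is the point being illustrated.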
Anthropic expands memory to all paid Claude users
Anthropic announced that Claude’s memory feature is being rolled out to Pro and Max plan users, making it available to all paid users.
Memory was initially announced in early September, but was available only to Team and Enterprise users at launch.
Memory allows Claude to remember your projects and preferences so that you don’t need to re-explain important context across sessions. “Great work builds over time. With memory, each conversation with Claude improves the next,” Anthropic wrote in its initial announcement.
Harness brings vibe coding to database migration with new AI-Powered Database Migration Authoring feature
Harness is on a mission to make it easier for developers to perform database migrations with its new AI-Powered Database Migration Authoring feature. This new capability allows users to describe schema changes in natural language and receive a production-ready migration in return.
For example, a developer could ask “Create a table named animals with columns for genus_species and common_name. Then add a related table named birds that tracks unladen airspeed and proper name. Add rows for Captain Canary, African swallow, and European swallow.”
Harness’ platform would then analyze the current schema and policies, generate a backward-compatible migration, validate the change for safety and compliance, commit it to Git for testing, and create rollback migrations.
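As a rough illustration of the forward/rollback pair such a workflow produces, here is a minimal sketch using Python's built-in sqlite3. The SQL is hypothetical, invented for the animals/birds prompt above; it is not actual Harness output, and real migrations would target the team's production database and policies.

```python
import sqlite3

# Hypothetical migration for the "animals"/"birds" prompt -- not
# Harness output, just the shape of a forward/rollback pair.
FORWARD = """
CREATE TABLE animals (
    id INTEGER PRIMARY KEY,
    genus_species TEXT NOT NULL,
    common_name TEXT NOT NULL
);
CREATE TABLE birds (
    id INTEGER PRIMARY KEY,
    animal_id INTEGER REFERENCES animals(id),
    proper_name TEXT NOT NULL,
    unladen_airspeed_kmh REAL
);
"""
ROLLBACK = """
DROP TABLE birds;
DROP TABLE animals;
"""

conn = sqlite3.connect(":memory:")
conn.executescript(FORWARD)
tables = {r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
print(sorted(tables))  # -> ['animals', 'birds']
conn.executescript(ROLLBACK)  # rollback restores the prior schema
```

Note that the rollback drops the dependent table first, which is the kind of ordering constraint an automated generator has to get right.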
Red Hat Developer Lightspeed brings AI assistance to Red Hat’s Developer Hub and migration toolkit
Red Hat Developer Lightspeed has been integrated into both the Red Hat Developer Hub and the migration toolkit for applications (MTA).
In the Red Hat Developer Hub, it acts as an assistant to speed up non-coding tasks, like exploring application design approaches, writing documentation, generating test plans, and troubleshooting applications.
In the migration toolkit, Red Hat Developer Lightspeed automates source code refactoring within the IDE. It leverages MTA’s static code analysis to understand migration issues and how to fix them, and also improves over time by learning what made past changes successful.
GitKraken releases Insights to help companies measure ROI of AI
GitKraken, a company that specializes in improving the developer experience, announced the launch of GitKraken Insights, a tool that gives companies visibility into AI’s impact on developer productivity.
According to the company, while many engineering teams have adopted AI at this point, it is still a challenge to prove AI’s ROI. GitKraken also believes that traditional engineering metrics weren’t designed for the AI era.
“DORA metrics can tell you if deployment frequency changed, but they can’t tell you why,” the company wrote in a blog post. “Pull request counts might go up, but is that because developers are more productive, or because AI is generating code that requires more review cycles?”
GitKraken Insights brings together several different metrics—DORA metrics, code quality analysis, technical debt tracking, AI impact measurement, and developer experience indicators—to paint a picture of what’s happening within the development lifecycle.
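The DORA metrics quoted above are simple aggregates, which is why they can report that something changed without explaining why. A toy computation (deployment log and thresholds invented here, not GitKraken's methodology) makes that concrete:

```python
from datetime import date

# Toy deployment log: (date, succeeded). Deployment frequency and
# change failure rate -- two DORA metrics -- are plain aggregates
# over this log, carrying no information about *why* values moved.
deploys = [
    (date(2025, 11, 3), True),
    (date(2025, 11, 5), False),
    (date(2025, 11, 7), True),
    (date(2025, 11, 10), True),
]

days = (deploys[-1][0] - deploys[0][0]).days or 1
frequency = len(deploys) / days                       # deploys per day
failure_rate = sum(not ok for _, ok in deploys) / len(deploys)
print(round(frequency, 2), failure_rate)  # -> 0.57 0.25
```

Combining such aggregates with code quality, technical debt, and AI-usage signals is what lets a tool attribute the movement rather than just measure it.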
MariaDB unifies transactional, analytical, and vector databases in MariaDB Enterprise Platform 2026 release
MariaDB’s Enterprise Platform 2026 release was announced this week, with the promise that it will act as “the definitive database platform for building next-generation intelligent applications.”
To support agentic AI, the company added native RAG for grounding LLMs with context from MariaDB without needing embeddings, vector stores, or retrieval pipelines. The company also added ready-to-use agents within the platform, including a developer copilot that connects to the database and can respond to natural language queries, and a DBA copilot that can manage tasks like performance tuning and debugging.
Additionally, the company added an integrated MCP server so that agents can interact with MariaDB databases. The MCP interface in MariaDB lets users combine vector search, LLMs, and standard SQL operations, and enables agents to launch serverless databases in the cloud.
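The "native RAG" feature folds the usual retrieve-then-prompt pattern into the database itself. The sketch below shows that pattern in isolation with toy in-memory rows and hand-made embeddings; it is a conceptual illustration, not MariaDB's interface, and every name in it is an assumption.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# Toy "rows with embeddings" standing in for database content.
rows = [
    ("MariaDB 2026 adds an integrated MCP server.", [0.9, 0.1]),
    ("The platform ships developer and DBA copilots.", [0.1, 0.9]),
]

def ground_prompt(question, question_vec, k=1):
    """Retrieve the rows most similar to the question and prepend
    them as context -- the retrieve-then-prompt step that a native
    RAG feature performs inside the database."""
    ranked = sorted(rows, key=lambda r: cosine(r[1], question_vec),
                    reverse=True)
    context = "\n".join(text for text, _ in ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = ground_prompt("What did MariaDB add?", [1.0, 0.0])
print(prompt)
```

Moving this step into the database is what removes the need for a separate vector store and retrieval pipeline, as the announcement describes.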
Spotify Portal now generally available and packed with features for improving dev experience
Spotify Portal for Backstage, now generally available, provides developers with a ready-to-use version of Backstage, Spotify’s open source solution for building internal developer portals (IDPs).
AiKA, the AI assistant for Portal, can now connect to third-party MCP servers and trigger actions in Portal. AiKA itself also functions as an MCP server, allowing developers to connect it to tools like Cursor or Copilot and access Portal data.
“The general availability of Spotify Portal marks a pivotal moment in how organizations build, measure, and optimize developer experience. What began as an internal tool for Spotify engineers is now a fully-fledged platform for enterprises, combining the reliability of Backstage, the insight of Confidence, and the speed of AI-driven workflows,” Spotify wrote.
Sonar announces new solution to optimize training datasets for coding LLMs
Sonar, a company that specializes in code quality, announced a new solution that will improve how LLMs are trained for coding purposes.
According to the company, LLMs that are used to help with software development are often trained on publicly available, open source code containing security issues and bugs, which become amplified throughout the training process. “Even a small amount of flawed data can degrade models of any size, disproportionately degrading their output,” Sonar wrote in an announcement.
SonarSweep (now in early access) aims to mitigate those issues by ensuring that models are learning from high-quality, secure examples.
It works by identifying and fixing code quality and security issues in the training data itself. After analyzing the dataset, it applies a strict filtering process to remove low-quality code while also balancing the updated dataset to ensure it will still offer diverse and representative learning.
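The filter-then-rebalance step described above can be sketched in a few lines. This is a conceptual toy, not SonarSweep's actual pipeline: the quality scores, the threshold, and the per-language cap are all invented for illustration.

```python
# Conceptual sketch of filtering and rebalancing a training dataset.
# Each sample carries a hypothetical quality score and a language tag.
samples = [
    {"code": "def a(): ...", "lang": "python", "quality": 0.9},
    {"code": "eval(input())", "lang": "python", "quality": 0.2},
    {"code": "fn b() {}",     "lang": "rust",   "quality": 0.8},
    {"code": "let c = 1;",    "lang": "rust",   "quality": 0.7},
]

def sweep(data, min_quality=0.5, per_lang_cap=1):
    # Step 1: strict filter -- drop low-quality or insecure examples.
    kept = [s for s in data if s["quality"] >= min_quality]
    # Step 2: rebalance -- cap each language so no single source
    # dominates the cleaned dataset, keeping it diverse.
    out, counts = [], {}
    for s in sorted(kept, key=lambda s: -s["quality"]):
        if counts.get(s["lang"], 0) < per_lang_cap:
            out.append(s)
            counts[s["lang"]] = counts.get(s["lang"], 0) + 1
    return out

cleaned = sweep(samples)
print([s["lang"] for s in cleaned])  # -> ['python', 'rust']
```

The two steps mirror the announcement's description: remove flawed examples first, then adjust the mix so the surviving data remains representative.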
