Anthropic has announced several updates to its AI models, including an upgraded version of Claude 3.5 Sonnet, the release of Claude 3.5 Haiku, and a public beta for a capability that lets users instruct Claude to use computers the way a human would.

The new version of Claude 3.5 Sonnet delivers improvements across the board compared to the original. It outperforms its predecessor in graduate-level reasoning, undergraduate-level knowledge, coding, math problem solving, high-school math competition problems, visual question answering, agentic coding, and agentic tool use.

“Early customer feedback suggests the upgraded Claude 3.5 Sonnet represents a significant leap for AI-powered coding,” Anthropic wrote in a post. The company also revealed that GitLab tested the model for DevSecOps tasks and found up to a 10% improvement in reasoning across different use cases. 

Claude 3.5 Haiku is the company’s fastest model. It matches the cost and speed of Claude 3 Haiku while improving across every skill set, even outperforming the previous generation’s largest model, Claude 3 Opus, on many benchmarks.

According to Anthropic, Claude 3.5 Haiku does especially well on coding tasks, scoring 40.6 on SWE-bench, a benchmark that evaluates how well a model can reason through GitHub issues. This is better than the original Claude 3.5 Sonnet and GPT-4o, the company claims.

“With low latency, improved instruction following, and more accurate tool use, Claude 3.5 Haiku is well suited for user-facing products, specialized sub-agent tasks, and generating personalized experiences from huge volumes of data—like purchase history, pricing, or inventory records,” Anthropic wrote.

Claude 3.5 Haiku will be available in a few weeks through Anthropic’s API, Amazon Bedrock, and Google Cloud’s Vertex AI. It will first be available as a text-only model, and image input will be added down the line. 

Beyond its model announcements, Anthropic also announced a public beta for a new capability that enables Claude to operate a computer. The company built an API that allows the model to perceive and interact with computer interfaces, enabling it to complete tasks such as moving the cursor to open an application, navigating to specific web pages, or filling out a form with data from those pages.
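In practice, enabling this capability means attaching a computer-use tool definition (describing the screen Claude will control) to an otherwise ordinary API request. The sketch below shows how such a request body might be assembled; the tool type string, model name, and display parameters are taken from Anthropic's public-beta documentation at launch and may change, so treat them as assumptions and check the current docs.

```python
# Hedged sketch of a computer-use request body for Anthropic's API.
# The "computer_20241022" tool type and display parameters reflect the
# public beta at launch; verify against current documentation.
import json

# Tool definition telling Claude the dimensions of the screen it will
# perceive (via screenshots) and act on (via cursor/keyboard actions).
computer_tool = {
    "type": "computer_20241022",
    "name": "computer",
    "display_width_px": 1280,
    "display_height_px": 800,
}

# A minimal request payload. In a real integration this would be sent
# through Anthropic's SDK or HTTP API with the computer-use beta enabled,
# and your code would execute the cursor/keyboard actions Claude returns.
request_body = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "tools": [computer_tool],
    "messages": [
        {"role": "user",
         "content": "Open the browser and fill out the signup form."}
    ],
}

print(json.dumps(request_body, indent=2))
```

Note that the API itself only returns proposed actions (click here, type this); the developer's harness is responsible for taking screenshots and carrying those actions out, which is part of why Anthropic recommends starting with low-risk tasks.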

In early testing on the OSWorld benchmark, which evaluates an AI’s ability to use computers like humans do, Claude 3.5 Sonnet scored 14.9% in the screenshot-only category, the highest score of any model (the next highest is 7.8%). When given more steps to complete a task, Claude scored 22%.

Anthropic noted that Claude still struggles with actions such as scrolling, dragging, and zooming, and therefore recommends that people experiment with the capability on low-risk tasks.

“Learning from the initial deployments of this technology, which is still in its earliest stages, will help us better understand both the potential and the implications of increasingly capable AI systems,” Anthropic wrote.