Ex-Googlers Create an Agentic Lakehouse for Databricks

“Databricks is seeing explosive growth with their Data Lakehouse product,” said Ben Lerner, CEO of Espresso AI. “But if they want to catch up with Snowflake adoption they’ll need to be as optimized and cost-efficient as possible. By leveraging Espresso AI, Databricks customers can cut their bill in half and see their efficiency skyrocket without any manual effort.”

Three Core Pillars of Espresso AI for Databricks

Autoscaling Agent: Espresso AI’s models are trained on each customer’s unique metadata logs, so the platform understands and can predict spikes and fluctuations in demand, enabling resource and cost optimizations without sacrificing performance.

Scheduling Agent: The average Databricks warehouse runs at between 40% and 60% utilization, meaning roughly half the bill goes to idle machines. Instead of routing each query to a static warehouse, Espresso AI analyzes running workloads to find spare capacity on existing machines, then intelligently routes queries to those machines for maximum efficiency.

Query Agent: Espresso AI optimizes every piece of SQL before it even hits the data lakehouse, leading to improved performance and reduced costs across the board.

Espresso AI was founded by three ex-Googlers – Ben Lerner, Alex Kouzemtchenko, and Juri Ganitkevitch – who previously worked on machine learning, systems performance, and deep learning research in Google Search, Google Cloud, and Google DeepMind. The company has raised $11 million in seed funding from FirstMark Capital, Nat Friedman, and Daniel Gross.

The company conducted a six-month beta that drew interest from hundreds of enterprises, including Booz Allen Hamilton and Comcast. “Espresso AI cut our bill in half with no lift from our side,” said Nataliia Mykytento, Head of Engineering at Minerva. “They were instrumental in reducing costs that were growing too fast for comfort.”


About SD Times Newswire
