Dremio, the self-service data company, today announced that the company has closed $25 million in Series B funding. The round was led by new investor Norwest Venture Partners with participation from existing investors Lightspeed Venture Partners and Redpoint Ventures. With this investment, Dremio will continue to disrupt the data analytics market by further expanding its engineering and sales teams and increasing its international reach. Dremio has raised a total of $40 million. Founded in 2015, Dremio’s customers include leading organizations in the US, Europe, Asia and Australia, such as Daimler, OVH, Quantium and TransUnion.
“For more than thirty years, companies have built software to unlock the value of data, but millions of data consumers remain unable to access the data they need to do their jobs,” said Rama Sekhar, partner, Norwest Venture Partners. “With a stellar team of big data and open source veterans from companies including Hortonworks, MongoDB and MapR, Dremio is the first company to solve this challenge by empowering data consumers—business analysts and data scientists—to be independent and self-directed in their use of data, from any source and at any scale, using their favorite tools. In addition, Dremio allows companies to seamlessly govern and secure this self-directed use with a true platform approach that is open source and designed for cloud, on-prem and hybrid deployment strategies.”
A New Tier in Data Analytics: Self-Service Data
Dremio was built on the idea that companies need to change the dynamic between the data consumers and IT. Dremio simplifies and governs the process of achieving interactive speed on data from any source, at any scale, at any time, through a self-service model delivered on an open source platform. Dremio empowers data consumers to discover, curate, accelerate and share data for their specific needs, without burdening IT.
Dremio provides a future-proof strategy for data, allowing companies to choose the best tools for analysts, and the right database technologies for applications, without compromising on the ability to leverage data to power the business. Designed for modern on-premises and cloud infrastructure, Dremio takes advantage of elastic compute resources as well as object storage such as Amazon S3 for its Data Reflection Store.
“Business analysts and data scientists are entirely dependent on IT to get interactive access to the data they need and they are underutilized as a result, representing hundreds of billions of dollars of unrealized value every year,” said Tomer Shiran, co-founder and CEO, Dremio. “Dremio makes data self-service for data consumers in the same way that AWS makes infrastructure self-service for developers, but it benefits more than 10 times as many individuals. We are thrilled about the support and confidence we have received from our customers and investors, and we look forward to continuing to change the way data is harnessed by all companies.”
The Data Challenge
Data consumers rely on having data that is easy to access and fast to use. The challenge is that data is massive, siloed across many systems, and stored in different formats and technologies, making governance, security and performance insurmountable challenges for companies today. Moving all data into a single system via ETL and data prep tools is no longer practical. Modern data architectures including microservices, NoSQL databases and cloud services generate data that is fundamentally incompatible with existing analytical infrastructure. As a result, IT is tasked with perpetual data engineering projects to copy, reshape and optimize data for access, while data consumers remain underutilized and dependent on IT to get their jobs done.
Technology providers including Microsoft, Tableau, Qlik and open source communities including Python and R are collaborating with Dremio to reimagine how users of these tools work with data. Based on industry analyst projections, Dremio estimates its total addressable market will exceed $75 billion by 2020.
Dremio can be run as an elastic service in the cloud and on-premises, allowing customers to easily meet their needs at any scale. Popular use cases include BI and Data Science on Modern Data, such as Elasticsearch, S3, Hadoop and MongoDB; Data Acceleration, delivering interactive speed on even the largest datasets; Self-Service Data, making data consumers more independent and less reliant on IT; and Data Lineage, tracking the full lineage of data through all analytical jobs across tools and users.
The Power of Apache Arrow
Data scientists need tools and technologies that allow systems to interoperate efficiently, so that business analysts and data scientists can use data even though it lives in many different systems. Apache Arrow was created by Dremio to provide the core data building block for heterogeneous data infrastructures and tools, including Spark, Python, R, BI, RDBMS, NoSQL, and file systems. Arrow is now the de facto standard for in-memory analytics, with more than 100,000 downloads a month and adoption across a diverse range of projects. The benefit of Arrow is that it enables different systems to talk to each other with almost no overhead because they all share the same in-memory representation.