Amazon is taking on yet another IT market. The company recently announced the general availability of its new Redshift data-warehousing service, an Amazon Web Services product that lets developers and IT managers store their long-term data inside Amazon’s cloud.
Raju Gulabani, vice president of database services at Amazon Web Services, said Redshift was designed to reduce data-warehousing costs and time frames by an order of magnitude.
“When we set out to build Amazon Redshift, we wanted to leverage the massive scale of AWS to deliver 10x the performance at one-tenth the cost of on-premise data warehouses in use today,” he said. “With order-of-magnitude improvements in price and performance, Amazon Redshift makes Big Data analytics accessible to more people, allowing large organizations to analyze more of their data, and smaller ones to afford fast, scalable data-warehousing technology. We are delighted by the excitement from our preview customers as they’ve experienced the performance improvement and lower costs that Amazon Redshift delivers.”
Amazon’s service will handle the replication, scaling, updating and durability of the data stored in Redshift, leaving developers to access this long-term data store via the AWS APIs.
Pricing for Redshift is similar to the pricing structure of typical AWS instances. For Redshift, Amazon’s pricing page reads, “On-Demand pricing starts at just US$0.85 per hour for a single-node 2TB data warehouse, scaling linearly with cluster size. With Reserved Instance pricing, you can lower your effective price to $0.228 per hour for a single 2TB node, or under $1,000 per TB per year.”
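The quoted reserved-instance arithmetic is easy to verify: a short calculation, assuming the standard 24 × 365 = 8,760-hour year and the 2TB single-node size and hourly rates from Amazon's quote above.

```python
# Sanity-check Amazon's quoted Redshift pricing figures.
# Hourly rates and the 2TB node size come from the quote above;
# the 8,760-hour year is the usual 24 * 365 assumption.

HOURS_PER_YEAR = 24 * 365   # 8,760
NODE_TB = 2                 # single-node 2TB data warehouse

def annual_cost_per_tb(hourly_rate, node_tb=NODE_TB):
    """Yearly cost per terabyte for one node billed at hourly_rate."""
    return hourly_rate * HOURS_PER_YEAR / node_tb

reserved = annual_cost_per_tb(0.228)   # Reserved Instance rate
on_demand = annual_cost_per_tb(0.85)   # On-Demand rate

print(f"Reserved:  ${reserved:,.2f} per TB per year")   # ≈ $998.64
print(f"On-Demand: ${on_demand:,.2f} per TB per year")  # ≈ $3,723.00
```

The reserved figure lands at roughly $998.64 per TB per year, which matches Amazon's "under $1,000 per TB per year" claim.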
Amazon expects Redshift to help existing users of its cloud-based database services. On the Redshift FAQ, for example, Amazon suggests that Redshift could be used to lighten the analytics load on existing relational data stores.
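One way to read that suggestion: periodically stage transactional data to S3, bulk-load it into Redshift with a COPY statement, and run the heavy aggregations there, since Redshift clusters accept standard PostgreSQL-style JDBC/ODBC connections. The sketch below only builds the two SQL statements involved; the `page_views` table, the bucket path, and the credential placeholders are hypothetical, and in practice you would execute the statements against your cluster endpoint over a PostgreSQL driver.

```python
# Sketch of the offload pattern suggested in the Redshift FAQ:
# stage rows to S3, COPY them into Redshift, and aggregate there
# instead of on the transactional database. Table, bucket, and
# credential values below are hypothetical placeholders.

def copy_statement(table, s3_path, credentials):
    """Build a Redshift COPY statement that bulk-loads from S3."""
    return (
        f"COPY {table} FROM '{s3_path}' "
        f"CREDENTIALS '{credentials}' "
        "DELIMITER ',' GZIP;"
    )

def daily_rollup(table):
    """An aggregation better run on Redshift than on the OLTP store."""
    return (
        f"SELECT date_trunc('day', viewed_at) AS day, COUNT(*) AS views "
        f"FROM {table} GROUP BY 1 ORDER BY 1;"
    )

load_sql = copy_statement(
    "page_views",
    "s3://example-bucket/page_views/2013-02-15/",
    "aws_access_key_id=<key>;aws_secret_access_key=<secret>",
)
print(load_sql)
print(daily_rollup("page_views"))
# In practice, send both statements to the cluster endpoint over a
# standard PostgreSQL driver (psycopg2, JDBC, ODBC).
```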