Integrated circuits purpose-built to process graphics have been around since the 1970s, but the term “graphics processing unit” didn’t come into use until 1999. Due to their massively parallel processing capabilities and many generations of advances, GPUs are now being used in a wide variety of other applications, including data analytics.

Using GPUs to accelerate database and data analytics applications might seem like a desperate bid for more performance, but a growing number of organizations are finding not only that it makes sense, but that it would be irresponsible not to.

Ponder this: Where is the performance bottleneck in data analysis today? Historically, it was I/O: first to disk, then to solid-state storage (with a significant improvement in random reads), and more recently to system RAM for in-memory databases. Given how dramatically faster read/write access to RAM is (around 100 nanoseconds, versus roughly 10 milliseconds for direct-attached disk), I/O is no longer the biggest bottleneck.

Yet the relentless growth in data continues to put enormous strain on even the highest-performing in-memory configurations. The problem is particularly profound in streaming data applications, where large clusters of servers often struggle to ingest and analyze the streams in real time.

The inescapable fact is that, for many data analytics applications today, the new performance bottleneck is compute. So why not just scale x86-based servers and clusters up and out as needed to handle the workload? As with many IT challenges, the issue is cost.

After 50 years of delivering steady gains in price/performance, Moore's Law has finally run its course for CPUs. The number of x86 cores that can be placed cost-effectively on a single chip has simply reached a practical limit. True, smaller process geometries can accommodate more and faster cores. But they are so costly to manufacture that, even as performance increases, price/performance actually declines.

Enter the GPU
Configurations equipped with GPUs are capable of processing data up to 100 times faster than those containing CPUs alone. The reason for such a dramatic improvement is massively parallel processing, with some GPUs containing nearly 5,000 cores, roughly 200 times the 16-32 cores found in today's most powerful CPUs.

Like the CPU, the GPU has advanced in numerous ways over the years. One of the most important advances has been making GPUs easier to program, and that is what now makes them suitable for so many more applications.
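To make that parallelism concrete, here is a minimal sketch, in CUDA with illustrative names (it is not drawn from any particular database product), of the GPU analogue of a simple SQL filter-and-count, with one thread assigned to each row of a column:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPU analogue of "SELECT COUNT(*) FROM t WHERE price > threshold":
// each thread tests one row of the column and atomically bumps a counter.
__global__ void countAbove(const float *price, int n, float threshold,
                           unsigned int *count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && price[i] > threshold)
        atomicAdd(count, 1u);
}

int main() {
    const int n = 1 << 20;                    // ~1 million rows
    float *price;
    unsigned int *count;

    // Unified memory keeps the sketch short; a production engine would
    // manage host-to-device transfers (and VRAM residency) explicitly.
    cudaMallocManaged(&price, n * sizeof(float));
    cudaMallocManaged(&count, sizeof(unsigned int));
    for (int i = 0; i < n; ++i) price[i] = (i % 100) / 100.0f;
    *count = 0;

    // Launch enough 256-thread blocks for every row to get its own thread.
    countAbove<<<(n + 255) / 256, 256>>>(price, n, 0.9f, count);
    cudaDeviceSynchronize();

    printf("rows above threshold: %u\n", *count);
    cudaFree(price);
    cudaFree(count);
    return 0;
}
```

Every one of the million rows is examined by its own lightweight thread. That shape of work, the same operation applied across huge numbers of rows, is exactly what thousands of GPU cores excel at.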

In addition to adding more cores, subsequent generations of these fully programmable GPUs have increased performance with faster I/O to memory. For I/O with the server, dedicated interconnects such as NVIDIA's NVLink now deliver a bidirectional throughput of 160 gigabytes per second (GB/s) between the CPU and GPU, and among GPUs, roughly five times the throughput of a 16-lane PCIe 3.0 bus. For I/O with the GPU card's onboard video RAM (VRAM), the state of the art now exceeds 700 GB/s, more than 10 times the roughly 68 GB/s of memory bandwidth available to a Xeon E5 CPU.

The combination of such fast host and VRAM I/O serving several thousand cores enables a GPU card equipped with 16GB of memory to achieve single-precision performance of over 9 TeraFLOPS.
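As a rough sanity check on that figure (assuming a card along the lines of NVIDIA's Tesla P100, which the numbers here resemble, though no specific part is named above): 3,584 cores running at about 1.3 GHz, each retiring one fused multiply-add (two floating-point operations) per cycle, works out to 3,584 × 1.3 GHz × 2 ≈ 9.3 TFLOPS, in line with the performance quoted above.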

Real-time results in real-world applications
The GPUs’ massively parallel processing is now powering a new class of databases and producing some impressive improvements in performance in a wide range of data analytics applications.

With more than 200 sources of streaming data that together produce some 200 billion records per day, the U.S. Army Intelligence & Security Command (INSCOM) replaced a cluster of 42 servers with a single server running a database purpose-built to leverage the GPU's power.

In another GPU-accelerated database application, a simple two-node cluster is able to query a database of 15 billion Tweets and render a visualization in less than a second.

The U.S. Postal Service uses a GPU-accelerated database to track over 200,000 devices that emit location data once per minute, producing more than a quarter-billion events that must be ingested and analyzed in real time every day.

A retail company replaced a 300-node database cluster with a 30-node GPU-accelerated database cluster, while achieving a 100-200 times increase in performance on the company's 10 most complex queries.

Another retail company, also using a database built from the ground up to take advantage of GPUs, now ingests and analyzes 300 million events per minute, which is only a small fraction of the 8-node cluster’s 4 billion event-per-minute capacity.

Superior performance—and price/performance
From a performance perspective, GPU acceleration makes it possible to ingest and analyze large volumes of high-velocity data in real time. And the ability to scale up and/or out enables performance to be increased incrementally and predictably—and affordably—as needed.

From a purely cost perspective, GPU acceleration is even more impressive. The GPU's massively parallel processing can deliver performance equivalent to a CPU-only configuration at one-tenth the hardware cost and one-twentieth the power and cooling costs. That alone is a compelling reason why it would be irresponsible not to consider GPU-powered databases for data analytics applications.

Fortunately, the GPU’s performance and price advantages are now within reach of most organizations. Open designs make it easy to incorporate GPU-based solutions into virtually any existing data architecture, where they can integrate with open source, commercial and/or custom data analytics frameworks.

Solutions that support user-defined functions (UDFs) make it possible to leverage existing algorithms, models and libraries almost effortlessly. UDFs are a great way to bridge the gap between the data science teams who need to run complex calculations, the DevOps organization tasked with implementing them, and the business analysts who can now converge AI and BI on a single GPU-accelerated database.
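As a rough sketch of what a GPU UDF boils down to (the function, column names, and scoring formula here are hypothetical, not any particular product's UDF API), the database hands the function device pointers to its in-memory columns, and the UDF runs as a kernel across them:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical UDF: derive a risk score from two in-memory columns.
// A GPU database would pass in device pointers to its own columns;
// here we allocate stand-in columns ourselves. One thread per row.
__global__ void riskScoreUDF(const float *balance, const float *delinquency,
                             float *score, int rows) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < rows)
        score[i] = 0.7f * delinquency[i] + 0.3f / (1.0f + balance[i]);
}

int main() {
    const int rows = 1 << 16;
    float *balance, *delinquency, *score;
    cudaMallocManaged(&balance, rows * sizeof(float));
    cudaMallocManaged(&delinquency, rows * sizeof(float));
    cudaMallocManaged(&score, rows * sizeof(float));
    for (int i = 0; i < rows; ++i) {          // stand-in column data
        balance[i] = float(i % 1000);
        delinquency[i] = float(i % 5) / 4.0f;
    }

    riskScoreUDF<<<(rows + 255) / 256, 256>>>(balance, delinquency, score, rows);
    cudaDeviceSynchronize();

    printf("score[0] = %f, score[last] = %f\n", score[0], score[rows - 1]);
    cudaFree(balance); cudaFree(delinquency); cudaFree(score);
    return 0;
}
```

The point is that a data science team's existing CUDA or library-based logic can be dropped into the database as a UDF rather than re-implemented in a separate pipeline.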

But the only way to fully appreciate the performance improvement afforded by GPUs is to experience it yourself. So try a proof of concept with a purpose-built GPU-accelerated database, either on your own GPU hardware or in the cloud, where GPUs-as-a-service offerings are now available from Amazon, Google, Microsoft and Nimbix. And be prepared to be impressed.