Google recently announced that it is making the OpenXLA Project available for use and contribution, which includes the XLA, StableHLO, and IREE repositories.

OpenXLA is an open-source machine learning compiler ecosystem that lets developers compile and optimize models from their ML frameworks for more efficient training and serving on a wide variety of hardware.
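
As a rough sketch of what this looks like from a framework's side, JAX hands jit-compiled functions to XLA, which fuses and optimizes the operations for the target device. The model, parameter names, and shapes below are purely illustrative and not taken from the announcement.

```python
import jax
import jax.numpy as jnp

def predict(params, x):
    # A tiny two-layer model; XLA fuses and optimizes these ops for the device.
    hidden = jnp.tanh(x @ params["w1"] + params["b1"])
    return hidden @ params["w2"] + params["b2"]

# jax.jit traces the Python function once and hands the result to the XLA
# compiler, which emits optimized code for CPU, GPU, or TPU.
compiled_predict = jax.jit(predict)

params = {
    "w1": jnp.ones((8, 16)), "b1": jnp.zeros(16),
    "w2": jnp.ones((16, 4)), "b2": jnp.zeros(4),
}
x = jnp.ones((2, 8))
print(compiled_predict(params, x).shape)  # (2, 4)
```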

The project was developed collaboratively by organizations including Alibaba, AWS, AMD, Apple, Arm, Cerebras, Google, Graphcore, Hugging Face, Intel, Meta, and NVIDIA.

According to Google, OpenXLA will help developers achieve improvements in training time, throughput, serving latency, time-to-market, and compute costs.

The goals behind this open-source offering include: 

  • Simplifying the process for developers to compile and optimize a model in their preferred framework for many different hardware targets, through a unified compiler API and pluggable, device-specific back-end optimizations.
  • Providing increased performance for current and emerging models that scales across different hosts and accelerators, satisfies the constraints of edge deployments, and generalizes to novel model architectures.
  • Building a layered and extensible ML compiler platform that offers developers MLIR-based components, reconfigurable for specific use cases, along with plug-in points for hardware-specific customization of the compilation flow (a small sketch of this lowering step follows this list).
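
To make the "MLIR-based components" idea concrete: recent JAX versions lower jit-compiled functions to StableHLO, the portable MLIR dialect mentioned above, before XLA's hardware-specific back ends take over. The function below is illustrative only, and assumes a JAX version where StableHLO is the default lowering dialect.

```python
import jax
import jax.numpy as jnp

def layer(x, w):
    return jax.nn.relu(x @ w)

# lower() stops before back-end code generation; as_text() prints the MLIR
# module (StableHLO ops) that frameworks hand to the OpenXLA compiler, the
# common representation that hardware plug-ins then customize.
lowered = jax.jit(layer).lower(jnp.ones((2, 4)), jnp.ones((4, 3)))
print(lowered.as_text())
```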

To learn more, read the blog post.