Meta today announced the newest release of its open-source machine learning framework, PyTorch. According to the company, PyTorch 2.0 is the first step toward the next-generation 2-series of PyTorch releases.

This release is intended to improve performance as well as add support for dynamic shapes and distributed training, while maintaining the same eager-mode development and user experience.

PyTorch 2.0 also introduces `torch.compile`, a new capability that improves PyTorch performance and starts the move for parts of PyTorch from C++ back into Python.

Additionally, the update includes several new technologies:

  • TorchDynamo, which safely captures PyTorch programs using Python Frame Evaluation Hooks.
  • AOTAutograd, which overloads PyTorch’s autograd engine as a tracing autodiff for generating ahead-of-time backward traces.
  • PrimTorch, which canonicalizes ~2,000 PyTorch operators down to ~250 primitive operators that developers can target to build a complete PyTorch backend.
  • TorchInductor, a deep learning compiler that generates code for several accelerators and backends.
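The pieces above fit together through the `backend` argument of `torch.compile`: TorchDynamo captures the program either way, and the backend decides how the captured graph runs. A custom backend is just a callable that receives the captured FX graph; the sketch below (the names `inspect_backend` and `f` are illustrative, not from the release) returns the graph's own `forward`, running it unmodified:

```python
import torch

def inspect_backend(gm: torch.fx.GraphModule, example_inputs):
    # TorchDynamo hands the backend the captured FX graph plus example
    # inputs; returning gm.forward executes the graph as captured.
    return gm.forward

def f(x):
    return torch.sin(x) + torch.cos(x)

# Same entry point as the default TorchInductor path, but with our
# pass-through backend substituted in.
compiled = torch.compile(f, backend=inspect_backend)

x = torch.randn(4)
assert torch.allclose(compiled(x), f(x))
```

Swapping in a real backend at this seam is how vendors can plug their own compilers into PyTorch 2.0, and the reduced ~250-operator primitive set from PrimTorch is what such a backend has to cover.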

“PyTorch 2.0 embodies the future of deep learning frameworks,” said Luca Antiga, CTO of Lightning AI and one of the primary maintainers of PyTorch Lightning. “The possibility to capture a PyTorch program with effectively no user intervention and get massive on-device speedups and program manipulation out of the box unlocks a whole new dimension for AI developers.”

For more information, read the blog post. To get started with PyTorch 2.0, visit the website.