GPU tensors, dynamic neural networks, and deep Python integration are the major highlights of this week’s featured GitHub project: PyTorch.

PyTorch is a new deep learning framework that puts Python first. This open-source Python package is in an early-release beta, and the PyTorch team says developers should expect some “adventures and rough edges.”

PyTorch is a Python package that provides two high-level features: tensor computation with strong GPU acceleration, and deep neural networks built on a tape-based autograd system. Developers can also reuse their favorite Python packages like NumPy, SciPy and Cython to extend PyTorch.
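
Here is a minimal sketch of those two features, written in the Variable-based style of the early beta (later releases merged Variable into Tensor); the shapes here are arbitrary:

    import torch
    from torch.autograd import Variable

    # Feature 1: tensor computation, optionally GPU-accelerated
    x = torch.randn(3, 3)
    if torch.cuda.is_available():
        x = x.cuda()  # move the tensor into GPU memory

    # Feature 2: tape-based autograd. Operations on Variables are
    # recorded on a tape, and backward() replays it to compute gradients.
    w = Variable(torch.randn(3, 3), requires_grad=True)
    y = (w * 2).sum()
    y.backward()
    print(w.grad)  # dy/dw, a 3x3 tensor of 2s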

Some PyTorch users utilize this framework as a replacement for NumPy to harness the power of GPUs, or as a deep learning research platform that provides flexibility and speed. And because PyTorch is not a Python binding into a monolithic C++ framework, it is built to be deeply integrated into Python, letting developers use it as naturally as they would NumPy, SciPy and more, according to PyTorch’s website.
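
As a rough illustration of that NumPy-replacement workflow (the array sizes below are arbitrary), tensors bridge to and from NumPy arrays and can be moved onto the GPU with a single call:

    import numpy as np
    import torch

    a = np.random.randn(64, 64)
    t = torch.from_numpy(a)      # bridge from NumPy; shares memory
    if torch.cuda.is_available():
        t = t.cuda()             # the same code is now GPU-accelerated

    result = t.mm(t.t())         # NumPy-like matrix multiply
    back = result.cpu().numpy()  # and back to a NumPy array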

Developers can also write new neural network modules or layers in Python using the torch API, or any of their favorite NumPy-based libraries. They can also write layers in C/C++, since PyTorch provides an extension API based on CFFI (C Foreign Function Interface for Python).
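
Writing a new module in Python comes down to subclassing nn.Module. The following is an illustrative sketch (the module name and layer sizes are made up), again in the beta’s Variable style:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.autograd import Variable

    class TwoLayerNet(nn.Module):
        def __init__(self, d_in, d_hidden, d_out):
            super(TwoLayerNet, self).__init__()
            self.fc1 = nn.Linear(d_in, d_hidden)
            self.fc2 = nn.Linear(d_hidden, d_out)

        def forward(self, x):
            # Plain Python executes here, so the network's graph is
            # defined dynamically by whatever code runs each pass
            return self.fc2(F.relu(self.fc1(x)))

    net = TwoLayerNet(10, 32, 2)
    out = net(Variable(torch.randn(4, 10)))  # output shape: 4x2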

According to Brandon Amos, a computer science student at CMU working on machine learning, PyTorch’s layer creation is “powerful.” He used it to create a layer that solves an optimization problem with a primal-dual interior point method.

Although the code can be a bit difficult to read for those unfamiliar with it, “the PyTorch version executes on the GPU or CPU, and is as easy to write, read and debug as the NumPy version,” he said.

While Chainer, MinPy, DyNet and Autograd can do this as well, PyTorch’s “autodifferentiation engine is several times faster,” according to James Bradbury, a research scientist at Salesforce.

For the next release cycle, the PyTorch team plans to add three big features: distributed PyTorch, backpropagating through the optimization process, and a lazy execution engine for autograd, which would let the team optionally introduce caching and JIT compilers to optimize autograd code.

Companies that are currently using PyTorch include Facebook, NVIDIA, Twitter, and Salesforce, as well as some universities. Developers can fork the PyTorch code on GitHub at github.com/pytorch/pytorch.

Top 5 trending GitHub projects of the week
#1. The Big List of Naughty Strings: A list of strings that have a high probability of causing issues when used as user-input data.

#2. Public APIs: A curated list of APIs from around the web.

#3. FreeCodeCamp: FreeCodeCamp literally hasn’t moved. It’s just that good.

#4. Clean Code JavaScript: Clean Code concepts adapted for JavaScript.

#5. WebSlides: Making HTML presentations easy.