Although hesitant to jump on the “deep learning bandwagon,” student Theodore DeRego decided to spend his winter break trying to learn more about his newfound guilty pleasure. The result is a framework he calls deeplearn-rs, which can be used to build trainable matrix computation graphs that are configurable at runtime.

After gaining an understanding of deep learning, DeRego sat down to experiment with real data and build real networks. He wrote that this was the first time he carried out a full deep learning experiment with a modern framework.


DeRego’s project involved both deep learning and systems programming, and he developed a deep learning framework in Rust that uses OpenCL to “do all the heavy lifting,” he wrote. His OpenCL matrix math library can be found here.

For those who don’t know about deep learning and computation graphs, DeRego broke it down. He explained that artificial neural networks are a special case of a more general structure, the computation graph, and that they can be represented compactly as graphs of matrix operations.

“Fully connected layers use matrix multiplication to multiply the weights and inputs and sum the products for each node,” he wrote. “For each layer, all the inputs are represented as one matrix and all the weights are represented as another. So convenient!”
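The layer-as-matrix-multiply idea can be sketched in plain Rust. This is an illustrative example, not deeplearn-rs code: each row of `inputs` is one sample, each column of `weights` holds one node’s weight vector, and a single matrix multiplication computes every node’s weighted sum for every sample at once.

```rust
// A naive matrix multiply standing in for a fully connected layer's
// forward pass (illustrative only; a real framework would run this on
// the GPU via OpenCL).
fn matmul(a: &[Vec<f32>], b: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let (n, k, m) = (a.len(), b.len(), b[0].len());
    let mut out = vec![vec![0.0; m]; n];
    for i in 0..n {
        for j in 0..m {
            for p in 0..k {
                // Sum of (input feature * weight) for node j, sample i.
                out[i][j] += a[i][p] * b[p][j];
            }
        }
    }
    out
}

fn main() {
    // Two samples, three features each.
    let inputs = vec![vec![1.0, 2.0, 3.0], vec![4.0, 5.0, 6.0]];
    // A layer of two nodes: one weight column per node.
    let weights = vec![vec![0.1, 0.2], vec![0.3, 0.4], vec![0.5, 0.6]];
    let outputs = matmul(&inputs, &weights);
    println!("{:?}", outputs); // 2x2 matrix: one row per sample, one column per node
}
```

Because every sample and every node is computed by the same loop structure, this is exactly the shape of work that maps well onto a GPU.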

This makes it easy to implement fast deep neural networks by running the matrix operations on highly parallel GPUs, according to DeRego.

He said that Rust has strict ownership and borrowing rules, which means “You can’t just dole out pointers to things like you can in most other languages.” Instead, he used indices into arrays.
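The index-based pattern DeRego describes looks roughly like the following sketch (hypothetical names, not the actual deeplearn-rs types): the graph owns all nodes in a `Vec`, and edges refer to other nodes by `usize` index, which sidesteps Rust’s borrow checker without lifetimes or reference counting.

```rust
// Hypothetical sketch of index-based graph storage. Nodes never hold
// references to each other; they hold indices into the graph's Vec.
struct Node {
    inputs: Vec<usize>, // indices of this node's input nodes
}

struct Graph {
    nodes: Vec<Node>,
}

impl Graph {
    fn new() -> Graph {
        Graph { nodes: Vec::new() }
    }

    // Push a node and hand back its index; callers keep the index,
    // never a reference into the Vec (which could be invalidated).
    fn add(&mut self, inputs: Vec<usize>) -> usize {
        self.nodes.push(Node { inputs });
        self.nodes.len() - 1
    }
}

fn main() {
    let mut g = Graph::new();
    let a = g.add(vec![]);
    let b = g.add(vec![]);
    let c = g.add(vec![a, b]); // c's inputs are a and b, referenced by index
    println!("node {} has {} inputs", c, g.nodes[c].inputs.len());
}
```

The trade-off is that an index must be looked up through the graph each time, but the graph remains the single owner of every node, which is what Rust’s ownership rules want.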

DeRego said the most interesting function here is Graph::add_node. Give the function an operation and a list of inputs, and “it’ll handle the rest,” he said: it creates the output variables and a new gradient for each of the inputs.

DeRego started working on this project a week ago, and he is “pretty satisfied” with how far it’s come.

Moving forward, he would like to implement some loss functions, train a fuzzy xor network, and automate the training process with multiple trainers.

“You can construct and (manually) train a deep computation graph, and it all runs on the GPU via OpenCL,” wrote DeRego.

deeplearn-rs can be found on GitHub here.