Coinciding with this week’s release of PyTorch 1.5, a major update to the framework, AWS and Facebook have jointly released TorchServe, an open-source model server for PyTorch.

According to AWS, developers like the flexibility PyTorch provides for building and training models, but deploying and managing them in production is the most challenging part for many. Using a model server is one way to simplify that process. Model servers can be used to easily load models, run preprocessing or postprocessing code, and provide production-critical features, such as logging, monitoring, and security. 

TorchServe aims to make it possible to deploy PyTorch models to production without having to write custom serving code.

It provides a low-latency prediction API, includes default handlers for common applications such as object detection and text classification, and offers multi-model serving, model versioning for A/B testing, metrics for monitoring, and RESTful endpoints for application integration.
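For example, once a TorchServe instance is running, an application can get predictions by posting data to the inference endpoint over HTTP. Below is a minimal sketch in Python, assuming a local server on TorchServe's default inference port (8080), a hypothetical already-registered model named "densenet161", and a hypothetical input file "kitten.jpg":

```python
import requests

# The inference API listens on port 8080 by default. The model name
# ("densenet161") and input image ("kitten.jpg") are illustrative only.
with open("kitten.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:8080/predictions/densenet161",
        data=f,
    )

# For an image-classification model served with a default handler,
# the response body is JSON containing the predicted classes.
print(response.json())
```

Because the endpoint is plain REST, the same call works from any language or tool that can issue HTTP requests, with no PyTorch dependency on the client side.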

In addition, TorchServe can be used in any machine learning environment, including Amazon SageMaker, container services, and Amazon Elastic Compute Cloud (EC2).

TorchServe is available on GitHub at https://github.com/pytorch/serve.