The team behind the machine learning framework TensorFlow recently published a blog post laying out its vision for the future of the project.
According to the TensorFlow team, the ultimate goal is to provide users with the best possible machine learning platform and to transform machine learning from a niche craft into a mature industry.
To accomplish this, the team said it will listen to user needs, anticipate new industry trends, iterate on its APIs, and make it easier for customers to innovate at scale.
To facilitate this growth, TensorFlow intends to focus on four pillars: fast and scalable performance, applied ML, readiness to deploy, and simplicity.
The team stated that it will focus on XLA compilation, with the goal of making model training and inference workflows faster on GPUs and CPUs. Additionally, it will invest in DTensor, a new API for large-scale model parallelism.
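In current TensorFlow releases, XLA compilation can already be opted into on a per-function basis via the `jit_compile` flag on `tf.function`. The sketch below is a minimal illustration of that mechanism; the function name and values are invented for the example.

```python
import tensorflow as tf

# Opting a function into XLA compilation with jit_compile=True.
# XLA can fuse the multiply and add into a single compiled kernel,
# which is the kind of training/inference speedup the team is targeting.
@tf.function(jit_compile=True)
def scale_and_shift(x, w, b):
    return x * w + b

x = tf.constant([1.0, 2.0, 3.0])
result = scale_and_shift(x, tf.constant(2.0), tf.constant(1.0))
print(result.numpy())  # [3. 5. 7.]
```

The same decorated function runs on CPU, GPU, or TPU; XLA generates a device-specific compiled program the first time it is traced.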
The new API allows users to develop models as if they were training on a single device, even when utilizing several different clients.
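A rough sketch of that single-device programming model, using the experimental DTensor API as it exists today: the example splits one physical CPU into two logical devices so it can run anywhere, builds a one-dimensional mesh over them, and creates a tensor whose first dimension is sharded across the mesh. The mesh and dimension names are illustrative.

```python
import tensorflow as tf
from tensorflow.experimental import dtensor

# Split the single physical CPU into two logical devices so the
# example runs anywhere; on real hardware these would be GPUs or TPUs.
phys = tf.config.list_physical_devices("CPU")
tf.config.set_logical_device_configuration(
    phys[0], [tf.config.LogicalDeviceConfiguration()] * 2)

# A 1-D mesh named "batch" spanning both logical CPUs.
mesh = dtensor.create_mesh([("batch", 2)], devices=["CPU:0", "CPU:1"])

# Shard the first tensor dimension across the mesh; the code still
# reads as if everything lived on one device.
layout = dtensor.Layout(["batch", dtensor.UNSHARDED], mesh)
sharded = dtensor.call_with_layout(tf.ones, layout, shape=[4, 2])
print(sharded.shape)  # global shape (4, 2), despite being split across devices
```

The key point is that user code manipulates the global tensor shape; DTensor handles where each shard physically lives.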
The team also intends to invest in algorithmic performance optimization techniques such as mixed-precision and reduced-precision computation in order to accelerate workloads on GPUs and TPUs.
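Mixed precision is already exposed in Keras through a global dtype policy. A minimal sketch: under the `mixed_float16` policy, layers compute in float16 for speed while keeping their variables in float32 for numerical stability.

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

# Enable mixed precision globally: float16 compute, float32 variables.
mixed_precision.set_global_policy("mixed_float16")

layer = tf.keras.layers.Dense(4)
out = layer(tf.ones((1, 8)))

print(layer.compute_dtype)   # float16
print(layer.variable_dtype)  # float32
print(out.dtype)             # float16
```

On GPUs with Tensor Cores and on TPUs, this reduced-precision math is where most of the speedup comes from; in a full training loop, a loss-scaled optimizer is typically paired with the policy to avoid float16 underflow.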
According to the TensorFlow team, new tools for computer vision (CV) and natural language processing (NLP) are also part of its roadmap. These tools will come as a result of heightened support for the KerasCV and KerasNLP packages, which offer modular and composable components for applied CV and NLP use cases.
Next, TensorFlow stated that it will be adding more developer resources such as code examples, guides, and documentation for popular and emerging applied ML use cases in order to lower the barrier to entry for machine learning.
TensorFlow also stated that it will become easier to deploy models developed in JAX with TensorFlow Serving, and to deploy to mobile devices and the web with TensorFlow Lite and TensorFlow.js.
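The on-device half of that pipeline already exists today: TensorFlow Lite converts a TensorFlow function or model into a compact flatbuffer for mobile inference. A minimal sketch, using an invented `double` function as a stand-in for a trained model:

```python
import tensorflow as tf

# A trivial tf.function standing in for a trained model.
@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def double(x):
    return x * 2.0

# Convert the concrete function to a TensorFlow Lite flatbuffer,
# the format consumed by the TFLite runtime on mobile devices.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()])
tflite_bytes = converter.convert()

print(type(tflite_bytes))  # serialized flatbuffer as bytes
```

The resulting bytes are what an Android or iOS app loads into a `tf.lite.Interpreter`; the roadmap's promise is to make the same hand-off work smoothly for models authored in JAX.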
Lastly, the team is working to consolidate and simplify APIs, and to minimize the time-to-solution for developing any applied ML system by focusing more on debugging capabilities.