The PyTorch Team yesterday announced the release of PyTorch 1.5, along with new and updated libraries. The release features several major new API additions and improvements, including a significant update to the C++ frontend, a Channels Last memory format for computer vision models, and a stable release of the distributed RPC framework used for model-parallel training.
The C++ frontend API is now at parity with the Python API and is fully documented, and features previously tagged as experimental have been promoted to stable. C++ optimizers now also behave identically to those in the Python API.
The team has also released an experimental Channels Last memory format for computer vision models, which unlocks high-performance convolution algorithms on hardware such as NVIDIA's Tensor Cores and in libraries such as FBGEMM. The Channels Last memory format is designed to propagate automatically through operators, allowing easy switching between memory layouts.
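As a minimal sketch of how this works in practice (assuming a PyTorch 1.5+ build), converting a standard NCHW tensor to Channels Last changes only its strides, not its logical shape, and the layout then propagates through operators such as `Conv2d`:

```python
import torch

# A standard NCHW tensor: batch=2, channels=3, height=4, width=4
x = torch.randn(2, 3, 4, 4)

# Convert to Channels Last; the logical shape stays NCHW,
# but the data is stored in NHWC order (note the strides).
x_cl = x.to(memory_format=torch.channels_last)
print(x_cl.shape)    # torch.Size([2, 3, 4, 4])
print(x_cl.stride()) # (48, 1, 12, 3) -- channel stride is now 1

# The layout propagates through operators: a conv whose weights
# are also in Channels Last produces Channels Last output.
conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
conv = conv.to(memory_format=torch.channels_last)
y = conv(x_cl)
print(y.is_contiguous(memory_format=torch.channels_last))  # True
```

Because the conversion is a strides change rather than an API change, existing model code can opt in by converting inputs and module weights once.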
The release also adds new autograd APIs that facilitate the computation of Hessians and Jacobians, as well as an API for binding custom C++ classes into TorchScript and Python simultaneously, with a syntax almost identical to that of pybind11.
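The new autograd helpers live under `torch.autograd.functional`. A small example (assuming PyTorch 1.5+): for the scalar function f(x) = Σ xᵢ², the Jacobian is the gradient 2x and the Hessian is 2I, both computable in one call.

```python
import torch
from torch.autograd.functional import jacobian, hessian

def f(x):
    # Simple scalar-valued function: sum of squares
    return (x ** 2).sum()

x = torch.tensor([1.0, 2.0, 3.0])

J = jacobian(f, x)  # gradient of f at x: 2*x = [2., 4., 6.]
H = hessian(f, x)   # constant Hessian: 2 * identity matrix

print(J)  # tensor([2., 4., 6.])
print(H)  # 2x the 3x3 identity
```

Previously this required composing `torch.autograd.grad` calls by hand; the functional API wraps that double-backward plumbing.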
The Distributed RPC framework, launched as experimental in the 1.4 release, has been upgraded to stable following various enhancements and bug fixes. Along with improved reliability and robustness, the framework also gains new features, including profiling support, the ability to use TorchScript functions in RPC, and ease-of-use enhancements.
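The basic RPC workflow is to initialize a named worker and then issue synchronous or asynchronous remote calls. A minimal single-process sketch (a real deployment would span multiple processes or machines; the worker name, address, and port below are illustrative assumptions):

```python
import os
import torch
import torch.distributed.rpc as rpc

# Rendezvous settings for the single worker (illustrative values)
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")

# Start the RPC agent for a one-worker "cluster"
rpc.init_rpc("worker0", rank=0, world_size=1)

# rpc_sync blocks until the remote call returns its result;
# here the worker simply calls torch.add on itself.
result = rpc.rpc_sync("worker0", torch.add,
                      args=(torch.ones(2), torch.ones(2)))
print(result)  # tensor([2., 2.])

rpc.shutdown()
```

With more workers, the same `rpc_sync`/`rpc_async` calls let one process invoke functions on another, which is the building block for model-parallel training.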
torch_xla 1.5 is now available and tested with the PyTorch 1.5 release, providing a mature Cloud TPU experience.
Note that PyTorch 1.5 and future versions will no longer support Python 2 — specifically version 2.7. Going forward, support will be limited to Python 3, specifically Python 3.5, 3.6, 3.7 and 3.8.
The detailed release notes are on GitHub.
Journalist: Yuan Yuan | Editor: Michael Sarazen