NLP-focused startup Hugging Face recently released a major update to their popular “PyTorch Transformers” library that establishes compatibility between PyTorch and TensorFlow 2.0, enabling users to move easily between the two frameworks during a model's lifecycle for training and evaluation. With the update, Hugging Face has renamed the library to simply “Transformers.”
The Transformers GitHub project is designed for everyone from weekend hobbyists to NLP professionals. It remains as easy to use as the previous version while now also being compatible with the Keras deep learning API (a minimal usage sketch follows the list below). The Transformers package contains over 30 pretrained models covering more than 100 languages, along with eight major architectures for natural language understanding (NLU) and natural language generation (NLG):
- BERT (from Google);
- GPT (from OpenAI);
- GPT-2 (from OpenAI);
- Transformer-XL (from Google/CMU);
- XLNet (from Google/CMU);
- XLM (from Facebook);
- RoBERTa (from Facebook);
- DistilBERT (from HuggingFace).
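As a rough illustration of the Keras compatibility mentioned above, the TF 2.0 model classes in the library behave like standard tf.keras models. The checkpoint name, learning rate, and the commented-out training call below are illustrative assumptions, not prescribed settings:

```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification

# The TF 2.0 classes are ordinary tf.keras models, so the usual Keras workflow applies.
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),  # learning rate chosen for illustration
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# model.fit(train_dataset, epochs=2)  # train_dataset: a tf.data.Dataset of tokenized examples (not shown)
```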
The Transformers library no longer requires PyTorch to load models, can train state-of-the-art (SOTA) models in only three lines of code, and can preprocess a dataset in fewer than 10 lines of code. Sharing trained models also lowers computation costs and carbon emissions.
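A minimal sketch of that "few lines of code" workflow, assuming the bert-base-uncased checkpoint and an arbitrary example sentence:

```python
import torch
from transformers import BertTokenizer, BertModel

# Load a pretrained tokenizer and model (weights are downloaded on first use).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# Preprocess a sentence and run it through the model.
input_ids = torch.tensor([tokenizer.encode("Transformers is now framework-agnostic.", add_special_tokens=True)])
with torch.no_grad():
    last_hidden_states = model(input_ids)[0]  # shape: (batch, sequence length, hidden size)
```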
The standout feature of this update is the interoperability between PyTorch and TensorFlow 2.0. TensorFlow is designed to be production ready, while PyTorch is easier to learn and use for building prototypes. In the previous PyTorch Transformers library the two frameworks were incompatible, so there was no way to move a prototype built in PyTorch to a production pipeline built on TensorFlow. Now, users can select the appropriate framework for each phase of a language model's life.
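A minimal sketch of that workflow, assuming a BERT checkpoint and local save directories chosen purely for illustration:

```python
from transformers import BertModel, TFBertModel

# Prototype and fine-tune with the PyTorch class...
pt_model = BertModel.from_pretrained("bert-base-uncased")
pt_model.save_pretrained("./my-bert")  # writes the model config and PyTorch weights to disk

# ...then reload the same weights into the TF 2.0 class for a TensorFlow serving pipeline.
tf_model = TFBertModel.from_pretrained("./my-bert", from_pt=True)
tf_model.save_pretrained("./my-bert-tf")  # now stored as native TensorFlow weights
```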
The Transformers library has received more than 14k stars on GitHub and garnered considerable attention on Reddit’s machine learning community.
Founded in 2016, Hugging Face is based in New York and completed a US$4 million seed round in May 2018. Their latest paper, “DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter,” is on arXiv and has been accepted by NeurIPS 2019.
The major Transformers changes are described here. Detailed installation instructions are available on GitHub.
Author: Reina Qi Wan | Editor: Michael Sarazen; Tony Peng