As Its GPT-3 Model Wows the World, OpenAI CEO Suggests ‘the Hype Is Way Too Much’
OpenAI’s 175 billion parameter language model GPT-3 has gone viral once again.
AI Technology & Industry Review
Researchers from ByteDance AI Lab and Shanghai Jiao Tong University have introduced Xiaomingbot, a multilingual and multimodal AI news reporter.
Synced invited Graham Neubig, an Assistant Professor at Carnegie Mellon University, to share his thoughts on Google’s proposed universal neural machine translation (NMT) system, which is trained on over 25 billion examples and can handle 103 languages.
Hugging Face, a startup specializing in natural language processing, today released a landmark update to its popular Transformers library, offering unprecedented compatibility between two major deep learning frameworks, PyTorch and TensorFlow 2.0.
Researchers from Intel AI and the University of California, Santa Barbara have introduced a new generative hate speech intervention model, along with two large-scale, fully labeled hate speech datasets collected from Reddit and Gab.
Researchers from Element AI, MILA (Montréal Institute for Learning Algorithms), and Université de Montréal have introduced a powerful transformer language model that can summarize long scientific articles effectively, outperforming traditional seq2seq approaches.
Israeli research company AI21 Labs today published the paper SenseBERT: Driving Some Sense into BERT, which proposes a new model that significantly improves lexical disambiguation abilities and has obtained state-of-the-art results on the complex Word in Context (WiC) language task.
Since Google Research introduced BERT (Bidirectional Encoder Representations from Transformers) in 2018, the model has gained unprecedented popularity among researchers. Now, a group of researchers from National Cheng Kung University in Tainan, Taiwan are challenging BERT’s efficacy.