
A Trio of Machine Learning & Language Technology Breakthroughs You Shouldn’t Miss

Synced Global AI Weekly August 18th

Subscribe to Synced Global AI Weekly


Nvidia Trains World’s Largest Transformer-Based Language Model
Nvidia announced that it has trained the world's largest language model, the latest in a series of updates the GPU maker has aimed at advancing conversational AI. The model uses 8.3 billion parameters and is 24 times larger than BERT and 5 times larger than OpenAI's GPT-2.
(Venture Beat)


New Advances in Natural Language Processing to Better Connect People
Facebook AI has achieved impressive breakthroughs in NLP using semi-supervised and self-supervised learning techniques, which leverage unlabeled data to improve performance beyond purely supervised systems. Their new self-supervised pretraining approach, RoBERTa, has surpassed all existing NLU systems on several language comprehension tasks.
(Facebook AI)


Facebook, Georgia Tech & OSU ‘ViLBERT’ Achieves SOTA on Vision-and-Language Tasks
Researchers from the Georgia Institute of Technology, Facebook AI Research and Oregon State University have proposed ViLBERT (Vision-and-Language BERT), a novel model for visual grounding that can learn joint representations of image content and natural language, and leverage the connections across various vision-and-language tasks.
(Synced)

Technology

Joint Speech Recognition and Speaker Diarization via Sequence Transduction
Motivated by recent advances in sequence-to-sequence learning, researchers propose a novel approach that tackles the two tasks with a joint ASR and SD system based on a recurrent neural network transducer. The approach uses both linguistic and acoustic cues to infer speaker roles, whereas typical SD systems rely on acoustic cues alone.
(Google)
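One way to realize a joint system like this is to have the transducer emit words interleaved with speaker-role tags, so a single output sequence carries both the transcript and the diarization. The sketch below is illustrative only: the tag format and the post-processing helper are my own, not taken from the paper.

```python
# Illustrative post-processing of a joint ASR+SD output stream:
# the model emits words interleaved with speaker-role tags
# (tag names here are hypothetical), and we split the stream
# back into per-speaker transcripts.

def split_by_speaker(tokens, tag_prefix="<spk:"):
    transcripts = {}   # speaker role -> list of words
    current = None     # role of the speaker currently "holding the floor"
    for tok in tokens:
        if tok.startswith(tag_prefix):
            # A speaker-role tag switches the active speaker.
            current = tok[len(tag_prefix):].rstrip(">")
            transcripts.setdefault(current, [])
        elif current is not None:
            transcripts[current].append(tok)
    return {spk: " ".join(words) for spk, words in transcripts.items()}
```

For example, a tagged stream such as `["<spk:dr>", "how", "are", "you", "<spk:pt>", "fine"]` splits into one transcript per speaker role.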


On The Evaluation of Machine Translation Systems Trained With Back-Translation
In this work, researchers show that back-translation improves the translation quality of both naturally occurring text and translationese, according to professional human translators. They provide empirical evidence supporting the view that back-translation is preferred by humans because it produces more fluent outputs.
(Facebook AI Research)
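As a minimal sketch of the back-translation recipe: monolingual target-side text is translated back into the source language with a target-to-source model, and the resulting synthetic pairs are appended to the genuine parallel data. The `backward_translate` function below is a toy stand-in, not a real NMT model; only the pipeline shape is meant to carry over.

```python
# Toy sketch of back-translation data augmentation.

def backward_translate(target_sentence):
    # Placeholder "model": a real system would use a trained
    # target->source NMT model here.
    return " ".join(reversed(target_sentence.split()))

def augment_with_back_translation(parallel_pairs, monolingual_target):
    # Pair each monolingual target sentence with a synthetic source,
    # then append the synthetic pairs to the genuine parallel data.
    synthetic = [(backward_translate(t), t) for t in monolingual_target]
    return parallel_pairs + synthetic
```

The forward translation model is then retrained on the augmented corpus, with the synthetic source side serving only as input, so target-side fluency is learned from genuine text.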


“Superstition” in the Network: Deep Reinforcement Learning Plays Deceptive Games
Deep reinforcement learning has learned to play many games well but has failed on others. To better characterize the failure modes of deep reinforcement learners, researchers test the widely used Advantage Actor-Critic (A2C) algorithm on four deceptive games, which are specially designed to challenge game-playing agents.
(New York University & University of Strathclyde & Maastricht University & University of Hertfordshire)

You May Also Like

Does Deep Learning Still Need Backpropagation?
Researchers from the Victoria University of Wellington School of Engineering and Computer Science have introduced the HSIC (Hilbert-Schmidt independence criterion) bottleneck as an alternative to backpropagation for finding good representations.
(Synced)
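For readers unfamiliar with the criterion itself, here is a minimal NumPy sketch of the biased empirical HSIC estimator with Gaussian kernels. The kernel bandwidth and normalization below are illustrative defaults, not the paper's exact setup.

```python
import numpy as np

def gram_gaussian(x, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    d2 = np.sum(x**2, axis=1, keepdims=True) - 2 * x @ x.T + np.sum(x**2, axis=1)
    return np.exp(-d2 / (2 * sigma**2))

def hsic(x, y, sigma=1.0):
    # Biased empirical HSIC estimator: tr(K H L H) / (n - 1)^2,
    # where H is the centering matrix. Larger values indicate
    # stronger statistical dependence between x and y.
    n = x.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K = gram_gaussian(x, sigma)
    L = gram_gaussian(y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

A dependent pair of samples should score higher than an independent pair, which is the property the HSIC-bottleneck training objective exploits when scoring hidden representations against inputs and labels.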



Global AI Events

August 19–23: Knowledge Discovery and Data Mining (KDD2019) in London, United Kingdom

September 10–12: The AI Summit (Part of TechXLR8) in Singapore

September 24–28: Microsoft Ignite in Orlando, United States

October 27–November 3: International Conference on Computer Vision (ICCV) in Seoul, South Korea

Global AI Opportunities

Stanford HAI is Recruiting

Research Scientist, Google Brain Toronto

OpenAI Seeking Software Engineers and Deep Learning Researchers

DeepMind is Recruiting

DeepMind Scholarship: Access to Science

Postdoctoral Researcher (AI) – Self-Supervised Learning

LANDING AI is recruiting

NVIDIA Graduate Fellowships


Stay tight with AI!
Subscribe to Synced Global AI Weekly
