Microsoft’s New MT-DNN Outperforms Google BERT

Multi-task learning and language model pre-training are popular approaches for many of today’s natural language understanding (NLU) tasks. Now, Microsoft researchers have released technical details of an AI system that combines both approaches. The new Multi-Task Deep Neural Network (MT-DNN) is a natural language processing (NLP) model that outperforms Google BERT in nine of eleven benchmark NLP tasks.

In their paper Multi-Task Deep Neural Networks for Natural Language Understanding, the Microsoft Research and Microsoft Dynamics 365 authors show that MT-DNN learns representations across multiple NLU tasks. The model “not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations to help adapt to new tasks and domains.”

MT-DNN builds on a model Microsoft proposed in 2015 and integrates the network architecture of BERT, a pre-trained bidirectional transformer language model proposed by Google last year.

[Figure: MT-DNN architecture, with text encoding layers shared across all tasks and task-specific top layers]

As shown in the figure above, the network’s lower layers (i.e., the text encoding layers) are shared across all tasks, while the top layers are task-specific, each handling a different type of NLU task. Like BERT, MT-DNN is trained in two phases: pre-training and fine-tuning. Unlike BERT, however, MT-DNN adds multi-task learning (MTL) in the fine-tuning phase, using multiple task-specific output layers in its model architecture.
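To make the shared-layers / task-specific-layers idea concrete, here is a minimal sketch in PyTorch. This is not Microsoft’s released implementation: the small Transformer encoder below stands in for the BERT encoding layers, and the task names, label counts, layer sizes, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Lower layers shared across all tasks (standing in for the BERT encoder)."""
    def __init__(self, vocab_size=30000, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))   # (batch, seq, d_model)
        return h[:, 0]                            # first-token vector as the sentence representation

class MultiTaskModel(nn.Module):
    """Shared encoder plus one task-specific output layer per NLU task."""
    def __init__(self, task_num_labels):
        super().__init__()
        self.encoder = SharedEncoder()
        self.heads = nn.ModuleDict(
            {task: nn.Linear(256, n) for task, n in task_num_labels.items()}
        )

    def forward(self, token_ids, task):
        return self.heads[task](self.encoder(token_ids))

# Multi-task fine-tuning: each step picks a task, takes a mini-batch from it,
# and updates both the shared encoder and that task's head.
tasks = {"mnli": 3, "qqp": 2, "sst2": 2}          # assumed subset of GLUE tasks
model = MultiTaskModel(tasks)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
loss_fn = nn.CrossEntropyLoss()

def training_step(task, token_ids, labels):
    logits = model(token_ids, task)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: one update on a random mini-batch for the (assumed) "mnli" task.
dummy_ids = torch.randint(0, 30000, (8, 32))      # batch of 8, sequence length 32
dummy_labels = torch.randint(0, 3, (8,))
print(training_step("mnli", dummy_ids, dummy_labels))
```

The key design point is that gradients from every task flow through the same encoder, which is the source of the cross-task data sharing and regularization effect described above.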

MT-DNN achieved new state-of-the-art results on ten NLU tasks, including SNLI, SciTail, and eight of the nine GLUE tasks, lifting the GLUE benchmark score to 82.2% (a 1.8% absolute improvement). The researchers also demonstrate, using the SNLI and SciTail datasets, that the representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than the pre-trained BERT representations.
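As a rough illustration of that domain-adaptation setting, continuing the sketch above, one might reuse the multi-task-trained encoder and fit only a small new classification head on a handful of in-domain labels. The label count and training details here are assumptions, not the paper’s exact protocol.

```python
# Continues the sketch above: `model` and `loss_fn` are reused from it.
# Hypothetical few-label adaptation to a new domain (e.g., SciTail-style entailment):
# train a fresh head on top of the already-trained shared encoder.
new_head = nn.Linear(256, 2)                      # assumed 2-way entailment labels
adapt_opt = torch.optim.Adam(new_head.parameters(), lr=1e-4)

def adapt_step(token_ids, labels):
    with torch.no_grad():                         # keep the shared encoder fixed in this sketch
        sentence_repr = model.encoder(token_ids)  # reuse the multi-task representations
    loss = loss_fn(new_head(sentence_repr), labels)
    adapt_opt.zero_grad()
    loss.backward()
    adapt_opt.step()
    return loss.item()
```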

For more details, please find the paper on arXiv. Microsoft will release the code and pre-trained models.

The GLUE Benchmark leaderboard is here.


Author: Jessie Geng | Editor: Michael Sarazen
