Tag: Natural Language Understanding

AI Machine Learning & Data Science Research

Microsoft’s DeBERTaV3 Uses ELECTRA-Style Pretraining With Gradient-Disentangled Embedding Sharing to Boost DeBERTa Performance on NLU Tasks

Microsoft releases DeBERTaV3, which improves the original DeBERTa model with ELECTRA-style pretraining and gradient-disentangled embedding sharing, achieving better pretraining efficiency and a significant performance jump on NLU tasks.
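The key idea behind gradient-disentangled embedding sharing is that the generator and discriminator share one token-embedding table, but the discriminator's replaced-token-detection loss is blocked from updating it: the discriminator sees a stop-gradient copy of the shared embeddings plus its own small residual table. A minimal PyTorch sketch of that mechanism follows; the class and method names are illustrative, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class GDESEmbedding(nn.Module):
    """Illustrative gradient-disentangled embedding sharing (GDES) sketch."""

    def __init__(self, vocab_size: int, hidden_dim: int):
        super().__init__()
        # Shared table: updated only by the generator's MLM loss.
        self.shared = nn.Embedding(vocab_size, hidden_dim)
        # Discriminator-specific residual, initialized to zero so the
        # discriminator starts from the shared embeddings.
        self.delta = nn.Embedding(vocab_size, hidden_dim)
        nn.init.zeros_(self.delta.weight)

    def forward_generator(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Generator path: gradients flow into the shared table as usual.
        return self.shared(token_ids)

    def forward_discriminator(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Stop-gradient on the shared table: the RTD loss can only
        # update the residual, so it cannot distort the shared embeddings.
        return self.shared(token_ids).detach() + self.delta(token_ids)
```

Backpropagating a loss through `forward_discriminator` leaves `shared.weight.grad` untouched while populating `delta.weight.grad`, which is the disentanglement the method name refers to.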

AI Machine Learning & Data Science Natural Language Tech Research

NYU & UNC Reveal How Transformers’ Learned Representations Change After Fine-Tuning

In the paper Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers, a research team from New York University and the University of North Carolina at Chapel Hill uses centered kernel alignment (CKA) to measure the similarity of representations across layers and explore how fine-tuning changes transformers’ learned representations.
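Linear CKA, the similarity measure the paper relies on, compares two activation matrices (rows are examples, columns are features) in a way that is invariant to orthogonal transformations and isotropic scaling. A minimal NumPy sketch, with the function name chosen here for illustration:

```python
import numpy as np


def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear centered kernel alignment between two
    (n_examples, n_features) activation matrices."""
    # Center each feature column.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Cross- and self-similarity of the centered representations.
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)
```

The score lies in [0, 1], with 1 meaning identical representations up to rotation and scaling; computing it for every pair of layers yields the layer-by-layer similarity maps the study analyzes.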

AI Machine Learning & Data Science Research

Baidu’s Knowledge-Enhanced ERNIE 3.0 Pretraining Framework Delivers SOTA NLP Results, Surpasses Human Performance on the SuperGLUE Benchmark

A research team from Baidu proposes ERNIE 3.0, a unified framework for pretraining large-scale, knowledge-enhanced models that can easily be tailored to both natural language understanding and generation tasks via zero-shot learning, few-shot learning, or fine-tuning, and that achieves state-of-the-art results across a range of NLP tasks.