GPT Understands, Too! Tsinghua & MIT’s P-Tuning Boosts Performance on NLU Benchmarks
Tsinghua & MIT researchers challenge the stereotype that GPT models can generate but not understand language. Using a novel P-tuning method, they show that GPTs can compete with BERT-style models on natural language understanding tasks; the method also improves BERT performance in both few-shot and fully supervised settings.