Tag: in-context learning

AI Machine Learning & Data Science Research

Meta AI’s Novel Setup Reveals The Structure and Evolution of Transformers

In the new paper Birth of a Transformer: A Memory Viewpoint, a Meta AI research team introduces a novel synthetic setup for exploring the structure and evolution of transformer language models, aiming to provide insights into global versus in-context learning in LLMs.

AI Machine Learning & Data Science Natural Language Tech Research

Microsoft’s Structured Prompting Breaks In-Context Learning Length Limits, Scales to Thousands of Examples

In the new paper Structured Prompting: Scaling In-Context Learning to 1,000 Examples, a Microsoft Research team proposes structured prompting. The novel approach breaks through conventional in-context learning length limits, scaling to thousands of examples with reduced computational complexity and improved performance and stability.
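The core idea is to split the demonstration examples into groups that are each encoded independently (so no single sequence exceeds the length limit), with the test query then attending to all encoded groups at once. A minimal sketch of the grouping step, with an illustrative example format and group size not taken from the paper:

```python
# Sketch of the grouping idea behind structured prompting.
# The example format ("Input:/Output:") and group size are
# illustrative assumptions, not the paper's exact setup.

def build_grouped_prompts(examples, test_input, group_size=3):
    """Split demonstration examples into groups (each would be encoded
    independently by the LM), and build the test query that attends to
    all groups jointly."""
    groups = [examples[i:i + group_size]
              for i in range(0, len(examples), group_size)]
    group_prompts = [
        "\n".join(f"Input: {x}\nOutput: {y}" for x, y in group)
        for group in groups
    ]
    query = f"Input: {test_input}\nOutput:"
    return group_prompts, query

prompts, query = build_grouped_prompts(
    [("great film", "positive"), ("dull plot", "negative"),
     ("loved it", "positive"), ("waste of time", "negative")],
    "a charming story",
    group_size=2,
)
```

Because each group is encoded separately, the cost of attending over the demonstrations grows roughly linearly in the number of groups rather than quadratically in the total prompt length.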

AI Machine Learning & Data Science Natural Language Tech Research

Introducing MetaICL: A Language Model Meta-Training Framework for Few-Shot In-Context Learning

A research team from the University of Washington, Facebook AI Research and the Allen Institute for AI introduces Meta-training for In-Context Learning (MetaICL), a new meta-training framework for few-shot learning in which an LM is meta-trained to learn in-context, conditioning on training examples to recover the task and make predictions.
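The conditioning step above amounts to concatenating k training examples with the test input into a single sequence, from which the LM predicts the test output. A minimal sketch of that input format (the field labels and separator are illustrative assumptions, not MetaICL's exact tokenization):

```python
# Sketch of the in-context input format used in meta-training setups
# like MetaICL: k (input, output) training pairs are concatenated with
# the test input into one sequence, and the LM predicts what follows.
# Field labels and the separator are illustrative assumptions.

def format_icl_sequence(train_examples, test_input, sep="\n"):
    """Concatenate k demonstration pairs and the test input into the
    single sequence the LM conditions on."""
    parts = [f"{x} {y}" for x, y in train_examples]
    parts.append(test_input)
    return sep.join(parts)

seq = format_icl_sequence(
    [("Review: great acting.", "Sentiment: positive"),
     ("Review: boring script.", "Sentiment: negative")],
    "Review: wonderful soundtrack.",
)
```

During meta-training, sequences like this are sampled across many tasks so that, at test time, the model can recover an unseen task purely from the in-context examples.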