
NYU & UNC Reveal How Transformers’ Learned Representations Change After Fine-Tuning


Fine-tuning pretrained language encoders like BERT has become a proven way to enable transformer-based language models to transfer effectively to downstream natural language understanding (NLU) tasks. AI researchers, however, have limited knowledge of how this fine-tuning actually changes the underlying neural networks.

In the new paper Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers, a research team from New York University and the University of North Carolina at Chapel Hill uses centered kernel alignment (CKA) to measure the similarity of representations across network layers and explore how fine-tuning changes transformers’ learned representations.

CKA is well suited to comparing learned representations because it is invariant to both orthogonal transformation and isotropic scaling of the compared representations. Using CKA, the researchers were able to compare the similarity of representations across layers of the same model and even across different models.
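To make the metric concrete, here is a minimal NumPy sketch of linear CKA (the dot-product-kernel variant); the function name and the random test data are illustrative, not from the paper:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X: (n_examples, d1), Y: (n_examples, d2); rows are paired examples.
    """
    # Center each feature column so the score ignores mean shifts.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
# Orthogonal transformation + isotropic scaling leave the score unchanged:
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
print(linear_cka(X, X))            # ≈ 1.0 (identical representations)
print(linear_cka(X, 3.0 * X @ Q))  # ≈ 1.0 (rotated and scaled copy)
```

The invariances in the last two lines are exactly why CKA can meaningfully compare layers whose feature bases differ arbitrarily.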

The team considered three commonly used language-encoding models: RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2020) and ELECTRA (Clark et al., 2020); and conducted experiments on tasks including the GLUE benchmark, BoolQ, Yelp Review Polarity classification, and the HellaSwag and CosmosQA multiple-choice tasks.

The CKA similarity scores were presented in four comparison formats, illustrated with ALBERT fine-tuned on RTE (Dagan et al., 2005). The ORIG–ORIG format shows the similarity of representations across the layers of the untuned ALBERT model on RTE inputs, while FT–ORIG plots the layers of the task-tuned model on the Y-axis against those of the untuned model on the X-axis. The FT–FT format compares layers within a single fine-tuned model, and the FT[1]–FT[2] format compares fine-tuned ALBERT models across two random restarts.
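As a rough sketch of how such cross-layer comparison grids can be computed, the following assumes per-layer activation matrices extracted on a shared batch of inputs; the random arrays stand in for real hidden states, and all names are illustrative:

```python
import numpy as np

def linear_cka(X, Y):
    # Linear CKA between two (n_examples, dim) activation matrices.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    return num / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

def cka_matrix(acts_a, acts_b):
    """Pairwise CKA between the layer activations of two models.

    acts_a, acts_b: lists of (n_examples, hidden_dim) arrays, one per
    layer, computed on the same inputs. Returns a (len_a, len_b) grid.
    """
    return np.array([[linear_cka(a, b) for b in acts_b] for a in acts_a])

# Stand-in activations; in practice these would be per-layer hidden
# states from the fine-tuned (FT) and untuned (ORIG) encoders.
rng = np.random.default_rng(0)
ft_acts = [rng.normal(size=(64, 32)) for _ in range(6)]
orig_acts = [rng.normal(size=(64, 32)) for _ in range(6)]

grid = cka_matrix(ft_acts, orig_acts)  # FT layers on rows, ORIG on columns
print(grid.shape)  # (6, 6)
```

Plotting such a grid as a heatmap is what reveals the block-diagonal clustering the paper reports; an FT–FT comparison would simply pass the same activation list twice.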

The results of the team’s experiments show that a block-diagonal structure of representation similarity appears in almost every RoBERTa and ALBERT model. The strong similarity among the later layers suggests that many of these layers may not contribute significantly to the task at hand. The team further observed that the later layers of task-tuned RoBERTa and ALBERT models can often be discarded without sacrificing task performance, supporting the view that these layers carry largely redundant representations.

In the ELECTRA models, however, the representations of the later layers were generally highly dissimilar. The researchers suggest this may stem from ELECTRA’s very different pretraining task (replaced-token detection rather than masked language modelling), but note that a full understanding of the differences will require further investigation.

Overall, the study provides novel insights into how transformers’ learned representations change through fine-tuning, revealing a pattern of representation similarity in task-tuned RoBERTa and ALBERT models where early-layer and later-layer representations form two distinct clusters, with high intra-cluster and low inter-cluster similarity.

The paper Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers is on arXiv.


Author: Hecate He | Editor: Michael Sarazen


