While BERT-based models have been widely applied to text-ranking tasks, the potential of larger and more powerful sequence-to-sequence pretrained language models such as T5 (Text-To-Text Transfer Transformer) remains under-explored in this challenging area, as do approaches for extending and fine-tuning T5 with ranking losses.
In the new paper RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses, a Google Research team presents RankT5, which fine-tunes pretrained T5 models with various ranking losses to directly optimize ranking performance. Because RankT5 outputs real numbers rather than text tokens, it supports text ranking more natively.


The team implements two variants of the proposed RankT5: an encoder-decoder (EncDec) structure that reads a real-valued relevance score from the first output step of the decoder; and, because autoregressive decoding is unnecessary when producing a single number, a leaner encoder-only (Enc) structure that computes the score directly from the encoder output. Both structures enable the team to fine-tune T5 with various ranking losses to directly optimize ranking performance, as sketched below.
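To make the two variants concrete, here is a minimal PyTorch sketch, not the authors' code, of how each can turn a query-document pair into a single real-valued score. The input string format, the choice of scoring token (SCORE_TOKEN_ID), and the dense projection head are illustrative assumptions.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration, T5EncoderModel

tokenizer = T5Tokenizer.from_pretrained("t5-base")

# Assumed input format: query and document concatenated into one sequence.
text = "Query: what is t5 Document: T5 is a text-to-text transformer."
inputs = tokenizer(text, return_tensors="pt")

# --- Encoder-decoder (EncDec) variant ---
# Run a single decoder step and read the unnormalized logit of one
# designated vocabulary token as the relevance score.
encdec = T5ForConditionalGeneration.from_pretrained("t5-base")
decoder_start = torch.tensor([[encdec.config.decoder_start_token_id]])
out = encdec(input_ids=inputs.input_ids,
             attention_mask=inputs.attention_mask,
             decoder_input_ids=decoder_start)
SCORE_TOKEN_ID = 32000  # hypothetical: an otherwise-unused sentinel token
score_encdec = out.logits[0, 0, SCORE_TOKEN_ID]  # a single real number

# --- Encoder-only (Enc) variant ---
# Skip the decoder entirely: take an encoder output vector and project
# it to a scalar with a dense layer (the pooling/projection is assumed).
encoder = T5EncoderModel.from_pretrained("t5-base")
hidden = encoder(input_ids=inputs.input_ids,
                 attention_mask=inputs.attention_mask).last_hidden_state
dense = torch.nn.Linear(encoder.config.d_model, 1)
score_enc = dense(hidden[:, 0, :]).squeeze(-1)  # a single real number
```

Because both variants emit a plain real number rather than a generated token, any standard ranking loss can be applied directly to their outputs during fine-tuning.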

In their empirical studies, the team compared the proposed RankT5 to the conventional monoT5 on the MS MARCO and Natural Questions datasets. The results show that RankT5 models fine-tuned with specialized ranking losses achieved significant ranking performance gains over the benchmark T5 ranking models.

The researchers also compared RankT5 variants across different model structures and losses, finding that RankT5 fine-tuned with listwise ranking losses achieved better zero-shot ranking performance on out-of-domain datasets than the same model fine-tuned with classification losses.
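For readers unfamiliar with listwise losses, the sketch below shows a common listwise softmax cross-entropy loss of the kind the paper fine-tunes with. It normalizes the model's scores over all candidate documents for one query and penalizes low probability on the relevant ones; the function and variable names here are illustrative, not the paper's code.

```python
import torch
import torch.nn.functional as F

def listwise_softmax_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """scores: [batch, list_size] real-valued model outputs for one query's candidates;
    labels: [batch, list_size] binary relevance judgments."""
    log_probs = F.log_softmax(scores, dim=-1)    # normalize over the candidate list
    return -(labels * log_probs).sum(-1).mean()  # cross-entropy against relevance

# Toy usage: one query with four candidate documents, the second relevant.
scores = torch.tensor([[1.2, 3.4, -0.5, 0.1]])
labels = torch.tensor([[0.0, 1.0, 0.0, 0.0]])
loss = listwise_softmax_loss(scores, labels)
```

Unlike a pointwise classification loss, which judges each document in isolation, this loss compares candidates against each other within the list, which is what ranking metrics ultimately measure.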
The team hopes their work will encourage further research on T5-based ranking model architectures, and suggests that their training strategy could also be extended to the pretraining stage to obtain better pretrained models.
The paper RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses is on arXiv.
Author: Hecate He | Editor: Michael Sarazen
