Google’s Mu2SLAM: Toward a Single Model For All Speech and Text Understanding Tasks
In the new paper Mu2SLAM: Multitask, Multilingual Speech and Language Models, a Google Research team presents Mu2SLAM, a multilingual sequence-to-sequence pretraining method for speech and text models that can be fine-tuned on arbitrary speech and text understanding tasks spanning more than 100 languages.