
VisualVoice Uses Facial Appearance to Boost SOTA in Speech Separation

Even in a noisy crowd, the human perceptual system can effectively reduce auditory ambiguities to identify and isolate an active speaker — an action performed in large part by leveraging visual information. Recent AI research on speech separation has explored ways to associate lip motions in videos with audio, but this approach suffers when speakers’ lips are occluded, which they often are in busy multi-speaker environments.


Inspired by work in the cognitive sciences, a team from the University of Texas at Austin and Facebook AI Research has introduced an approach that takes as its input video of a target speaker in an environment with overlapping voices or sounds and generates an isolated soundtrack of the speaker. VisualVoice is a novel multi-task learning framework that jointly learns audio-visual speech separation together with cross-modal speaker embeddings, effectively using a person’s facial appearance to predict their vocal sounds.


The researchers explain that attributes such as gender, age, nationality and body weight, which are often evident in a person’s face, can provide a prior for voice qualities such as tone, pitch, timbre and basis of articulation. A model can use this prior to learn what to listen for and thus more accurately identify and separate an individual’s speech in a noisy environment. The network draws on facial appearance, lip motion and vocal audio to perform the separation task, augmenting the conventional “mix-and-separate” paradigm for audio-visual separation with a cross-modal contrastive loss that requires the separated voice to agree with the face. A cost-reducing feature of the proposed method is that it can be trained and tested on unlabelled video.
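The joint objective can be pictured with a short sketch. The PyTorch-style code below is a minimal illustration of the idea rather than the authors’ implementation: the mask loss follows the standard mix-and-separate recipe, while the triplet-style term is an assumed form of the cross-modal contrastive loss that ties a face embedding to the voice embedding of the separated speech. All tensor names, the margin and the loss weight are hypothetical.

    import torch.nn.functional as F

    def mask_loss(predicted_mask, ideal_mask):
        # "Mix-and-separate": audio from two training videos is summed, and the
        # network predicts a spectrogram mask for each speaker; the loss compares
        # the predicted mask with the ideal mask derived from the clean track.
        return F.l1_loss(predicted_mask, ideal_mask)

    def cross_modal_triplet_loss(face_emb, same_voice_emb, other_voice_emb, margin=0.5):
        # Contrastive term: the face embedding should lie closer to the voice
        # embedding of its own separated speech than to another speaker's voice.
        pos = 1.0 - F.cosine_similarity(face_emb, same_voice_emb)
        neg = 1.0 - F.cosine_similarity(face_emb, other_voice_emb)
        return F.relu(pos - neg + margin).mean()

    # Hypothetical joint objective (the weighting is a placeholder):
    # total_loss = mask_loss(...) + 0.01 * cross_modal_triplet_loss(...)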


The approach was evaluated on five benchmark datasets for audio-visual speech separation, speech enhancement and cross-modal speaker verification, using standard metrics such as Signal-to-Distortion Ratio (SDR), Signal-to-Interference Ratio (SIR) and Signal-to-Artifacts Ratio (SAR), and two speech-specific metrics: Perceptual Evaluation of Speech Quality (PESQ), which measures the overall perceptual quality of the separated speech, and Short-Time Objective Intelligibility (STOI), which is correlated with the intelligibility of the signal.
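For readers who want to compute these scores on their own separations, the sketch below shows one common way to do so in Python, assuming the reference and estimated waveforms are available as 1-D NumPy arrays at 16 kHz. It relies on the widely used mir_eval, pesq and pystoi packages, which are standard implementations of these metrics but not necessarily the exact tooling used in the paper; the function and variable names are illustrative.

    import numpy as np
    from mir_eval.separation import bss_eval_sources  # SDR / SIR / SAR
    from pesq import pesq                             # PESQ (ITU-T P.862)
    from pystoi import stoi                           # STOI

    def evaluate_target(ref_target, est_target, ref_other, est_other, fs=16000):
        # ref_*: ground-truth waveforms; est_*: separated waveforms (1-D arrays).
        refs = np.stack([ref_target, ref_other])
        ests = np.stack([est_target, est_other])
        sdr, sir, sar, _ = bss_eval_sources(refs, ests)

        # The speech-specific metrics are computed on the target speaker only.
        pesq_score = pesq(fs, ref_target, est_target, 'wb')  # wideband PESQ
        stoi_score = stoi(ref_target, est_target, fs, extended=False)
        return sdr[0], sir[0], sar[0], pesq_score, stoi_score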


VisualVoice excelled in audio-visual speech separation and enhancement in challenging real-world videos, outperforming SOTA methods on all metrics across all datasets. The researchers say the embedding learned by their model also improved the SOTA for unsupervised cross-modal speaker verification.

Speech separation has practical applications in assistive technology for the hearing impaired, wearable AR devices, speech-to-text in noisy videos and more. In future work, the researchers say they plan to explicitly model the fine-grained cross-modal attributes of faces and voices, and leverage these to further enhance speech separation.

The paper VisualVoice: Audio-Visual Speech Separation with Cross-Modal Consistency is on arXiv.


Analyst: Reina Qi Wan | Editor: Michael Sarazen; Fangyu Cai


