Audio-Led Self-Learning for Visual Speech Clarity

Higher Education Press

Learning visual speech representations from talking face videos is an important problem for several speech-related tasks, such as lip reading, talking face generation, and audio-visual speech separation. The key difficulty lies in handling the speech-irrelevant factors present in the videos, such as lighting, resolution, viewpoint, and head motion.

To address this problem, a research team led by Shuang YANG published their new research on 15 December 2024 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.

The team proposes to disentangle speech-relevant and speech-irrelevant facial movements from videos in a self-supervised manner. The proposed method learns discriminative, disentangled speech representations from videos and can benefit the lip reading task through a straightforward mechanism such as knowledge distillation. Both qualitative and quantitative results on the popular visual speech datasets LRW and LRS2-BBC demonstrate the effectiveness of the method.
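As a rough illustration of how such learned representations could be transferred to a lip-reading model via knowledge distillation, here is a minimal PyTorch sketch. The module names, dimensions, and loss weights are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of knowledge distillation for lip reading (PyTorch).
# All module names and dimensions are hypothetical, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LipReader(nn.Module):
    """Toy student model: frame features -> word logits."""
    def __init__(self, feat_dim=256, num_words=500):
        super().__init__()
        self.backbone = nn.GRU(feat_dim, 256, batch_first=True)
        self.head = nn.Linear(256, num_words)

    def forward(self, x):                  # x: (B, T, feat_dim)
        h, _ = self.backbone(x)
        return self.head(h.mean(dim=1))    # average-pool over time

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Cross-entropy on labels plus KL divergence to the teacher's soft targets."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1 - alpha) * kd
```

Here, alpha balances the ground-truth loss against the teacher's soft targets, and the temperature T softens both distributions so the student also picks up inter-class similarity from the teacher.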

In the research, the researchers observe the speech process and find that speech-relevant and speech-irrelevant facial movements differ in their frequency of occurrence. Specifically, speech-relevant facial movements consistently occur at a higher frequency than speech-irrelevant ones. Moreover, the researchers find that speech-relevant facial movements are consistently synchronized with the accompanying audio speech signal.
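To make the frequency observation concrete, the toy NumPy snippet below compares the high-frequency spectral energy of a fast, articulation-like trajectory against a slow, head-motion-like one. The synthetic signals, frame rate, and 2 Hz cutoff are illustrative assumptions, not values from the paper.

```python
# Toy check of the frequency observation: fast mouth-like motion carries
# most of its energy above the cutoff; slow head-like motion does not.
import numpy as np

def high_freq_ratio(signal, fps=25.0, cutoff_hz=2.0):
    """Fraction of spectral energy above cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return spectrum[freqs > cutoff_hz].sum() / (spectrum.sum() + 1e-8)

t = np.arange(100) / 25.0                        # 4 seconds at 25 fps
mouth = np.sin(2 * np.pi * 4.0 * t)              # fast articulatory motion
head = np.sin(2 * np.pi * 0.3 * t)               # slow head sway
print(high_freq_ratio(mouth), high_freq_ratio(head))  # near 1 vs. near 0
```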

Based on the observations above, the researchers introduce a novel two-branch network to decompose the visual changes between two frames of the same video into speech-relevant and speech-irrelevant components. For the speech-relevant branch, they introduce the high-frequency audio signal to guide the learning of speech-relevant cues. For the speech-irrelevant branch, they introduce an information bottleneck that restricts its capacity, preventing it from acquiring high-frequency, fine-grained speech-relevant information.
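A minimal sketch of this kind of two-branch decomposition, assuming simple per-frame feature vectors, might look as follows. The layer sizes, bottleneck dimensionality, and synchronization loss are illustrative guesses, not the authors' architecture.

```python
# Sketch of a two-branch decomposition of inter-frame visual change
# (illustrative only; sizes and losses are assumptions, not the paper's).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchDecomposer(nn.Module):
    def __init__(self, feat_dim=512, irrelevant_dim=8):
        super().__init__()
        # Speech-relevant branch: full-capacity encoder, guided by audio.
        self.speech_branch = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(), nn.Linear(256, 128))
        # Speech-irrelevant branch: an information bottleneck realized
        # here as a very low-dimensional output.
        self.other_branch = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, irrelevant_dim))

    def forward(self, frame_a, frame_b):   # each: (B, feat_dim)
        pair = torch.cat([frame_a, frame_b], dim=-1)
        return self.speech_branch(pair), self.other_branch(pair)

def sync_loss(speech_feat, audio_feat):
    """Pull the speech branch toward a synchronized audio embedding
    (audio_feat assumed to come from some 128-d audio encoder)."""
    return 1 - F.cosine_similarity(speech_feat, audio_feat, dim=-1).mean()
```

The deliberately small irrelevant_dim plays the role of the bottleneck in this sketch: with so few dimensions, the branch cannot carry fine-grained, high-frequency articulatory detail, which is therefore pushed into the audio-guided branch.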

Future work can focus on exploring more explicit auxiliary tasks and constraints beyond the reconstruction task to capture speech cues from videos. Meanwhile, combining multiple types of knowledge representations is also a promising direction for enhancing the obtained speech representations.

DOI: 10.1007/s11704-024-3787-8
