EECS Rising Stars 2023




Vibhaalakshmi Sivaraman

Towards Robust and Practical Neural Video Conferencing



Research Abstract:

Video conferencing systems suffer from poor user experience when network conditions deteriorate: current video codecs simply cannot operate at extremely low bitrates, and packet loss causes frame corruption and video freezes. Recently, several neural alternatives have been proposed that reconstruct talking-head videos at very low bitrates using sparse representations of each frame, such as facial landmarks. However, these approaches produce poor reconstructions when a call involves major movement or occlusions, and they do not scale to higher resolutions. On the other hand, loss-recovery approaches such as retransmission are impractical in real-time settings, while Forward Error Correction (FEC) techniques must be tuned to the right level of redundancy.

To address these concerns, we develop Gemino, a new neural compression system for video conferencing based on a novel high-frequency-conditional super-resolution pipeline. Gemino upsamples a very low-resolution version of each target frame while enhancing high-frequency details (e.g., skin texture, hair) using information extracted from a single high-resolution reference image. We use a multi-scale architecture that runs different components of the model at different resolutions, allowing it to scale to resolutions comparable to 720p, and we personalize the model to learn the specific details of each person, achieving much better fidelity at low bitrates. We implement Gemino atop aiortc, an open-source Python implementation of WebRTC, and show that it operates on 1024x1024 video in real time on a Titan X GPU, achieving 2.2–5x lower bitrate than traditional video codecs at the same perceptual quality.

Since Gemino is not designed to leverage high-resolution information from multiple references, we further design Gemino (Attention), a variant that computes “attention,” or correspondence, between regions of different reference frames and the target frame instead of optical flow. This design uses the best parts of each reference frame to improve the fidelity of the reconstruction.

Lastly, we develop Reparo, a loss-resilient generative codec for video conferencing that reduces the duration and impact of video freezes during outages. Reparo’s compression does not depend on temporal differences across frames, making it less brittle in the event of packet loss. When a frame or part of a frame is lost, Reparo automatically generates the missing information based on the data received so far and the model’s knowledge of how people look, dress, and interact in the visual world. Together, these approaches suggest an alternate future for video conferencing, powered by neural codecs that operate in extremely low-bandwidth scenarios as well as under lossy network conditions, enabling a smoother video conferencing experience across the board.
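
To make the super-resolution idea concrete, below is a minimal single-scale sketch in PyTorch of reference-conditioned upsampling: the low-resolution target frame is upsampled bilinearly, a coarse flow field warps features from a high-resolution reference toward the target, and a decoder adds back a high-frequency residual. All module and function names here are hypothetical; Gemino's actual multi-scale, personalized architecture is considerably more involved.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(feat, flow):
    """Warp features with a dense pixel-offset flow field via grid_sample."""
    n, _, h, w = feat.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=feat.device),
        torch.linspace(-1, 1, w, device=feat.device),
        indexing="ij",
    )
    base = torch.stack([xs, ys], dim=-1).expand(n, h, w, 2)
    # Normalize pixel offsets to grid_sample's [-1, 1] coordinate convention.
    offset = torch.stack([flow[:, 0] / (w / 2), flow[:, 1] / (h / 2)], dim=-1)
    return F.grid_sample(feat, base + offset, align_corners=False)

class RefConditionedUpsampler(nn.Module):
    """Toy reference-conditioned super-resolution: upsample a low-res
    target frame while borrowing high-frequency detail from a
    high-resolution reference image."""

    def __init__(self, channels=64):
        super().__init__()
        self.target_enc = nn.Conv2d(3, channels, 3, padding=1)
        self.ref_enc = nn.Conv2d(3, channels, 3, padding=1)
        # Predicts a coarse 2-channel flow field from concatenated features.
        self.flow_head = nn.Conv2d(2 * channels, 2, 3, padding=1)
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, lr_target, hr_reference):
        # Bilinearly upsample the low-res target to full resolution first.
        up = F.interpolate(lr_target, size=hr_reference.shape[-2:],
                           mode="bilinear", align_corners=False)
        t_feat = self.target_enc(up)
        r_feat = self.ref_enc(hr_reference)
        # Warp reference features toward the target, then decode the
        # fused features into a high-frequency residual.
        flow = self.flow_head(torch.cat([t_feat, r_feat], dim=1))
        warped = warp(r_feat, flow)
        residual = self.decoder(torch.cat([t_feat, warped], dim=1))
        return up + residual

# Example: 4x upsampling of a 256x256 target toward a 1024x1024 reference.
model = RefConditionedUpsampler()
out = model(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 1024, 1024))
```

In Gemino proper, this kind of warping and decoding happens at multiple resolutions, with the most expensive components run on the smallest feature maps so that 1024x1024 output remains real-time.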
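Gemino (Attention) replaces flow-based warping with learned correspondence across several references. A minimal sketch of that idea, again with hypothetical names: target features act as queries in a cross-attention over the spatial positions of all reference frames, so each target region can borrow detail from whichever reference matches it best.

```python
import torch
import torch.nn as nn

class MultiRefAttention(nn.Module):
    """Toy cross-attention: each target location attends over all
    spatial positions of several reference frames, letting the decoder
    pick the best-matching region from any reference."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, target_feat, ref_feats):
        # target_feat: (N, C, H, W); ref_feats: (N, R, C, H, W)
        n, c, h, w = target_feat.shape
        q = target_feat.flatten(2).transpose(1, 2)      # (N, H*W, C)
        kv = ref_feats.flatten(3).permute(0, 1, 3, 2)   # (N, R, H*W, C)
        kv = kv.reshape(n, -1, c)                       # (N, R*H*W, C)
        out, weights = self.attn(q, kv, kv)             # correspondence via attention
        return out.transpose(1, 2).reshape(n, c, h, w), weights

# Example: fuse three reference frames into one target feature map.
attn = MultiRefAttention()
fused, w = attn(torch.rand(2, 64, 32, 32), torch.rand(2, 3, 64, 32, 32))
```

The attention weights here play the role that optical flow plays in the base system: they say which reference region each target location should draw from, but without committing to a single dense warp.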
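On the systems side, aiortc makes it straightforward to splice frame-level processing into a WebRTC pipeline by wrapping a track, the same pattern aiortc's own examples use for video transforms. The sketch below assumes a `reconstruct` callable (e.g., a Gemino-style decoder) that maps a numpy image to a numpy image; it is illustrative, not Gemino's actual integration.

```python
from av import VideoFrame
from aiortc import MediaStreamTrack

class NeuralDecodeTrack(MediaStreamTrack):
    """Wrap an incoming video track and pass every decoded frame
    through a neural reconstruction step before it is rendered."""

    kind = "video"

    def __init__(self, track, reconstruct):
        super().__init__()
        self.track = track              # upstream aiortc track
        self.reconstruct = reconstruct  # hypothetical: ndarray -> ndarray

    async def recv(self):
        frame = await self.track.recv()
        img = frame.to_ndarray(format="rgb24")
        img = self.reconstruct(img)     # e.g., Gemino-style upsampling
        new_frame = VideoFrame.from_ndarray(img, format="rgb24")
        # Preserve timestamps so playout pacing is unchanged.
        new_frame.pts = frame.pts
        new_frame.time_base = frame.time_base
        return new_frame
```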

Bio:

Vibhaa is a Ph.D. candidate in the Networking and Mobile Systems Group at MIT CSAIL, where she is advised by Prof. Mohammad Alizadeh. Her research interests lie broadly in computer networks, with a particular interest in algorithmic techniques. In the past, she has worked on using networking ideas to improve blockchain scalability, as well as on network monitoring and heavy-hitter detection. Recently, she has been interested in improving video streaming and conferencing applications using advances in computer vision and video compression techniques. Her thesis work focuses on enabling neural compression techniques to improve video conferencing in low-bandwidth and lossy networks. Prior to MIT, Vibhaa received a B.S.E. in Computer Science from Princeton University.