Machine Learning & Data Science Share My Research

Huawei and University of Toronto | All at Once: Temporally Adaptive Multi-Frame Interpolation with Advanced Motion Modeling

This research proposes an efficient and cost-effective solution for multi-frame video interpolation.

Content provided by Zhixiang Chi, the first author of the paper All at Once: Temporally Adaptive Multi-Frame Interpolation with Advanced Motion Modeling.

The method integrates multi-frame interpolation into a single network, which accelerates the process and boosts interpolation quality, and it is designed for low-power devices such as mobile phones.


What’s New: This work introduces a true multi-frame interpolator. It uses a pyramidal-style network in the temporal domain to complete the multi-frame interpolation task in one shot. A novel flow estimation procedure with a relaxed loss function and an advanced cubic motion model are also used to further boost interpolation accuracy on segments with complex motion.

How It Works: This work generates high-quality slow-motion videos. After estimating optical flow among the input frames, we apply cubic motion prediction to mimic real-life motion for all seven middle frames. The predicted optical flows are adaptively refined through a temporal pyramid network according to their expected error in the temporal domain. The middle frames are then generated by warping and synthesis. A relaxed warping loss function is applied to the flow estimation module to facilitate the cubic motion prediction and boost the final interpolation results. The method generates multiple frames at once, which is efficient and cost-effective.
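To make the cubic motion prediction step concrete, here is a minimal NumPy sketch. It assumes a common four-input-frame setup with frames at times t = -1, 0, 1, 2, the reference at t = 0, and per-pixel displacement modeled as a cubic polynomial x(t) = v·t + a·t²/2 + j·t³/6 fitted from the optical flows of the reference frame to the three other inputs; the function names and the exact frame timing are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cubic_motion_coeffs(f_m1, f_p1, f_p2):
    """Fit a per-pixel cubic motion model x(t) = v*t + a*t^2/2 + j*t^3/6
    (with x(0) = 0 at the reference frame) from the optical flows of the
    reference frame to the frames at t = -1, +1, +2.

    f_m1, f_p1, f_p2 : arrays of shape (H, W, 2), flows 0->-1, 0->+1, 0->+2.
    Returns velocity v, acceleration a, and jerk j, same shape as the inputs.
    """
    # Solving the 3x3 linear system given by x(-1), x(+1), x(+2):
    a = f_p1 + f_m1                      # x(1) + x(-1) = a
    j = f_p2 - 3.0 * f_p1 - f_m1         # x(2) - 3*x(1) - x(-1) = j
    v = 0.5 * (f_p1 - f_m1 - j / 3.0)    # x(1) - x(-1) = 2*v + j/3
    return v, a, j

def predict_flow(v, a, j, t):
    """Evaluate the fitted cubic model: flow from the reference frame to time t."""
    return v * t + 0.5 * a * t ** 2 + (j / 6.0) * t ** 3
```

For 8x slow motion, the seven middle frames would be obtained by evaluating `predict_flow` at t = 1/8, 2/8, ..., 7/8 and then warping; with pure linear motion (j = 0, a = 0) the model reduces to the standard linear flow scaling used by single-frame interpolators.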

Also, I would like to share our demo slow-motion video:

Key Insights: The method integrates the multi-frame interpolation task into one framework. With advanced modeling of real-life motion, it generates high-quality interpolated videos, and it is friendly to low-power devices.

This area has been highly active in the research community in recent years for generating high-quality videos. However, deploying this task on edge devices still needs more exploration.

The paper All at Once: Temporally Adaptive Multi-Frame Interpolation with Advanced Motion Modeling is on arXiv.


Meet the authors: Zhixiang Chi, Rasoul Mohammadi Nasiri, Zheng Liu, Juwei Lu, Jin Tang and Konstantinos N. Plataniotis from Noah’s Ark Lab, Huawei Technologies and the University of Toronto.


Share Your Research With Synced Review


Share My Research is Synced’s new column that welcomes scholars to share their own research breakthroughs with over 1.5M global AI enthusiasts. Beyond technological advances, Share My Research also calls for interesting stories behind the research and exciting research ideas. Share your research with us by clicking here.
