Video completion is a challenging computer vision task that involves filling a given space-time region with newly synthesized content — in effect, revealing the unseen. It has wide applications in video restoration, editing, watermark and logo removal, and more. The most advanced video completion methods are flow-based: they jointly synthesize colour and flow, then propagate colour along flow trajectories to improve temporal coherence. Now, researchers from Virginia Tech and Facebook have introduced a novel flow-based video completion algorithm that compares favourably with the state-of-the-art in the field.
Existing flow-based video completion methods suffer from three limitations: they are unable to synthesize sharp flow edges and so tend to produce over-smoothed results; chaining flow vectors between adjacent frames can only form continuous temporal constraints, so colour cannot be constrained or propagated across many parts of a video (the chains break at occlusions and other flow barriers); and they propagate colour values directly, without accounting for factors such as lighting changes and shadows.
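To see why chained adjacent-frame flow gives only continuous constraints, consider tracking a pixel by composing per-frame flows: the moment the chain crosses a flow barrier (here approximated by leaving the valid image region), the constraint breaks and cannot reach later frames. The sketch below is illustrative only, not the paper's implementation; the flow fields, function name, and nearest-neighbour sampling are all assumptions for this toy example.

```python
import numpy as np

def chain_flows(flows, start_pts):
    """Toy illustration of chaining adjacent-frame flows.

    flows: list of (H, W, 2) arrays; flows[t][y, x] is the (dx, dy)
    displacement from frame t to frame t+1 (hypothetical dense flows).
    start_pts: (N, 2) array of (x, y) positions in frame 0.
    Returns a list of per-frame positions; a point whose chain leaves
    the image (a stand-in for an occlusion / flow barrier) is marked
    invalid with NaN and stays broken in all later frames.
    """
    H, W, _ = flows[0].shape
    pts = start_pts.astype(float)
    track = [pts.copy()]
    for flow in flows:
        new_pts = np.full_like(pts, np.nan)
        for i, (x, y) in enumerate(pts):
            if np.isnan(x):
                continue  # chain already broken: constraint is lost for good
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < W and 0 <= yi < H:
                dx, dy = flow[yi, xi]  # nearest-neighbour flow lookup
                new_pts[i] = (x + dx, y + dy)
        pts = new_pts
        track.append(pts.copy())
    return track
```

A point that drifts out of the constrained region in frame t can never be recovered by adjacent-frame chaining alone — which is exactly the gap the paper's non-local flow constraints are designed to bridge.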
The proposed method addresses these limitations in four ways:
- Flow edges: It obtains piecewise-smooth flow completion by explicitly completing flow edges and utilizing the completed flow edges to guide the flow completion.
- Non-local flow: It introduces additional flow constraints between non-local (temporally distant) frames, creating shortcuts across flow barriers and propagating colour to more parts of the video.
- Seamless blending: By operating in the gradient domain, it avoids visible seams in the results.
- Memory efficiency: It can handle videos at up to 4K resolution, where previous methods fail due to excessive GPU memory requirements.
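The seamless-blending point above can be made concrete with a minimal 1-D gradient-domain (Poisson) blend: instead of copying pixel values into the hole, one solves for values whose gradients match the source's, with the known target values at the hole boundary as constraints, so no visible seam appears. This is a sketch under strong simplifying assumptions (1-D signal, dense solver, hole strictly interior), not the paper's 2-D implementation; the function name is hypothetical.

```python
import numpy as np

def blend_1d(target, source, hole):
    """1-D sketch of gradient-domain (Poisson) blending.

    Values inside `hole` (boolean mask, strictly interior) are re-solved
    so their finite-difference gradients match `source`'s, with the known
    `target` values at the hole boundary as Dirichlet constraints.
    """
    out = target.astype(float).copy()
    idx = np.flatnonzero(hole)
    n = len(idx)
    pos = {j: k for k, j in enumerate(idx)}
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k, j in enumerate(idx):
        # Discrete Poisson equation: 2 f[j] - f[j-1] - f[j+1] = -Laplacian(source)[j]
        A[k, k] = 2.0
        b[k] = 2 * source[j] - source[j - 1] - source[j + 1]
        for nb in (j - 1, j + 1):
            if nb in pos:
                A[k, pos[nb]] = -1.0  # unknown neighbour inside the hole
            else:
                b[k] += out[nb]       # known boundary value from the target
    out[idx] = np.linalg.solve(A, b)
    return out
```

With a constant-gradient source, the solve interpolates smoothly between the hole's boundary values rather than pasting the source's absolute intensities, which is why gradient-domain compositing hides lighting mismatches that direct colour propagation would expose.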
The researchers validated the proposed method on 150 video sequences from the DAVIS dataset, where visual and quantitative results show that it compares favourably with state-of-the-art algorithms. There are, however, a couple of limitations. The method still fails in some circumstances, such as fast motion, which can result in poorly estimated flow and poor colour completion. It also runs at 0.12 fps, slower than the 0.405 fps of end-to-end models.
The paper Flow-edge Guided Video Completion is on arXiv. Visit the project page here.
Analyst: Yuqing Li | Editor: Michael Sarazen; Yuan Yuan