
ECCV 2020 Best Paper Award Goes to Princeton Team

The 16th European Conference on Computer Vision (ECCV) kicked off on Sunday as a fully online conference. In the Conference Opening Session this morning, the ECCV organizing committee announced the conference’s paper submission stats and Best Paper selections. The Best Paper honours go to a pair of researchers from Princeton University for their work developing a new end-to-end trainable model for optical flow.
 
ECCV 2020 received a record-high 5,150 submissions, double the number received by the previous conference in 2018. A total of 1,360 papers made the cut this year, for a 26 percent acceptance rate. The programme includes 104 orals and 160 spotlights, presented across 16 live Q&A sessions.

Best Paper Award:
RAFT: Recurrent All-Pairs Field Transforms for Optical Flow
Authors: Zachary Teed and Jia Deng
Institution(s): Princeton University
 
Abstract: We introduce Recurrent All-Pairs Field Transforms (RAFT), a new deep network architecture for optical flow. RAFT extracts per-pixel features, builds multi-scale 4D correlation volumes for all pairs of pixels, and iteratively updates a flow field through a recurrent unit that performs lookups on the correlation volumes. RAFT achieves state-of-the-art performance. On KITTI, RAFT achieves an F1-all error of 5.10%, a 16% error reduction from the best published result (6.10%). On Sintel (final pass), RAFT obtains an end-point-error of 2.855 pixels, a 30% error reduction from the best published result (4.098 pixels). In addition, RAFT has strong cross-dataset generalization as well as high efficiency in inference time, training speed, and parameter count. Code is available at https://github.com/princeton-vl/RAFT
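For readers who want a concrete feel for the architecture, the sketch below illustrates RAFT's two core ingredients in plain NumPy: an all-pairs 4D correlation volume, and repeated lookups in that volume around the current flow estimate. The shapes, the lookup window, and the placeholder update loop are illustrative assumptions on our part, not the authors' implementation; the real model drives each refinement with a learned GRU (see the linked repository).

```python
# Minimal sketch of RAFT's core ideas (illustrative assumptions, not the
# authors' code): an all-pairs correlation volume plus iterative lookups
# around the current flow estimate; the real model refines flow with a GRU.
import numpy as np

def correlation_volume(f1, f2):
    """All-pairs dot products between per-pixel features of two frames.
    f1, f2: (H, W, D) feature maps -> 4D volume of shape (H, W, H, W)."""
    D = f1.shape[-1]
    return np.einsum('ijd,kld->ijkl', f1, f2) / np.sqrt(D)

def lookup(corr, flow, radius=1):
    """Read correlation values in a (2r+1)^2 window around each pixel's
    current correspondence (its position displaced by the flow)."""
    H, W = flow.shape[:2]
    feats = np.zeros((H, W, (2 * radius + 1) ** 2))
    for i in range(H):
        for j in range(W):
            u, v = flow[i, j]                    # horizontal, vertical flow
            n = 0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    k = int(np.clip(round(i + v) + di, 0, H - 1))
                    l = int(np.clip(round(j + u) + dj, 0, W - 1))
                    feats[i, j, n] = corr[i, j, k, l]
                    n += 1
    return feats

rng = np.random.default_rng(0)
f1, f2 = rng.standard_normal((2, 8, 8, 16))      # stand-in CNN features
corr = correlation_volume(f1, f2)                # built once per frame pair
flow = np.zeros((8, 8, 2))                       # start from zero flow
for _ in range(4):                               # RAFT iterates many times
    corr_feats = lookup(corr, flow)              # input to the learned update
    # a trained update operator would map corr_feats to a flow refinement
```

Because the correlation volume is built once and only queried afterwards, the per-iteration cost stays low, which is part of why the paper can report fast inference alongside its accuracy gains.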
 
Best Paper Honorable Mentions:

Towards Streaming Image Understanding
Authors: Mengtian Li, Yu-Xiong Wang, and Deva Ramanan
Institution(s): Carnegie Mellon University and Argo AI
 
Abstract: Embodied perception refers to the ability of an autonomous agent to perceive its environment so that it can (re)act. The responsiveness of the agent is largely governed by latency of its processing pipeline. While past work has studied the algorithmic trade-off between latency and accuracy, there has not been a clear metric to compare different methods along the Pareto optimal latency-accuracy curve. We point out a discrepancy between standard offline evaluation and real-time applications: by the time an algorithm finishes processing a particular image frame, the surrounding world has changed. To these ends, we present an approach that coherently integrates latency and accuracy into a single metric for real-time online perception, which we refer to as “streaming accuracy”. The key insight behind this metric is to jointly evaluate the output of the entire perception stack at every time instant, forcing the stack to consider the amount of streaming data that should be ignored while computation is occurring. More broadly, building upon this metric, we introduce a meta-benchmark that systematically converts any image understanding task into a streaming image understanding task. We focus on the illustrative tasks of object detection and instance segmentation in urban video streams, and contribute a novel dataset with high-quality and temporally-dense annotations. Our proposed solutions and their empirical analysis demonstrate a number of surprising conclusions: (1) there exists an optimal “sweet spot” that maximizes streaming accuracy along the Pareto optimal latency-accuracy curve, (2) asynchronous tracking and future forecasting naturally emerge as internal representations that enable streaming image understanding, and (3) dynamic scheduling can be used to overcome temporal aliasing, yielding the paradoxical result that latency is sometimes minimized by sitting idle and “doing nothing”.
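Operationally, the metric is simple to state: at every time instant with ground truth, score whichever output the perception stack had actually finished computing by that instant, so latency directly costs accuracy. The sketch below shows that evaluation loop with hypothetical names and a toy scoring function of our own choosing; it illustrates the idea, not the paper's benchmark code.

```python
# Illustrative sketch of streaming-style evaluation (toy setup, not the
# paper's benchmark): each prediction becomes available only after its
# processing latency, and stale outputs are scored against current truth.
from bisect import bisect_right

def streaming_eval(gt, preds, score):
    """gt: list of (t, truth) pairs; preds: list of (t_done, output) pairs
    sorted by completion time; score: (output, truth) -> float in [0, 1]."""
    done_times = [t for t, _ in preds]
    total = 0.0
    for t, truth in gt:
        i = bisect_right(done_times, t) - 1    # latest output finished by t
        out = preds[i][1] if i >= 0 else None  # nothing ready yet -> a miss
        total += score(out, truth)
    return total / len(gt)

# Toy usage: the "world" is just the current time, each model outputs the
# frame it processed, and higher latency means staler outputs at eval time.
gt = [(t, t) for t in range(10)]
fast = [(t + 1, t) for t in range(10)]   # finishes frame t at time t + 1
slow = [(t + 3, t) for t in range(10)]   # finishes frame t at time t + 3
score = lambda out, truth: 0.0 if out is None else 1.0 / (1.0 + abs(out - truth))
print(streaming_eval(gt, fast, score))   # higher: outputs are less stale
print(streaming_eval(gt, slow, score))   # lower, despite "processing" the same frames
```

This is the discrepancy the authors highlight: under standard offline evaluation the two toy models above would score identically, while the streaming view penalizes the slower one for describing a world that has already changed.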
 
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Authors: Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng
Institution(s): UC Berkeley, Google Research, UC San Diego
 
Abstract: We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons. 
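The rendering step the abstract refers to is classical emission-absorption volume rendering, evaluated at discrete samples along each camera ray. The sketch below shows that quadrature in NumPy, with random densities and colors standing in for the trained MLP's outputs and view dependence omitted; the sample counts and depth range are arbitrary illustrative choices.

```python
# Illustrative sketch of the volume-rendering quadrature NeRF uses to turn
# per-sample densities and colors along a ray into one pixel color:
#   C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
#   T_i = prod_{j<i} exp(-sigma_j * delta_j).
# Random values stand in for the trained MLP; view direction is omitted.
import numpy as np

def render_ray(sigmas, colors, deltas):
    """sigmas: (N,) densities; colors: (N, 3) RGB; deltas: (N,) spacings."""
    alphas = 1.0 - np.exp(-sigmas * deltas)        # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # T_i
    weights = trans * alphas                       # contribution of sample i
    return weights @ colors                        # (3,) final pixel color

rng = np.random.default_rng(0)
N = 64
ts = np.linspace(2.0, 6.0, N)                      # sample depths along the ray
deltas = np.diff(ts, append=ts[-1] + (ts[1] - ts[0]))  # spacing between samples
sigmas = rng.uniform(0.0, 1.0, N)                  # stand-in for MLP density
colors = rng.uniform(0.0, 1.0, (N, 3))             # stand-in for MLP radiance
print(render_ray(sigmas, colors, deltas))
```

Because every operation above is differentiable, gradients from a photometric loss on rendered pixels flow back into whatever produced the densities and colors, which is what lets NeRF train from posed images alone.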
 
ECCV 2020 comprises the main conference along with 45 workshops and 16 tutorials, and runs virtually through August 28.


Reporter: Fangyu Cai | Editor: Michael Sarazen


