
Neural Architects: What Have We Learned and Where Are We Going?

The Neural Architects Workshop gathers experts and researchers in the field of deep neural network (DNN) design to share their insights and experiences working in this domain.

The biennial International Conference on Computer Vision (ICCV 2019) will be held in Seoul, South Korea from October 27 to November 2. The ICCV is one of the top international conferences in the field of computer vision. Its workshops are followed closely by the global AI community, with the “Neural Architects Workshop” of particular interest.

Participants will also discuss open problems in existing DNN models and promising research directions for improving the performance of these technologies and systems in industrial application scenarios.

Feature extraction and representation is a fundamental task in computer vision research. In recent years, deep neural networks have significantly influenced this research field through their empirically superior performance, achieving great progress in both basic visual processing tasks (such as image classification, object detection and scene segmentation tasks) and higher-level semantic understanding tasks (such as environment perception and video understanding). With the introduction of a series of task-oriented modules and network architectures — including both manually designed structures and those obtained through automated searching methods — neural networks have become an increasingly powerful and important tool for academia and industry alike.

The objective of the Neural Architects Workshop is to share knowledge of these techniques amongst the community and to attract further research in the design and optimization of neural network architectures, thereby promoting the development of AI technologies.

The workshop is jointly launched by the Visual Geometry Group (VGG) at the University of Oxford and Chinese autonomous driving startup Momenta, and will take place on October 28. Leading researchers with expertise in DNN design will share their experience and views through a series of invited talks and a roundtable discussion:

Alan Yuille, a pioneer of the field of computer vision
Ross Girshick, winner of the Marr prize
Shaoqing Ren, author of Faster R-CNN and ResNet

The workshop is inviting paper submissions. The most distinguished work will appear in the oral presentation session, and other selected papers will be presented as posters, giving attendees the opportunity to engage with the authors. In addition to long papers, short papers related to published or ongoing research projects are also encouraged.

Papers can be related to (but are not limited to) the following topics:

  • Theoretical or empirical understanding of DNN architectures
  • Vision-oriented network design
  • Automatic search and design of DNN architectures
  • Bold innovations related to DNNs
  • Studies of existing basic modules or units
  • The relationship between optimization and DNN architectures
  • Retrospective analysis of prior architectures in computer vision

For more information, please refer to the website: https://neuralarchitects.org/

About Momenta
Momenta, established in 2016, is building the “brains” for autonomous vehicles. Its deep learning-based software for perception, HD semantic mapping and data-driven path planning enables the realization of full autonomy. Momenta offers multi-level autonomous driving solutions as well as big data services.

Speakers:

Alan Yuille is a Bloomberg Distinguished Professor of Cognitive Science and Computer Science at Johns Hopkins University. His research has received a number of awards, including the Rayleigh Research prize in 1979, the Marr prize in 2003 and the Helmholtz Test of Time Award in 2013.

Raquel Urtasun is Uber ATG Chief Scientist and the Head of Uber ATG Toronto. She was Program Chair of CVPR 2018, is an Editor of the International Journal of Computer Vision (IJCV) and has served as Area Chair of multiple machine learning and vision conferences (NIPS, UAI, ICML, ICLR, CVPR, ECCV).

Ross Girshick is a Research Scientist at Facebook AI Research (FAIR), working on computer vision and machine learning. He received the 2017 PAMI Young Researcher Award and is well known for developing the R-CNN (Region-based Convolutional Neural Network) approach to object detection. In 2017, Ross also received the Marr Prize at ICCV for “Mask R-CNN”.

Shaoqing Ren is Director of Research and Development at Momenta. In 2015, he won both the ImageNet and COCO challenges. He is widely known for developing the Faster R-CNN object detector. His paper on Deep Residual Networks is the most cited paper across the top-100 publication venues in all areas of Google Scholar Metrics 2018.

Barret Zoph is a Research Scientist with Google Brain. He is well-known for spearheading the field of Neural Architecture Search. Following his seminal paper on the topic, he has continued to push the state-of-the-art in Architecture Search with Reinforcement Learning on a range of machine perception tasks.

Sara Sabour is a Researcher with Google Brain. She is a renowned expert on the topic of capsules, authoring the foundational, widely-cited paper on these models in 2017.

Organizing Team:

Andrew Zisserman is Professor of Computer Vision Engineering at the University of Oxford.

Andrea Vedaldi is Associate Professor in Engineering Science at the University of Oxford.

Samuel Albanie is a Researcher in the Visual Geometry Group at the University of Oxford.

Li Shen was a Postdoctoral Researcher in the Visual Geometry Group and is a Senior Researcher with Tencent AI Lab.

Jie Hu is a Researcher at Momenta and at the University of Chinese Academy of Sciences.

Barret Zoph is a Research Scientist with Google Brain.
