AI Technology

Allen Institute’s New ‘Computer Vision Explorer’ Lets Researchers Demo SOTA CV Models

The tool enables researchers to try, compare, and evaluate models to decide which work best on their datasets or for their research purposes.

The Allen Institute for AI (AI2) has released its new AI2 Computer Vision Explorer — a collection of demos of popular and state-of-the-art models for a variety of computer vision tasks. The tool enables researchers to try, compare, and evaluate models to decide which work best on their datasets or for their research purposes.

Tremendous progress has been made in the field of computer vision (CV) over the past decade, and thousands of research papers are published annually, with many models obtaining SOTA results on established benchmarks. It can, however, be challenging for even experienced researchers to decide how and where to start a particular project, or to forecast how well popular models will actually perform on the data they want or have to work with.

Headquartered in Seattle, the Allen Institute was established by Microsoft co-founder Paul Allen in 2014 to pursue scientific breakthroughs by constructing AI systems with reasoning, learning, and reading capabilities. Perceptual Reasoning and Interaction Research (PRIOR) is the institute's CV research division, focused on advancing the CV technology required to create such AI systems.

Built and maintained mainly by the PRIOR team, the Computer Vision Explorer project showcases a number of accessible models that have achieved SOTA or near-SOTA results on popular CV tasks such as image classification, object detection, visual question answering (VQA), and human pose estimation.

On VQA tasks, for example, researchers can choose from four preset photo scenarios or upload their own photos and type in questions. They can then run those photos and questions through the Pythia model, the winning entry in the 2018 VQA Challenge.

The PRIOR team was motivated to build the Computer Vision Explorer tool by the belief that exploring a model’s qualitative behaviour can provide insights that are difficult to obtain by only tracking quantitative metrics. They say they hope this “quick and easy” method for performing qualitative error analyses on small samples of data will help researchers evaluate whether a given model may be useful for downstream tasks.

The Computer Vision Explorer is currently a work in progress, and the AI2 PRIOR team will continue to add more SOTA models to the demo page.

The AI2 Computer Vision Explorer project page is here.


Journalist: Yuan Yuan | Editor: Michael Sarazen
