
NeurIPS 2020 Changes Review Process; Hinton & Google Brain Unsupervised Model Boosts ImageNet Accuracy 7%; DeepMind Releases New JAX Libraries

Synced Global AI Weekly February 23rd



Getting Started with NeurIPS 2020
The Thirty-Fourth Annual Conference on Neural Information Processing Systems (NeurIPS 2020) will be held in Vancouver, Canada, from December 6 through 12. Along with the submission deadlines, NeurIPS announced changes to this year's submission and review process.
(NeurIPS) / (Call for Papers) / (Video)


Geoffrey Hinton & Google Brain Unsupervised Learning Algorithm Improves SOTA Accuracy on ImageNet by 7%
In the paper A Simple Framework for Contrastive Learning of Visual Representations, Google Brain researchers, including Geoffrey Hinton, propose the simple but powerful "SimCLR" framework for contrastive learning of visual representations.
(Synced) / (Google Brain)
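
At the core of SimCLR is a contrastive (NT-Xent) loss that pulls two augmented views of the same image together while pushing apart all other examples in the batch. Below is a minimal NumPy sketch of that loss, assuming z1 and z2 are L2-normalized projection-head outputs for paired views; the function name and toy data are illustrative, not the paper's reference code.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss sketch. z1, z2: [batch, dim] L2-normalized
    embeddings of two augmented views of the same batch."""
    batch = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)      # [2B, dim]
    sim = z @ z.T / temperature               # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)            # exclude self-similarity
    # For row i, the positive is its counterpart in the other view.
    targets = np.concatenate([np.arange(batch, 2 * batch),
                              np.arange(0, batch)])
    # Softmax cross-entropy over each row, picking out the positive pair.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * batch), targets].mean()

# Toy usage with random unit vectors.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 32)); z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
z2 = rng.normal(size=(8, 32)); z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
print(nt_xent_loss(z1, z2))
```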


DeepMind Releases New JAX Libraries for Neural Networks and Reinforcement Learning
DeepMind announces the release of Haiku and RLax, new JAX libraries designed for neural networks and reinforcement learning respectively. Haiku is a simple neural network library for JAX developed by some of the authors of Sonnet, and RLax (pronounced "relax") is a library built on top of JAX that exposes useful building blocks for implementing reinforcement learning (RL) agents.
(Synced)
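
For readers curious what the libraries look like in practice, here is a minimal sketch of Haiku's central pattern, hk.transform, which turns stateful module code into a pair of pure init/apply functions that compose with JAX transforms; the architecture and shapes are illustrative. RLax follows the same functional style, exposing RL losses and targets as pure functions.

```python
import haiku as hk
import jax
import jax.numpy as jnp

def forward(x):
    # Modules are instantiated inside the transformed function;
    # the layer sizes here are arbitrary, for illustration only.
    mlp = hk.nets.MLP([64, 10])
    return mlp(x)

# hk.transform converts the module code into pure functions:
# init creates the parameters, apply runs the forward pass.
net = hk.transform(forward)

rng = jax.random.PRNGKey(42)
x = jnp.ones([1, 28 * 28])
params = net.init(rng, x)
logits = net.apply(params, rng, x)
print(logits.shape)  # (1, 10)
```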


The Messy, Secretive Reality Behind OpenAI’s Bid to Save the World
An MIT Technology Review article on OpenAI alleges “a misalignment between what the company publicly espouses and how it operates behind closed doors… accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.”
(MIT Technology Review)


Weekly Update | AI Battles the Coronavirus
This Canadian Start-Up Used AI to Track Coronavirus and Raised Alarm Days Before the Outbreak
Artificial Intelligence Could Fight a Future Coronavirus
Can AI speed up a cure for coronavirus? This Hong Kong start-up opens its resources to global drug firms for free
Volunteer Drone Teams Organize for COVID-19 Disinfection

Technology

The 2010s: Our Decade of Deep Learning / Outlook on the 2020s
This post focuses on the past decade's most important developments and applications based on the work of Jürgen Schmidhuber's lab. It also mentions related work, and concludes with an outlook on the 2020s that addresses privacy and data markets.
(Jürgen Schmidhuber’s Blog)


Original Apollo 11 Guidance Computer (AGC) source code for the command and lunar modules
Original Apollo 11 guidance computer (AGC) source code for the Command Module (Comanche055) and Lunar Module (Luminary099), digitized by the folks at Virtual AGC and the MIT Museum. The repository's goal is to preserve the original Apollo 11 source code.
(GitHub)


The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence
Recent research in artificial intelligence and machine learning has largely emphasized general-purpose learning with ever-larger training sets and ever-greater compute. In contrast, Gary Marcus proposes a hybrid, knowledge-driven, reasoning-based approach, centered around cognitive models, that could provide the substrate for a richer, more robust AI than is currently possible.
(Gary Marcus)

You May Also Like

Introduction to Deep Learning for Graphs and Where It May Be Heading
Traditional flat or sequential data representations can't fully satisfy today's demanding deep learning models, and graphs are emerging as a solution. A new tutorial paper introduces what deep learning for graphs is and where it may be heading.
(Synced)
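
As a rough illustration of what "deep learning for graphs" means, here is a minimal NumPy sketch of a single message-passing layer, the basic operation most graph neural networks build on; the normalization choice and toy graph are illustrative, not a formulation taken from the tutorial paper.

```python
import numpy as np

def message_passing_layer(A, X, W):
    """Each node aggregates its neighbors' features (via the normalized
    adjacency matrix A), then applies a shared linear map W and a ReLU."""
    return np.maximum(A @ X @ W, 0.0)

# Toy graph: 4 nodes in a cycle, 3-dim features, 8 hidden units.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A = A / A.sum(axis=1, keepdims=True)   # simple row normalization
X = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 8))
H = message_passing_layer(A, X, W)
print(H.shape)  # (4, 8)
```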


Up Close and Personal With BERT — Google’s Epoch-Making Language Model
A recent Google Brain paper looks into Google’s hugely successful transformer network — BERT — and how it represents linguistic information internally. In this article, Synced will give a brief introduction to the BERT model before exploring the contributions of this paper.
(Synced)

Global AI Events

March 23-26: GPU Technology Conference (GTC) in San Jose, United States
Apr 26-30: ICLR | 2020 in Addis Ababa, Ethiopia
May 12-14: Google I/O 2020 in Mountain View, United States
May 19-21: Microsoft Build 2020 in Seattle, United States

Global AI Opportunities

Tesla Autopilot is Recruiting
Google is Hiring Research Scientist, Social Science
Waymo is Hiring 2020 Interns
Twitter is Hiring Engineering Manager, ML
Alan Turing Institute Safe and Ethical AI Research Fellow/Fellow
OpenAI Scholars Spring 2020
DeepMind Internship Program
NVIDIA Graduate Fellowships
DeepMind Scholarship: Access to Science
LANDING AI is Recruiting
Stanford HAI is Recruiting
OpenAI Seeking Software Engineers and Deep Learning Researchers


Stay tight with AI!
Subscribe to Synced Global AI Weekly
