Can we build machines able to learn and work seamlessly with humans? How do machines, humans and animals learn from each other, and can we improve on these processes and implement them in new domains? In the new paper Inductive Biases for Deep Learning of Higher-Level Cognition, Yoshua Bengio and Anirudh Goyal from Mila – Quebec AI Institute delve into human and non-human animal intelligence and how it can inform deep learning.

Bengio and Goyal propose that deep learning (DL) be extended qualitatively rather than simply by adding more data and computing resources. Based on the hypothesis that human and animal intelligence could be explained by a few principles rather than an encyclopedic list of heuristics, their paper explores how inductive biases may help bridge the huge gap between current DL and human cognitive abilities and bring DL closer to human-level AI.
The team notes that DL already incorporates several key inductive biases found in humans and non-human animals. They propose that augmenting these inductive biases — with a focus on those involving higher-level and sequential conscious processing — could advance DL from its current successes on in-distribution generalization in highly supervised learning tasks to stronger and more humanlike out-of-distribution generalization and transfer learning abilities.
Bengio and Goyal discuss inductive biases based on higher-level cognition and declarative knowledge of causal dependencies, along with the biological inspiration for and characterization of these higher-level abilities. They leverage the System 1 and System 2 dichotomy popularized by Daniel Kahneman in his book Thinking, Fast and Slow. In this classification, System 1 covers what current deep learning is already very good at: intuitive, fast, automatic processing anchored in sensory perception. System 2 covers processing that is rational, sequential, slow, logical, conscious, and expressible in language. The researchers argue that DL models that can perform System 2 tasks while taking advantage of the computational workhorse of System 1 abilities will be better equipped to deal with dynamic, changing conditions; in other words, they will learn to think and behave more like humans.
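To make the dichotomy a little more concrete, below is a minimal, illustrative NumPy sketch of this two-system view. It is not code from the paper; the function names, slot structure, and dimensions are hypothetical. A fast feedforward "System 1" pass encodes an input into a set of slot vectors in parallel, while a slower "System 2"-style loop sequentially attends over those slots with softmax attention and updates a small working-memory state.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# --- "System 1": fast, parallel, perception-like encoding ---
# A single feedforward transform maps the raw input to a set of slot vectors.
n_slots, d_in, d_slot = 4, 16, 8
W_enc = rng.normal(scale=0.1, size=(n_slots, d_slot, d_in))

def encode(x):
    # One batched matrix multiply: cheap, automatic, no sequential deliberation.
    return np.tanh(W_enc @ x)          # shape: (n_slots, d_slot)

# --- "System 2"-style step: slow, sequential, attention-based selection ---
# A query derived from a working-memory state attends over the slots and
# reads a small amount of information at each step.
W_q = rng.normal(scale=0.1, size=(d_slot, d_slot))

def deliberate(slots, state, n_steps=3):
    for _ in range(n_steps):
        query = W_q @ state
        scores = slots @ query / np.sqrt(d_slot)   # dot-product attention
        attn = softmax(scores)                     # soft weights over slots
        read = attn @ slots                        # selected content
        state = np.tanh(state + read)              # update working memory
    return state

x = rng.normal(size=d_in)
slots = encode(x)                                  # fast System-1 pass
answer = deliberate(slots, state=np.zeros(d_slot))
print(answer.shape)                                # (8,)
```

The point of the sketch is only the division of labour: the parallel encoding happens once, while the sequential, attention-driven steps operate on a few low-dimensional variables at a time.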
They wrap up the paper by identifying a number of open questions and paths for future DL research:
- Jointly learning a large-scale encoder and a large-scale causal model with high-level variables
- Unifying declarative knowledge representation and inference mechanisms with attention and modularity in a single architecture (see the illustrative sketch after this list)
- Innovation in neural architecture with low-level programming and hardware design requirements
- Inductive biases in novel planning methods
- Computation over modules and over data points
- Scaling to a large number of modules
- Macro and micro modules
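As a rough illustration of the "attention and modularity" and "computation over modules" directions above, here is a hypothetical NumPy sketch; it is not the paper's architecture, and the module pool, top-k routing, and gating scheme are assumptions made only for illustration. A set of small independent modules compete for each input via attention scores, and only the top-k winners perform computation and update their state, giving sparse, modular computation.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

n_modules, d, top_k = 6, 8, 2

# Each module has its own small parameters and its own state vector:
# independent pieces of knowledge rather than one monolithic network.
W_mod = rng.normal(scale=0.1, size=(n_modules, d, d))
W_key = rng.normal(scale=0.1, size=(n_modules, d, d))
states = np.zeros((n_modules, d))

def step(x, states):
    # Each module derives a key from its state and scores its relevance
    # to the current input with dot-product attention.
    keys = np.einsum('mij,mj->mi', W_key, states + 1.0)   # (n_modules, d)
    scores = keys @ x / np.sqrt(d)                         # (n_modules,)
    # Sparse selection: only the top-k most relevant modules are activated.
    active = np.argsort(scores)[-top_k:]
    weights = softmax(scores[active])
    new_states = states.copy()
    for w, m in zip(weights, active):
        # Only the selected modules do computation and update their state.
        new_states[m] = np.tanh(W_mod[m] @ x) * w + states[m] * (1 - w)
    return new_states, active

x = rng.normal(size=d)
states, active = step(x, states)
print("active modules:", sorted(active.tolist()))
```

Scaling such a scheme to many modules and deciding how coarse each module should be (the macro versus micro question) are exactly the kinds of open issues the list above raises.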
Bengio and Goyal note that the ideas presented on the use of inductive biases remain in the early stages of maturation, and much work needs to be done to improve understanding and to find appropriate ways to incorporate such priors in neural architectures and training frameworks.
The paper Inductive Biases for Deep Learning of Higher-Level Cognition is on arXiv.
Analyst: Robert Tian | Editor: Michael Sarazen; Fangyu Cai

Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors
This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle. Along with this report, we introduced a database covering an additional 1,428 artificial intelligence solutions across 12 pandemic scenarios.
