Montreal has become something of a magnet for AI. Ian Goodfellow, the research scientist who pioneered generative adversarial networks (GANs), earned his PhD in machine learning at the Université de Montréal; rising AI star Hugo Larochelle now leads Google Brain in Montreal; and last year the city hosted NeurIPS.
At the center of the Montreal AI scene is Dr. Yoshua Bengio, a Université de Montréal Professor and Head of the Montreal Institute for Learning Algorithms (MILA). Bengio was honored as a 2018 ACM Turing Award Laureate, sharing the “Nobel Prize of Computing” with two other essential AI figures — Dr. Geoffrey Hinton from Google and Dr. Yann LeCun from Facebook.
Last week hundreds of academics and industry professionals filled a downtown Montreal hotel for the RE·WORK Deep Learning Summit, where Bengio gave a talk on Deep Learning and Cognition.
Mechanisms for acquiring knowledge
Bengio started his hour-long keynote with a look at the current state of research in the field: “We’ve made huge progress, much more than even my friends and I expected a few years ago. But (the progress) is mostly about perception: things like computer vision, speech recognition and synthesis, and some things in natural language processing. We’re still far from human capabilities.”
Over the years, Bengio has repeatedly emphasized that certain principles of learning are responsible for creating intelligence, whether in machines, humans or other animals. “They can be described compactly, similarly to the laws of physics, i.e., our intelligence is not just the result of a huge bag of tricks and pieces of knowledge, but of general mechanisms to acquire knowledge.”
Bengio provided examples of machines’ superhuman performance, such as defeating top human players at Go, while stressing that these breakthroughs, though impressive, are not the final goal for researchers, and that something is missing in current approaches.
“With all the success we’ve had, some people think that we’re done, that we just need to scale things up: with larger datasets, bigger models and faster chips, we can build these machines with the kinds of algorithms we already have. I don’t think so; I think there are actually many pieces of the puzzle that are missing.

“Why do I think so? There are some fundamental ways in which I think we’re lacking. We can see that sample complexity is not good: the number of examples that a machine learning system needs to learn a new task is for now much higher, much worse, than what a human needs.”
In Bengio’s view, if we expect machines to someday solve problems as humans do, those machines will need to understand a lot more about the world, so that “when they’re presented with a new task and maybe just a few examples of it, they’ll be able to do as well as humans eventually.” Most of today’s state-of-the-art deep learning systems, however, still have humans playing a fundamental role: defining the high-level concepts the machines are supposed to know. Even something as basic as labeling images to build a dataset requires a human to decide whether an image shows a cat or a dog.
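The sample-complexity gap Bengio describes can be made concrete with a toy sketch (our illustration, not from the talk): a nearest-centroid classifier on synthetic one-dimensional data, evaluated after seeing a few versus many labeled examples per class.

```python
# Toy sample-complexity illustration: a nearest-centroid classifier
# trained on few vs. many labeled examples of two overlapping classes.
import random

random.seed(0)

def sample(label, n):
    # Class 0 is centered at 0.0 and class 1 at 1.0, with Gaussian noise.
    return [(random.gauss(float(label), 0.6), label) for _ in range(n)]

def train_centroids(data):
    # Estimate one centroid (the mean input) per class from labeled examples.
    points = {0: [], 1: []}
    for x, y in data:
        points[y].append(x)
    return {y: sum(xs) / len(xs) for y, xs in points.items()}

def accuracy(centroids, test):
    # Predict the class whose estimated centroid is nearest.
    hits = sum(
        min(centroids, key=lambda c: abs(x - centroids[c])) == y
        for x, y in test
    )
    return hits / len(test)

test_set = sample(0, 500) + sample(1, 500)

accs = {}
for n in (2, 10, 200):  # labeled training examples per class
    train = sample(0, n) + sample(1, n)
    accs[n] = accuracy(train_centroids(train), test_set)
    print(f"{n:3d} examples/class -> accuracy {accs[n]:.2f}")
```

With many examples the centroids settle near the true class centers and accuracy approaches the best achievable given the class overlap; with only two examples per class the estimates are noisy and the result varies with the random seed, mirroring the gap between machine and human sample efficiency.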
Bengio contrasts AI researchers’ struggle to find ways to help machines better understand the world with the almost effortless way that human babies figure out physics on their own, with no instruction from their parents on concepts such as gravity, pressure, solid objects, and so on.
Finding the missing pieces of the puzzle
So what is required for deep learning to reach human-level intelligence? Bengio suggests the missing pieces of the puzzle include:
- Generalize faster from fewer examples
- Generalize out-of-distribution: better transfer learning, domain adaptation, and reduced catastrophic forgetting in continual learning
- Additional compositionality from reasoning and consciousness
- Discover causal structures and exploit them
- Better models of the world, including common sense
- Exploit the agent perspective from RL, unsupervised exploration
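The out-of-distribution item on the list above can be illustrated with a minimal sketch (our example, not Bengio's): a linear model fit to a quadratic relation looks accurate across its training range but fails badly when queried far outside it.

```python
# Toy out-of-distribution illustration: a straight line fit to y = x^2
# is a fair approximation on the training range but extrapolates badly.

def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

xs = [i / 10 for i in range(21)]   # training inputs cover [0, 2]
ys = [x * x for x in xs]           # true relation: y = x^2
a, b = fit_line(xs, ys)

# Worst error inside the training distribution vs. at a far-away query.
in_dist_err = max(abs(a * x + b - x * x) for x in xs)
ood_err = abs(a * 5 + b - 5 * 5)   # x = 5 is far outside [0, 2]
print(f"max in-distribution error: {in_dist_err:.2f}")
print(f"error at x = 5:            {ood_err:.2f}")
```

The fitted line tracks the curve closely where it was trained, yet its prediction at x = 5 is off by more than the entire training-range spread, which is the kind of distribution-shift failure the bullet point targets.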
Bengio cited the “System 1 and System 2” dichotomy introduced by Daniel Kahneman in his book Thinking, Fast and Slow. System 1 refers to what current deep learning is already very good at: intuitive, fast, automatic processing anchored in sensory perception. System 2, meanwhile, covers what is rational, sequential, slow, logical, conscious, and expressible in language. Bengio suggested System 2 is where future deep learning needs to do better.
Asked if machines could or should one day develop empathy, Bengio said researchers are already considering that, as emotions are a major factor in human complexity. For example, a concern or a pain might influence a human’s consciousness, and so understanding emotions could lead researchers to a better understanding of consciousness.
Bengio wrapped up his talk by identifying three research directions for improving AI:
- Building a world model which meta-learns causal effects in an abstract space of causal variables, and is able to quickly adapt to changes in the world and generalize out-of-distribution by sparsely recombining modules
- Acting to acquire knowledge (exploratory behavior)
- Bridging the gap between System 1 and System 2
Journalist: Fangyu Cai | Editor: Michael Sarazen