When Bayesian cognitive scientist Josh Tenenbaum recently told a packed University of Toronto lecture hall that “intelligence is not just about pattern recognition,” the significance of the statement was not lost on the audience.
The university is the birthplace of the “Hinton school” of neural networks, where Geoffrey Hinton reshaped the neural network approach to AI by modeling the brain’s biological circuitry of neurons in artificial networks that excelled at pattern recognition, blazing a new path for the development of artificial intelligence.
Following an introduction by Hinton, Tenenbaum quipped about the old days, when Hinton “was willing to be called a cognitive scientist.” This drew laughter from the audience, considering the latter’s eminence in the field of computer science. As a principal AI investigator at MIT’s Center for Brains, Minds and Machines (CBMM), Department of Brain and Cognitive Sciences (BCS), and Computer Science and Artificial Intelligence Laboratory (CSAIL), Tenenbaum is widely respected for his interdisciplinary research in cognitive science and AI.
Breakthroughs in AI and deep learning have prompted neuroscientists such as Dan Yamins of the Stanford NeuroAILab to rethink the structure of the ventral stream, the visual pathway in the brain responsible for object recognition. This cross-pollination is hardly a surprise, considering that early publications on neural networks frequently appeared in journals such as Psychological Review, Cognitive Science, and Nature as the field progressed from the single-layer perceptrons of the 1950s to Kunihiko Fukushima’s neocognitron in the 1980s, to Yann LeCun’s widely used deep convolutional neural networks of today.
However, if humans hope to develop artificial general intelligence, data-munching, pattern-seeking deep neural networks may not be the best approach. Might Bayesian networks, causal models, and predictive coding work better? Or could a symbol-manipulation engine modeled after logic, lambda calculus, and programming languages be the route to pursue? Tenenbaum wants to steer research toward cognitive science and look for the answer there.
AI Needs a Common Sense Core Composed of Intuition
Human common sense involves an understanding of physical objects, intentional agents, and their interactions, which Tenenbaum believes can be explained through intuitive theories. This “abstract system of knowledge” is based on physics (e.g., forces, masses) and psychology (e.g., desires, beliefs, plans).
Such intuitions are present even in young infants, bridging perception, language, and action planning. A 2011 study by Ernő Téglás and colleagues modeled the physical reasoning capabilities of 12-month-olds, while Elizabeth Spelke pursued a similar research course in her work on the psychological inference capabilities of 10-month-olds.
How do we use computational models to reverse-engineer intuitive theories and build AI that learns according to these principles? Tenenbaum suggests tackling the problem with a new class of programming languages called probabilistic programs, which combine symbolic language, probabilistic inference, hierarchical inference (learning to learn), and neural networks.
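The core idea of a probabilistic program is that a model is written as ordinary generative code with random choices, and inference runs the program “backwards” to explain observations. A minimal sketch of this, using toy variables and simple rejection sampling rather than any of Tenenbaum’s actual systems (which use languages such as Church), might look like:

```python
import random

def flip(p=0.5):
    # A single stochastic choice, the basic primitive of a probabilistic program
    return random.random() < p

def model():
    # Generative model: latent causes produce an observable outcome
    rainy = flip(0.3)
    sprinkler = flip(0.1 if rainy else 0.4)
    grass_wet = flip(0.9) if (rainy or sprinkler) else flip(0.05)
    return rainy, grass_wet

def infer(condition, query, samples=100_000):
    # Rejection sampling: run the program many times and keep only the
    # runs consistent with the observation, then average the query
    kept = [query(s) for s in (model() for _ in range(samples)) if condition(s)]
    return sum(kept) / len(kept)

# P(rainy | grass is wet): conditioning inverts the generative direction
p = infer(condition=lambda s: s[1], query=lambda s: s[0])
```

Here the posterior probability of rain rises well above its 0.3 prior once wet grass is observed, illustrating how conditioning turns a forward simulator into an inference engine.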
Reverse-Engineering Intuitive Physics and Psychology
In collaboration with Tenenbaum’s group at MIT, DeepMind researcher Peter Battaglia is working on “a realistic model of physics that can estimate physical properties and predict probable futures,” to quote from the paper he co-authored with Tenenbaum, Computational Models of Intuitive Physics. Based on Bayesian inference, this model makes predictions in simulated 3D scenarios based on real-life statics, dynamics, forces, collisions, and friction.
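The “probable futures” idea can be illustrated in one dimension: run a simple physics simulator many times while sampling uncertain parameters, and read off a distribution over outcomes. This sketch is a toy illustration of the simulation-based approach, not Battaglia and Tenenbaum’s model; the sliding-block scenario and friction range are invented for the example.

```python
import random

def simulate(v0, mu, dt=0.01, g=9.8):
    # Deterministic physics: a block slides with friction until it stops
    x, v = 0.0, v0
    while v > 0:
        v -= mu * g * dt
        x += max(v, 0.0) * dt
    return x

def predict(v0, n=5000):
    # Monte Carlo over an uncertain friction coefficient: each run is one
    # "probable future"; together they form a predictive distribution
    stops = sorted(simulate(v0, mu=random.uniform(0.2, 0.6)) for _ in range(n))
    median = stops[n // 2]
    interval = (stops[int(0.05 * n)], stops[int(0.95 * n)])
    return median, interval

median, (lo, hi) = predict(v0=3.0)
```

The model does not output a single answer but a median stopping distance with a 90% interval, mirroring how probabilistic physics models express graded, human-like uncertainty about outcomes.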
Facebook AI proposed PhysNet in 2016, a neural network that predicted whether a stack of blocks would fall. The network excelled at predicting outcomes and estimating the falling trajectories of blocks, which it distinguished by color. Tenenbaum notes, however, that for predictions involving just two to four cubes, PhysNet required over 200,000 training scenarios.
In the joint research program Learning Physics from Dynamic Scenes, developed with Stanford’s Noah Goodman, Tenenbaum proposes a hierarchical Bayesian framework, working with probabilistic programs to model intuitive physical theories. The project trains the model to infer physics scenarios over varying time periods, with different physical laws and properties at play. Inferred properties include mass, charge, friction, elasticity, and resistance.
In order to model intuitive psychology, researchers use a framework in which an agent’s desires and its beliefs about the environment drive planning and then action. Yale’s Julian Jara-Ettinger and MIT’s Laura Schulz call this the “naive utility calculus.” A 2017 paper in Nature Human Behaviour co-authored by Jara-Ettinger and Tenenbaum proposes a Bayesian theory of mind (BToM) model that infers an actor’s beliefs, desires, and percepts from how they move through their local environment.
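The naive utility calculus treats an agent as approximately maximizing reward minus the cost of acting; an observer can then invert that model to infer desires from choices. A minimal inverse-planning sketch under that assumption, with invented goal distances, reward hypotheses, and a softmax rationality parameter (none of which come from the paper):

```python
import math

def utility(reward, distance, cost_per_step=0.3):
    # Naive utility calculus: utility = reward of a goal minus cost to reach it
    return reward - cost_per_step * distance

def act(rewards, distances, beta=3.0):
    # A noisily rational agent: softmax choice probabilities over utilities
    exps = {g: math.exp(beta * utility(rewards[g], distances[g]))
            for g in rewards}
    z = sum(exps.values())
    return {g: e / z for g, e in exps.items()}

def infer_preference(chosen, distances, candidate_rewards):
    # Inverse planning: which reward assignment best explains the choice?
    # Uniform prior; the likelihood is the agent's choice probability.
    scores = {tuple(r.items()): act(r, distances)[chosen]
              for r in candidate_rewards}
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}

distances = {"near": 1, "far": 6}
# The agent walked to the FAR object despite the extra cost of getting there
hyps = [{"near": 1.0, "far": 1.0}, {"near": 1.0, "far": 3.0}]
post = infer_preference("far", distances, hyps)
```

The posterior strongly favors the hypothesis that the far object carries the higher reward: choosing a costlier action is evidence of a stronger desire, which is exactly the inference the BToM model formalizes.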
Building Robots that Understand
Tenenbaum lectures at the nexus of AI, cognitive science, and neuroscience. As he notes in the 2017 paper Building Machines That Learn and Think Like People, co-authored with Brenden Lake, Tomer Ullman and Sam Gershman, the most immediate tasks for AI are to “(a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations.”
Research in intuitive physics and psychology is especially promising for the field of robotics. A robot that knows intuitive physics can navigate its environment and perform nuanced actions such as carrying a cup of coffee or grasping a party balloon. Meanwhile, a robot with a grasp of intuitive psychology could observe, for example, a child pointing at cotton candy while crying as its parent shakes their head, “no,” and correctly infer both humans’ intentions.
Tenenbaum is aiming for an AI that more completely understands the physical and psychological landscapes it will exist in. Such machines may also deepen our own understanding of intelligence.
Journalist: Meghan Han | Editor: Michael Sarazen