Facebook AI chief Yann LeCun gave the keynote speech “Learning World Models: the Next Step towards AI” today at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, Sweden.
The high-profile AI researcher’s talk included a development timeline for artificial intelligence, machine learning, and artificial neural networks, in which he identified a number of milestones and examples of AI applications.
Although most of today’s AI research uses deep learning, LeCun suggested the technology struggles with inference, and so a future trend will be combining deep learning with reasoning. LeCun also emphasized the importance of memory: memory modules in neural network dialog models, for example, have improved prediction in contextual, multi-turn conversation.
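The talk did not detail a specific architecture, but the memory modules mentioned above typically work by attention: the model stores embeddings of past dialog turns and retrieves a weighted blend of them when producing a response. A minimal sketch of such an attention-based memory read, with all names and shapes hypothetical:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(query, memory_keys, memory_values):
    """Attention-based read over a memory of past utterance embeddings.

    query:         (d,)   embedding of the current utterance
    memory_keys:   (n, d) embeddings of stored dialog context
    memory_values: (n, d) content vectors blended into the response state
    """
    scores = memory_keys @ query      # similarity of query to each memory slot
    weights = softmax(scores)         # soft addressing over the memory
    return weights @ memory_values    # weighted sum: the retrieved context

# Toy usage: 3 stored turns, 4-dimensional embeddings.
rng = np.random.default_rng(0)
keys = rng.normal(size=(3, 4))
vals = rng.normal(size=(3, 4))
query = rng.normal(size=4)
context = memory_read(query, keys, vals)
print(context.shape)  # (4,)
```

In a real dialog model the retrieved `context` vector would be fed into the decoder alongside the current utterance, letting the response condition on earlier turns.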
LeCun proposed that current AI systems are weak in inference, common sense, and understanding of task context because they lack a “World Model.” Hence, along with improving memory, researchers must also build up representations of the real world so AI can better understand its environment and act more intelligently.
LeCun said he agrees with current approaches that apply narrow AI to autonomous vehicles, medical image processing, translation, and similar tasks. However, such AI remains incapable of making real deductions, working as a true smart assistant, or ultimately achieving the holy grail of artificial general intelligence (AGI).
Looking forward, LeCun highlighted self-supervised learning as a potential solution to problems in reinforcement learning, since it treats both input and output as parts of a complete system, making it effective for tasks such as image completion, image-to-image translation, and time-series prediction. While model complexity increases with the addition of feedback information, self-supervised learning significantly reduces human involvement in the process; in other words, it increases automation.
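The core idea behind self-supervised learning is that the training labels come from the data itself rather than from human annotators. A minimal sketch of this for time-series prediction, using a simple least-squares predictor (the setup and names are illustrative, not from the talk):

```python
import numpy as np

def make_self_supervised_pairs(series, window=3):
    """Build (input, target) pairs from the data itself: predict x[t]
    from the preceding `window` values. No human labeling is needed."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

# Toy series: a noiseless linear ramp, perfectly predictable from its past.
series = np.arange(20, dtype=float)
X, y = make_self_supervised_pairs(series)

# Fit a linear predictor by least squares (bias term appended).
Xb = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Predict the value following [17, 18, 19].
pred = float(np.array([17.0, 18.0, 19.0, 1.0]) @ w)
print(round(pred, 3))  # 20.0
```

The same recipe scales up: masking a patch of an image and training a network to fill it in (image completion) is self-supervised in exactly this sense, with the hidden pixels serving as the free labels.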
After speaking on the historic interrelationship between science and technology, LeCun concluded his keynote by presenting a series of open questions:
- What is the equivalent of thermodynamics for intelligence?
- Are there underlying principles behind artificial and natural intelligence?
- Are there simple principles behind learning?
- Or is the brain a large collection of “hacks” produced by evolution?
Author: Victor Lu | Editor: Tony Peng, Michael Sarazen