Held May 23-25, C2 Montréal 2017 attracted over 6,000 industry participants from 50 countries. Now in its 6th year, organizers of the “immersive event” partnered with local startup Element AI to host Canada’s first Artificial Intelligence Forum and promote Montréal as a global AI hub. The keynote speakers were Professor Yoshua Bengio of the Montreal Institute for Learning Algorithms (MILA); Naveen Rao, VP and GM from Intel; Jean-Francois Gagne, Co-Founder and CEO of Element AI; and Google’s Hugo Larochelle and Blaise Aguera y Arcas. C2 provides a sneak peek of what’s to come when industry meets research, talents meet resources, and tech disruption meets creation.
Professor Bengio delivered a Fundamentals of AI masterclass to a cross-industry audience. He spoke on the fundamentals of machine learning in the bigger context of artificial intelligence. The talk was a general overview, but it provided a structured glimpse into the what, how, and why of AI from Professor Bengio, who is a firm believer in making the technology accessible.
The Evolution of AI
How do you make a computer intelligent and able to make the right decisions in different contexts? During the 1950s, 60s, and 70s, scientists attempted to accomplish this by feeding the computer knowledge in the form of books, equations, and formulas. Unfortunately, that didn’t work. Then came neural networks in the 80s.
Much of the knowledge we have is things we understand but can’t explain in words. For example, the way Deep Blue beat the world chess champion two decades ago is very different from how AlphaGo beat the world Go champion this year. When human players look at a Go board, their neural net translates from image to intuition. Deep learning enabled AlphaGo to replicate this kind of intuitive thinking, revolutionizing the process and bringing us to where we are today.
Deep Learning and Machine Learning
Deep learning is a particular approach to machine learning, and machine learning is one approach to AI.
Deep learning is inspired by what we know about the brain. It focuses on how information is represented, allowing the computer to figure out the representations for the task at hand. Neural networks can learn multiple levels of representations. That’s where the “deep” comes from: the deep stack of representations that are computed. It is very different from traditional AI, which focused on symbolic representations. Deep learning is non-symbolic, a composition of multiple layers of representations.
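The “deep stack of representations” can be illustrated with a minimal sketch: each layer takes the previous layer’s output and computes a new, more abstract representation of it. All the weights and numbers below are invented for illustration; this is not code from the talk.

```python
def relu(values):
    """Simple nonlinearity: negative activations are clipped to zero."""
    return [max(0.0, v) for v in values]

def layer(inputs, weights, biases):
    """One layer: a weighted sum per neuron, plus bias, then ReLU."""
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

x = [0.5, -1.0, 2.0]                       # raw input features
w1 = [[0.2, -0.4, 0.1], [0.7, 0.3, -0.5]]  # layer-1 connection weights (toy)
b1 = [0.0, 0.1]
w2 = [[1.0, -1.0]]                         # layer-2 weights, one neuron
b2 = [0.05]

h1 = layer(x, w1, b1)   # first-level representation of the input
h2 = layer(h1, w2, b2)  # second, more abstract representation built on h1
print(h1, h2)
```

Stacking more such layers is what makes the network “deep”: each layer re-represents the data in terms of the features the layer below has already extracted.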
The initial breakthroughs took place a few years ago, when the computer became capable of looking at an image of, for example, a woman in a park, and producing a description of that scene in English. It was surprising that we could train neural networks this way, and generate natural language sentences that made sense. It doesn’t always work perfectly, but it’s amazing how far researchers have come with deep learning. The computer can also focus attention on one particular object in the image — an idea inspired by the brain — and produce one word at a time before generating the whole sentence.
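The attention idea can be sketched in miniature: the model scores each image region for relevance to the word it is about to generate, turns the scores into weights that sum to one, and feeds the weighted average of region features to the word generator. The numbers and one-value-per-region simplification below are assumptions for illustration, not the actual captioning model.

```python
import math

def softmax(scores):
    """Convert raw relevance scores into attention weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Four image regions, each summarized here by a single toy feature value.
region_features = [0.9, 0.1, 0.4, 0.2]
# Relevance scores for the word currently being generated; region 0
# (say, the woman in the park) scores highest.
scores = [2.0, 0.1, 0.5, 0.1]

weights = softmax(scores)  # attention weights, one per region
context = sum(w * f for w, f in zip(weights, region_features))
print(weights, context)
```

Because the weights concentrate on the most relevant region, the context value is pulled toward that region’s features; repeating this per word is what lets the model “look” at different parts of the image as it writes the sentence.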
Data is Important: We can’t communicate with the computer directly; it has to learn from data. Data is a bunch of examples that tell the computer how the world is organized. Deep nets are trained on incredibly large quantities of data, thousands of times more than any human would encounter in their lifetime.
Data Needs Centralization: A lot of the medical data used for deep learning is spread across different hospitals and different organizations. In the coming years we will have more data centralization.
Learning Like a Child Learns
Computer scientists appreciate the concept of composition: building something complicated is an additive process. This idea of combining the right pieces together to find solutions is vital.
Six-year-olds don’t start studying math through differential equations; rather, they first learn simple arithmetic. One can apply the same concept to computers — begin by training the simpler aspects of the task and gradually accumulate more capabilities. The process evolves in a layered manner: abstract artificial neurons are combined to work together from the bottom up. An image is recognized by the activation of particular nodes, and the next layer of neurons transforms the same information in a more abstract way. Learnable parameters, the connection weights, govern how each layer’s neurons combine the outputs of the layer below.
In deep learning, a single layer can represent an exponentially large number of patterns through combinations of its neurons. There is also sequential composition: each layer builds upon the transformations performed by the layers before it.
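The simple-to-complex training idea sketched above — the machine analogue of learning arithmetic before calculus — is known as curriculum learning. Below is a minimal sketch of the scheduling logic only; the dataset, difficulty measure, and placeholder `train_step` are all invented for illustration.

```python
def difficulty(example):
    """Toy difficulty measure: here, just the magnitude of the target."""
    _, target = example
    return abs(target)

def train_step(model, batch):
    """Placeholder update: real training would adjust weights; here we
    only record which examples the model has been shown."""
    model["seen"].extend(batch)
    return model

examples = [(x, x * x) for x in range(-5, 6)]  # toy dataset: learn y = x^2
ordered = sorted(examples, key=difficulty)     # easiest examples first

model = {"seen": []}
stages = 3
for stage in range(1, stages + 1):
    # Each stage unlocks a larger, harder slice of the curriculum.
    cutoff = len(ordered) * stage // stages
    model = train_step(model, ordered[:cutoff])

print(len(model["seen"]))
```

The design choice here is the staged cutoff: early stages revisit only easy examples, and harder ones are introduced once the simpler structure has (in a real system) been learned.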
Three Main Areas of Application
“We are very far from human level intelligence,” said Professor Bengio. He stressed however that even if AI research were to stop today, previous discoveries would still provide benefits for another decade. AI has tremendous potential, but moving forward requires data, engineering, capital, and sound social applications. Professor Bengio sees three immediate applications: first, natural language processing and generation, including personal assistance, customer service, call centres, chat bots, speech, legal assistance, and education; second, healthcare; and third, industrial applications such as robotics and transportation.
Some Applications Need to Be Banned
Professor Bengio believes it’s fairly certain AI will bring dramatic economic value, saving billions, even trillions in different industries thanks to automation. But with perhaps half of our work being done by computers, many people will lose their jobs.
And so — perhaps more than any previous technology in human history — AI will require strict and strong regulation. Professor Bengio spoke of the need to reflect upon the real-time impact of automation, and implement job retraining systems; and to also seriously consider the consequences of AI deployment in weaponry and political advertising. He recently signed an open letter calling for a ban on autonomous weapons, joining 3,105 other AI/robotics researchers including Stuart Russell, Demis Hassabis, Yann LeCun, Peter Norvig, Geoffrey Hinton, and key industry influencers such as Stephen Hawking and Elon Musk.
AI needs to be humane because it will introduce new dynamics and equilibriums into society — with effects that are simply too fundamental and far-reaching to be ignored. Professor Bengio sees the task of regulating AI as a “collective effort” that will involve researchers, industry and ultimately society.
Journalist: Meghan Han | Editor: Michael Sarazen