One of the biggest risks involved with AI is that “it tends to concentrate power in the hands of fewer individuals,” warns Andrew Ng, one of AI’s most prominent figures. Ng made the remark in his closing keynote speech at Intel’s two-day AI Developer Conference in San Francisco last week.
The founder of Deeplearning.ai and AI Fund stressed that every technology has pros and cons, and AI is no exception. While Ng has repeatedly praised AI as “the superpower” or “the new electricity,” he pointed out ways that AI and its data biases can be misused.
“For example, a smaller group of people than ever before can influence the way a very large number of people vote, and this has implications for democracy,” said Ng, a comment that echoed the Facebook–Cambridge Analytica data scandal.
Ng explained that the rise of big data and machine learning has made centralized decision-making more efficient than ever, especially for corporations and states with access to large-scale datasets. He gave the example of Singapore’s health system, which he characterized as a big centralized database that allows the nation to make certain types of centralized decisions with greater efficacy.
However, Ng warned that such AI-driven centralized decisions also risk reflecting gender, racial, and other biases.
The fledgling AI technology still struggles to perform tasks free of the influence of biased data, which can embarrass an AI solution’s creators. Google’s first-generation image-recognition AI, for example, mislabeled images of African-Americans as gorillas, and a predictive policing algorithm unfairly targeted certain neighborhoods.
Ng called on the AI community to be aware of the tech’s potential societal impact.
“Even as all of you go and build these amazing products and systems that I think will change a lot of people’s lives and help a lot of people, I hope that it will also be up to us, the engineers doing this work, to make sure that we ameliorate any downsides, or at least do our best to solve any problems that we end up playing a role or participating in,” said Ng.
Check out the YouTube video of Ng’s closing keynote speech below. Synced has selected several other highlights you might be interested in.
19:22 — “A lot of the skill in AI strategy today is knowing where to apply supervised learning, but also have a clear-eyed view of its limitations so you know where to fit it into a broader business context.”
Ng said that even though supervised learning accounts for 99 percent of the economic value created by AI today, it is still very limited. A company should thus deploy the appropriate AI technique for each particular problem, while also understanding the tech’s limitations.
23:35 — Ng listed four major AI techniques in descending order — supervised learning, transfer learning, unsupervised learning and reinforcement learning — and said, “It turns out that as you go down this list, I think the value created today drops very rapidly.”
Interestingly, although reinforcement learning produced the epoch-making AlphaGo and has enabled computers to beat professional human players at games, it remains under-implemented when it comes to creating economic value.
36:32 — “If you have the biggest data asset in the world of pictures of heads of cabbages in the dirt, even the large web search engines or the large social media companies or other Internet companies may not have this data asset. This actually makes your business increasingly defensible.”
Ng was referring to the AI startup Blue River, which was sold to John Deere for US$305 million last year. The company began as a class project in one of Ng’s courses at Stanford University and grew into a business that uses machine learning to manage crop-spraying equipment systems.
Journalist: Tony Peng | Editor: Michael Sarazen