
Why AI Must Be Biased, and How We Can Respond

Joanna J. Bryson, Professor at the University of Bath, gave a talk at the Machine Intelligence Summit in New York in 2016.

http://videos.re-work.co/videos/208

 

Introduction:

Like physics and biology, computation is a natural process with natural laws. We are making radical progress in artificial intelligence because we have learned how to exploit machine learning to capture existing computational outputs developed and transmitted by humans through human culture. Unfortunately, this powerful strategy undermines the assumption that machine intelligence, being derived from mathematics, would be pure and neutral, providing a fairness beyond what is present in human society. In learning the set of biases that constitute a word's meaning, AI also learns patterns rooted in our unfair history. Addressing such prejudice requires domain-specific interventions.

Summary:

Are deep learning and artificial intelligence magic? They seem able to do everything. But no: no learning is magic. Learning is computation, a process that takes time, space, and energy, and it took a very long time to get to where deep learning and AI are today.


“We're the ones with the laptops because we're the ones with language, because we transmit and retain more (and more useful) information than any other species.” — Bryson


 

Artificial intelligence and natural intelligence are continuous with each other. No neutral magic fairy of mathematical purity (e.g., a robot) will fix the prejudice problem.

Is raising a child different from building a robot? Yes: as human children grow up in our society, they acquire biases in a natural way. But AI is not human, nor even a moral subject. We build robots and other AI systems and determine those systems' goals; our complete authorship gives us fundamentally different responsibilities than we have toward other evolved systems. AI must be biased because the knowledge we use in the training process contains traces of our history, including our prejudices.
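The claim that word meanings carry measurable biases can be illustrated with a small sketch. The paper linked below (Caliskan et al.) measures bias as the difference in cosine similarity between a target word and two attribute words in an embedding space. The three-dimensional vectors here are hypothetical toy values for illustration only; real experiments use embeddings learned from large corpora.

```python
import math

# Toy "embeddings" (hypothetical values chosen for illustration;
# real systems learn such vectors from billions of words of text).
vectors = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [0.1, 0.9, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word, attr_a, attr_b):
    """How much closer `word` sits to attribute A than to attribute B
    in the embedding space (positive means closer to A)."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

# "flower" associates with "pleasant" and "insect" with "unpleasant":
# a benign bias, but the same arithmetic exposes prejudiced ones too.
print(association("flower", "pleasant", "unpleasant"))  # positive
print(association("insect", "pleasant", "unpleasant"))  # negative
```

The point of the sketch is that bias here is not a bug in the mathematics: the same similarity arithmetic that makes embeddings useful for capturing meaning also surfaces whatever associations, fair or unfair, the training culture contained.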

Related paper:

Semantics derived automatically from language corpora necessarily contain human biases:
https://arxiv.org/pdf/1608.07187v2.pdf

Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms
http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/

 


Analyst: Yuting Gui | Editor: Joni Zhong | Localized by Synced Global Team: Xiang Chen
