Research Talk Review

Why AI Must Be Biased, and How We Can Respond

AI must be biased because the knowledge we used in the training process contains traces of our history, including our prejudices.

Joanna J. Bryson, Professor at the University of Bath, gave this talk at the Machine Intelligence Summit in New York in 2016.

http://videos.re-work.co/videos/208

 

Introduction:

Like physics and biology, computation is a natural process governed by natural laws. We are making radical progress in artificial intelligence because we have learned how to exploit machine learning to capture existing computational outputs developed by humans and transmitted through human culture. Unfortunately, this powerful strategy undermines the assumption that machine intelligence, deriving from mathematics, would be pure and neutral, providing a fairness beyond what is present in human society. In learning the set of biases that constitute a word’s meaning, AI also learns patterns rooted in our unfair history. Addressing such prejudice requires domain-specific interventions.

Summary:

Are deep learning and artificial intelligence magic? They seem able to do everything, but no learning is magic: computation is a process that takes time, space, and energy, and it took a very long time to get to where DL/AI are today.

  • Intelligence is about doing the right thing at the right time in a dynamic environment.
  • Intelligence requires three things (improving any of them is a win, and machine learning algorithms help improve all three):
    • A set of contexts that can be perceived.
    • A set of actions that can be performed.
    • Associations between perceived contexts and actions.
  • Intelligence cannot be achieved with a single action:
    • Many actions are mutually exclusive, so an intelligent agent must be able to sequence them.
    • Actions are mutually exclusive because they compete for limited resources such as physical location, visual attention, and perceptual memory. For example, a person cannot be in a subway car and a classroom at the same time (a constraint of physical location), and cannot attend to a laptop and a phone screen at the same time (a constraint of visual attention).
    • Brains have different regions for action, perception, and action selection. Different regions have different cells, cell connections, and computational architectures.
  • Intelligence requires searching for sequences of actions in a very large space:
    • The fundamental problem of search is combinatorics: the number of possible action sequences grows exponentially with their length (see the sketch after this list).
    • Much of this search has already been done by humans over the past 60,000 years and stored in culture (for example, in language, where a specific word names a specific action).
    • Artificial intelligence (ML, DL, RL) learns well because it exploits existing knowledge accumulated through previous exploration, just as human culture and natural intelligence do.
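
The point about combinatorics can be made concrete with a few lines of code. This is a minimal sketch, not material from the talk: the action counts, contexts, and the `policy` dictionary below are hypothetical illustrations.

```python
# Minimal sketch (not from the talk): the combinatorics of action search,
# and how a learned context-to-action association sidesteps it.

def num_sequences(num_actions: int, sequence_length: int) -> int:
    """Count the action sequences available with a fixed branching factor."""
    return num_actions ** sequence_length

for length in (1, 5, 10, 20):
    print(f"{length:>2}-step plans from 10 actions: {num_sequences(10, length):,}")
# Even with only 10 actions, a 20-step plan has 10**20 candidate sequences,
# far too many to evaluate exhaustively.

# Intelligence therefore leans on stored associations: a perceived context
# indexes straight into an action instead of triggering a blind search.
policy = {
    "traffic light is red": "brake",
    "traffic light is green": "accelerate",
    "pedestrian ahead": "brake",
}
print(policy.get("traffic light is red", "fall back to planning/search"))
```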

“We're the ones with the laptops because we're the ones with language, because we transmit and retain more (and more useful) information than any other species.” — Bryson


  • The more you communicate, the more you can act collaboratively and intelligently.
    • Language itself is subject to evolution by selection.
    • Words label concepts and act as fulcrums for thought. When children hear a word, they look around and try to work out what it refers to. By using a word, we signal that there is an interesting idea, a context worth discriminating, and an action worth performing.
    • This is unsupervised learning; no one really chooses their words that carefully (a toy illustration follows this list).
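
The following sketch, not from the talk, illustrates in miniature how word associations can be picked up without supervision: it counts which words co-occur in the same sentence of a tiny hypothetical corpus and compares the resulting vectors. The sentences and word choices are made up for illustration.

```python
# Minimal sketch (not from the talk): unsupervised word associations learned
# from co-occurrence counts in a toy, hand-written corpus.
from collections import Counter, defaultdict
from itertools import combinations
import math

corpus = [
    "the nurse helped the patient in the hospital",
    "the doctor helped the patient in the hospital",
    "the engineer fixed the machine in the lab",
    "the programmer fixed the machine in the lab",
]

# Count how often each pair of words appears in the same sentence.
cooc = defaultdict(Counter)
for sentence in corpus:
    for a, b in combinations(set(sentence.split()), 2):
        cooc[a][b] += 1
        cooc[b][a] += 1

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse co-occurrence vectors."""
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Words used in similar contexts end up with similar vectors -- along with
# whatever regularities (or prejudices) the corpus happens to contain.
print(cosine(cooc["nurse"], cooc["doctor"]))      # high: shared contexts
print(cosine(cooc["nurse"], cooc["programmer"]))  # lower: different contexts
```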

 

  • Recent research shows that not just knowledge but also prejudices can be mined from language (a toy sketch of how such associations are measured follows this list):
    • Expecting women to be interested in the arts, and men to be interested in science.
    • Expecting women to be interested in home, and men to be interested in career.
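
In the spirit of the related paper listed below (which measures such effects with a word-embedding association test), here is a minimal sketch of the arithmetic involved. The 2-d vectors are toy, hand-made numbers rather than real embeddings, and the word lists are illustrative only.

```python
# Minimal sketch: compare how strongly gendered words associate with
# home/family words versus career words. Toy vectors, not real embeddings.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings; real tests load vectors trained on large corpora.
emb = {
    "she": (0.9, 0.1), "woman": (0.8, 0.2),
    "he": (0.1, 0.9), "man": (0.2, 0.8),
    "home": (0.85, 0.15), "family": (0.75, 0.25),
    "career": (0.15, 0.85), "office": (0.25, 0.75),
}

def mean_assoc(word, attributes):
    """Average similarity between one word and a set of attribute words."""
    return sum(cosine(emb[word], emb[a]) for a in attributes) / len(attributes)

home_words, career_words = ["home", "family"], ["career", "office"]
for w in ("she", "woman", "he", "man"):
    diff = mean_assoc(w, home_words) - mean_assoc(w, career_words)
    print(f"{w}: home-vs-career association = {diff:+.2f}")
# Positive values mean the word sits closer to home/family words; run on real
# embeddings, the same arithmetic surfaces the stereotypes listed above.
```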

Artificial intelligence and natural intelligence are continuous with each other. No neutral magic fairy of mathematical purity (e.g., a robot) will fix the prejudice problem.

Is raising a child different from building a robot? Yes: as human children grow up in our society, they acquire biases in a natural way. But AI is not human, nor even a moral subject. We build robots and other AI systems and determine their goals. Our complete authorship gives us fundamentally different responsibilities than we have toward other evolved systems. AI must be biased because the knowledge we used in the training process contains traces of our history, including our prejudices.

Related reading:

Semantics derived automatically from language corpora necessarily contain human biases:
https://arxiv.org/pdf/1608.07187v2.pdf

Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms
http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/

 


Analyst: Yuting Gui | Editor: Joni Zhong | Localized by Synced Global Team: Xiang Chen
