2017 in Review: 10 AI Failures

This year, the artificial intelligence programs AlphaGo and Libratus triumphed over the world’s best human players in Go and poker, respectively. While these milestones showed how far AI has come in recent years, many remain skeptical about the emerging technology’s overall maturity, especially in light of the number of AI gaffes over the last 12 months.

At Synced we are naturally fans of machine intelligence, but we also realize some new techniques struggle to perform their tasks effectively, often blundering in ways that humans would not. Here are our picks of noteworthy AI fails of 2017.

Face ID cracked by a mask


Face ID, the facial recognition system that unlocks the new iPhone X, was heralded as the most secure AI-powered authentication method ever, with Apple boasting that the chances of it being fooled were one in a million. But then Vietnamese security firm Bkav cracked it using a US$150 mask constructed of 3D-printed plastic, silicone, makeup, and paper cutouts. Bkav simply scanned a test subject’s face, used a 3D printer to generate a face model, and affixed paper-cut eyes and mouth and a silicone nose. The crack sent shockwaves through the industry, raising the stakes on consumer device privacy and, more broadly, on AI-powered security.

Neighbors call the police on Amazon Echo

The popular Amazon Echo is regarded as among the more robust smart speakers, but nothing’s perfect. A German man’s Echo was accidentally activated while he was not at home and began blaring music after midnight, waking the neighbors. They called the police, who had to break down the front door to shut off the offending speaker. The police also replaced the door lock, so when the man returned he discovered his key no longer worked.

Facebook chatbots shut down

This July, it was widely reported that two Facebook chatbots had been shut down after communicating with each other in an unrecognizable language. Rumors of a secret superintelligent language flooded discussion boards until Facebook explained that the cryptic exchanges had merely resulted from a coding oversight: nothing in the training rewarded the bots for sticking to intelligible English.

Las Vegas self-driving bus crashes on day one

A self-driving bus made its debut this November in Las Vegas with plenty of fanfare; resident magicians Penn & Teller were among the celebrities queued for a ride. Within just two hours, however, the bus was involved in a crash with a delivery truck. While the bus was technically not responsible for the accident (the delivery truck’s driver was cited by police), passengers complained that the smart bus was not intelligent enough to move out of harm’s way as the truck slowly approached.

Google Allo responds to a gun emoji with a turban emoji

A CNN staff member using Google Allo received a suggested emoji of a person wearing a turban in response to a message that included a pistol emoji. An embarrassed Google assured the public that it had addressed the problem and issued an apology.

HSBC voice ID fooled by twin

HSBC’s voice recognition ID is an AI-powered security system that allows users to access their accounts with voice commands. Although the bank claims it is as secure as fingerprint ID, a BBC reporter’s twin brother was able to access the reporter’s account by mimicking his voice. The experiment took seven attempts. HSBC’s immediate fix was to establish an account-lockout threshold of three unsuccessful attempts.
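
HSBC has not published implementation details, so the sketch below is only an illustration of what a three-strike lockout policy might look like; the class name, method, and return values are hypothetical.

```python
# Minimal sketch of a three-strike lockout policy (all names hypothetical;
# HSBC's actual implementation is not public).
class VoiceIDGate:
    MAX_FAILURES = 3  # the reported new lockout threshold

    def __init__(self):
        self.failures = {}  # account id -> consecutive failed voice matches

    def attempt(self, account: str, voice_matches: bool) -> str:
        if self.failures.get(account, 0) >= self.MAX_FAILURES:
            return "locked"  # force a fallback such as in-branch verification
        if voice_matches:
            self.failures[account] = 0  # reset the counter on success
            return "granted"
        self.failures[account] = self.failures.get(account, 0) + 1
        return "denied"
```

Under a policy like this, an impersonation that needs seven tries would hit the lockout long before it succeeded.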

Google AI looks at rifles and sees helicopters


By slightly tweaking a photo of rifles, an MIT research team fooled the Google Cloud Vision API into identifying them as helicopters. The trick, known as adversarial examples, causes computers to misclassify images by introducing modifications that are undetectable to the human eye. In the past, adversarial examples only worked if attackers knew the underlying mechanics of the target system. The MIT team took a step forward by triggering misclassification without access to such system information, a so-called black-box attack.
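
The MIT team’s black-box method is more involved, but the core idea of an adversarial perturbation is easiest to see in the classic white-box setting. Below is a minimal PyTorch sketch of the Fast Gradient Sign Method (FGSM); the input file, the epsilon budget, and the simplified preprocessing are illustrative assumptions, not the researchers’ actual setup.

```python
# White-box FGSM sketch: nudge each pixel in the direction that
# increases the classifier's loss on its own predicted label.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(pretrained=True).eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
img = preprocess(Image.open("rifle.jpg")).unsqueeze(0)  # hypothetical photo
img.requires_grad_(True)

logits = model(img)
label = logits.argmax(dim=1)              # the model's current prediction
loss = F.cross_entropy(logits, label)
loss.backward()

epsilon = 0.007                            # small enough to be invisible
adversarial = (img + epsilon * img.grad.sign()).clamp(0, 1)

print(model(adversarial).argmax(dim=1))    # often no longer the original label
```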

Street sign hack fools self-driving cars

Researchers discovered that discreet applications of paint or tape to stop signs could trick the machine learning systems of self-driving cars into misclassifying the signs, a physical-world cousin of the adversarial examples described above. A stop sign modified with the words “love” and “hate” was misclassified as a “Speed Limit 45” sign in 100 percent of test cases.

AI imagines a Bank Butt sunset


Machine learning researcher Janelle Shane trained a neural network to generate new paint colors along with names to match each color. The colors may have been pleasant, but the names were hilarious: even after several iterations of training on color-name data, the model still labeled a sky blue “Gray Pubic” and a dark green “Stoomy Brown.”
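
Shane’s exact setup isn’t reproduced here, but the underlying idea, a character-level recurrent network that spells out a name one letter at a time while conditioned on an RGB value, can be sketched as follows. The alphabet, layer sizes, and start-token scheme are all assumptions for illustration.

```python
# Character-level LSTM that generates a color name conditioned on RGB.
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz "  # assumed alphabet; index len(CHARS) = start token

class ColorNamer(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rgb_to_h = nn.Linear(3, hidden)           # seed the hidden state with the color
        self.embed = nn.Embedding(len(CHARS) + 1, 32)  # +1 for the start token
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.out = nn.Linear(hidden, len(CHARS))

    def forward(self, rgb, chars):
        h0 = torch.tanh(self.rgb_to_h(rgb)).unsqueeze(0)
        x, _ = self.lstm(self.embed(chars), (h0, torch.zeros_like(h0)))
        return self.out(x)

@torch.no_grad()
def sample(model, rgb, max_len=20):
    # Draw a name character by character from the conditioned model.
    chars = torch.full((1, 1), len(CHARS), dtype=torch.long)  # start token
    name = ""
    for _ in range(max_len):
        logits = model(rgb, chars)[:, -1]
        nxt = torch.multinomial(logits.softmax(-1), 1)
        name += CHARS[nxt.item()]
        chars = torch.cat([chars, nxt], dim=1)
    return name

# After training on (RGB, name) pairs with cross-entropy, sampling might
# produce names as odd as the ones above:
print(sample(ColorNamer(), torch.tensor([[0.53, 0.81, 0.92]])))  # a sky blue
```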

Careful what you ask Alexa for, you might get it

The Amazon Alexa virtual assistant can make online shopping easier. Maybe too easy? In January, San Diego news channel CW6 reported that a six-year-old girl had purchased a US$170 dollhouse by simply asking Alexa for one. That’s not all. When the on-air TV anchor repeated the girl’s words, saying, “I love the little girl saying, ‘Alexa order me a dollhouse,’” Alexa devices in some viewers’ homes were again triggered to order dollhouses.


Journalist: Tony Peng | Editor: Michael Sarazen

Comments

  1. According to Snopes (https://www.snopes.com/alexa-orders-dollhouse-and-cookies/), the last claim, that “Alexa devices in some viewers’ homes were again triggered to order dollhouses,” is “questionable.”


  2. The accident with the self-driving bus was caused by the negligence of the human driver of the other vehicle (and was very minor): http://bgr.com/2017/11/10/self-driving-crash-bus-las-vegas/

    • Yeah, the article mentions that! But the bus should have been able to stop or move away from the approaching truck. Still a failure!

      • Przemek: Someone has to teach the self-driving cars to lay on the horn.
