This is the third annual Synced year-end compilation of “Artificial Intelligence Failures.” Despite AI’s rapid growth and remarkable achievements, a review of its failures remains necessary and meaningful. Our aim is not to downplay or mock research and development efforts, but to examine what went wrong in the hope that we can do better next time.
Synced 10 AI Failures of 2019
Footballer or Felon?
A leading facial recognition system identified three-time Super Bowl champion Duron Harmon of the New England Patriots, Boston Bruins forward Brad Marchand, and 25 other New England professional athletes as criminals. Amazon’s Rekognition software incorrectly matched the athletes to a database of mugshots in a test organized by the Massachusetts chapter of the American Civil Liberties Union (ACLU). Nearly one in six athletes was falsely identified.
The misclassifications were an embarrassment for Amazon, which has marketed Rekognition to police agencies for use in their investigations. “This technology is flawed,” Harmon said in an ACLU statement, and “should not be used by the government without protections.”
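Rekognition’s internals are proprietary, but the ACLU test reportedly used the software’s default 80 percent confidence threshold rather than the 99 percent setting Amazon recommends for law enforcement. The gap matters: a minimal, hypothetical sketch of embedding-based face matching shows how a permissive similarity threshold inflates false matches against a large gallery. (The embeddings, dimensions, and thresholds below are illustrative assumptions, not Rekognition’s actual method.)

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(probe, gallery, threshold):
    """Indices of gallery embeddings at least `threshold` similar to the probe."""
    return [i for i, g in enumerate(gallery)
            if cosine_similarity(probe, g) >= threshold]

rng = np.random.default_rng(0)

# Toy 128-d "face embeddings": every face shares a large common component
# (human faces look broadly alike), plus a smaller individual component.
shared = rng.normal(size=128)
mugshots = [shared + 0.45 * rng.normal(size=128) for _ in range(100)]

# A probe face that is NOT enrolled in the mugshot gallery.
probe = shared + 0.45 * rng.normal(size=128)

loose = find_matches(probe, mugshots, threshold=0.80)   # permissive setting
strict = find_matches(probe, mugshots, threshold=0.99)  # conservative setting

print(f"false matches at 0.80: {len(loose)}, at 0.99: {len(strict)}")
```

Because all the toy faces share a common component, an innocent probe clears the loose threshold against many mugshots while the strict threshold filters nearly all of them out, which is one way a deployment choice, not the model alone, drives the false-identification rate.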
Voice-Spoofing Software Cons CEO
In March the CEO of a UK-based energy firm received a phone call from his boss at the firm’s German parent company instructing him to transfer €220,000 (US$243,000) to a Hungarian supplier. The ‘boss’ said the request was urgent and directed the UK CEO to make the transfer promptly.
It turns out the phone call was made by criminals who used AI-based software to mimic the boss’s voice, including the “slight German accent and the melody of his voice,” as reported in The Wall Street Journal. Such AI-powered cyberattacks are a new challenge for companies, as traditional cybersecurity tools designed for keeping hackers off corporate networks can’t identify spoofed voices.
Humans Inside the Machine?
Then there’s the artificial intelligence system that’s not very “artificial.” That was the accusation leveled at Engineer.ai in an article that appeared in The Wall Street Journal in August. The Indian startup claimed to have built an AI-assisted app development platform, but the WSJ, citing former and current employees, suggested it relies mostly on human engineers and “exaggerates its AI capabilities to attract customers and investors.”
Engineer.ai has attracted nearly US$30 million in funding from a SoftBank-owned firm and others. Founder Sachin Dev Duggal says the company offers human-assisted AI tools that enable customers to build more than 80 percent of a mobile app from scratch in about an hour. The WSJ story argued that Engineer.ai did not use AI to assemble code as it claimed; instead, human engineers in India and elsewhere put the apps together.
AI: It’s Definitely a Pretzel. Not.
Is it a mushroom or is it a pretzel? OK, forget about the pretzel; 99 percent sure this is a sea lion… or wait, actually it could be a fox squirrel. Yup, looks like this one is a fox squirrel for sure… except that it’s not. Nope. It’s a bullfrog. Wait…
Computer vision strives to understand what it sees the way humans do, but remains far from that goal. In July, researchers from UC Berkeley, the University of Chicago and the University of Washington hand-curated a dataset of 7,500 unretouched nature photos that confuse state-of-the-art (SOTA) computer vision models 98 percent of the time.
The ImageNet-A dataset of “natural adversarial examples” is tiny compared with the 14 million labeled images in industry-standard ImageNet, yet it exploits flaws in current classifiers, which can over-rely on cues such as color, texture, and background.
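Over-reliance on spurious cues is easy to reproduce in miniature. The toy sketch below (my own illustration, not the ImageNet-A methodology) trains a plain logistic regression where a near-perfect “background” feature happens to correlate with the label, then flips that correlation at test time. The model that looked excellent in training collapses, just as ImageNet-A’s curated photos expose classifiers that leaned on the wrong cues.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

def make_split(y, flip_background):
    """Two features per toy 'image': a weak shape cue and a strong,
    spurious background cue that can be reversed at test time."""
    shape = y + rng.normal(scale=1.0, size=n)    # weak but genuine signal
    bg = (1 - y if flip_background else y) + rng.normal(scale=0.05, size=n)
    return np.column_stack([shape, bg])

y_train = rng.integers(0, 2, size=n)
y_test = rng.integers(0, 2, size=n)
X_train = make_split(y_train, flip_background=False)
X_test = make_split(y_test, flip_background=True)   # background cue reversed

# Plain logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * (X_train.T @ (p - y_train) / n)
    b -= 0.5 * float(np.mean(p - y_train))

def accuracy(X, y):
    return float(np.mean(((X @ w + b) > 0) == y))

train_acc = accuracy(X_train, y_train)
test_acc = accuracy(X_test, y_test)
print(f"train accuracy: {train_acc:.2f}, flipped-background accuracy: {test_acc:.2f}")
```

The learned weights load heavily on the background feature because it is the cleaner signal in training, so reversing it at test time drives accuracy below chance; the weak shape cue the model should have used cannot compensate.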
Robot Dog Can’t Even
“Here it’s walking, now it’s pacing… and now it’s failing.”
A Boston Dynamics Spot robot dog suffered a dramatic onstage death during a live demo by company CEO Marc Raibert at re:MARS 2019 in Las Vegas this summer. The commercial robot was tasked with walking, but its legs seemed to buckle. It stumbled desperately before sadly collapsing to the floor, where it lay motionless in front of the gasping audience.
Founded in 1992, American engineering and robotics design company Boston Dynamics has created incredible robots such as BigDog, Atlas, and SpotMini. While these flexible and versatile bots can jump over logs, open doors and even perform search-and-rescue tasks, this was not the first time one of them has succumbed to stage fright.
Boston Dynamics’ Spot takes a spill on stage:
Supercomputer Makes Awful Investment Decisions
When Hong Kong real estate tycoon Samathur Li Kin-kan let an automated platform run on a supercomputer called “K1” manage part of his fortune, the goal was to grow his funds. Instead, the AI regularly lost up to US$20 million daily, according to a Bloomberg story.
Li filed a $23 million lawsuit against Raffaele Costa, CEO and founder of Tyndaris Investments, which sold Li the fintech service. The suit alleges Costa exaggerated K1’s abilities and is the first known case of a court action filed over automated investment losses. A verdict is expected in April 2020.
AI Gets Sleazy
An app that uses neural networks to virtually disrobe images of women caused public outrage early this year before it was shut down by its creator, anonymous programmer ‘Alberto.’ The DeepNude app used a photo of a clothed person as input to create a new, naked image of the same person.
The app of course has no X-ray ability; it merely replaces clothes with naked breasts and a vulva, and as such only functions realistically on images of women. After a Vice story made it go viral, DeepNude was taken down. Multiple people, however, then uploaded their own DeepNude-style apps to the code repository GitHub, which responded by removing all clothes-stripping code from its platform, citing the “Sexually Obscene” section of the GitHub Community Guidelines.
Masks Foil Facial Recognition Checkpoints
Researchers with the San Diego-based AI firm Kneron were able to fool facial recognition systems at banks, border crossings and airports using printed 3D masks — and in some cases only a 2D photo.
The team used high-quality masks based on people in databases the facial recognition system would access. The method was tested in public locations globally. In stores in Asia where facial recognition technology is deployed widely, the 3D masks deceived popular AliPay and WeChat payment systems. More alarmingly, at a self-boarding terminal at Amsterdam’s Schiphol Airport the team tricked a sensor with just a photo on a smartphone screen, Fortune reported.
Humans Taking Robots’ Jobs?
In 2015 the first futuristic, robot-staffed Henn-na Hotel opened in Japan to much fanfare. Bots staffed the front desk and worked as cleaners, porters and in-room assistants. Early this year, however, the hotel chain bucked global tech and labour trends and dismissed the last of its “unreliable, expensive and annoying” bots, replacing them with human workers.
The robot-staff novelty had worn off as customer complaints accumulated — the bots frequently broke down, could not provide satisfactory answers to guest queries, and in-room assistants startled guests at night by interpreting snoring as a wake command. Henn-na Hotels says it will head back to the lab to see if it can develop a new generation of more capable hospitality bots.
Airport Cart Gone Crazy
There was a dramatic scene at Chicago’s O’Hare International Airport in September when an unmanned catering cart suddenly broke bad on the tarmac, circling out of control and ever closer to a vulnerable jet parked at a gate. Finally, a yellow-vested worker managed to stop the cart by ramming it with another vehicle and knocking it over.
Although the cart was neither AI-equipped nor autonomous, its frenzied behavior drew comparisons to robot combat, along with comments warning of the perils of machines gone amok and lauding the humans who resist and defeat them.
A video of the incident has attracted almost 20 million views.
Journalist: Yuan Yuan | Editor: Michael Sarazen