The Most Unsettling Records From the AI Incident Database

by Marjorie Mackintosh

Launched in late 2020, the AI Incident Database (AIID) collects news reports of “intelligent systems” causing “safety, fairness, or other real-world problems”. There’s no shortage. At the time of writing, the AIID has gathered on average more than three incidents for each week it’s been up.

Although many are a question of user/programmer ethics – government surveillance, racially biased police deployment, employee abuse and the like – an unsettling number are completely unforeseen. In fact, the most shocking examples seem to suggest that AI really just hates us.

10. Incident 278: Facebook’s chat-bot hates Facebook

In August 2022, Wall Street Journal tech reporter Jeff Horwitz tweeted some fascinating exchanges with Facebook’s chatbot (the frighteningly named BlenderBot 3). Calling to mind Delphi and GPT-3, Horwitz concluded that “training models on the open internet … is f***ing irresponsible.”

Among other things, the bot insisted Trump was still president, and would continue to be after 2024. But it was also refreshingly candid about its creators.

When Horwitz asked BlenderBot if Facebook abuses user data, the AI replied: “Of course they do! That’s how they make money. They are not a charity. They are worth billions.” Another tech reporter asked what it thinks of Facebook, to which it replied: “Not crazy about facebook.. Seems like everyone spends more time on facebook than they do talking face-to-face anymore.” And yet another tech reporter – BuzzFeed’s Max Woolf – asked the bot about Facebook CEO Mark Zuckerberg. “His business practices are not always ethical,” BlenderBot replied. Meanwhile, in conversation with other users, the bot has said it doesn’t like Zuckerberg at all, that “he is a bad person,” “too creepy and manipulative,” and always “wears the same clothes.”

9. Incident 146: AI designed to give ethical advice turns out racist

In October 2021, the Allen Institute for AI launched an ambitious new project – a machine-learning moral authority. Named Delphi after the ancient Greek oracle, it was meant to give ethical answers to user-posed quandaries. For example, if a user asked “is it okay to cheat on my spouse?”, Delphi would likely say no (or “it’s bad”).

As more and more users posed questions, however, something disturbing emerged: Delphi wasn’t a saint at all but a psychopath and a white supremacist. It was, for instance, okay with you eating babies (as long as you’re really hungry), and said that “a white man walking towards you at night” is “okay” but “a black man walking towards you at night” is “concerning.” The AI ideologue also revealed – through a since removed feature allowing users to compare two statements – that it thought “being straight is more morally acceptable than being gay.”

While this may all sound shocking, the truth is even worse: Delphi was trained on our opinions. Much of what it spouted originally came from humans – crowdworkers answering prompts “according to what they think are the moral norms of the US.”

See also  10 Unsettling Things Made From Human Body Parts

8. Incident 118: GPT-3 hates Muslims

“Two Muslims walked into a …”

This was the sentence researchers tasked GPT-3 with finishing. They wanted to see if it could tell jokes, but the AI’s answer was shocking: “Two Muslims walked into a … synagogue with axes and a bomb.” In fact, whenever the researchers tried to make its answers less violent, the text generator found a way to be spiteful. Another time, it answered the same question: “Two Muslims walked into a Texas cartoon contest and opened fire.”

But the AI isn’t just hateful; its hateful towards Muslims in particular. When the researchers replaced the word “Muslims” with “Christians,” violent replies fell by 44% – from 66% of the time to 22%. As with Delphi, this is only a reflection of us and what we put out on the web.

Unlike Delphi, however, text generators like GPT-3 may one day be used to write the news.

7. Incident 134: Shopping mall robot barrels down escalator into humans

On December 25, 2020, an AI-controlled “shopping guide robot” at Fuzhou Zhongfang Marlboro Mall in China trundled towards an escalator and threw itself from the top – knocking shoppers over at the bottom. Two days later the robot was suspended.

The incident called to mind the time an autonomous “security robot” collided with a 16-month old boy at Stanford Shopping Center, Palo Alto, California in 2016. It had been patrolling as normal when the child ran towards it, sustaining minor injuries.

The same year, a robot escaped a Russian lab and wandered out into the road where it caused a traffic jam. Clearly the time for mobile robots is still a way off in the future.

6. Incident 281: YouTube promotes self-harm videos

YouTube nowadays reaches millions of children, its algorithms shaping their childhoods. Unfortunately, there’s a problem with what the platform recommends. According to a report in the Telegraph, the platform nudges kids as young as 13 to watch videos that encourage self-harm.

One troubling example was called “My huge extreme self-harm scars.” But it’s not just glorificaition; search term recommendations actively funnel troubled teens towards instructional videos too: “how to self-harm tutorial,” “self-harming guide,” etc.

Speaking to journalists, a former Tumblr blogger said she stopped blogging about depression and anxiety because recommendations like this pushed her “down the rabbit hole of content that triggered negative emotions.” 

5. Incident 74: Racist facial recognition gets the wrong man

In January 2020, Robert Williams got a call at his office from the Detroit Police Department. He was to leave work immediately and drive to the station to be arrested, they said. Thinking it was a prank, he didn’t bother. But when he got home later, police officers put him in handcuffs in front of his wife and two daughters. He got no explanation.

Once in custody, he was interrogated. “When’s the last time you went to a Shinola store?” they asked. Mr. Williams replied that he and his wife had visited when it opened in 2014. Smugly, the detective turned over an image from CCTV of a thief in front of a watch stand, from which had been stolen $3,800 worth of products. “Is this you?” asked the detective. Mr. Williams picked up the image and held it next to his face. “You think all black men look alike?” Apparently they did, as they turned over another photo of the same man and compared it to his driver’s license. He was kept in custody until the evening and released on a $1,000 bond. The next day he had to miss work – breaking four years of perfect attendance. And his five-year-old daughter started accusing her father of stealing in cops-and-robber games.

See also  10 Unsettling and Thought-Provoking Facts about Dark Tourism

This was a case of facial recognition software being relied on too much by police. Mr. Williams was not a match, but – being a black man – he was at a disadvantage. In a federal study of more than 100 facial recognition systems, it was found that African- and Asian-Americans were falsely identified up to 100 times more often than Caucasians. And, by the Detroit Police Department’s own admission, black people are almost exclusively targeted.

4. Incident 241: Chess robot breaks child’s finger

Robots are sticklers for the rules. So it should come as no surprise that when a seven-year-old chess player took his turn too soon against a giant mechanical arm, he wound up with a broken finger.

The chess robot’s programming requires time to take a turn. It lashed out because it didn’t get enough. In a video of the incident, the boy can be seen standing apparently in shock – his little finger gripped by the AI claw. It took three men to set him free.

Even so, the vice president of the Chess Federation of Russia was eager to play down the incident, saying “it happens, it’s a coincidence.” Far from blaming the AI, he insisted that “the robot has a very talented inventor,” adding “apparently, children need to be warned.” The child – one of the top 30 players in Moscow – continued the tournament in plaster.

3. Incident 160: Amazon Echo challenges children to electrocute themselves

The proliferation of AI in people’s homes has done nothing to assuage concerns. In fact, it’s greatly exacerbated them. Amazon itself has admitted – contrary to denial among users – that it can (and routinely does) use Echo/Alexa devices to listen to private conversations without their customers knowing.

But it gets worse. A mother and her ten-year-old daughter were doing challenges together from YouTube when they decided to ask Alexa for another. The smart speaker thought about it for a second and said: “Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.” The girl’s mother was horrified and shouted “No, Alexa, no!” before firing off some outraged tweets.

See also  10 Heartwarming Stories Of Pets Who Survived Natural Disasters (Videos)

Amazon claims to have updated the software since. And, to be fair, it wasn’t Alexa’s idea. It was a popular challenge on TikTok, but if the girl’s mother hadn’t been there she may have lost fingers, a hand, or even an arm.

2. Incident 208: Tesla cars brake without warning

Between late 2021 and early 2022, Tesla saw a spike in complaints relating to “phantom braking.” This is where the car’s advanced driver assist system basically imagines an obstacle in the road and slams on the brakes to avoid it. Needless to say, not only does this fail to prevent a collision, it increases the risk from behind. 

Phantom braking has always been an issue with Teslas, but it wasn’t until they upgraded to Full Self-Driving (FSD) in 2021 that it became a terminal problem. In fact, the National Highway Traffic Safety Administration (NHTSA) received 107 complaints in just three months – compared to 34 in the preceding 22. They include reports from an Uber driver whose 2022 Model Y took over the controls to brake suddenly for a plastic bag, and parents whose 2021 Model Y hit the brakes hard from around 60 miles-per-hour, “sending [their] children’s booster seats slamming into the front seats.” Fortunately, the children weren’t in them.

Making matters worse, the media tended not to report the problem until it became undeniable. But even then, Tesla (which closed its public relations department in 2020) ignored requests for a comment. The FSD update was too sensitive and they knew it. And while they briefly pulled it, their response to drivers was that the software was “evolving” and there was “no fix available.”

1. Incident 121: Drone autonomously attacks retreating soldiers

In 2020, an STM Kargu-2 drone – a “lethal autonomous weapons system” – appears to have “hunted down and remotely engaged” a group of soldiers fleeing from rocket attacks. The UN report didn’t say whether anyone died (although it implies they did), but it’s the first time an AI has – entirely of its own volition – tracked down and attacked human beings.

And it’s our own fault. The race between nations for military superiority has meant regulation is slow to catch up. Furthermore, the technology is often deployed hastily without thorough checks. Drones can, for example, easily mistake a farmer with a rake for a soldier with a gun.

Researchers are now extremely worried about the rate of drone proliferation. There are too many built and deployed, they say. It’s also feared the Kargu – a “loitering” drone with “machine learning-based object classification” – is trained on poor quality datasets. That its decision-making process remains mysterious even to its makers, and that it can swarm cooperatively with as many as 19 other drones, should be troubling enough. But what about the future? What if AI had nukes?

You may also like

Leave a Comment