Science fiction has a curious way of being prophetic. We can only hope that stories like The Matrix, Terminator, I, Robot and others missed the mark when they suggested robots and computers are going to rise up against us and either use us as batteries or as fertilizer for the plants they'll crush under their robot treads.
If you're concerned about the rise of the machines, take heart in the fact that there are still some things humans do better than machines. There's even one thing a bird can do better. These skills may not save our species, but you never know.
10. Veery Birds are Better at Predicting Hurricanes Than Machines
Predicting hurricanes is important business. Currently, we rely on a number of different systems to help determine if a hurricane is on the way, including satellites, radar, and even ships and buoys out in the ocean. Thanks to all of this technology, we can predict a hurricane about 36 to 48 hours in advance.
When it comes to long-range predictions, we have little better than chance to guide us. We can predict that hurricanes will come during hurricane season, but so what? That's like predicting the sun will rise tomorrow. For greater reliability, we can turn away from computers and toward the birds. Veeries have an uncanny knack for anticipating severe hurricane weather, and it shows up in their breeding seasons. These birds live in southern Canada and the northern US and raise one clutch of eggs per breeding season.
In years when hurricane seasons turn out to be severe, the veeries cut their breeding season short, even if they haven't yet nested successfully. They do this months before hurricane season arrives, and the two have been demonstrably linked.
In 2018, an ornithologist predicted a particularly heavy hurricane season while meteorological forecasts predicted the exact opposite. Scientists who deal in weather insisted it would be a mild year. Using a metric called ACE, which stands for accumulated cyclone energy, they forecast anywhere from 60, which is quite low, to at most 103, which is still below average.
The ornithologist, who had never predicted weather in his life, suggested anywhere from 70 to as much as 150. The season turned out to hit 129. His prediction came from 20 years of observing veery behavior in the wild.
9. Machines Are Superior Gamers, But They Suck at Teamwork
The global gaming market is an absolute juggernaut. It's predicted that by 2025 it will be worth about $257 billion. That's 93% of an Elon Musk! That kind of money has inspired a lot of amazing innovation in technology, including graphics and artificial intelligence. And computers aren't just behind the games; sometimes they get out in front.
Artificial intelligence has proven itself a better gamer than the average human player, though that's a fairly recent development. It does need to be taken with a grain of salt, however. In a strict, by-the-numbers style of play, a computer can often accomplish its tasks better than some guy in Idaho who keeps insulting your mother while you play Call of Duty. But when a game gets collaborative and requires teamwork, artificial intelligence starts to show some flaws. Namely, AI sucks at teamwork.
Human players routinely express frustration when it comes to dealing with AI teammates. Research found that an AI teammate didn't improve a game's results any more than a pre-programmed computer partner designed simply to know the rules and play a set way. The big difference was that the human players hated working with the AI. The computer was considered unreliable, unpredictable and untrustworthy, three things you don't want in a gaming partner.
8. Computer Translations Tend to be Pretty Sloppy
Have you ever run across a word or phrase in another language and translated it online? And then taken the time to translate it back to English, only to discover it was essentially gibberish? This happens because computers are remarkably bad at translation. Translation programs are notoriously bad at picking up on context and other nuances of language, making computer translators basic at best and useless at worst.
Things like slang, cultural context, proper names and more are lost on machines. Consider something like the word “set.” According to Guinness, there are over 430 potential meanings for that one word. A machine needs context clues to figure out how that would be used in a given translation, and that’s not an easy task.
Idioms are generally translated literally by machines, even highly advanced ones. Even single words can alter the tone of whole sentences, and that can be a problem with AI translators. You may get the gist of a work, but that's not necessarily what you want, especially if you're reading fiction, where you want the story and you're not just trying to glean basic facts.
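To see why literal translation goes wrong, here's a deliberately crude sketch in Python. It translates word by word from a tiny made-up dictionary; real translation software uses far more sophisticated statistical and neural models, but when context is missed, the failure mode looks much the same:

```python
# A toy word-for-word "translator" to show why literal translation mangles idioms.
# The dictionary and idiom below are illustrative only.

word_table = {
    "it's": "es",
    "raining": "lloviendo",
    "cats": "gatos",
    "and": "y",
    "dogs": "perros",
}

def literal_translate(sentence: str) -> str:
    """Translate word by word, ignoring idiom, grammar, and context."""
    return " ".join(word_table.get(word, word) for word in sentence.lower().split())

print(literal_translate("It's raining cats and dogs"))
# -> "es lloviendo gatos y perros" (literally "it is raining cats and dogs"),
# while a human translator would render the idiom, e.g. "está lloviendo a cántaros".
```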
7. Picking Fruit
Machines can build cars, computers, and all manner of devices for us these days, but ironically it's some of the simpler tasks they have trouble with. For instance, they're not that great at picking strawberries, or many other fruits and vegetables for that matter.
The reason behind the robotic failure is all too easy to guess. A robot has a hard time telling whether it's being too rough. Fruits like strawberries and other berries need a light touch. A robot can probably harvest nuts until the cows come home, but berries have to be handled gently, and robot harvesters have no reliable way of knowing when they're squeezing too hard and destroying the produce.
As it happens, some harvesting robots are being designed to sidestep this issue by picking whole plants rather than just the berries, and they can do the work of 30 people in the same amount of time. But for now, picking robots have to scan fields and figure out where the ripe fruit is, and so far they're only able to harvest about 50% of it, while humans can get up to 90%.
6. AI Isn’t Good at Reading Emotions
Facial recognition technology has been big in the news for years. People are leery of it because it smacks of a surveillance state and of being constantly watched. But another thing people fear is computers being able to look at you and read you, essentially determining how you feel from one moment to the next. This could be used to exploit people for marketing, advertising, and other money-making purposes. Schools in China used it to determine how children felt while they learned remotely, ostensibly to improve their overall learning experience.
The thing about emotion-detecting computers is that they're not very good. Despite what the people who market the technology insist, there is little evidence that it's effective. Neuroscientists have flat-out stated that you cannot accurately judge a person's emotional state from facial expression alone.
5. Humans are Better Soldiers than Robots
One of the most controversial uses of AI in the world today is related to warfare. Should we entrust machines to make life-and-death decisions in a war zone? Is it ethical to allow a robot to take a human life? It seems like most people are against the idea, and the US has already assured us that humans will always make the final decision. That said, there is speculation that the ship has sailed and autonomous killing machines have already been used in the field. So are robots better soldiers than humans? That depends on what you mean by better.
A machine, even an artificial intelligence, will do what it's tasked with doing. Without human emotion and ethics, an AI would likely have made a different decision than Stanislav Petrov did back in 1983, when he got word that the American military had launched a nuclear strike on the Soviet Union. Petrov did not alert his government about the attack his monitoring station had detected, as he was required to do; instead he investigated further and determined it was a false alarm. An AI likely would have done the opposite, and none of us would be here now to discuss it. Machines lack morality and can be unpredictable in how they process data.
Everyone from Elon Musk to Stephen Hawking has warned that AI could doom us all, and dooming everyone is not what a good soldier does.
4. AI Hasn’t Mastered Common Sense Yet
Most of us have met someone in life who is very smart but has no common sense. We distinguish between the two. You can be a math wizard but still act like an idiot. That’s kind of what AI is like. It can be very smart, but it has no common sense.
Common sense is roughly what researchers call abductive inference: reasoning to the most plausible explanation. It's what lets us ignore a million silly explanations for the things that happen in life and focus on the ones that make the most sense. If you hear a noise upstairs, it's why you assume it's your spouse or the cat and not an elephant or Gordon Ramsay. Those last two options sound stupid because you have common sense. AI doesn't, so it has to treat them as possibilities.
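You can sketch that filtering in a few lines of code. The hypotheses and numbers below are invented for illustration; the point is that common sense quietly weights explanations by plausibility before seriously considering them, something AI has to be explicitly given:

```python
# A toy illustration of abductive inference, using the noise-upstairs example.
# The hypotheses and numbers are invented; this sketches the idea of common
# sense, not any real AI system.

hypotheses = {
    # explanation: (prior plausibility, how well it explains a noise upstairs)
    "the cat knocked something over":    (0.5, 0.9),
    "your spouse is moving around":      (0.4, 0.9),
    "an elephant is in the bedroom":     (0.0001, 1.0),
    "Gordon Ramsay is cooking up there": (0.0001, 1.0),
}

def best_explanation(options: dict) -> str:
    """Score each hypothesis by prior * fit and return the most sensible one."""
    return max(options, key=lambda h: options[h][0] * options[h][1])

print(best_explanation(hypotheses))  # -> "the cat knocked something over"
```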
Current AI relies on symbolic logic and deep learning. Those approaches can accomplish a lot, but neither captures common sense, which is why AI can't come close to duplicating real human intelligence.
3. AI Writing Programs Haven’t Perfected Human Writing Yet
The future of writing may be overtaken by machines but, for now, they’re still not quite up to snuff. AI is adept at writing prose, especially things like journalistic articles, but it hasn’t fully mastered a human voice yet.
Computers using GPT-3, or Generative Pre-trained Transformer 3, can produce text that very nearly mimics writing from a real human. It's very good at certain kinds of writing, but not others. Ask it to mimic the voice of a specific, real person and it's more likely to generate nonsense. It could write a fact-based article with ease, but if you want it to produce a Stephen King story that reads like genuine King, something would seem off: the phrasing would be suspect, or it would need serious editing.
The flaw is in how the tech works. It's based on prediction and pattern matching, so it can generate broad, general writing well. But when you want something specific, like Stephen King's voice, the tech's inability to understand what it's actually saying shows through.
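GPT-3 itself is an enormous neural network, but the "prediction and pattern matching" idea can be illustrated with a far cruder toy model. The Python sketch below only learns which word tends to follow which in a tiny sample text, then strings words together from those patterns; the sample text is invented for illustration:

```python
import random
from collections import defaultdict

# A deliberately tiny "language model": it learns which word tends to follow
# which, then samples text from those patterns. GPT-3 is vastly larger and uses
# a neural network, but the weakness is similar: output that is locally fluent
# with no understanding of what it's saying.

corpus = (
    "the night was dark and the house was quiet and the door creaked open "
    "and the night was cold and the house was old"
).split()

# Count which words follow each word in the training text.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 12) -> str:
    """Generate text by repeatedly sampling an observed next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:  # dead end: no observed continuation
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))
# e.g. "the night was dark and the house was old" -- plausible-sounding,
# pattern-driven, and completely indifferent to meaning or authorial voice.
```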
2. Fulfilling Factory Orders
Believe it or not, the one thing most of us would guess a robot does vastly better than a human is something it actually doesn't. In warehouse settings, like Amazon's, robots are just not as good at fulfilling orders as human workers.
In 2019, it was suggested it would be at least a decade before robots usurped the human workforce. Robots can pick large items for orders, but they tend to damage smaller items stored in bins, and they're less efficient at picking them than humans are.
Elon Musk admitted that Tesla pushed automation too far as well, and that it needed to be scaled back because humans are simply better at being flexible and dealing with inconsistencies.
1. Captchas
If there's one thing everyone on the internet knows, it's that robots can't look at nine squares and pick out the ones with traffic lights. Captcha tests are a website's last defense against robotic invasion, and they make use of numerous layers of data, including your screen size and resolution, IP address, browser, plugins, keystrokes and more, to determine that you're you and not a machine.
If you've noticed these tests getting harder, and that the ones where you have to identify garbled-looking text can sometimes even trick you, it's because robots are actually getting better at them, so they have to get harder and harder to beat. In fact, on some tests robots are already much better than humans. But we're still ahead of the curve against basic programs, and until we develop something better, like the game-like tests or ink-blot puzzles that have been attempted, that will have to be good enough.
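For a sense of what those invisible layers might look like, here's a purely hypothetical Python sketch of signal scoring. The signal names, weights, and thresholds are invented for illustration; real services keep their actual methods secret:

```python
# A hypothetical sketch of the kind of signal scoring a CAPTCHA service might
# layer behind the visible puzzle. Signals, weights, and thresholds are invented.

def bot_likelihood(signals: dict) -> float:
    """Return a rough 0..1 score; higher means 'more likely a bot'."""
    score = 0.0
    if signals.get("keystroke_interval_variance_ms", 0) < 5:
        score += 0.4   # humans type with irregular timing; bots are uniform
    if signals.get("mouse_path_points", 0) < 3:
        score += 0.3   # straight-line or teleporting cursor movement
    if not signals.get("has_browser_plugins", False):
        score += 0.2   # a bare, headless browser profile
    if signals.get("requests_per_minute", 0) > 30:
        score += 0.1   # inhumanly fast page activity
    return min(score, 1.0)

visitor = {
    "keystroke_interval_variance_ms": 42,
    "mouse_path_points": 180,
    "has_browser_plugins": True,
    "requests_per_minute": 4,
}
print(bot_likelihood(visitor))  # 0.0 -- looks human, so no extra puzzle needed
```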