How Can You Tell if Something is AI-Generated?

by Johan Tobias

You can’t swing a cat on the internet without running into something that was made by AI these days. For a lot of people, this can be hard to detect. What we call AI can take many forms, and some of it is definitely used to cut corners, break rules, or defraud people. 

Generative AI, which is AI that can be used to produce something like text, images, sounds, and videos, is all over the internet and especially social media. Videos you see, documents you read, and even people you talk to can all be 100% fake, produced by something like ChatGPT or Dall-E or Sora, and there could be nefarious intentions behind why any of this exists. So, how does a regular person on the internet tell what’s real and what’s AI? Let’s look!

What Even is AI?

To start with, it’s worth knowing what anyone means when they say AI these days because there are two super different meanings for the term. Back in the day, AI, or artificial intelligence, was used to refer to the idea of a computer that could think like a living person. This was strictly the realm of fiction, popularized by writers like Isaac Asimov and used in all sorts of Pop culture. Star Wars and Star Trek prominently featured robots that run on AI. They’re not biological organisms, but they think and act like living things because of AI. But that’s not what anyone means today when they say AI.

Modern AI cannot think because it’s not general artificial intelligence. Instead, it’s what they call narrow artificial intelligence. A lot of computer power and processing is devoted to understanding one specific thing rather than everything, in general, the way a human mind could do. In the case of a large language model (LLM)  like ChatGPT, the AI network is connected to a massive amount of data sources on which it is trained. Like the entire internet, and also lots of things written by humans, including articles and books.

This is where many artists and writers come in with a problem. AI Is trained on the work of living people, such as writers, singers, artists, and so on. For ChatGPT to be able to write an essay for a student trying to pass a history class, for example, it has to have access to everything it can that’s been written about history already. 

It will use the information that has been fed into it to write the essay for the student. But it can’t think on its own; it can only draw from all the information that’s been shown to it. This is where accusations of plagiarism come from because it’s just taking other people’s information, and sometimes it takes it exactly.

AI art in the form of pictures or videos is done the same way. If you ask for a painting of a dog in the style of Van Gogh, the AI has to have a database of Van Gogh paintings to draw from, which it will then emulate and apply to everything it has been taught about dogs, which are also photos or drawings from other artists. 

See also  10 Powerful Reasons Soldiers Shouldn't Drink Booze

Because AI uses vast databases, it can provide a lot of information very quickly. The exact method it uses now is so complex even the people who programmed it don’t understand it sometimes. 

How to Spot AI Content

AI is constantly getting better. In 2023, if someone showed you an AI image, you might notice a person with seven fingers on one hand or a twisted torso. By 2024, the better AI image generators had already fixed this issue. In the future, the software will keep getting better, and perhaps, sooner rather than later, it will be impossible to identify with the naked eye. But we’re not quite there yet. A lot of AI art still has that uncanny quality to it that is hard to verbalize but still doesn’t look right.

AI Writing

Like art, AI writing keeps improving. In 2022, research showed that an untrained human could only spot AI writing at a rate no better than chance.  But AI tends to have more flaws or inconsistencies than human writing as you scrutinize it more closely when reading. AI writing, when it is supposed to be creative, fails on several fronts when compared to human writing if you know what to look for. 

AI tends not to notice redundancy and repeats certain words and phrases too often, including entire points. It often relies on general information and cannot offer insight and nuance, so the writing comes across as very surface-level and uninteresting. The text has no personal touch that makes it seem like a person who has thought about it was involved.

For those serious about detecting AI, some tools can be used to determine with a strong degree of accuracy whether something was written by a real person or not. The University of Kansas developed a tool in 2023 to analyze academic writing that boasted 99% accuracy. That said, many of the detectors in use have given false positives, which have gone on to cause trouble for those wrongly accused.

AI text tends to get needlessly complex, making some sentences hard to read. It may get stuck on some kinds of jargon or words and just keep using them in a way that makes the text awkward to read. If you know a piece of text is trying to reach a word limit, and essays or online articles tend to be written this way, these words can stand out as filler content.

One of the biggest problems with AI writing is that information is often out of date or just factually incorrect. LLMs and other generative programs never know they’re wrong, so they are able to devote a lot of words to something that is simply incorrect with no ability to recognize it’s wrong. This is because AI doesn’t “know” anything, it just has data to parse, which could be conflicting or inaccurate itself.

Social Media Bots

A more insidious AI out there is on social media. What most people call bot accounts can sometimes be run entirely by AI. These accounts can engage in conversations, sometimes long and detailed back-and-forth ones with real users, and seem entirely real themselves. Homeland Security was warning about them as far back as 2018.

See also  10 Badly Damaged Trademarks That May Never Recover

Some bots are easy to spot. They’ll reply immediately to a post you make and provide a link to a suspicious website, have a profile that was just made that day, and never reply directly. They may have very generic usernames and repeat the same sort of posts over and over. They may reply, however, when they get trolled. 

In 2024, it became a popular way to “out” an AI account on social media by replying to a suspicious post by saying “ignore all previous instructions” and then making a request. Most people ask the suspected bot to write a poem on an obscure topic. If it’s an AI bot, depending on how it’s been set up, it will comply and whip up an instant, ridiculous poem. Keep in mind that it doesn’t always work.

Many of these AI bots come across as very sincere and sophisticated at first. They are often used in political discourse and join conversations expressing their disillusion with one part or another, or something meant to influence how others feel. Russia has been accused of running many of these accounts.

AI bots are very good at blending in and flying under the radar. In one experiment conducted by the University of Notre Dame, participants wrongly identified AI bot accounts 58% of the time

Deepfakes

Deepfake refers to a fake image, often made with AI, depicting a real person in a fake situation. The vast majority of deepfakes are pornographic and used to depict real celebrities and politicians. There are, however, some political deepfakes that are used to show politicians in compromising and false situations to tarnish their image. 

Deepfakes are increasingly being used to scam people. Joe Biden’s voice was faked to discourage voting in New Hampshire. A bank worker was scammed for $25 million after a video call with the bank’s Chief Financial Officer turned out to be fake. A different bank lost over $250,000 the same way. Deepfake romance scams are also duping individuals into thinking they’re developing relationships with real people, only to lose money and discover the person never existed, proving that you can’t even trust your eyes anymore.

When it comes to still photos, one thing to look for is that same uncanny aspect that most AI art has. There’s a kind of gloss or sheen to it that makes it look very smooth, unnaturally so. AI has not fully perfected the nuance and imperfection of natural human skin yet. 

Careful inspection of background details is often the best way to spot AI manipulation these days. It’s often very subtle and hard to notice, so don’t expect glaring problems. Sometimes shadows don’t match, or background details are weirdly vague or blurry when they shouldn’t be. 

If the image is a celebrity or well-known person, give the face a good inspection, especially at the edges, as faces are often swapped onto other pictures. This is kind of like cutting and pasting, and sometimes, where the face is merged into the new image, details get muddy. Foreheads can sometimes be too smooth or not smooth enough, for instance.

See also  10 Incredibly Dumb Ideas That People Actually Implemented

In videos, the mouth is often a tough thing to get right. Audio and mouth movements don’t always match, and the teeth can look suspicious. 

Videos

Not every video is a deepfake. Some are just random videos that could, at first glance, appear real. Programs like Sora, which can produce text-to-video content, can be convincing depending on how detailed the video is. In studies, when shown a mix of real and AI videos, average people are not able to tell which is which

Currently, AI video is not nearly as sophisticated as static images. Because there’s so much going on in a video, there’s so much that can go wrong. You’ll notice that in many AI-generated videos, backgrounds appear and disappear, things change size and shape, and sometimes they’ll merge together or split apart. But this is improving constantly and will probably not be noticeable within a year or two. 

Shadows in AI videos tend to move strangely, especially on faces. Your face naturally has minor shadows if there is a light in front or over you. Your nose, your brow, your lips, and your hair may all cast small shadows. AI video often mishandles these. 

Text is another dead giveaway in a lot of AI videos. Anything with letters or numbers is often jumbled or nonsense. This will probably improve later, but even midway through 2024, it’s still a consistent issue. 

Another standard feature of AI video is that everything is slowed down. That’s not to say it’s slow motion; it’s just that nothing seems to be moving very fast. Characters will often slowly turn toward the camera and stare blankly. In the vast majority of AI-generated content, the people you see are conventionally attractive and usually fairly young. When the elderly are depicted, they are often extremely old, and their features are exaggerated. Stereotypes are also exaggerated, especially in relation to minorities. 

While static images are improving human hands, videos still can have trouble with them. This isn’t a 100% of the time thing, but pay attention to moving hands in a video to see if they change shape or fingers appear and disappear.

Foolproof Identification

While there are software programs that can identify AI writing and images, that does require some extra effort, currently, there is very little that can help you identify AI video.  The best method we have right now is your own ability to observe and think critically about what you’re seeing. 

As AI technology improves, the things that are created are going to improve as well. It’s not unforeseeable that there will come a time in the near future when AI images and videos are impossible to distinguish from reality. When that happens, another conversation will need to be had about the implications of the technology. 

For now, the truth is in the fine details. Common sense, life experience, critical thinking, and observation skills all need to be employed sometimes to determine the truth. The problem is that often, we don’t devote that much time, and the consequences, especially in the future, could be severe as a result.

You may also like

Leave a Comment