
Ask an Expert: How Has AI Changed Misinformation — And What Does That Mean for Consumers?

Written by Gabby Ferreira
Assistant professor Kim Bisheff.

Since 2020, misinformation has become a dominant part of public online discourse: unequivocally false claims about the 2020 election, COVID-19, hurricanes and presidential candidates have swirled, driven further by the rise of AI. 

Cal Poly News sat down with Kim Bisheff, an assistant professor in Cal Poly’s Journalism Department who studies how misinformation spreads online, for a chat about what digital media consumers face, how misinformation has changed — and what we can do about it.

This conversation has been edited and condensed for clarity.

What has changed about misinformation in the last few years?

The means of distribution are mostly the same. Most Americans access political information through social media. The problems that existed in 2020 still exist today: people live in silos, choosing information that perpetuates their worldview and defaulting to their community’s beliefs.

Some things have changed and made misinformation worse: the major social media players like Meta, Google and X (formerly Twitter) are no longer really trying to do anything about it. There are fewer safeguards against false information.

Since 2020, the TikTok audience has grown. The Instagram audience has grown. Those are both very visual mediums, so when we see photos and videos — even if they’re not accurate or if they’re taken out of context — they're just so visceral, it’s hard not to believe your eyes.

And that leads us to the biggest change between 2020 and now, which is AI.

Social media was the first big leap in amplifying misinformation and made it easier for people to hide in their silos. Generative AI is the next great leap, unfortunately. You don’t even have to have Photoshop skills anymore to produce something remarkably realistic-looking.

Not only have we come to not trust legitimate sources of information, but we no longer trust our own eyes. 

Does this mean that all misinformation is AI-generated now? 

No, not at all. It’s just a new player in the same old game. 

One of the most popular forms of misinformation is actually super easy to produce. It’s called the meme quote. You’ve seen it: a picture with a quote attributed to someone.

That is extremely low-tech, but it’s still one of the major ways that false information is distributed. 

The difference with AI is in fake videos and convincingly fake images. It’s just become so fast, cheap and easy to distribute false information on a massive scale. That’s the main role AI has played in the misinformation game.

Are there any instances of AI being used for good here?

A lot of news organizations are using chatbots and other AI-powered tools to help people fact-check information.

One that Cal Poly worked on via the Digital Transformation Hub is FactBot, a chatbot built for Snopes and trained on the fact-checking website’s 30-plus years of reported fact-check articles. Users can interact with it directly by asking a question, for example: is somebody eating people’s pets in Ohio? And it’ll answer with a summary and links to the source material.

It’s just like interacting with a ChatGPT-type interface, but it’s not trained on all the garbage that’s on the internet. It’s trained only on this very specific dataset of accurate, reported, transparently sourced information.
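For readers curious how a chatbot can stay grounded in a vetted dataset like that, here is a minimal sketch of the retrieval idea in Python. Everything in it is a hypothetical stand-in (the headlines, the example.com URLs and the answer helper); the actual FactBot implementation isn’t public, and a production tool would layer a language model on top of this retrieval step to write the summary.

```python
# Hypothetical sketch: ranking a trusted fact-check corpus by relevance
# to a user's question. Real systems pair this retrieval step with a
# language model that writes the summary; here we only rank articles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in corpus: (headline, url) pairs representing fact-check articles.
ARTICLES = [
    ("No, immigrants in Springfield, Ohio, are not eating people's pets",
     "https://example.com/fact-check/ohio-pets"),
    ("Dominion voting machines did not flip votes in the 2020 election",
     "https://example.com/fact-check/dominion-machines"),
    ("That viral hurricane photo is from a 2018 storm, not this one",
     "https://example.com/fact-check/hurricane-photo"),
]

vectorizer = TfidfVectorizer(stop_words="english")
article_matrix = vectorizer.fit_transform(headline for headline, _ in ARTICLES)

def answer(question: str, top_k: int = 2) -> list[tuple[str, str]]:
    """Return the (headline, url) pairs most relevant to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), article_matrix)[0]
    ranked = sorted(zip(scores, ARTICLES), reverse=True, key=lambda pair: pair[0])
    return [article for score, article in ranked if score > 0][:top_k]

if __name__ == "__main__":
    for headline, url in answer("Is somebody eating people's pets in Ohio?"):
        print(headline, "->", url)
```

Because the system can only answer from that fixed corpus, an unsupported question simply returns nothing rather than a confident-sounding fabrication, which is the design point Bisheff is describing.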

Another great resource is Journalist’s Toolbox, which aggregates AI tools for fact-checking, plagiarism detection and more. These are great examples of how AI is being used to fight misinformation, not just create it. 

Is there anything else we can do, when we encounter information in the wild, to double-check whether or not it’s fake?

There are a few things I recommend. 

The first thing you should always do is a gut check. Ask yourself, how does this piece of information make me feel? If it gives you a strong emotional reaction, that’s your first red flag and you should take a pause, especially if you found that through social media.

The second thing you should do is Google it. If the information you’re looking at is an image, that means a reverse image search: there’s a little lens icon on the side of the search bar that you can plug images into, and then you can see the whole context, including whether the image was taken out of its original context or manipulated, and what the original looks like. You’d be surprised how much misinformation can be identified by plugging a screenshot of a video into Google search.
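To illustrate the matching idea behind reverse image search, here is a small sketch using perceptual hashing via the open-source imagehash library. This is a deliberately simplified stand-in for whatever Google’s system actually does, and the file paths are hypothetical.

```python
# Illustrative sketch only: perceptual hashing is one simple way to check
# whether two images are probably the same picture, even after resizing,
# recompression or small edits. Google's reverse image search is far more
# sophisticated than this.
from PIL import Image
import imagehash

def probably_same_image(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Compare two images by perceptual hash; a small distance suggests a match."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    # imagehash overloads subtraction to return the Hamming distance in bits.
    return (hash_a - hash_b) <= max_distance

# Hypothetical usage: compare a viral screenshot against a known original.
# probably_same_image("original_photo.jpg", "viral_screenshot.jpg")
```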

I also recommend that people visit the News Literacy Project so that they can learn skills like how to tell if a source of information is trustworthy, how to tell if an image is accurate, that sort of thing. NewsGuard is another website that has a great election misinformation tracker.

And the last thing I recommend is that you get your news directly from professional sources of journalism, and that’s not necessarily intuitive to people. 

We are so used to accessing information through social media. When you access information through social media, you only get the stories that have been pushed to you, circulated by people who shared them because they triggered anger or validation or fear. It’s not representative of the news that’s being reported and published.

When you access news from the source, it’s compartmentalized into news and opinion and all the different sections that are clearly labeled for you.

And we’re cheating ourselves if we don’t go directly to reasonable sources of news. If you don’t want to start with a news organization’s homepage, subscribe to a newsletter that gives you a news summary every day. It’s a much better way to have a healthy news consumption diet. 

What are the real-world consequences of rampant misinformation, and how have we seen that play out in the last several years?

Well, look at January 6th. You had a whole group of people who really believed they were doing the right thing. Unfortunately, they were so entrenched in a world of false information that they were led to violence. 

Interestingly, researchers used to say that most false information was coming from the political right. But that’s no longer the case: it’s about a 50-50 split. None of us are immune to misinformation.

And the threats are real. People are burning ballot boxes because they think election integrity is at risk in ways that it provably is not.

The thing that worries me about election week is that if you talk to people on the left, they feel like it’s going to go their way. If you talk to people on the right, they feel pretty confident it’s going to go their way — or if it doesn’t, something scandalous is afoot.

More than half of registered Republicans surveyed believe that Joe Biden is not the legitimate president. They believe that what we know to be an election conspiracy theory is real. That theory has been disproven by the courts. It’s been disproven by officials in Trump’s own administration, and yet a majority of Republicans hold that view.

There are still myths that persist about Dominion voting machines despite all evidence to the contrary. Fox News had to pay a settlement of almost $800 million to Dominion because of the false narrative it perpetuated about the accuracy of Dominion’s voting machines.

Is there any hope for a healthy news ecosystem free of misinformation, or any hope for us to break out of our information silos? 

Propaganda always has been and always will be around, but I’m hopeful that in California, at least, there seems to be a push for media literacy.

There’s now a law requiring media literacy in K-12 education. We’re just at the beginning of that journey — these are not kids who are voting yet, obviously. But by the next election, everyone who’s learning media literacy skills in high school right now will be of voting age. I’m hopeful that’s going to make a difference eventually. 

I also see that — even though TikTok is the most common place for Gen Z to search for information — when I talk to young people individually, they say that they know they’re being manipulated. They’re skeptical of those sources as a default. And if what they’re learning in school is reinforcing that message, then hopefully it’ll be internalized.

There are also some technological advances. A lot of the generative AI image makers are now watermarking their content. It’s a cat and mouse game, but at least there is a concerted effort by some in the tech industry to put some safeguards in place.

As far as silos go, humans are, by nature, tribal. Unfortunately, that’s just a flaw we have to work hard to push against if we’re going to make any progress. The tribalism has gotten worse since 2020. The crisis of faith in our institutions has gotten worse. The social media companies have stopped even pretending they’re going to try to fix that problem because it simply runs contrary to their business model. 

More than ever, it’s up to us as individuals to consume information from a greater variety of sources. It’s up to us to recognize when we’re being manipulated by algorithms. In the immortal words of the great poet Ice Cube, you’ve got to check yourself before you wreck yourself. 

