We’re swimming in AI slop. Here’s how to tell the difference.

October 8, 2025

If your feed isn’t already filled with AI-generated video slop, it’s only a matter of time.

Meta and OpenAI will make sure of it. Meta recently announced its endless slop-feed Vibes, made up entirely of AI-generated content: cats, dogs, and blobs. And that’s just in Mark Zuckerberg’s initial video post about it.

OpenAI’s new Sora app offers a different flavor of slop. Like TikTok, Sora has a For You page for vertically scrolling through content. But the scariest part of Sora is how real it looks. One feature, called Cameo, lets users make videos of themselves, their friends, and any public-facing profile that grants access. This means videos of Sam Altman hanging out with Charizard or grilling up Pikachu are making the rounds on social media. And, of course, Jake Paul videos are also starting to circulate.

It’s just the beginning, and the technology is only getting better. To help navigate it, we spoke with Hayden Field, senior AI reporter at The Verge. Field and Today, Explained co-host Sean Rameswaram discuss why these tech giants are doubling down on AI video, how to spot it, and even get fooled by a clip themselves.

Below is an excerpt of the conversation, edited for length and clarity. There’s much more in the full podcast, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.

What is Mark Zuckerberg trying to do with Vibes?

That is the million-dollar question. These companies, especially Meta right now, really want to keep us consuming AI-generated content and they really want to keep us on the platform.

I think it’s really just about Zuckerberg trying to make AI a bigger piece of the everyday person’s life and routine, getting people more used to it and also putting a signpost in the ground saying, “Hey, look, this is where the technology is at right now. It’s a lot better than it was when we saw Will Smith eating spaghetti.”

How did it get so much better so fast? Because yes, this is not Will Smith eating spaghetti.

AI now trains itself a lot of the time; it can train itself at getting better. One of the big things standing in the way is really just compute. All these companies are building data centers and making new deals every day. They’re really working on getting more compute so that they can push the tech even further.


Let’s talk about what OpenAI is doing. They just released something called Sora 2. What is Sora?

Sora is their new app, and it’s basically an endless-scroll social media app made up entirely of AI-generated video. You can think of it as an AI-generated TikTok, in a way. But the craziest part, honestly, is that you can make videos of yourself, and of your friends too, if they give you permission. It’s called a Cameo: you record your own face moving side to side, you record your voice speaking a sequence of numbers, and then the technology can parody you doing any number of things that you want.

So that’s kind of why it’s so different from Meta’s Vibes and why it feels different when you’re scrolling through it. You’re seeing videos of real people, and they look real. I was scrolling through and seeing Sam Altman drinking a giant juice box, or any number of other things. It looks like it’s really Sam Altman, or it looks like it’s really Jake Paul.

How does one know whether what they’re seeing is real or not in this era where it’s getting harder to discern?

These tips I’m about to give you aren’t foolproof, but they will help a bit. If you watch something long enough, you’ll probably find one of the telltale signs that something’s AI-generated.

One of them is inconsistent lighting. It’s hard sometimes for AI to get the vibes of a place right. If there’s a bunch of lamps — maybe it’s really dark in one corner, maybe it doesn’t have the realistic quality of sunlight — that could be something you could pick up on. Another thing is unnatural facial expressions that just don’t seem quite right. Maybe someone’s smiling too big or they’re crying with their eyes too open. Another one is airbrushed skin, skin that looks too perfect. And then finally, background details that might disappear or morph as the video goes on. This is a big one.

Taylor Swift, actually — some of her promo for her new album apparently had a Ferris wheel in the background and the spokes kind of blurred as it moved.

Anything else out there that we should be looking for?

I just wish we had more rules about this stuff and how it could be disclosed. For example, OpenAI does have a safeguard: every video you download from Sora has a watermark, or at least most do. Some pro users can download videos without one.

Oh, cool, so if you pay them money, you could lose the watermark. Very nice.

But the other thing is I’ve seen a bunch of YouTube tutorials saying, “Here’s how to remove the Sora watermark.”

Do companies like OpenAI or Meta care if we can tell if this is real or not? Or is that exactly what they want?

They say they care. So I guess that’s all we can say right now. But it’s hard because by the very nature of technology like this, it’s going to be misused. So you just have to see if you can stem that misuse as much as possible, which is what they’re trying to do. But we’re going to have to wait and see how successful they are at that. And right now, if history is any guide, I’m a little concerned.