Since Hamas’s horrifying October 7 attack on Israel, which has prompted near-constant retaliatory Israeli airstrikes in Gaza, social media has been inundated with disinformation about the war, including a flood of graphic visual content, unsubstantiated claims, and opportunistic monetized content generation. Last week for the latest episode of On With Kara Swisher, Kara tried to cut through the digital fog by assembling a panel of three experts with unique vantage points on this information war: senior BBC Verify disinformation journalist Shayan Sardarizadeh, former longtime Facebook/Meta public policy director Katie Harbath, and Stanford Internet Observatory research manager Renée DiResta. Below is a lightly condensed excerpt from their conversation.
Kara Swisher: I want to start by asking each of you to tell me whether the first weeks of this war have seen more, less, or the same amount of misinformation and disinformation — especially visuals, which is what’s being focused on here — as the first weeks of the Ukraine War, and why you think that is. Katie first, then Shayan, then Renée.
Katie Harbath: To me, it seems like more, particularly given the coordination Hamas had on social media — they were ready to push out their own videos and images of this. Plus, everybody kind of jumping on it, adding videos and images from past conflicts and other things. So it just seems, from what I’m hearing from others, that the volume is more. Though Renée and Shayan may have different opinions on that.
Shayan Sardarizadeh: I think it would be difficult to say, because I don’t have all the data from both conflicts to be able to compare and say, “Well, this one was more, this one was less.” What I would say is it’s definitely been overwhelming, because I was doing the exact same thing in the first few days of the Ukraine conflict. And there was a ton of misleading videos and images — and it’s not just visual posts, by the way. It’s also completely unsourced, un-evidenced claims. You know, when a breaking story is developing, and in this case a war, that can actually have consequences. So I would say it’s been quite overwhelming — probably more or less similar to the Ukraine War. I can’t definitively say which one was more, but both of them were bad enough, basically.
Renée DiResta: I’d say we don’t know. The reason being we don’t have Twitter API access anymore.
Swisher: Right. Explain what that is for people who don’t understand that.
DiResta: Yeah, so back in the olden days, there was a really good researcher relationship between academic institutions and Twitter, and we had access to what I call the fire hose, right? Various types of fire hoses, I’m not going to get into the details, but ways that we could build tools, create dashboards, and just ingest data directly from the company. And because it was an academic project, we didn’t have to pay for it.
The kind of data access that we had now costs over $42,000 a month. So a lot of academic institutions have backed out of observing Twitter, which means that our focus has really been on Telegram. That’s not entirely bad, right? That’s where a lot of the content is for the impacted populations; they’re not necessarily sitting on Twitter, so spending our time on Telegram isn’t a bad thing.
Elon very significantly changed curation. So there’s what’s visible, and then there’s volume, and those are not necessarily the same things. What’s visible is really decidedly different — and I think maybe Shayan would agree — often of a very significantly worse quality in terms of accuracy.
Swisher: Shayan, we’ve seen video games and TikToks from old concerts repurposed as misinformation online. We’ve also seen rumors, as you just noted. Give us a rundown of what you think are the five most viral pieces of dis- and misinformation, and tell us what you know about them; obviously, a lot of attention was given to the unsubstantiated claims that 40 babies were beheaded, but I’d like you to pick out your own.
Sardarizadeh: I would say that the most viral stuff that I’ve seen, that I’ve been logging in the last two weeks, has basically been video that is unrelated to what’s been going on on the ground in either Israel or Gaza in the last two weeks. It could be from past conflicts — obviously, Israel and Hamas have been involved in several conflicts just in the last 10 or 15 years — or from the war in Syria, or from the war in Ukraine, or from military exercises. In the case of TikTok, actually, it’s become really, really fashionable now that when a conflict happens somewhere, you just say you’re running live streams of that conflict, and you either use video of past conflicts or a YouTube video of military exercises, and you actually make money off of it.
Swisher: Or put on a helmet. I’ve noticed some people — not there — putting on helmets, correct?
Sardarizadeh: Absolutely. And actually make money off of it. That’s the important bit. So most of the stuff that I have seen — and I’ve seen stuff from both sides, by the way, it’s not been one-sided at all — I’ve seen claims from both directions, from supporters of both sides of this conflict, sharing all sorts of completely untrue material. And the most important thing for me personally is that this is not fringe stuff — and this is what people need to know.
Swisher: Meaning what?
Sardarizadeh: We’re not talking about stuff that is being shared by 50 people, 100 people, and, you know, 100 retweets, 200 likes. We’re talking about material that’s been viewed tens of millions of times on platforms like X, formerly Twitter, TikTok, YouTube, Facebook, Instagram.
We’re not living in the 1950s and 1960s anymore. People these days don’t necessarily sit in front of a TV and watch the sort of nightly bulletin to find out what’s happening around the world. They go on the internet, they go on social media, they look at their feeds, they want to get updates constantly, particularly when there’s an event of this magnitude.
So quite a lot of the visual evidence that they’ve been getting, unfortunately, online in the last two weeks has been completely false. There’s also been quite a lot of video that’s been shared that’s actually genuine from the last two weeks and has been helping us journalists who want to investigate what’s going on, for instance, with what happened at the hospital two nights ago.
Swisher: Right. We’ll get to that in a minute.
Sardarizadeh: Everything that we’ve done has been based on footage that’s been shared online that is genuine. But the point is you have to verify first that the footage is genuine, and while you’re doing that verification, quite a lot of stuff turns out to be untrue — you see a piece of video from Arma 3, which is a military-simulation video game, with 4 million views on TikTok.
Swisher: So Katie, there’s talk of this being a TikTok war — that’s according to Bloomberg — showing a new role for the platform, which has more moderation than the other platforms now of course. You worked at Facebook, now Meta, for ten years. Explain the platform shift and if and why it matters.
Harbath: Well, I think first and foremost, it’s that for many, many years, platforms like Google and Facebook have been building up their defenses to try to find some of this content. They’re still having a lot of challenges doing that. And as Shayan and Renée were mentioning, verifying this content, working with fact-checkers to verify it, and then deciding — do you de-amplify it? Do you take it down? Who’s sharing it, and what’s their intent? — is also a very hard thing to do. So a lot of these newer platforms are having to grapple with questions that the legacy platforms have already kind of worked through, having spent years refining their policies and their algorithms to find this content.
Swisher: Shayan, I do want to ask you about how you identify what is disinformation — how do you account for that?
Sardarizadeh: Well, I think the first and most important thing to clarify is that my colleagues and I only go for content that is viral. When we have a piece of video, or an image, or basically just a post online making a claim that is either incendiary or has implications, what we want to know first of all is: When was this piece of video filmed? Where does it come from? Who’s the original source? Can we actually trace the video to the person who filmed it, and to the platform where it was first shared? Because obviously you put something on Telegram or on WhatsApp and then it travels across platforms — just because you see it on TikTok or on Instagram doesn’t mean that’s where it came from. You have to source it. You have to find out who filmed this piece of footage, who first posted it online, and then you have to contact them and talk to them, because they probably have more context. Then the second thing is: Is this actually the entire footage? Is there a longer version? Have other people been at the scene where this video was filmed?
Swisher: Meaning trying to manipulate it to look like-
Sardarizadeh: Exactly. Has it been edited? And then the next thing is: Is this actually current footage or is it old? So we have to go online and look on platforms like YouTube, Instagram, TikTok, you name it-
Swisher: You do reverse image search, correct?
Sardarizadeh: Yes. So we take screengrabs of pieces of video. With images, you don’t need to do that — the image is there, you can just reverse-search it. But with video, we take 5, 10, 20 screengrabs of a piece of video, and then we go online on several reverse image search tools, including Google, Yandex, Bing, and then we try to find whether there are other examples of this video shared online in the past. Then if it is from these past two weeks, then you want to investigate it properly and find out what it actually shows, who actually filmed it, where it came from. Because sometimes you have genuine pieces of video that are either edited or deceptively manipulated, or taken out of context.
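The screengrab-and-reverse-search step Shayan describes can be sketched in a few lines. This is a minimal illustration, not BBC Verify’s actual tooling: the function names, the evenly spaced timestamps, and the engine list as plain strings are all assumptions; real frame extraction (e.g., with ffmpeg) and the actual reverse-search queries would happen where the placeholders indicate.

```python
# Sketch of the workflow described above: turn one video into several still
# frames, then plan a reverse-image search of each frame across multiple
# engines (Google, Yandex, Bing). Timestamp planning only — no real video
# decoding or web queries happen here.

def screengrab_times(duration_s: float, n_grabs: int) -> list[float]:
    """Return n_grabs timestamps spaced evenly through a video,
    skipping the very start and end (often logos or black frames)."""
    step = duration_s / (n_grabs + 1)
    return [round(step * (i + 1), 2) for i in range(n_grabs)]

def plan_verification(duration_s: float, n_grabs: int = 10) -> list[dict]:
    """Build a checklist: one reverse-search task per planned screengrab.
    In practice each task would extract the frame and query each engine."""
    return [
        {"timestamp_s": t, "engines": ["Google", "Yandex", "Bing"], "done": False}
        for t in screengrab_times(duration_s, n_grabs)
    ]

# A 60-second clip with five screengrabs -> frames at 10s, 20s, 30s, 40s, 50s.
tasks = plan_verification(60.0, n_grabs=5)
print([t["timestamp_s"] for t in tasks])  # [10.0, 20.0, 30.0, 40.0, 50.0]
```

Evenly spaced grabs are a simplification — in practice you would also grab frames at shot changes — but the shape of the process matches what’s described: one video becomes 5 to 20 stills, each fed to several reverse-search engines to find earlier copies.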
Swisher: So, talk about the sources of disinformation on both sides and where they’re coming from because a lot of other actors have also gotten involved. There’s a lot of anti-Palestinian disinformation and generally more Islamophobic content around this conflict coming from India. So where are the sources?
Sardarizadeh: I would say the vast majority of misinformation that I’ve seen has come from people who seem to have nothing to do with the conflict directly. It’s just people online who are farming engagement, farming followers, farming influence, and, in some cases, trying to put as much outrageous, shocking content as they can to make money off of it.
When it comes to the two sides of the conflict, being the government of Israel and Hamas: wars these days are not fought just on the ground — there’s an information war as well that you need to win — so we expect the two sides to try to put whatever they can online, regardless of whether it’s factual or not, to win the information war. That’s expected. And for people who live either in Gaza or in Israel, again, because of all the atrocities that have happened in the last two weeks, you expect them to be emotional and obviously to take a side in the conflict.
Swisher: Yeah, having opinions, etcetera.
Sardarizadeh: You absolutely expect that and, obviously, people have seen horrific stuff. So, I don’t pay too much attention to somebody who is emotionally affected by this conflict putting something out that is misinformation. What is important to me is people who are not directly related to it, say somebody sat in America or sat in Great Britain or in China, and they’re posting content that, in my view, is just for getting influence and engagement online.
Now, apart from that, there’s also more sort of nefarious misinformation or disinformation that is put out for political gain. One good example: Last week there was a video posted online, which quite a lot of people saw, that had the branding, logo, and style of BBC News. This one was actually fake, 100 percent fake. We didn’t produce it, but it looked genuine, and it said that we had reported that the weapons Hamas militants used on the 7th of October had come from the government of Ukraine — or were weapons that had been given to the government of Ukraine by Western powers, been smuggled out of Ukraine, and ended up in the hands of Hamas.
There’s zero evidence for it. We have not reported it. And then Dmitry Medvedev, the former Russian president, put out the same baseless claims online. So you have to think: why would someone go to that effort to produce a fake BBC video saying Hamas militants got their weapons from the government of Ukraine? That has nothing to do with this conflict.
Swisher: Yeah. Why would they do that? I wonder, Shayan, I wonder why they would do that. I don’t know. Maybe I have some ideas.
I want to get into one specific attack: the blast at the al-Ahli Gaza City hospital. None of us are experts on airstrikes, but the blast and questions around it are metastasizing online. Katie, you’ve worked on elections where anyone can build whatever narrative suits their purposes. Is this common, what’s happening here?
Harbath: It is, but I think one of the things I’m seeing that’s different is the confusion among mainstream news organizations — you know, the New York Times had a headline I saw based on initial reporting, trying to be first at this. That adds to the confusion Shayan was describing: when people are making fake videos using the branding of these news platforms, it contributes to people not trusting and not knowing what is true, because we’re all trying to figure it out in real time. And that makes it much harder, from a social-media-company standpoint, to verify what is or is not true and to decide what to amplify or not amplify while it’s all happening in real time. To me, right now, this just feels like it’s coming faster, at higher volumes, with a lot more facets to deal with than what I’ve seen in a particular election situation — unless it’s something like January 6, but even then this feels like higher stakes because of the amount of gruesome images. And that also has an impact on the people trying to moderate this, trying to cover this. There’s an emotional aspect to this as well, and a burnout that continues to build as this goes on longer and longer.
Swisher: Yeah so Renee, how do you look at this? Is this common from your perspective? You’ve seen hundreds of these over the years, I would assume.
DiResta: We have. I think what I would say is most different is the widespread democratization of generative AI. That’s what’s really different here, right? That wasn’t the case in February of 2022, during the initial Russian invasion of Ukraine. There were a lot of rumors, a lot of stories — you might recall Snake Island.
Swisher: Right, Snake Island.
DiResta: Right, so everybody remembers Snake Island and the way that that was reported, and then [the Ukrainian service members there] hadn’t died, but, you know, they had been taken hostage, but there was a whole narrative-
Swisher: Used for heroism.
DiResta: Right, the heroism narratives. So you do see the governments come into play, right? The Ukrainian government was quite good at war propaganda in the early days, really — just kind of riling people up about that. I followed that story. I remember that. With something like this, though, what is really distinctly different now is the liar’s dividend piece of this, where content that is real, you can say is faked, because of the existence of the technology to fake it, right? And so it’s this question of — I see it as a kind of collapse of consensus reality. You can pick which ones you’re actually going to trust, you can pick which ones you’re not going to trust, and you can dismiss all the rest of it as, “Oh, that’s AI generated.”
Swisher: Renée, once a conclusive answer is established, if possible, what responsibility do social-media companies have to stop, or at least de-amplify, the reach of misinformation around such an event [as the hospital blast], given it has such real-world repercussions — there are protests all over the place. So what’s their responsibility? Because this has so many echoes of incidents in Myanmar and elsewhere in the past. It reminds me — again, I feel like I’m on a constant loop.
DiResta: Well, I think the challenge is, you know, some people are going to see something go by once. They’re going to form an opinion on it. They’re not going to spend a whole lot of time thinking about it, and then they’ll just move on. Other people are going to really follow this story. I think in the particular case of the bombing or missile misfire, whatever it turns out to be, a lot of people are following it, and a lot of very prominent accounts are following it, so I think it is going to stay kind of at the top of the feed as people debate what happened. I think with social media, one of the real challenges in a conflict situation like this is that people are looking for it, right? And so you’re going to have to return something. So I think the best thing you can do is try to return authoritative sources, try to return people who are verified to be in the region, right? And that’s hard to do. That really takes effort. We’ve seen platforms do it in the past. You know, Twitter in particular — that’s the place people would normally go for this sort of thing, but they don’t have the staff to do any of the things that ordinarily would be done: to say, “Okay, who are the credible sources? Who are the people who are in the region? Who are the official accounts? You know, maybe we shouldn’t surface paid blue checks for this one. Maybe the armchair opinion of some rando commentator who paid eight bucks is not the person we should be putting at the top of the feed. That’s a crazy idea.”
But that’s the sort of thing that Twitter would have done in prior environments.
Swisher: They would return to authoritative [sources] through badges, fact-checks.
DiResta: Right, and you know, I think Community Notes is a great concept, but it’s better when you’re surfacing corrections for things that are established. It’s not equipped for this — they’re not journalists; they’re researchers, maybe, or commentators who are clarifying a fact or a connotation, maybe something a politician says. It’s great for slow-moving stuff like that. There is nobody sitting at their computer typing up a community note who is on the ground in Gaza or on the ground in Israel and has any idea what the actual facts are. So you’re winding up with Community Notes “checks” that have also been shown to be wrong several hours later. And that is not a dig on citizen journalism. It’s that that’s not citizen journalism. That’s the whole point, right? You should be servicing citizen journalism, but that requires the actual effort of going and figuring out who the citizen journalists are in this particular case — which channels are authoritative and should be returned first in search.
This interview has been edited for length and clarity.
On With Kara Swisher is produced by Nayeema Raza, Blakeney Schick, Cristian Castro Rossel, and Megan Burney, with mixing by Fernando Arruda, engineering by Christopher Shurtleff, and theme music by Trackademics. New episodes will drop every Monday and Thursday. Follow the show on Apple Podcasts, Spotify, or wherever you get your podcasts.