just asking questions

Jimmy Wales on Why Wikipedia Is Still So Good


As internet discourse has grown more partisan, more aggro, and more all-consuming, Wikipedia still manages to reflect the utopian dreams of the 1990s web. The idea that a user-generated compendium of human knowledge could be reliable seems even more audacious today than it did in 2001 — yet it is widely (if not universally) acknowledged that Wikipedia works. It’s certainly as popular as ever; Wikipedia is the seventh-most-visited website in the world, per Similarweb, and the only one in the top 50 run by a nonprofit organization, the Wikimedia Foundation. (Wikipedia has not minted any billionaires, unlike many of its tech brethren.) To understand how Wikipedia remains relatively above the fray, I spoke with its co-founder, Jimmy Wales, who remains heavily involved in the site’s culture. We discussed how Wikipedia tries to stay neutral on explosive topics, the threat and promise of AI, and why people shouldn’t edit their own pages.

We’re coming up on 24 years of Wikipedia in January. I wanted to get your sense of how you think it’s working right now. Is it as good as it’s ever been? Is it more challenging to maintain in our turbulent AI times? 
I think it’s as good as it’s ever been. Just in terms of the information environment we live in — that is more challenging, I would say, in some ways than in the past. Though it’s not like it has dramatically changed overnight. It’s been changing for many, many years. Just with the rise of a lot of clickbait, low-quality media, things like that. And obviously the rise in divisive feelings, partisanship, culture wars, all of that — adds a lot of noise to the world.

I would say the decline of trust in journalism and politics is quite severe, which then, in some cases, translates into people feeling more angry and lost because they don’t know what to trust and what to believe. That’s all challenging, but the Wikipedia community, we just plug away, trying to be neutral, trying to be clear.

The internet seems like it’s defined by that partisanship, that instinctive vitriol, now in a way it wasn’t a decade ago. Wikipedia has managed not to be defined by that — though there are people who criticize the site for having a political bias, including Elon Musk and your co-founder, Larry Sanger. How have you avoided it? Is it just the tenacity of the volunteers who edit Wikipedia, who are able to stay above the fray?
The main thing I think about there is what I call “community health.” What that really means is, Is the community happy? Is it doing quality work, productive work? Is it feeling supported in that work? That’s obviously crucial because if you don’t have the right people, what things can descend into — and we see this all over the internet — is just a battleground. There’s a policy page called “What Wikipedia Is Not,” and one of the rules is “Wikipedia is not a battleground.” Some people have a model in mind that what you’re supposed to do is come and fight for your side in Wikipedia. What we found from the very earliest days, and it’s kind of obvious, is if you get two people who are there to try to win the argument and shape things to be consistent with their worldview, it doesn’t get anywhere useful. You just pound on each other forever.

What you really want is a spirit of saying, “Okay, look, there are certain disagreements, and we’re never going to fully resolve those disagreements here and now, but we can describe them fairly.” I always say the best Wikipedia article is one where you couldn’t even guess where the editors stand on an issue. They’re just trying to write something that’s clear and acknowledges the different viewpoints out there. When people are new to the community, particularly if they’ve been sent over by an angry tweet from Elon or something — I don’t know what we’re supposed to call them now if it’s not Twitter anymore, but —

I will call it Twitter forever. I refuse to change.
I can’t stop. People come in from a tweet. They’re new, and they start arguing and debating, but pretty quickly they either get tired because the community isn’t that receptive to it, or they’re like, Oh, I get it. This isn’t like Twitter.

Who filters out that stuff? A lot of times, the angriest voices on the internet do win out and are boosted.
This is where the community-health piece is so important. The way it works is the administrators are elected by the community. They’re subject to recall. Generally speaking, what the community’s looking for in an admin is adherence to the Wikipedia way of doing things and so forth.

Then the admins have to have sufficient power to actually block people if they’re misbehaving, but it’s not really just about blocking people. That’s a piece of it. If somebody just can’t pull themselves together and stop yelling at everybody, they will get blocked, but it’s also trying to turn people around, to say, “Oh, hey, take it easy. We’re here to try and write both sides of the story, and we want to do so clearly,” and so forth. That kind of thing is really important.

Have you ever agreed with the criticism that Wikipedia is too left-wing, too resistant to more conservative sources? I saw an interview you did where you said, “When people try and come up with an example of this, they struggle.”
I think that’s right. That’s what I would say. It is something I look at and focus on and think about, but whenever I try to find problematic examples, it’s pretty hard. Somebody on Twitter started trying to bait me with the “What is a woman?” question, assuming we’ve become woke trans ideologues or something like that. I just said, “Let me just quote the first line of the Wikipedia entry, which says, ‘Adult female human.’” And by the way, the entry does get into an explanation of this controversy. I’m like, “If you read it, unless you just are offended by even talking about the existence of different perspectives on the question, it’s actually quite good.”

Then there are other cases, often smaller cases, obscure cases, where you just say, “Yeah, this article could use some improvement.” It’s usually that nobody’s really paid attention and noticed that somebody has gone in and done something not great. In terms of a systemic sitewide issue, I don’t see it, but I’m always looking for it. The other thing I’ll say is oftentimes these things are more difficult when emotions are running high. In recent months, pages around Israel-Palestine have been difficult. That doesn’t mean they’ve become one-sided in one direction or the other. But it does mean those conversations are harder as people sort of grapple with accusations of genocide and things like that. It also means the readers are more sensitized, so if a sentence isn’t quite to their liking, they feel it as more of an intense slight.

A relatively small number of people do a huge amount of the work of Wikipedia. Have you sensed a generational turnover here, or is it the same people who were doing it a few years ago? 
I think the Wikimedia Foundation has actual statistics on that, though maybe not many; we occasionally do surveys and things. My sense of it is we are on average older than we used to be, which mainly means there’s a cohort of people who’ve been around for a long time. That’s a good thing in many ways. We do have young people in the community — we haven’t seen any collapse in young people joining. Then, some people do die. They get old and die on us.

You’re kidding me.
We’re still sort of young, but there’s some natural turnover in the community. Also, sometimes people for no particular reason get massively obsessed with Wikipedia, do it for a few years, and then drift away to another hobby. That’s the natural flow of any hobby people have. I do have a sort of note in my mind that we need to keep an eye on that.

One of our supporters, a donor, once said something that always stuck with me: “You don’t want to become like ham radio.” The analogy is quite good if you know about the history of ham radio because it was a really geeky hobby and a very outward-looking hobby. You could get on your ham radio and talk to people on the other side of the planet well before the internet. Then eventually, ham radio became a hobby that today is largely populated by old people because they’ve been doing it for decades. I said this once in a speech and a young guy came up to me and he’s like, “I’m really offended that you said that. There’s a lot of young people in ham radio. I’m in ham radio.” In general, though, we want to guard against that. We want to refresh with a new generation of people.

People are like, “Oh, kids these days, they’re on TikTok,” but I don’t know. I used to be an idiot when I was a kid, and we all turned out all right. I watched MTV, music videos, for far too many hours.

Being attached to your phone seems like a different kind of thing.
It does feel slightly different, but I guess what I’m saying is people are still people. And we still get really young Wikipedians who are super-geeky. They’re like, “Oh, kids have such a short attention span because of all this.” I’m like, “These are the same kids who also binge-watch 20 straight hours of a very complicated TV show.”

Wikipedia editors have skewed white and male over the years, and many of the entries skewed that way as well. There was a concerted effort to change this. How do you measure that progress now? 
I would say it’s still a problem; it’s still an issue. It’s sort of a deeper and broader issue than just Wikipedia. Sometimes when I’m reading about it and studying it, I think people aren’t thinking it through as clearly as I would like them to.

For example, if you look at our coverage of — I’ll take an extreme example — 19th-century scientists, you’ll find it’s overwhelmingly male. But that’s not our fault. That’s a historical fact. Women were systematically excluded from certain professions. There are a handful, like Marie Curie.

You can’t invent history. 
Yeah, exactly. But if you look at, say, contemporary biology, that’s a field where I think there are more women than men. So we should be more equal, and we are. We tend to be pretty close, so that’s good news.

There was a criticism recently when a woman won the Nobel Prize in chemistry and, before she won, Wikipedia didn’t even have an entry about her. I thought, Ouch. But then I went and looked and I’m like, “Well, actually, she wasn’t a famous public-intellectual scientist.” She was a great scientist grinding away in academia, of whom there are many. We don’t necessarily have every scientist who holds a professorship somewhere unless they’re known. It would’ve been hard to write a biography because there would be no biographical information — you just know who published these papers. I was like, “Okay, well, that one’s not too fair.” It’s certainly well worth us taking a closer look to go, “Why did that happen?” But I concluded in that case, yeah, that was probably the right answer.

I do think there are still broad issues around whether you have a sufficiently intellectually diverse community. People write about what they know and what they’re interested in; if they’re not interested in something, we’re not going to have much about it. I actually find that to be true about, for example, companies. This is not a gender-based issue but just something I’ve noticed. I was giving a talk a few years ago at a conference of the Flexible Packaging Association. This is the industry trade group for tin cans and plastic.

I had a call with the organizers and they told me about some of the big companies, and I was thinking, I need to learn something about the packaging industry. I noted there was a company based in Ohio with something like 11,000 employees, listed on the New York Stock Exchange, unbelievably boring, not a consumer brand. It makes the cans for Campbell’s Soup. There was no Wikipedia entry about it. There was information available about the CEO and the company, but it wasn’t in the public consciousness. Tim Cook, CEO of Apple — that’s a public figure. A lot of other big companies have public figures, like Bob Iger at Disney.

Yeah, there are lots of non-flashy places that just do the same thing for 80 years, and nobody pays any attention.
And yet it’s an important company. There are lots of organizations with 300 employees that would have a Wikipedia entry, and this one doesn’t. Now, they’re probably not concerned about it because they’re like, “We sell cans to Campbell’s Soup Company, so we don’t really need a Wikipedia entry,” but it’s interesting. And that’s just an indicator of how we can have blind spots. Some of them are fairly benign, but they’re blind spots because people don’t know anything about it and, therefore, they’ve never thought to write about it.

AI is training itself, in part, on Wikipedia entries. Are you worried about the opposite happening — AI sneaking itself into Wikipedia entries? Are there safeguards around that? 
We think about it. The community talks about it quite a lot, so the safeguards are pretty robust. It would be very hard even for an AI run by a person to come in, pretend to be a Wikipedian, and get away with it. If you’ve talked to any AI, they pretty quickly show themselves not to be human. But that’s not really the attack we would be worried about.

We do think about the positives and negatives because there are both. If community members are starting to use AI to support their work, in many cases that’s completely fine. The bad example is if you go to a relatively obscure topic, say a person who’s a little bit publicly known but not much, and you ask ChatGPT for a Wikipedia-style biography with references, it’ll cough something up, but it’s very often hallucinations. It makes up the references, even. It’s not well grounded, but it’s going to look pretty plausible. That is bad. Obviously, if anybody is caught doing that, they’ll be banned quite quickly. In fact, I saw a post on German Wikipedia: a guy was checking ISBN numbers, just verifying them, and he found a pattern of wrong ISBNs from a user who had added several. He asked the user, “What’s up?” They’re like, “Oh, I got those from ChatGPT.” Then I think the user was blocked, though he might have just been advised to stop doing it.
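The post he mentions doesn’t say how the ISBNs were checked, but the checksum built into every ISBN makes that kind of verification easy to script. Below is a minimal Python sketch of such a format check; a failing checksum is a strong hint that a citation was hallucinated, while a passing one proves nothing about whether the book actually exists.

```python
import re

def isbn_is_valid(isbn: str) -> bool:
    """Check the ISBN-10 or ISBN-13 checksum. Format only: a valid
    checksum does not prove the book exists."""
    digits = re.sub(r"[^0-9Xx]", "", isbn)
    if len(digits) == 13 and digits.isdigit():
        # ISBN-13: alternating weights 1 and 3; total must be divisible by 10.
        total = sum(int(d) * (1 if i % 2 == 0 else 3)
                    for i, d in enumerate(digits))
        return total % 10 == 0
    if len(digits) == 10:
        # ISBN-10: weights 10 down to 1, 'X' counts as 10; divisible by 11.
        total = sum((10 if d in "Xx" else int(d)) * (10 - i)
                    for i, d in enumerate(digits))
        return total % 11 == 0
    return False

print(isbn_is_valid("978-0-306-40615-7"))  # True: a well-known valid ISBN-13
print(isbn_is_valid("978-0-306-40615-6"))  # False: checksum digit is wrong
```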

So that’s a bad example. But as long as there’s a human in the loop, a human supervising, there are really potentially very good use cases. I’ve played around with this myself, where you take a short stub article that has maybe five sources and you feed AI the article and the sources and ask, “Is there anything in these sources that should be in the entry but isn’t? Or anything in the entry that’s not supported by those sources?” It can make suggestions, and if you’re a human looking through it, you can say, “Nobody had noticed that this article actually doesn’t mention the person’s date of birth, so now we can put the date of birth.” That’s the kind of thing I think could be semi-automated, where we have a tool running all the time that gives volunteers suggestions.
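As a concrete illustration of the human-in-the-loop check Wales describes, here is a minimal sketch assuming the OpenAI Python client. The `suggest_gaps` helper, the model choice, and the prompt wording are illustrative assumptions, not an existing Wikipedia tool, and a volunteer still reviews every suggestion by hand.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_gaps(article_text: str, sources: list[str]) -> str:
    """Ask a model what the sources contain that the stub omits, and
    what the stub claims that the sources do not support."""
    prompt = (
        "Here is a Wikipedia stub article:\n\n" + article_text
        + "\n\nHere are its cited sources:\n\n" + "\n---\n".join(sources)
        + "\n\nList (1) facts in the sources missing from the article and "
        "(2) claims in the article unsupported by these sources. "
        "Quote the relevant source passage for each item."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A volunteer reads the suggestions and makes any edits themselves:
# print(suggest_gaps(stub_text, [source_1, source_2]))
```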

As with many AI uses people are talking about, it’s more of an assistant than anything.
Yeah, exactly. Then you can also think about the question of bias. I’ve played around with this a little bit, but I haven’t really gotten my head around how good it would be. One of the problems is I haven’t figured out a prompt that isn’t just way too verbose to be useful. You can feed it an article and ask, “Is this biased? Describe the bias.” The problem is, anything you feed in, it will prattle on for a while about potential bias and so on. Usually it’s just like, “Yeah, fine, but that’s not actually bias. It’s not a problem.”

I’m not sure if AI will be so useful for that.
I’d love something that returns a “yes” or “no” rating. You could say, “Here’s a group of 20 articles that are both popular and look very biased to the AI.” If a tool were any good whatsoever, that could be really useful to core community members who aren’t interested in that particular hot topic. I remember years ago, somebody pointed out this one page to me; I went on the “talk” page and it was a huge flame war. People were screaming at each other. I’m like, “What is going on here?” It was about a breed of dog that wasn’t recognized by the American Kennel Club, but the owners’ association desperately wanted to get it recognized. Anyway, I was like, “Who are all these dog people? And they’re having a huge fight.”

Dog people are pretty intense. They might rival the Israel-Palestine crowd.
But you can imagine what you really need in a case like that is people who don’t have an emotional stake in it to come in and go, “Let’s try and find a way to write this so that whether you think it should or shouldn’t be an officially recognized breed, we’re not going to settle that here. We’re just going to describe the situation in a way that satisfies all sides.”

To find those kinds of cases where you say, “This article is heavily taking one side of a difficult story” — that might be too hard for an AI, but I’m intrigued by that kind of thing.
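The one-word verdict Wales wishes for can at least be prototyped by constraining the model’s output format. Here is a minimal sketch, again assuming the OpenAI Python client; the prompt wording, model choice, and `looks_biased` helper are hypothetical, not something Wikipedia runs.

```python
from openai import OpenAI

client = OpenAI()

def looks_biased(article_text: str) -> bool:
    """Force a one-word verdict so the model can't prattle on."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": (
                "Does this encyclopedia article take sides on a disputed "
                "question? Answer with exactly one word, YES or NO.\n\n"
                + article_text
            ),
        }],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# Surface only popular pages the model flags, for human review:
# flagged = [title for title, text in popular_articles if looks_biased(text)]
```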

AI has gotten to be quite good at multiple languages, and one thing I did use it for was comparing the pages in English, Hebrew, and Arabic about the October 7 Hamas attack. That was super-interesting, and I found it very useful. It basically said the English one is very, very neutral and the others tend to take one side or the other. I can’t read Hebrew, and I can’t read Arabic, but I did get a translation. If the translation was accurate, I’m kind of okay with where they are. Neither of them is completely crazily one side or the other, and people are very upset at the moment.

I also wonder if AI could detect when a page is out of date. My one major criticism of modern Wikipedia is that many pages, especially for minor celebrities, just end around the same few years: 2011, 2012, 2013.
It’s a great example. That is almost certainly the case because that was when we had the peak number of editors. It stabilized, so it didn’t collapse after that, but it stabilized and is pretty steady. Again, it would require a job that runs all the time, that’s scanning through Wikipedia one page at a time. And now that you can augment it with search — ChatGPT by default will often go and do a search on the web. I’m going to play with this. That’s actually a great idea, just to say, “Here’s a page that hasn’t been updated and is citing sources from 2012, but there’s actually new important information in the past five years.”
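A crude version of that staleness scan would not even need a language model. Here is a hedged sketch of one possible heuristic; the year regex and the ten-year cutoff are assumptions for illustration, not anything Wikipedia actually runs.

```python
import re
from datetime import date

YEAR = re.compile(r"\b(19[5-9]\d|20[0-4]\d)\b")  # matches 1950 through 2049

def newest_cited_year(wikitext: str) -> int | None:
    """Return the most recent plausible year mentioned on the page."""
    years = [int(y) for y in YEAR.findall(wikitext)]
    return max(years) if years else None

def looks_stale(wikitext: str, max_age_years: int = 10) -> bool:
    """Flag pages whose newest year is more than max_age_years old."""
    newest = newest_cited_year(wikitext)
    return newest is not None and date.today().year - newest > max_age_years

# A batch job could walk pages via the MediaWiki API and queue flagged
# titles for volunteers; this sketch only scores a single page's text.
```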

I’d invoice you later, but this is all volunteer work.
Very good, very good.

How much are you involved in the day-to-day Wikipedia stuff? It sounds like quite a bit.
Yeah, I’m very involved. I just got back from Uzbekistan, where I met the local Wikipedia community.

Naturally.
That was actually a super-interesting meeting. On Uzbek Wikipedia, they’ve been having a big debate about machine translation. Somebody was starting to use it quite heavily, but they felt the quality was terrible because I guess Uzbek isn’t at the top of the list of languages that large language models are good at, though all the languages are improving. They’ve decided to say, no, they don’t want machine-translated articles, but it’s still an ongoing discussion in the community.

Anyway, I’m very close to the community. I don’t work at the Wikimedia Foundation. I always joke they’ve got work to do; they don’t need me there bothering them. So I’m just a geeky Wikipedian here at home.

You’ve said you recommend against people changing their own Wikipedia entries. Why?
I think people find it emotionally difficult, particularly if there’s some criticism of them. It’s very tempting to want to soften that or take it out, but it also raises all kinds of conflict-of-interest questions, so it generally doesn’t work out very well.

And you might get caught, right? People track this stuff.
You might get caught. Then, if you’re in the public eye, that could become a problem. But I also understand it. One reason we’ve never outright banned it, rather than just recommending against it, is that sometimes people see something terrible that’s wrong in their entry. They click “edit” and they take it out. They sometimes yell at people: “Why did you say this about me?” And you’re like, Well, that’s not really how to be a good Wikipedian, but it’s not your job to be a good Wikipedian. You’re just somebody who had a false claim in your Wikipedia entry and you’re upset. We sort of have to go, “Yeah, actually, that’s fine.” There’s probably a better way, but we don’t want Wikipedia to be wrong.

This interview has been edited for length and clarity. 
