On Wednesday, Vice News published a story about Twitter’s search bar and the way certain Republican accounts did not auto-populate in its suggestions, as prominent accounts like the president’s do. The unscientific survey incorrectly described the affected accounts as suffering a “shadow ban” and framed Twitter’s sorting system as explicitly biased against Republicans. It was sloppy reporting, and the president naturally seized on it as an opportunity to inflate the unsubstantiated claim that tech platforms target and censor Republicans.
Algorithmic sorting (in this case, to be clear, of search auto-population, not of anything as dramatic and important as the content you see in your feed) is not a shadow ban, nor is it targeted censorship. It’s a strategy intended to maximize the amount of time users spend on the site by ensuring they are less likely to see content that upsets them or turns them off. It’s focused on behavior, not on party identification. If your identity on Twitter is aligned with trolling, misinformation, or aggressive rudeness — behaviors that will alienate other users, which Twitter needs in order to make money by selling advertisements — you might have your account flagged. Twitter clarified this in a blog post last night and in an interview with Wired. “If you send a tweet and 45 accounts we think are really trolly are all replying a hundred times, and you’re retweeting a hundred of them, we’re not looking at that and saying, ‘This is a political viewpoint.’ We’re looking at the behavior surrounding the tweet,” a spokesperson told the magazine. The company announced that hundreds of thousands of accounts had been affected by this software function, and that it was making changes to resolve the issue.
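The behavior-based flagging Twitter describes can be illustrated with a toy sketch. To be clear, every signal name, weight, and threshold below is invented for illustration — Twitter has not published its actual signals or scoring — but it shows how a system can demote accounts based on how they behave, without ever looking at viewpoint:

```python
# Hypothetical sketch of behavior-based account scoring.
# Signal names, weights, and the threshold are invented for
# illustration; they are not Twitter's actual system.

def troll_score(account: dict) -> float:
    """Combine behavioral signals (not party or viewpoint) into one score."""
    return (
        0.5 * account["mute_block_rate"]    # how often other users mute/block it
        + 0.3 * account["reply_spam_rate"]  # repeated replies to the same tweets
        + 0.2 * account["report_rate"]      # how often it gets reported
    )

def should_demote_in_search(account: dict, threshold: float = 0.6) -> bool:
    """Accounts above the threshold still exist and are still searchable —
    they just don't auto-populate in the suggestion dropdown."""
    return troll_score(account) > threshold

civil = {"mute_block_rate": 0.05, "reply_spam_rate": 0.1, "report_rate": 0.02}
trolly = {"mute_block_rate": 0.9, "reply_spam_rate": 0.8, "report_rate": 0.7}

print(should_demote_in_search(civil))   # False
print(should_demote_in_search(trolly))  # True
```

Note that nothing in the inputs encodes politics: two accounts with identical views but different behavior would score differently, which is exactly the distinction Twitter’s spokesperson drew.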
Twitter has been very careful in describing the auto-population sorting as an “issue,” but some outlets have succumbed to the temptation to describe Twitter’s demotion of certain accounts as a “bug” or “glitch.” BuzzFeed refers to the issue as a “bug” five times in its explanation. Engadget’s headline reads, “Twitter says supposed ‘shadow ban’ of prominent Republicans is a bug.” The Associated Press reported that Twitter had said the issue was “due to a bug” (the AP’s wording, not Twitter’s). CNET paraphrased Twitter’s Kayvon Beykpour by writing that the “search results bug involves an error with Twitter’s algorithm.” Vice News’ original report said that the slightly reduced visibility of the accounts might be “more of a bug than a feature” — the line has since been removed from the article.
It’s not a bug. We need to be clear about this — the issue here is not a bug, glitch, error, or whatever other synonym you can conjure up. Calling this a “bug” implies an outcome contrary to what the code was written to produce, and implies Twitter made a mistake. That is not how we should think about the sorting algorithms that power Twitter, or Facebook’s News Feed, or Google’s search engine, or YouTube’s recommendation system. These are programs designed to anticipate what a user wants based on myriad signals and behaviors, and if the results they serve up are imperfect for a few users, that doesn’t mean the software is buggy. The results might not be politically helpful to a company, or they might be unpredictable. But they’re not a mistake.
Let’s say I want to read about the original Mission: Impossible TV series, so I type “mission impossible” into Google. Google shows me information about the latest film in the series. Did Google give me a valid answer? Sure. Does it make sense that it would show me these results? Yeah. Did it give me the answer I was looking for? No. Would I characterize this behavior as a “bug”? Definitely not.
We are used to technology meant to accomplish very specific tasks, which either accomplishes those tasks completely or fails entirely. Sorting algorithms do not exist on this binary. They are intended to show you (or not show you) what they perceive to be the most relevant results, using factors that change on the fly dozens of times every second. They process hundreds of variables, and when the people tasked with overseeing them change one variable, it’s almost impossible to predict how the system will behave across millions of users each receiving a personalized result. Their kludginess and imperfection are accepted by-products of this work. These programs are tweaked constantly, attempting the impossible task of producing The Perfect Result. Twitter’s algorithm was, some would argue, overzealous in minimizing users’ ability to find certain accounts. It’s important to characterize it correctly not as a bug or a glitch — those words imply an accidental malfunction, and they let Twitter off the hook — but as an issue defined by a consistent outcome. Whether or not it’s the right outcome … well, a lot of people have opinions about that.
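The unpredictability point can be made concrete with a toy personalized ranker. Again, every account name, affinity score, and weight below is hypothetical, invented purely for illustration: two users see the same candidates, and nudging one shared weight reorders results for one user while leaving the other’s untouched — a “consistent outcome,” not a malfunction.

```python
# Toy personalized ranker. All names and numbers are hypothetical,
# invented for illustration only.

def rank(candidates, user_affinity, recency_weight):
    """Score each candidate as personal affinity plus a globally tuned
    recency boost, and return ids from highest score to lowest."""
    scored = [
        (user_affinity[c["id"]] + recency_weight * c["recency"], c["id"])
        for c in candidates
    ]
    return [cid for _, cid in sorted(scored, reverse=True)]

candidates = [
    {"id": "old_favorite", "recency": 0.1},
    {"id": "breaking_news", "recency": 0.9},
]

alice = {"old_favorite": 0.8, "breaking_news": 0.5}  # strongly prefers the old account
bob = {"old_favorite": 0.6, "breaking_news": 0.55}   # nearly indifferent

# One small tweak to a single shared weight flips Bob's ordering
# while Alice's stays the same:
print(rank(candidates, bob, 0.0))    # ['old_favorite', 'breaking_news']
print(rank(candidates, bob, 0.2))    # ['breaking_news', 'old_favorite']
print(rank(candidates, alice, 0.2))  # ['old_favorite', 'breaking_news']
```

With hundreds of variables instead of two, and millions of users instead of two, the engineers turning the knobs cannot fully foresee which orderings will flip — which is why the output can be undesirable without being a bug.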