Of all the buzzy 21st-century tech phrases, “machine learning” threatens to be the most important. Programming computers is slow, but we’re nearing the point where humans give the bots parameters and let them teach themselves. After all, computers can run tons of simulations and figure out the instructions we would have given them if we knew enough. Thus, we don’t try to define a sheep for image-recognition software (that would be hard!); instead, we give the computer a bunch of sheep pictures and let it figure out the most efficient way to define the commonality. It sounds easy enough, except sometimes machines learn the wrong lessons.
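To make that concrete, here’s a toy sketch of what “figure out the commonality” means in practice; the feature vectors and labels below are invented stand-ins, not real sheep photos or anyone’s actual system:

```python
# A toy version of "show it examples and let it find the rule." No sheep here,
# just synthetic feature vectors standing in for images, labeled 1 or 0.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
features = rng.normal(size=(n, 5))           # pretend these are image features
labels = (features[:, 2] > 0).astype(int)    # the hidden rule: feature 2 marks a sheep

model = LogisticRegression().fit(features, labels)

# The model recovers a rule nobody spelled out: its largest weight lands
# on feature 2, the one that actually generated the labels.
print(model.coef_.round(2))
```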
In a blog post, researcher Janelle Shane wrote about some of the unconventional answers she’s seen algorithms come up with when they’re asked to teach themselves. The aforementioned sheep-recognition program is real, but the commonality it noticed was the scenery; the sheep-recognition algorithm became a picturesque-grassy-hill-recognition algorithm. Because these programs are looking for the best (read: most efficient) answers to their problems, they’re especially good at finding and using cheat codes. In one disturbing example, a flight simulator discovered that a very hard landing would overload its memory and register as super smooth, so it bashed planes into an aircraft carrier.
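A synthetic rerun of the sheep mix-up shows how cheap the shortcut is. The “wool” and “grass” scores below are made up for illustration, not Shane’s actual data: when every training sheep stands on a grassy hill, the scenery is the most efficient cue, and the classifier falls apart the moment a sheep shows up indoors.

```python
# A synthetic rerun of the sheep/grassy-hill mix-up. Two crude "features" per
# image: wool_score (what we care about) and grass_score (the scenery).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_images(n, sheep_on_grass=True):
    is_sheep = rng.integers(0, 2, size=n)
    wool = is_sheep + rng.normal(scale=1.5, size=n)       # noisy signal: sheep are hard
    if sheep_on_grass:
        grass = is_sheep + rng.normal(scale=0.1, size=n)  # scenery: clean, easy signal
    else:
        grass = rng.normal(scale=0.1, size=n)             # sheep indoors: no grass
    return np.column_stack([wool, grass]), is_sheep

X_train, y_train = make_images(2000, sheep_on_grass=True)
X_test, y_test = make_images(2000, sheep_on_grass=False)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on grassy hills:", model.score(X_train, y_train))
print("accuracy on sheep indoors:", model.score(X_test, y_test))
# The classifier leans on grass_score because it's the most efficient cue,
# so it aces the training scenery and flubs the sheep that wander off it.
```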
In 2018, it’s very easy to feel like the whole country is one of those planes getting crashed into a boat. American society has come to follow the same logic as the glitch-hunting AIs, and in the process, we’ve become vulnerable to these glitches at an increasingly large scale. The racial gap in incarceration rates has exploded since the Civil Rights Act. College debt has expanded in a similar way over the same time period, like it’s glitching. That doesn’t mean that the American government is way more white-supremacist than it used to be, or that universities and their partners in debt-servicing have become more venal, but that the criminal-justice system and the academy are operating according to a qualitatively different logic. The problem is not that our national institutions are broken per se, but that they’ve come to follow their rules in a new way.
When technocrat proponents of the market or the state talk about those institutions, it tends to be in terms that strongly resemble those of programming and computer science. Both institutions are supposed to take their strict parameters and make the most of them, building on information and learning from past experience toward a more perfect union and/or a lot of money. And, like computers, when you look at how either of these systems actually functions, you find it full of glitches and exploits.
When I read Shane’s account of a program that was supposed to generate a fast-moving robot but instead decided to build a very tall tower that just fell down, I thought of the 2008 financial crisis. The way lenders’ analytics programs were written, blending up a bunch of mortgages from around the country made all the risk seem to disappear: the models treated defaults in different regional markets as roughly independent, so pooling them looked like diversification right up until the whole national housing market fell at once. Why spend time and money getting to know your clients to properly assess lending risk when there’s a cheat code to make it vanish? We learned the hard way. Or maybe we haven’t.
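The cheat code is easy to reproduce with made-up numbers: if regional defaults really were independent, pooling would smooth the losses almost perfectly, and a single shared bad year brings the tail risk right back.

```python
# Made-up numbers, sketching why "blend the mortgages and the risk disappears"
# only works if defaults are independent. A single shared factor (a national
# housing downturn) brings the tail risk back.
import numpy as np

rng = np.random.default_rng(42)
n_loans, n_trials, p_default = 1_000, 20_000, 0.05

# Independent world: each loan defaults on its own 5% coin flip.
indep_losses = rng.binomial(n_loans, p_default, size=n_trials) / n_loans

# Correlated world: in a bad year (10% of the time) every loan's default
# probability jumps; in a good year it drops. Same 5% average.
bad_year = rng.random(n_trials) < 0.10
p = np.where(bad_year, 0.32, 0.02)
corr_losses = rng.binomial(n_loans, p) / n_loans

threshold = 0.15  # losses big enough to blow up the pool
print("P(losses > 15%), independent:", (indep_losses > threshold).mean())
print("P(losses > 15%), correlated: ", (corr_losses > threshold).mean())
# Independent pooling makes a 15% loss essentially impossible;
# with a shared bad-year factor it happens about one year in ten.
```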
Firms have discovered that if they classify workers as contractors instead of employees they can pay less, so they’ve kept doing that, a lot, to the degree that it’s deforming our social structure. Rich malefactors like Harvey Weinstein have known for a long time that they can overload the justice system by throwing money at it, like an AI that’s discovered that the most efficient way to stay out of jail is simply to donate to Cy Vance, rather than avoid committing crimes. Meanwhile, cops invent bullshit crimes and pin them on poor people just to please the computers — leading to nearly a million unjustified charges in New York alone. The FBI does pretty much the same thing, and has accidentally become the country’s central source of terror plots, according to Human Rights Watch. We’re supposed to be incentivizing the creation and distribution of useful goods. Instead, we’ve incentivized the creation of giant mind-control machines for sale to the highest bidder because that was more efficient. Whoops!
The whole Silicon Valley ethos of “move fast and break things” is essentially an endorsement of the glitch as a mode of production. Shane describes one of the ways programs look for shortcuts as “hacking the Matrix for superpowers,” which sounds right out of a tech pitch deck. What it means in practice is finding ways around rules that probably exist for at least some good reasons. Uber found a hole in taxi regulations (they just took “cab” out of the name), Airbnb found one in hotel rules. Facebook emerged like Athena from a gap in Harvard’s data security. In one of Shane’s examples, a bot learned to harvest energy from a glitch by sitting down and bouncing back up, another by twitching rapidly. The program fulfills its task without breaking rules, but fulfilling its task doesn’t necessarily mean it’s doing anything useful; the gap between the two is where we all live now.
If an algorithm generates a bad solution — like face-planting as a mode of ambulation — it’s usually something we can fix.
That’s what tests are for, and engineers learn from their mistakes and oversights. Liberal capitalist democracy, however, isn’t great with do-overs. In the political realm, there’s a fear that any flexible or dynamic process would be subject to tyrannical abuse, so it’s better to just wait until the next election. When it comes to property, possession is nine-tenths of the law; good luck getting your money back on the grounds that it was taken unfairly. And then there’s our system’s ultimate exploit: regulatory capture. That’s as if the twitchy robot used its ill-gotten energy to take over the computer and make sure the error never got patched. What looked like a glitch becomes the system’s defining characteristic, which might help explain why we all walk around now by slamming our faces against the floor.
In the stories of algorithms gone haywire, the glitches prompt programmers to reassess what they really want from their programs, and how to get it. What we can learn from the errors of machine learning is that we do not have to live according to a set of rules that produces obviously unfair and undesirable outcomes like a bloated one percent, apartheid prisons, and the single worst person in the country as president. There are American political traditions that saw these problems coming and envisioned a better relationship between our algorithms, our state, and ourselves than the one we have now. For instance, the final clause of the tenth point of the Black Panther Party’s 1972 Ten-Point Program was “people’s community control over modern technology” — that sounds like a good idea, especially compared to walking on your face.
But until we reassert control over our societal machine learning, we’re stuck face-planting. I remember the scholar Cornel West telling a joke about success as a narrow goal: “Success is easy!” he said. Then, mimicking a mugger, “Gimme your wallet.” America looks like a glitchy computer because capitalism is a machine language, reducible to numbers. America exists to create wealth, and the system isn’t broken; it’s just obeying its rules all the way to disaster. As a country, we’re more ourselves than ever. Donald Trump, who seems to be speedrunning American democracy, is like a living, breathing cheat code, proceeding through life by shortcuts alone. But if Trump represents a terminal failure of this system, it’s because he is a solution, and the easiest one in our current environment. He reminds me of another of Shane’s examples: a program that, told to sort a list of numbers, simply deleted them. Nothing left to sort.