If you’re female, the machines may not recognize you as human. They may not see you if you’re trans or a person of color, nor, possibly, if you have poor dental hygiene or carry a cane or are diminutive in stature or extraordinarily tall. The machines understand the world based on the information they’ve been given, and if you aren’t well represented in the data — if the white-male prejudice of history itself has disenfranchised you to date — then chances are to the machine you don’t exist. A dark-skinned woman in the U.K. couldn’t renew her passport online because the digital form looked at her photo and didn’t recognize it as a proper face. Trans people confound airport body scanners and are regularly hauled out of security lines to be frisked as if they were terrorist suspects. Worst-case scenarios are not so far-fetched. A self-driving car knows to brake in the crosswalk when it sees a person. But what does it understand a person to look like?
If you think structural bias is bad now, in other words, just wait until the machines take over. “Bias,” warned Kate Crawford, co-founder of the AI Now Institute at NYU, in a lecture she gave last year, “is more of a feature than a bug of how AI works.” And the worst of it is that you may never know how the machines have judged you, or why they have disqualified you from that opportunity, that career, that scholarship or college. You never see the ad on your social-media feed for your dream job as a plumber or roofer or software engineer because the AI knows you’re female, and it perpetuates the status quo. (Instead, you only see ads for waitresses or home health-care workers — lower-paying jobs with less opportunity for advancement.) These are real-life examples, by the way.
The reason recruiting engines downgrade candidates with names like Latanya is that people named Latanya have always had a harder time finding a job, according to research conducted by Harvard’s Latanya Sweeney, who used her own name as a sample. (And if you do happen to search for Latanya online, you will find ads for criminal-background checks alongside your results.) One recent experiment showed that AIs gave special preference to the résumés of job candidates named Jared who played lacrosse, which corroborates every one of your worst fears about the world. But there it is, replicating ad infinitum and without oversight.
The data are only part of the problem. There are also the men (mostly men: AI researchers are 88 percent male) who build the algorithms that tell the machines what to do. Sometimes the faulty design is unintentional, as when Amazon decided to create an AI that sorted résumés to find optimal employees. The men in the Amazon AI lab built their algorithm around a question — what kinds of people get hired to work at Amazon? — and then loaded generations of résumés into the machine to teach it the attributes of a successful Amazon employee. And what did they find? That maleness was a prerequisite for getting hired at Amazon, because for as long as Amazon has been in business it has promoted and rewarded men. Ashamed of themselves, the AI geniuses scrubbed their program. They tried to make the AI neutral, but they couldn’t guarantee it would ever unlearn its biased beginnings and wound up killing the project.
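To see the mechanics of that failure, here is a minimal sketch in Python with scikit-learn. It is not Amazon’s actual system, whose details were never published, but the general pattern: train a classifier on historically skewed hiring decisions and it learns the skew as a signal. The résumé snippets and hired/rejected labels below are invented for illustration; Amazon’s real model reportedly penalized résumés containing the word “women’s” in much this way.

```python
# A minimal sketch of the failure mode described above, not Amazon's
# actual system: train a text classifier on hypothetical "historical"
# hiring decisions and watch it encode the old bias as a feature.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy résumé snippets and made-up labels (1 = hired, 0 = rejected),
# skewed the way a male-dominated hiring history would be.
resumes = [
    "software engineer lacrosse team captain",
    "software engineer chess club president",
    "software engineer women's chess club president",
    "data analyst varsity soccer",
    "data analyst women's varsity soccer",
    "machine learning engineer women's coding society",
]
hired = [1, 1, 0, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The token "women" (how the vectorizer tokenizes "women's") picks up
# a strongly negative weight: the model has learned the prejudice in
# the data, not anything about job performance.
for token, weight in sorted(
    zip(vectorizer.get_feature_names_out(), model.coef_[0]),
    key=lambda pair: pair[1],
):
    print(f"{token:12s} {weight:+.3f}")
```

Run it and “women” lands at the bottom of the weight list. “Scrubbing” such a model means hunting down every proxy like this one, which is exactly why the Amazon team could never guarantee neutrality.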
This article appears in the November 11, 2019, issue of New York Magazine.