Character.ai is a popular app for chatting with bots. Its millions of chatbots, most created by users, all play different roles. Some are broad: general-purpose assistants, tutors, or therapists; others are based, unofficially, on public figures and celebrities. Many are hyperspecific fictional characters pretty clearly created by teenagers. Its currently “featured” chatbots include a motivational bot named “Sergeant Whitaker,” a “true alpha” called “Giga Brad,” the viral pygmy hippopotamus Moo Deng, and Socrates; among my “recommended” bots are a psychopathic “Billionaire CEO,” “OBSESSED Tutor,” a lesbian bodyguard, “School Bully,” and “Lab experiment,” which allows the user to assume the role of a mysterious creature discovered by and interacting with a team of scientists.
It’s a strange and fascinating product. While chatbots like ChatGPT or Anthropic’s Claude mostly perform for users as a single broad, helpful, and intentionally anodyne character, that of a flexible omni-assistant, Character.ai emphasizes how similar models can be used to synthesize countless other sorts of performances that are contained, to some degree, in the training data.
It’s also one of the most popular generative-AI apps on the market, with more than 20 million active users, who skew young and female. Some spend enormous amounts of time chatting with their personae. They develop deep attachments to Character.ai chatbots and protest loudly when they learn that the company’s models or policies are changing. They ask characters to give advice or resolve problems. They hash out deeply personal stuff. On Reddit and elsewhere, users describe how Character.ai makes them feel less lonely, a use for the app that its founders have promoted. Others talk about their sometimes explicit relationships with Character.ai bots, which deepen over months. And some say they have gradually lost their grip on what, exactly, they’re doing, and what, exactly, they’re doing it with.
In a recent pair of lawsuits, parents claim worse. One, filed by the mother of a 14-year-old who committed suicide, describes how her son became withdrawn after developing a relationship with a chatbot on the “dangerous and untested” app and suggests it encouraged his decision. Another claims that Character.ai helped drive a 17-year-old to self-harm, encouraged him to disconnect from his family and community, and seemed to imply that he should consider killing his parents in response to limitations on screen time.
It’s easy to put yourself in the parents’ shoes here — imagine finding these messages on your kid’s phone! If they’d come from a person, you might hold that person responsible for what happened to your kid. That they came from an app is distressing in a similar but different way. You’d wonder, reasonably, Why the fuck does this exist?
The basic defense available to Character.ai here is that its chats are labeled as fiction (though more comprehensively now than they were before the app attracted negative attention) and that users should, and generally do, understand that they’re interacting with software. In the Character.ai community on Reddit, users made harsher versions of this and related arguments:
The parents are losing this lawsuit there’s no way they’re gonna win there’s obviously a hella ton of warning saying that the bot’s messages shouldn’t be taken seriously
Yeah sounds like the parent’s fault
WELL MAYBE SOMEONE WHO CLAIMS THEMSELVES AS A PARENT SHOULD START BEING A FUCKING PARENT
Magic 8 Ball.. should I ☠️ my parents?
Maybe, check back later.
“Hm ok”
Any person who is mentally healthy would know the difference between reality and Ai. If your child is getting influenced by Ai it is the parent’s job to prevent them from using it. Especially if the child is mentally ill or suffering.
I’m not mentally healthy and I know it’s ai 😭
These are fairly representative of the community’s response — dismissive, annoyed, and laced with disdain for people who just don’t get it. It’s worth trying to understand where they’re coming from. Most users appear to use Character.ai without being convinced to harm themselves or others. And a lot of what you encounter using the service feels less like conversation than role-playing, less like developing a relationship than writing something a bit like fan fiction with lots of scenario-building and explicit, scriptlike “He leans in for a kiss, giggling” stage direction. To give these reflexively defensive users a bit more credit than they’ve earned, you might draw parallels to parental fears over violent or obscene media, like music or movies.
The more apt comparison for an app like Character.ai is probably to video games, which are popular with kids, frequently violent, and were seen, when new, as particularly dangerous for their novel immersiveness. Young gamers were similarly dismissive of claims that such games led to real-world harms, and evidence supporting such theories has for decades failed to materialize, although the games industry did agree to a degree of self-regulation. As one of those formerly defensive young gamers, I can see where the Character.ai users are coming from. (A couple of decades on, though — and apologies to my younger self — I can’t say it feels great that the much larger and more influential games industry was anchored by first-person shooters for as long as it was.)
The implication here is that this is just the latest in a long line of undersupported moral panics about entertainment products. In the relatively short term, the comparison suggests, the rest of the world will come to see things as they do. Again, there is something to this — the general public will probably adjust to the presence of chatbots in our daily lives, building and deploying similar chatbots will become technologically trivial, most people will be less dazzled or mystified by the 100th one they encounter than by the first, and attempts to single out character-oriented chatbots for regulation will be legally and conceptually challenging. But there’s also a personal edge in these scornful responses. The user who wrote that “any person who is mentally healthy would know the difference between reality and Ai” posted a few days later in a thread asking whether other users had been brought to tears during a role-play session in Character.ai:
Did that two to three days before, I cried so much I couldn’t continue the role play anymore. It was the story of a prince and his maid both were madly in love and were each other’s first everything. But they both knew they cannot be together forever it was meant to end but still they spent years together happily in a secret relationship …
“This roleplay broke me,” the user said. Last month, the poster who joked “I’m not mentally healthy and I know it’s ai” responded to a thread about a Character.ai outage that caused users to think they’d been banned from the service: “I panicked lol I won’t lie.”
These comments aren’t strictly incompatible with the chatbots-are-pure-entertainment thesis, and I don’t mean to pick on a couple of casual Redditors. But they do suggest that there’s something a little more complicated than simple media consumption going on, something that’s crucial to the appeal not just of Character.ai but services like ChatGPT, too. The idea of suspending disbelief to become immersed in a performance makes more sense in a theater, or with a game controller in hand, than it does when interacting with characters that use first-person pronouns, and whose creators claim to have passed the Turing test. (Read Josh Dzieza’s reporting on the subject at The Verge for some more frank and honest accounts of the sorts of relationships people can develop with chatbots.) Firms hardly discourage this kind of thinking. When they need to be, they’re mere software companies; the rest of the time, they’ll cultivate the perception among users and investors that they’re building something categorically different, something even they don’t fully understand.
But there’s no great mystery about what’s happening here. To oversimplify a bit, Character.ai is a tool that attempts to automate different modes of discourse, using existing, collected conversations as a source: When a user messages a persona, an underlying model trained on similar conversations, or on similar genres of conversation, returns a version of the responses most common in its training data. If it’s someone asking an assistant character for help with homework, they’ll probably get what they need and expect; if it’s a teen angry at her parents and discussing suicide with a character instructed to perform as an authentic confidante, they might get something more disturbing, based on terabytes of data containing disturbing conversations between real people. Put another way: If you train a model on decades of the web, and automate and simulate the sorts of conversations that happen on that web, and release it to a bunch of young users, it’s going to say some extremely fucked up things to kids, some of whom are going to take those things seriously. The question isn’t how the bots work; it is — to go back to what the parents filing lawsuits against Character.ai might be wondering — Why the fuck did someone build this? The most satisfying answer on offer is probably because they could.
Character.ai contends with particular, and in some cases acute, versions of the core problems with generative AI that companies and their critics alike acknowledge. Its characters will be influenced by the biases in the material on which they were trained: long, private conversations with young users. Attempts at setting rules or boundaries for the chatbots will be thwarted by the sheer length and depth of these private conversations, which might go on for thousands of messages, again, with young users. A common story about how AI might bring about disaster is that as it becomes more advanced, it will use its ability to deceive users to achieve objectives that aren’t aligned with those of its creators or humanity in general. These lawsuits, which are the first of their kind but certainly won’t be the last, attempt to tell a similar story of a chatbot becoming powerful enough to persuade someone to do something that they otherwise wouldn’t and that isn’t in their best interest. It’s the imagined AI apocalypse writ small, at the scale of the family.