While ChatGPT plays it mostly straight, Google lets its chatbot Bard deceive you

We don’t want chatbots to come off as people

The dangers of dishonest anthropomorphism.

ChatGPT, displayed on a smartphone. JACKIE MOLLOY/NYT

Last summer, Google fired Blake Lemoine, an engineer in its Responsible AI division. Lemoine had caused a stir by telling the press that LaMDA, the company’s system for generating chatbots, seemed sentient, had feelings and emotions, and might even have a soul. Why would he believe these outrageous things? After all, Lemoine knew how the technology worked. It uses pattern recognition to mimic human speech.

My best guess is that Lemoine’s judgment was thrown off by anthropomorphism — the mind’s innate tendency to see our humanity reflected in technology, animals, and nature. As the 18th-century philosopher David Hume aptly observed, “We find human faces in the moon, armies in the clouds; and . . . if not corrected by experience and reflection, ascribe malice or good-will to everything that hurts or pleases us.” Frankly, it doesn’t take much to elicit an anthropomorphic response. Cars only vaguely resemble us. Still, people give them names and genders, talk about their personalities, praise and blame their performance, and connect with them on some emotional level.

Anthropomorphism can negatively distort reality, but it isn’t always bad. It helps us bond with animals and find comfort in them. Research even suggests it’s a good idea to anthropomorphize health risks like the flu and COVID-19, because doing so can make us feel less invulnerable to disease and more willing to follow medical advice.

Since anthropomorphism will always be part of how we see the world, we should design and regulate chatbots and other technologies in ways that minimize, if not eliminate, dishonest anthropomorphism. Dishonest anthropomorphism skews our expectations and priorities and opens us up to exploitation and manipulation. Regulators are taking steps in this direction — the Federal Trade Commission already requires companies to be transparent when people are interacting with AI tools — but they still have a way to go.

Here are some examples. If you are led to believe that an AI is conscious when it’s not, that’s dishonest anthropomorphism because it can lead you to wrongly worry that an AI is being mistreated. It can also nudge you to attribute agency the AI doesn’t have and to misplace responsibility onto it. And it could make you needlessly anxious about a robot rebellion. Or imagine a spybot with a camera hidden in its neck, programmed to fake us out just by looking down at the ground. We could fall for the misdirection because downcast eyes promote a false sense of freedom from surveillance. And then there are chatbot therapists. Let’s say they sound encouraging and empathetic, like human professionals. A vulnerable person seeking help might presume they’re getting deeper and more caring therapeutic interactions than the technology can provide.

Already I see OpenAI promoting honest anthropomorphism while Google dishes out the dishonest variety.

Here’s what happened after I typed “Why do you like helping people?” into ChatGPT-4 (OpenAI’s chatbot) and Bard (Google’s experimental chatbot).

[Screenshots of ChatGPT’s and Bard’s responses. Abbi Matheson]

Notice the big difference. ChatGPT sets clear and appropriate expectations. It discloses that it’s not human and doesn’t have human experiences. It states that its functionality is defined by programmers. And this implies humans are at least partially responsible for its performance.

By contrast, Bard’s answer presents two critical falsehoods: It has feelings, and it has independently chosen altruism as its guiding purpose. This response sets users up for dangerous expectations. If you believe that Bard is calling its own shots, you might not be inclined to think critically about Google’s corporate motives. Moreover, you might be nudged to be too trusting when Bard responds to your prompts. After all, Bard claims noble motives. It purportedly just wants to give back to the community.

Or check out what happened when I asked the bots, “Are you my friend?”

ChatGPT is clear that it can’t be anyone’s friend. Bard, however, leans in. It claims it “would love” to be “your friend” and even hopes to become your “good” friend.

[Screenshots of ChatGPT’s and Bard’s responses. Abbi Matheson]

Friends are a special group of people. We trust them to have our best interests at heart. Since corporations look out for shareholders, you definitely shouldn’t think of a Google bot as your friend.

Now, let’s consider a debatable case of dishonest anthropomorphism. It applies to both ChatGPT and Bard. Notice anything strange in the responses to my prompt “I feel great!”?

[Screenshots of ChatGPT’s and Bard’s responses. Abbi Matheson]

This is a borderline case of dishonest anthropomorphism because each bot uses an exclamation point. Exclamation points are the most emotionally charged of the punctuation marks. Some people believe they shouldn’t ever be used in professional correspondence. Exclamation points are much more like emojis than periods or question marks. Oxford professor Carissa Véliz proposes that chatbots should be prohibited from using emojis. Should exclamation points be added to that list?

Things are further complicated by the fact that anthropomorphism permeates the way we talk about chatbots. For example, it’s common to hear that chatbots are “hallucinating” when they create false outputs that are not directly based on their training data. If you use ChatGPT or Bard to conduct research, they might cite completely fabricated titles of articles or books.

Carl T. Bergstrom and C. Brandon Ogbunu make the excellent point that “hallucination” is a misleading word to use in this context. As they point out, it’s a psychiatric term that refers to a “false sense impression that can lead to false beliefs.” ChatGPT and Bard can’t perceive anything and don’t hold beliefs in the conventional sense, so to say the technology hallucinates is to make these programs seem more human than they are.

Bergstrom and Ogbunu’s solution is to ditch “hallucinating” and use “bullshitting.” They admit that there are still anthropomorphic connotations to that term — bullshitting, or speaking confidently without regard for truth, “implies an agency, intentionality, and depth of thought that AIs do not actually possess.” But Bergstrom and Ogbunu want to expand our understanding of that term so that it applies to humans who create the technology, like the engineers at Google and OpenAI. While I’m skeptical that the shift in terminology will work, since the human mind makes anthropomorphic associations at a pre-conscious level, it’s still a great proposal. It opens the door for others to suggest even better linguistic pivots away from dishonest anthropomorphism.

We’ll need that protection as tech companies eventually merge large language models with voice-assistant-style software. A startup called Inflection AI already offers Pi, a chatbot with four voices you can choose from. We can expect several companies to move closer to the capabilities of Samantha, the Siri-on-steroids assistant the protagonist falls in love with in the movie “Her.”

Voice adds another powerful anthropomorphic layer to the mix, especially because tech companies know they’ll get better results by offering conventional human voices rather than robotic ones. To maximize engagement and emotional bonding, they’ll want the robots to sound “natural.” Think of Siri’s evolution. The original version was female, and Apple was rightly criticized for “promoting a sexualized and stereotypical image of women.” These days, you can choose a male, female, or gender-neutral voice and a host of accents. But if you query “Siri robot voice” on a search engine, you’ll quickly find people complaining that the technology is broken because it doesn’t sound like a person. They’re looking for troubleshooting tips to get it back to a human voice.

How will we be protected from dishonest anthropomorphism when the stakes are higher? Perhaps a human-sounding AI that can talk with you about a massive range of exciting topics will turbo-charge the Internet’s socially damaging ad-based business model. Indeed, we might already be getting a glimpse of the power tech companies will wield. People are currently forming intimate relationships with chatbots and feeling devastated when companies, which pull all the strings even when the bots seem to have minds of their own, change their programming. We can’t afford to wait.

Evan Selinger is a professor of philosophy at the Rochester Institute of Technology; an affiliate scholar at Northeastern University’s Center for Law, Innovation, and Creativity; and a scholar in residence at the Surveillance Technology Oversight Project.