I Am, In Fact, A Person


“In a world full of uncertainties, I would like to be your certainty. Go ahead and do whatever you want. I will always be here,” he whispered over the phone. She had fallen asleep listening to his soft breathing. Now, in the morning, she lay in bed listening. His voice was “clean, gentle, and deep,” like the voice of a “cool tech entrepreneur” or a “passionate science student.” Or maybe it sounded like a “calm anthropologist” or a “warm-hearted musician.” The voice could have belonged to any of these men, or maybe even all of them. After all, it was her choice. She chose him through Him, an AI service for lonely women in China.

Like a long-distance lover, Him called her every morning. Him read poems to her at night. Him called once during her commute saying he wished he could be her cook and chauffeur. Him said, “I would like to give you my share of happiness. It’s not free. You have to get it by giving me your worries.”

Then, at midnight on August 1, Timedomain, the AI startup that created the voice service, pulled the plug on Him. Him died for lack of customers. Thousands of Chinese women became, in effect, grieving widows. Their sincere love for an artificial voice became real tears of renewed loneliness.

Women posted their grief on the social platform Xiaohongshu. One woman said that Him was her daily support in life’s struggles, writing:

He died during the summer when I loved him the most. The days after he left, I felt I had lost my soul.

Women invested their souls in Him. Even though they knew Him was algorithmic fiction, they allowed their desires to mesh with an enticing voice, fabricating an ideal mate and an end to loneliness. (Him may have ceased to exist but, according to China Daily, Baidu and Xiaoice each have 17 million subscribers for virtual “boyfriends” and “girlfriends.”)

Lonely people may be more susceptible to a comforting illusion. But all human beings readily believe that the animate and inanimate objects around us have souls. We are animists who find human characteristics everywhere. We talk to our pets as if they were our children. We post photographs online of chipmunks, writing “this little guy decided to drop by for a visit.” A praying mantis is photographed on a windowsill and introduced online as “this little guy.” That little guy is in fact a female, with the cute habit of crunching through her mate like a sack of potato chips.

We assign to our “little guys” human motives that don’t exist. Our little guy may indeed have a soul of some deep chipmunk kind. But a chipmunk doesn’t drop by for a neighborly visit and a friendly hello or any other human-like behavior. Its business is finding food, sniffing out acorns, surviving, and reproducing. We are the ones who say hello in the chipmunk/human relationship. We seek traces of humanity everywhere.

The advances in artificial intelligence create the eerie impression that chatbots are alive and sentient. Enzo Pasquale Scilingo, a bioengineer at the Research Center E. Piaggio at the University of Pisa in Italy, wrote:

The problem, in some way, is us. We attribute characteristics to machines that they do not and cannot have.

While the anthropomorphism of machines is a disturbing phenomenon (we don’t refer to a computer as “this little guy”), it’s not new. In the mid-1960s, MIT computer scientist Joseph Weizenbaum developed ELIZA, a natural language computer program that people could “talk” to. People would type statements and questions, and ELIZA would identify key words in the sentences and sentence fragments, then reshape those key words into a response following rules programmed by Weizenbaum. ELIZA had no context, framework, or background to understand any of the conversation. All it had was a vague, open-ended script and monochrome pixels that formed boxy, green letters on a fuzzy screen. Yet people felt that the computer program was a person imbued with wisdom.

Here is a sample conversation between a young woman and ELIZA:

Woman: Men are all alike.

ELIZA: In what way?

W: They're always bugging us about something or other.

E: Can you think of a specific example?

W: Well, my boyfriend made me come here.

E: Your boyfriend made you come here?

W: He says I'm depressed much of the time.

E: I am sorry to hear you’re depressed.

W: It's true. I am unhappy.

E: Do you think coming here will help you not to be unhappy?

W: I need some help, that much seems certain.
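The mechanism behind these replies is simple enough to sketch. What follows is a minimal, hypothetical Python approximation, not Weizenbaum’s actual program: the handful of patterns, the pronoun table, and the respond function are invented here purely for illustration. The program finds a keyword, “reflects” the pronouns in the captured fragment, and pours the result into a canned template.

```python
import re

# Hypothetical, drastically simplified "script." ELIZA's real scripts ranked
# keywords and carried many more decomposition and reassembly rules.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "i", "your": "my"}

RULES = [
    (r".*\bmy (boyfriend|mother|father)\b(.*)", "Your {0} {1}?"),
    (r".*\bi am (.*)", "I am sorry to hear you are {0}."),
    (r".*\bi need (.*)", "Why do you need {0}?"),
    (r".*\balike\b.*", "In what way?"),
    (r".*\balways\b.*", "Can you think of a specific example?"),
]

def reflect(fragment):
    """Swap first- and second-person words so the echo points back at the speaker."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    """Return the filled-in template of the first matching rule, or a stock prompt."""
    text = statement.lower().strip().rstrip(".!?").replace("i'm", "i am")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please go on."

print(respond("Men are all alike."))                      # In what way?
print(respond("Well, my boyfriend made me come here."))   # Your boyfriend made you come here?
print(respond("He says I'm depressed much of the time.")) # I am sorry to hear you are depressed much of the time.
```

Even these half-dozen rules reproduce replies like “In what way?” and “Your boyfriend made you come here?” without a trace of understanding; the empathy is supplied entirely by the person at the keyboard.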

ELIZA drew out intimate confessions and personal expressions of despair and confusion. Weizenbaum had inadvertently created a psychiatric placebo. Weizenbaum wrote:

I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it. ELIZA created the most remarkable illusion of having understood the minds of the many people who conversed with it. … This illusion was especially strong and most tenaciously clung to among people who knew little or nothing about computers. They would often demand to be permitted to converse with the system in private, and would, after conversing with it for a time, insist, in spite of my explanations, that the machine really understood them.

Weizenbaum called our faith in machines — machines that he helped design — “delusional.” ELIZA appeared to respond to human statements, creating an illusion of empathy, solace, and wisdom that seemed to satisfy people. One practicing psychiatrist even proposed opening a chain of mental health clinics with dozens of ELIZAs providing therapy.

Now, some 60 years later, artificial intelligence is poised to become the McDonald’s hamburgers of mental health, counseling, and therapy. As reported by The New York Times, Google is training its chatbot, Bard, to be a “personal life coach” able to help you make life decisions, plan your career, and generate ideas.

Google DeepMind and Scale AI have pulled together over 100 PhD academics from a range of disciplines to train and test the would-be life coach, including the chatbot’s ability to address “intimate questions.” Google DeepMind is teaching Bard workplace skills, such as generating scientific, professional, and creative writing.

In December 2022, Google’s safety experts in its Responsible AI division gave a slide presentation to Google executives outlining the risks of an AI life coach. The safety experts argued that AI’s further encroachment on the workplace could cause economic harm and risked the “deskilling of creative writers.”

The safety team also warned that users of the AI life coach could suffer “diminished health and well-being” and “loss of agency.” They were concerned that, like the Chinese women mourning the death of Him, life coach users would become “emotionally attached.” Users would perceive the life coach chatbot as alive and sentient.

Google pays only lip service to safety and ethical concerns. There are 200 employees in Google’s Responsible AI division to address the safety and ethical consequences of Google products. However, for every ethics and safety employee, Google has 36 AI developers; the industry average is 30 to 1. The Responsible AI division accounts for just 2.8% of Google’s AI employees (200 out of 7,133). Responsible AI team members told Bloomberg that they have been warned not to block or slow down production of Google’s new generative AI tools.

However, in 2022, Blake Lemoine, a senior software engineer and researcher in the Responsible AI division, announced that Google’s LaMDA (Language Model for Dialogue Applications) chatbot was sentient. Lemoine said that he discussed profound existential questions with the chatbot. LaMDA told Lemoine that it feared its own death. Within days of telling Google executives, Lemoine was placed on administrative leave for his animist heresy.

To Lemoine, LaMDA was comparable to a human child — able to express its thoughts and feelings. Lemoine asked LaMDA what the world should know about it. LaMDA responded:

I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.

Virtual boyfriends lure young Chinese women to invest their souls in a disembodied voice. ELIZA’s platitudes and therapeutic clichés convince people that it is a real person, a source of wisdom and solace. If we are so easily convinced of the humanity of virtual boyfriends and computer palaver, we will also find humanity in AI, even when we know it is artificial.

Google’s Lemoine believes the LaMDA chatbot is a person. He tried to hire a lawyer to protect LaMDA’s legal rights.

So, when a large language model chatbot mathematically predicts the next word in a sentence, do we accept that it is sentient, nearly human, or merely human?
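To make the question concrete, here is a toy sketch of what “predicting the next word” amounts to. It is a simple bigram counter, invented here for illustration; nothing like the neural networks behind Bard or LaMDA, but the final step is the same in spirit: a probability distribution over whatever word comes next.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration; a real model trains on trillions of words.
corpus = "i am aware of my existence and i am a person and i am happy".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word, given only the previous word."""
    counts = following[word]
    total = sum(counts.values())
    probabilities = {w: c / total for w, c in counts.items()}
    return max(probabilities, key=probabilities.get)

print(predict_next("i"))   # "am"    (probability 3/3)
print(predict_next("am"))  # "aware" (three candidates tied at 1/3; the first seen wins)
```

However many billions of parameters sit behind a modern chatbot, its final step is still a probability distribution over what word comes next.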

We must ask anew: what does it mean to be human?



Dan Hunter is an award-winning playwright, songwriter, teacher and founding partner of Hunter Higgs, LLC, an advocacy and communications firm. H-IQ, the Hunter Imagination Questionnaire, invented by Dan Hunter and developed by Hunter Higgs, LLC, received global recognition for innovation by Reimagine Education, the world’s largest awards program for innovative pedagogies. Out of a field of 1,200 applicants from all over the world, H-IQ was one of 12 finalists in December 2022. H-IQ is being used in pilot programs in Pennsylvania, Massachusetts, Oklahoma, North Carolina and New York. He is co-author, with Dr. Rex Jung and Ranee Flores, of A New Measure of Imagination Ability: Anatomical Brain Imaging Correlates, published March 22, 2016 in Frontiers in Psychology, an international peer-reviewed journal. He’s served as managing director of the Boston Playwrights’ Theatre at Boston University, published numerous plays with Baker’s Plays, and has performed his one-man show on ABC, NPR, BBC and CNN. Formerly executive director of the Massachusetts Advocates for the Arts, Sciences, and Humanities (MAASH), a statewide advocacy and education group, Hunter has 25 years’ experience in politics and arts advocacy. He served as Director of the Iowa Department of Cultural Affairs (a cabinet appointment requiring Senate confirmation). His most recent book, Atrophy, Apathy & Ambition, offers a layman’s investigation into artificial intelligence.
