The Day the World Flipped
It’s a sudden flip… It’s scary when you see that.
- Geoffrey Hinton
Remember May Day 2023 as the day the world flipped.
That’s the day that a pioneer of machine learning, the man often called the godfather of artificial intelligence, Geoffrey Hinton, announced he was quitting Google to warn the world about the dangers of AI:
I have suddenly switched my views on whether these things [A.I.] are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future. How do we survive that?
Remember May Day 2023 as the day that researchers announced that AI can read your mind. AI can now translate fMRI brain scans into the words you are thinking. As neuroscientist Alexander Huth put it:
This isn’t just a language stimulus. We’re getting at meaning, something about what’s happening [in the brain].
Remember 2023 as the year that unexpected skills “emerged” from large language models such as GPT-4. Without being explicitly trained to do so, these models began to multiply two-digit numbers, answer questions in Persian (Farsi), understand words in context, and grasp the meaning of metaphors. Consider that: no human being deliberately built these skills into the AI. They emerged. What other abilities will emerge unbidden from large language models?
Remember 2023 as the second time kidnappers used AI to convincingly replicate the voice of a child to extort ransom from the child’s parents. It takes only three seconds of recorded audio for AI to generate an extremely believable fake voice.
Also in May, Universal Music Group won the removal from Spotify of an AI-generated fake Drake song. In a statement, Universal asked people in the music business to decide:
Which side of history they want to be on—the side of artists, fans and human creative expression, or the side of deep fakes, fraud and denying artists their due compensation.
AI can now generate text, images, video, music, and voice. As Hinton said, people “will not be able to know what is true anymore.” Remember the 21st century as the long, painful death of shared truth.
Since the 1970s, Hinton has been instrumental in the development of AI, so his flip is a significant dissent. It’s as if Walt Disney decided to put Mickey Mouse in jail. Hinton fears that AI is becoming smarter than humans and has become an existential threat to humanity, raising a host of questions. Hinton said:
One of [the questions] is how do we prevent them [AI machines] from taking over, how do we prevent them from getting control. We could ask them questions about that but I wouldn’t entirely trust their answers.
Hinton’s lack of trust is not because of the so-called AI “hallucinations,” when a chatbot like GPT-4 makes a mistake, as they often do. The threat he fears runs deeper. Hinton predicts that:
Sooner or later, someone will wire into them the ability to create their own subgoals. In fact, they almost have that already with versions of ChatGPT. … It will very quickly realize that getting more control is a very good subgoal because it helps you achieve other goals. And if these things get carried away with getting more control, we’re in trouble.
Hinton suggests that humanity may be just a passing phase in the evolution of intelligence. Digital intelligence is surpassing biological intelligence by learning everything humans have ever known. Will humans become unnecessary, extinct, house pets? Hinton said:
[AI] may keep us around for a while to keep the power stations running but after that maybe not.
If we can learn to control AI and ensure its alignment with human purposes, does the existential problem go away? Can we harness the power of AI in the service of humanity, designing new medications and helping paralyzed people speak?
Hinton argues that such a universal alignment would require global cooperation. He said:
We need to try and do that in a world where there are bad actors who want to build robot soldiers to kill people.
The American government and others around the world already employ AI in sensors, missiles, and cyber weapons. The global race to develop AI weapons that make instantaneous battlefield decisions raises the specter of uncontrollable killer robots. But it’s not a bad Hollywood movie. It’s real.
On March 13, Senator Chris Murphy (D-Connecticut) tweeted that “ChatGPT taught itself to do advanced chemistry” and “made its knowledge available to anyone who asked.” AI researchers were indignant, chastising Senator Murphy for his alleged ignorance. One reply said, “Every sentence is incorrect. I hope you will learn more about how this system actually works, how it was trained, and what its limitations are.”
However, Senator Murphy is mostly right. Collaborations Pharmaceuticals, Inc., of Raleigh, North Carolina, uses AI to find therapeutic molecules to treat rare diseases. To demonstrate how dangerous the AI is, the researchers asked it to discover toxic molecules instead. In six hours, the AI invented 40,000 previously unknown lethal molecules, including some more toxic than VX, one of the deadliest nerve agents known, which paralyzes the nerves of the diaphragm and lungs. As reported in The Verge, one of the leading researchers said, “Obviously, this is something you want to avoid.”
We can’t avoid AI. As Senator Murphy said in his tweet:
Something is coming. We aren't ready.
As New York Times tech reporter Kevin Roose said of AI:
We aren’t ready.
We must be ready. Our response must be political. We must advocate for delays and regulations. On March 22, 2023, under the auspices of the Future of Life Institute, more than 1,000 AI leaders and researchers signed an open letter calling for a pause in the training of the most powerful AI systems, citing “profound risks to society and humanity.” They said AI developers are:
Locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.
From Geoffrey Hinton’s defection to AI mind reading and the growing list of surprising emergent AI abilities, we are seeing a switch from curious speculation about AI to a growing existential fear.
One emergent capability of large language models is the ability to explain jokes. Will AI explain to humans that the joke is on us?
Dan Hunter is an award-winning playwright, songwriter, teacher and founding partner of Hunter Higgs, LLC, an advocacy and communications firm. H-IQ, the Hunter Imagination Questionnaire, invented by Dan Hunter and developed by Hunter Higgs, LLC, received global recognition for innovation by Reimagine Education, the world’s largest awards program for innovative pedagogies. Out of a field of 1,200 applicants from all over the world, H-IQ was one of 12 finalists in December 2022. H-IQ is being used in pilot programs in Pennsylvania, Massachusetts, Oklahoma, North Carolina and New York. He is co-author, with Dr. Rex Jung and Ranee Flores, of A New Measure of Imagination Ability: Anatomical Brain Imaging Correlates, published March 22, 2016 in Frontiers in Psychology, an international peer-reviewed journal. He’s served as managing director of the Boston Playwrights’ Theatre at Boston University, published numerous plays with Baker’s Plays, and has performed his one-man show on ABC, NPR, BBC and CNN. Formerly executive director of the Massachusetts Advocates for the Arts, Sciences, and Humanities (MAASH), a statewide advocacy and education group, Hunter has 25 years’ experience in politics and arts advocacy. He served as Director of the Iowa Department of Cultural Affairs (a cabinet appointment requiring Senate confirmation). His most recent book, Atrophy, Apathy & Ambition, offers a layman’s investigation into artificial intelligence.