The Real World

Where is the real world these days?

I first heard “real world” used as a distinction between life inside and outside academia, as if students were embryos living in a protected nest. Someday students would learn the harsh lessons of life in the so-called real world of work, bills, and family.  

Then, along the way, political campaign slogans began promising “real-world solutions.” What other kind of solutions can there be? Fake-world solutions? But the implication is supposed to be that the other candidate is living in some fantasy world, divorced from voter realities.

But in the jargon of the day, voter realities are polarized by big lies, which in turn evoke the ubiquitous phrase "the reality is..." Yet no one agrees on what the reality is anymore.

And reality is becoming more unreal by the day. Technology can manufacture reality to your taste and political flavor. Artificial intelligence can now conjure synthetic media — faked video and audio — indistinguishable from what we previously considered real.

Hany Farid, a UC Berkeley professor who studies synthetic media, told the New York Times:

Imagine a world where many people can create audio/video of a president, a candidate, a CEO, a general, a military leader saying and doing anything that you want them to.

Dessa is a Toronto-based company that develops machine-learning applications, including a "perfect deepfake" of the comedian Joe Rogan. The AI was trained on about 90 minutes of Rogan's comedy show and, in the process, eliminated the anomalies that usually reveal a fake. The hairline wrinkles under Rogan's eyes, the stubble of his beard, and the pores of his skin appear and move exactly like the real Rogan's.

Dessa is also far ahead of other AI developers in replicating the human voice. The variables of human speech — the timing, emphasis, breath, and pitch — are far more difficult to reproduce than visual images. As a demonstration for the New York Times, Dessa put new words in the mouth of David Attenborough, the voice of BBC natural history documentaries. After training the AI on a 30-minute tape of Attenborough, the artificial Attenborough took an unequivocal stand on two Green Bay Packers quarterbacks, stating that "Brett Favre was a lion and Aaron Rodgers was a pussycat." The ersatz Attenborough made one mistake in his posh British accent: he pronounced Favre as "fave ray."

With a quick fix to the computer code, the Dessa programmer corrected the mispronunciation because, as George Washington promised, "Truth will ultimately prevail where there is pains taken to bring it to light." Maybe that was true in the 18th century, but who takes the pains to bring the truth to light today? The world quickly moves on to the next news cycle. There is no follow-up, and any correction is lost in the tumult, not exciting enough to reach the headlines.

Yet even that doesn't matter. As Berkeley's Professor Farid pointed out:

I can have a CEO saying profits are down 20%, that video goes viral, stock market crashes. I just need that to be “true” for ten minutes.

If it’s real for ten minutes, it’s real enough to injure.

The mere existence of deepfakes is enough to damage shared veracity. During the 2016 election, a videotape of Donald Trump emerged in which he speaks of "grabbing pussy" with impunity because "they let you do it if you're a star." Trump even admitted that it was his voice on the tape. Later, he said he wasn't sure it was his voice.

Danielle Citron, a legal scholar and professor at Boston University, calls Trump's denial the "liar's dividend." Trump, the liar, doesn't need to prove or disprove the truth. All he needs to do is cast doubt by calling the tape fake — his get-out-of-jail-free card. In the real world.

The term deepfake was first applied to pornographic videos with celebrity faces pasted on the actors. Now, any face or body can be inserted unknowingly into pornography. Citron told the New York Times:

Seeing the fake videos of celebrities morph into porn, I felt it viscerally. It’s such a deep violation of your sense of sexual identity and integrity. Anyone could be the subject of a deepfake sex video.

And anyone can become a simulated human. The Israeli company Datagen manufactures synthetic human data to train deep-learning algorithms. (The term deep learning may seem to imply deep knowledge or insight, but it merely refers to an algorithm with two or more layers of processing nodes.)

Datagen hires people off the street to step into a full-body scanner, creating an exact 3-D replica of them in every detail — skin texture, irises, finger curvature. The fake humans are used to train artificial intelligence to understand human facial expressions (as if humans even understand them). AI can then be used to monitor driver alertness or follow customers around a store. The fake humans train virtual reality headsets to better respond to real human hand and eye movements. Fake humans enable real humans to enjoy a simulated world of virtual reality.

At what point does a virtual reality headset block reality?

A few years ago, I was invited to attend a demonstration of student ideas created during an innovation seminar. One of the ideas, from a group of Harvard students, was a pair of eyeglasses designed to improve not how well you see, but what you see. Tired of looking at dingy, industrial brick buildings? Depressed by poverty and squalor? Slip on a pair of these eyeglasses and the world springs into bright color. Replace the sight of poverty with "innovative" rose-colored glasses. Beauty replaces the existing ugliness.

Some scientists have proposed that the world is a hologram. Neuroscientist Anil Seth and philosopher Andy Clark argue that reality is only a hallucination we collectively agree to see. Cognitive scientist Donald Hoffman concluded his book, The Case Against Reality, by saying:

Spacetime is your virtual reality, a headset of your own making. The objects you see are your own invention. You create them with a glance and destroy them with a blink.

Philosophically, we may live in a hologram of delusion that we call reality. But how we define reality has powerful consequences. We die in hurricanes and wildfires. We suffer from malnutrition and disease.

Can we agree that reality is life with consequences we all must face? Can we agree that climate change is real? Can we agree that inserting fake words in another person’s mouth, inserting unwilling faces in pornography, or substituting fake images to block the view of human suffering is not a reality we can survive in?

Dan Hunter is an award-winning playwright, songwriter, teacher, and founding partner of Hunter Higgs, LLC, an advocacy and communications firm. H-IQ, the Hunter Imagination Questionnaire, invented by Dan Hunter and developed by Hunter Higgs, LLC, received global recognition for innovation by Reimagine Education, the world's largest awards program for innovative pedagogies. Out of a field of 1,200 applicants from all over the world, H-IQ was one of 12 finalists in December 2022. H-IQ is being used in pilot programs in Pennsylvania, Massachusetts, Oklahoma, North Carolina, and New York. He is co-author, with Dr. Rex Jung and Ranee Flores, of A New Measure of Imagination Ability: Anatomical Brain Imaging Correlates, published March 22, 2016 in Frontiers in Psychology, an international peer-reviewed journal. He has served as managing director of the Boston Playwrights' Theatre at Boston University, published numerous plays with Baker's Plays, and performed his one-man show on ABC, NPR, BBC, and CNN. Formerly executive director of the Massachusetts Advocates for the Arts, Sciences, and Humanities (MAASH), a statewide advocacy and education group, Hunter has 25 years' experience in politics and arts advocacy. He served as Director of the Iowa Department of Cultural Affairs (a cabinet appointment requiring Senate confirmation). His most recent book, Atrophy, Apathy & Ambition, offers a layman's investigation into artificial intelligence.
