Eye to Eye

On October 22, 2021, the Harvard-Radcliffe Institute hosted a day-long forum on artificial intelligence, entitled “Decoding AI: The Science, Policies, Applications, and Ethics of Artificial Intelligence.” 

The COVID-19 pandemic kept us from meeting face-to-face, so the Harvard-Radcliffe forum had to be held online.

The isolation of our separate computer screens kept us from the virus, but it also kept us from each other. The screens disrupted our human contact, particularly the eye contact critical to communication.

Speakers could not see the audience, and there was no contact among audience members, no way to share reactions. Most speakers kept their eyes locked on their computer screens, sending the message that what was on the screen was more important than communicating eye to eye.

We don’t trust people who won’t look us in the eye. Imagine how a fourth grader feels trying to learn on Zoom without seeing the teacher’s eyes. Eye contact is essential to communication; it establishes understanding, empathy, and shared emotion. From birth, eye contact is the basis of trust.

Nonetheless, the Harvard-Radcliffe experts felt trustworthy. They were mostly dedicated academics striving to make artificial intelligence comprehensible and visible.

But most of us have no eye contact with artificial intelligence. It dwells in an untouchable black box. Its algorithms are trade secrets, and even their creators are often at a loss to explain their inner workings. What we can’t see or understand, we distrust. Somewhere between 70% and 88% of Americans tell pollsters that they fear AI. AI proponents have told us that driverless cars and machines smarter than humans are just around the corner, so the general public imagines AI as a coming robot apocalypse in which we become slaves to our robot masters.

Artificial intelligence is already here. The miracle of artificial intelligence is thoroughly woven into our daily lives: social media, advertising, facial recognition, blockchain currencies, 5G, doorbells, banking, and on and on. This is the triumph of Big Tech, as we buy AI products and sacrifice our privacy for the “privilege” of buying the next miracle.

The Harvard-Radcliffe forum had its share of AI miracle workers. Suchi Saria of Johns Hopkins University works to convince doctors that diagnosis by algorithm is trustworthy because, Saria argued, AI can see what humans cannot: the obscure patterns that emerge only from processing massive amounts of data. For example, AI can make an earlier diagnosis of diabetic retinopathy, which can lead to blindness. Surprisingly, the AI also detected previously unknown retinal markers of cardiovascular disease.

Materials expert Alán Aspuru-Guzik, professor of chemistry and computer science at the University of Toronto, is a crusader for “self-driving labs,” where the next research goal is generated “on the fly.” New molecules are discovered in three to five years rather than ten. Computers generate ideas for new drug therapies. AI proposes new oxide materials capable of drawing carbon dioxide out of the atmosphere.

However, as one of the first speakers cautioned, “don’t expect miracles” anymore.

An AI algorithm identifies by probability. Shown a picture, the algorithm doesn’t identify a panda the way a human might. Instead, it calculates probabilities: it was 58.5% sure the picture was a panda. The runner-up in the probability scheme? The algorithm was 24.7% sure the picture was a cat. Are you 100% sure that you can rely on 58.5% accuracy?
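
To make that concrete, here is a minimal sketch in Python of how a classifier reports its guesses. The labels and raw scores are invented, and contrived to roughly reproduce the numbers above; a real model would compute its own scores from the image pixels.

import math

def softmax(scores):
    # Convert a model's raw scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["panda", "cat", "dog", "bear"]
logits = [2.0, 1.14, 0.3, -0.26]  # hypothetical raw outputs for one image

for label, p in sorted(zip(labels, softmax(logits)), key=lambda pair: -pair[1]):
    print(f"{label}: {p:.1%}")
# prints roughly: panda: 58.5%, cat: 24.7%, dog: 10.7%, bear: 6.1%
# The model never simply says "panda"; it hands us a bet and leaves the risk to us.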

Machine learning algorithms must be trained on thousands and thousands of pieces of data. But who knows how valid that data is? Bias can be buried within it. As widely reported, facial recognition algorithms fail to recognize Black people’s faces because the training data consisted primarily of white faces. Algorithms are also mired in the past: they learn only from existing data; they do not generate new data to keep up with social and scientific change.
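
The bias need not live in any line of code; skewed data alone can produce it. Here is a minimal sketch, assuming a toy one-number “face” per person, a training set that is 95% group A, and a threshold “trained” to make the fewest overall mistakes. Everything here is invented for illustration; it is not any real facial recognition system.

import random

random.seed(42)

def sample(group, n):
    # Toy data: group A clusters near 0.0, group B near 1.0, with overlap.
    center = 0.0 if group == "A" else 1.0
    return [(random.gauss(center, 0.6), group) for _ in range(n)]

train = sample("A", 950) + sample("B", 50)   # 95% of the training data is group A
test = sample("A", 500) + sample("B", 500)   # but the world being recognized is balanced

def errors(threshold, data):
    # Predict group B whenever the measurement is at or above the threshold.
    return sum((x >= threshold) != (g == "B") for x, g in data)

# "Training": choose the threshold that makes the fewest mistakes on the skewed data.
threshold = min((x for x, _ in train), key=lambda t: errors(t, train))

for group in ("A", "B"):
    subset = [(x, g) for x, g in test if g == group]
    accuracy = 1 - errors(threshold, subset) / len(subset)
    print(f"group {group} accuracy: {accuracy:.0%}")
# Group A is recognized almost every time; group B is misclassified far more often,
# because sacrificing the rare group barely dents the overall error rate.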

Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory, cited data availability and data quality as key challenges facing the “AI community.” Because adequate training data is expensive to generate, data sets are reused, carrying forward earlier bias and privacy problems. Not only is the machine learning algorithm a black box; the training data set sits behind the same mysterious screen.

Several speakers reached into the grab bag of AI jargon to call for “explainability,” which has the ring of non-words like “truthiness” or “saleability.” Explainability hasn’t yet escaped the walls of the AI community into the broader world. It is the:

concept that a machine learning model and its output can be explained in a way that “makes sense” to a human being at an acceptable level.
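
What would an explanation that “makes sense” look like in practice? For a simple model, it can be a plain accounting of how each input pushed the decision. Here is a minimal sketch, assuming a toy linear loan-scoring model whose features and weights are invented for illustration; a deep neural network offers no such ready-made breakdown, which is why explainability is a research problem at all.

# A toy linear loan-scoring model; the features and weights are invented.
weights = {"income": 0.6, "debt": -0.8, "years_at_job": 0.3}
applicant = {"income": 1.2, "debt": 2.0, "years_at_job": 0.5}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
print("decision:", "approve" if score > 0 else "deny", f"(score {score:+.2f})")

# The "explanation": which inputs pushed the decision, and how hard.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {c:+.2f}")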

The forum speakers were keenly aware that artificial intelligence does not make sense to most humans and that polling (see above) shows increasing distrust of technology and especially artificial intelligence.

They recognized this lack of understanding, and they noted that governments around the world are discussing regulation.

As one speaker said:

We know these problems are coming. We need to know what tech can or can’t do, what it should or should not do, and what it can do.

But no one ventured a policy or recommendation. Instead, like many sincere advocates for a cause, the AI community is poised to advocate for… empty rhetoric, like calling for “more robust public engagement” or a “national conversation.” 

These are good, well-intentioned people. But proposing engagement and conversation rather than regulation and legislation is dodging the issues. They are tiptoeing around while Godzilla stomps the villages. Godzilla is Big Tech: Facebook, Google, Microsoft, Amazon, and Apple. These companies control artificial intelligence from top to bottom, from research to implementation. If a neighborhood start-up launches a breakthrough, Big Tech buys it and devours it, as Facebook did with Instagram and Google did with DeepMind.

Big Tech hides inside its labs and offices. We don’t learn what these companies are really up to until they are caught. Mark Zuckerberg of Facebook issued his first apology for privacy violations in 2006, when the company was two years old. He publicly apologized again in 2007, 2010, 2011, and 2017, and again in 2018 for the Cambridge Analytica scandal. Throughout his apologies, Zuckerberg has asked for forgiveness because “Facebook seeks to bring people together.”

We learned this year, through thousands of internal company documents released by whistle-blower Frances Haugen, that Facebook tolerates hate speech, misinformation, privacy violations, and body shaming in order to amass obscene levels of profit.

“A national conversation” and “more robust public engagement” should begin with regulating Big Tech and busting its monopolies. That will take time and may be a fool’s errand. But it’s not hard to imagine regulations to pursue in the interim. Here’s a short list, off the top of my head, of regulations to advocate for:

Ban deep fakes
Unzip algorithms to eliminate bias and privacy violations
Ban algorithms in human evaluation processes such as hiring and parole decisions
Keep teaching human and in-person
Ban AI facial recognition 

We need to bring artificial intelligence out of the corporate shadows and into the light. We need to see with clear eyes what is being done to us. We welcome the medical miracles and the carbon absorbing materials. But we cannot lose sight of our basic humanity—the trust in each other that begins with simple eye contact.

Dan Hunter is an award-winning playwright, songwriter, teacher and founding partner of Hunter Higgs, LLC, an advocacy and communications firm. H-IQ, the Hunter Imagination Questionnaire, invented by Dan Hunter and developed by Hunter Higgs, LLC, received global recognition for innovation from Reimagine Education, the world’s largest awards program for innovative pedagogies. Out of a field of 1,200 applicants from all over the world, H-IQ was one of 12 finalists in December 2022. H-IQ is being used in pilot programs in Pennsylvania, Massachusetts, Oklahoma, North Carolina and New York. He is co-author, with Dr. Rex Jung and Ranee Flores, of A New Measure of Imagination Ability: Anatomical Brain Imaging Correlates, published March 22, 2016, in Frontiers in Psychology, an international peer-reviewed journal. He has served as managing director of the Boston Playwrights’ Theatre at Boston University, published numerous plays with Baker’s Plays, and performed his one-man show on ABC, NPR, BBC and CNN. Formerly executive director of the Massachusetts Advocates for the Arts, Sciences, and Humanities (MAASH), a statewide advocacy and education group, Hunter has 25 years’ experience in politics and arts advocacy. He served as Director of the Iowa Department of Cultural Affairs (a cabinet appointment requiring Senate confirmation). His most recent book, Atrophy, Apathy & Ambition, offers a layman’s investigation into artificial intelligence.
