Die with a Hammer in my Hand


John Henry told his captain
"A man ain't nothing but a man
But before I let your steam drill beat me down
I'd die with a hammer in my hand, Lord, Lord
I'd die with a hammer in my hand"


According to legend, John Henry, the African-American folk hero, took his nine-pound hammer and drilled through the day and into the night to prove that a man was better than a steam drill at driving pilot holes in a mine. He failed. He collapsed and died with a hammer in his hand.

John Henry’s death in the early 1870s marked a turning point: We traded human sweat and muscle for steam power, fossil fuels, and electricity.

Roughly 150 years later, we are ensnared in another man vs. machine competition: Will human intelligence lose out to artificial intelligence? Our nine-pound hammer is a three-pound brain. Our straining muscles are 86 billion neurons racing to solve great problems and remember where we parked the car.

We’ve had a few John Henry candidates stand up to AI. None of them collapsed and died. Garry Kasparov lost in chess to IBM’s Deep Blue. Lee Sedol, a champion of many international Go tournaments, lost to DeepMind’s AlphaGo in 2016. DeepMind’s AlphaFold swept CASP (Critical Assessment of Protein Structure Prediction), a protein-folding competition held in Mexico every two years, where rowdy fans stamp their feet and chant scientific incantations.

The founder of CASP, John Moult, praised AlphaFold’s victory over human scientists:

This is the first time a serious scientific problem has been solved by AI.

Chess, Go, video games, and even protein folding are closed systems circumscribed by rules: the knight in chess cannot suddenly sprout wings to buzz the queen. There are conceivably as many as 10¹²³ possible games of chess, so a computer is limited only by its calculating power. We humans accept that these machines can compute faster than a speeding bullet, higher than the tallest building.

But we pride ourselves on our domination of the ultimate human domain: creativity. Machines cannot be creative, only humans can, we believe. Something is deemed creative when it is original or new and adds value, whether aesthetic or utilitarian. Some argue it must be unexpected or surprising. The United States Patent and Trademark Office says that it must be “non-obvious.”

However, the AI steam drill is coming for our creativity. 

In its July 2023 issue, Neuroscience magazine featured the banner headline:

AI Outperforms Humans in Creativity Test

According to researchers at the University of Montana, AI ranks in the top 1% of all “human thinkers on a standard test for creativity.” 

Parse that phrase for a moment. Creativity is “unexpected, original.” How can you have a standard of the unexpected? “A standard test for creativity” is an oxymoron. “Human thinkers” may soon be an oxymoron, too.

How can you test for creativity?

It’s a contradiction in terms: a test implies correct and incorrect answers; creativity seeks the new, original, and unknown.  If creativity plunges you into the unknown, how can anyone know the “answers” in advance?

The University of Montana researchers were using “creativity tests” designed by Paul Torrance in the early 1960s to measure divergent thinking. Over the years, creativity tests have stayed under the Torrance umbrella: the assumption that divergent thinking tests can measure creativity. These tests ask you to solve a problem (connect nine dots in a grid using only four lines) or “invent” unexpected alternatives (find multiple uses for a brick).

Has an employer or anyone ever said, “Quick! Give me nine different uses for a brick”? Some divergent thinking tests even claim that there are right and wrong answers, which is anathema to creativity.

Moreover, these tests don’t predict creative achievement—they predict how well your answers match the expectations of the creators of the Torrance-style test. If your ideas match the Torrance expectations, well then Bob’s your uncle.

Charles Darwin would have performed poorly on a divergent thinking test; he was meticulous and thorough in his thinking. He wrote a 684-page monograph on barnacles. Even a barnacle’s mother wouldn’t read that. Darwin’s gift was his passion and patience to accumulate and then synthesize information to generate ideas and insights.

The lead researcher at the University of Montana announced that AI performance on the Torrance creativity tests was a “game changer.”

Meanwhile, researchers from The Wharton School and Cornell Tech posted their own game changer, running up the score against puny human minds. The Wharton/Cornell team pitted ChatGPT-4 against college students in a “tournament” to invent physical products designed for college students that would retail for less than $50.

In a July 2023 working paper entitled “Ideas Are Dimes a Dozen,” the Wharton/Cornell researchers wrote:

ChatGPT-4 is very efficient at generating ideas. Two hundred ideas can be generated by one human interacting with ChatGPT-4 in about 15 minutes. A human working alone can generate five ideas in about 15 minutes.

Humans are down 200 to 5 in only 15 minutes.

Americans love to keep score, because in sports it’s simple: the value of each run, goal, or touchdown remains the same. Seven runs equal seven runs. Every time.

Ideas are different. You can generate 5 ideas, 200 ideas, or even 1,000 ideas without finding one idea of value. (Obviously, the more ideas you generate, the greater the possibility of finding the right one.)

The Wharton/Cornell Tech team tried to rate the quality of chatbot ideas by measuring product novelty and likely retail success. By their estimates, the most valuable idea from ChatGPT-4 was a compact printer with a novelty rating of 0.55, or moderately novel. But the idea is not original. According to Rtings.com, a printer evaluation site, there are at least 135 compact printers on the market. The fourth most valued chatbot idea was noise-canceling headphones, ubiquitous on campuses. At the bottom of the top 10% list of chatbot ideas is a laundry basket, an idea embraced by both primitive and ancient cultures.

ChatGPT’s idea of a laundry basket was the hot innovation—a real “game changer” in ancient Egypt around 3150 BC. Where is the John Henry with his creativity hammer to protest this AI absurdity?

Ray Gross.

In the 1930s, Gross was dubbed the Gadget King. He had over 5,000 gadget ideas:

weird and wonderful contrivances that are aimed at catering to every possible and impossible idea. 

Gross was 38 years old when he launched his torrent of ideas to improve life, to “lift the burden of humanity,” or at least to halt the annoyances of daily life. He published his ideas for anyone to use: first in a syndicated newspaper cartoon, then in a book, and finally in two short films by Vitaphone.

Each cartoon and book page featured an illustration, a few descriptive lines of text, and at the bottom a drumbeat challenge to the world: “Can it be done?”

Many of Gross’s ideas have been realized over the years, such as the television newspaper. And many of ChatGPT-4’s ideas first appeared in Gross’s mind. For example:

Ray Gross vs. ChatGPT-4
Odors Absorbed vs. air purifier
Clothes Hideaway vs. closet organizer
Device to adjust table height vs. adjustable laptop riser

Gross’s ideas were serious but often laced with whimsy: a mattress alarm clock that shakes the sleeper awake, sliding wallpaper that changes design at the push of a button, an inflatable sofa, an office desk that unfolds into a billiards table, and liquid linoleum sprayed onto the floor.

In 1935, Vitaphone produced two short films, Can It Be Done and An Ounce of Invention, in which Gross conceives one household invention after another to convince his future father-in-law to let him marry his daughter.

However, if your idea of creativity is claiming existing products as new ideas, then we may as well surrender to ChatGPT-4. Its ideation speed and banality are incomparable.

If only Ray Gross could return as the creative John Henry to defend the human imagination. Liquid linoleum would be a “game changer.”

And he’d die with his ideas in his mind, Lord, Lord. 


Dan Hunter is an award-winning playwright, songwriter, teacher, and founding partner of Hunter Higgs, LLC, an advocacy and communications firm. H-IQ, the Hunter Imagination Questionnaire, invented by Dan Hunter and developed by Hunter Higgs, LLC, received global recognition for innovation from Reimagine Education, the world’s largest awards program for innovative pedagogies. Out of a field of 1,200 applicants from all over the world, H-IQ was one of 12 finalists in December 2022. H-IQ is being used in pilot programs in Pennsylvania, Massachusetts, Oklahoma, North Carolina, and New York. He is co-author, with Dr. Rex Jung and Ranee Flores, of “A New Measure of Imagination Ability: Anatomical Brain Imaging Correlates,” published March 22, 2016, in Frontiers in Psychology, an international peer-reviewed journal. He has served as managing director of the Boston Playwrights’ Theatre at Boston University, published numerous plays with Baker’s Plays, and performed his one-man show on ABC, NPR, BBC, and CNN. Formerly executive director of the Massachusetts Advocates for the Arts, Sciences, and Humanities (MAASH), a statewide advocacy and education group, Hunter has 25 years’ experience in politics and arts advocacy. He served as Director of the Iowa Department of Cultural Affairs (a cabinet appointment requiring Senate confirmation). His most recent book, Atrophy, Apathy & Ambition, offers a layman’s investigation into artificial intelligence.
