The Big Story: Google’s Cringey AI Olympics Commercial Is Backfiring in a Big Way
“‘It’s hard to think of anything that communicates heartfelt inspiration less than instructing an AI to tell someone how inspiring they are.’ People on Reddit don’t seem to like it either.”
AI lessons for healthcare’s humans
By David Jarrard and David Shifrin
3-minute read
Amid last week’s Olympic celebration of human excellence, we were cheerfully and relentlessly presented with Google’s televised vision of a Brave New World in which soulless corporate machine mediocrity replaces human creativity, authenticity and relationships.
We hope healthcare was watching.
“The marketing of generative AI is a broadside against singularity in favor of digestibility, against creativity in favor of drudgery,” said NPR, reflecting hand-typed angst echoing across the internet.
“[The ad’s message] is perfect for anyone who watched the video for Pink Floyd’s ‘The Wall’ and rooted for the meat grinder.”
The ad in heavy rotation during the early Olympic coverage was for Google’s AI product, Gemini. It featured a dad turning to AI to help his young daughter write a fan letter to a sports hero instead of, you know, encouraging her to write it herself or helping her write it. Dad says, it’s only with AI that the letter can be “just right.”
Better, Google seems to say, to force the bland word salad of AI between you and your family and your audience than to risk being imaginatively unique. Or awkwardly human.
“This ad makes me want to throw a sledgehammer into the television every time I see it,” wrote The Washington Post. “I do not think that a good way of selling your product is to announce that it will suck all the joy out of being alive. I enjoy the joys of being alive.”
In a demonstration of regular marketing intelligence, Google yanked the ad.
This visceral rejection of the misapplication of AI evokes Apple’s misstep last May with an ad featuring its new iPad Pro literally crushing an array of instruments of human creativity – paints, pianos, games, turntables and trumpets.
“Company failed to understand it conjured fears of ‘Tech kind of destroying humanity,’” wrote Variety. Even actor Hugh Grant posted on X: “The destruction of the human experience. Courtesy of Silicon Valley.”
To be clear: We’re not AI Luddites. Artificial intelligence promises to be an extraordinarily powerful tool to help make healthcare better. Our industry is replete with encouraging examples of AI’s potential.
Analyzing data? Or, say, offering diagnostic support? It’s no secret AI is already a very promising tool for diagnostic purposes. In some cases, it’s beginning to outperform clinicians. Patients are open to the idea. According to a recent survey featured by HIMSS TV, “Almost half of respondents to a new survey have no objection to diagnoses powered by artificial intelligence.”
Meanwhile, health systems are taking the big but deeply considered leap into using AI to streamline physician workflows. It’s working – Ochsner Health says early results show a 75 percent decline in the time clinicians are spending on documentation.
Moreover, Ochsner Health “reported a 78 percent clinician adoption rate during the initial launch” of the program. And patient satisfaction increased – yet another sign that people are ready to get behind applications that make things easier for them, not replace them.
Last week Cleveland Clinic appointed its first chief AI officer, someone with a tech, not healthcare, background, to manage the application and ethics of using AI. Expect more top AI postings in healthcare soon. The New York Times called the CAIO the “Hottest Job in Corporate America” for good reason.
Yet for all of the high hopes for AI (and we share them), there are uncanny valleys where AI should tread lightly today and, maybe, never tread at all.
The public disdain directed at Google and Apple for their AI stumbles reflects a powerful and not unreasonable undercurrent of unease about how swiftly a powerful tech tool could become a cost-efficient digital replacement for the analog, beautiful messiness of being human.
Asking a computer to write your daughter’s fan letter obliterates the purpose of the letter. Just as asking an algorithm to take on the wrong part of the patient relationship would defeat the mission of healthcare. It can also breed an easy complacency – costing us the watchful eye and the listening ear, the great art through which caregivers navigate the complexity presented by every patient.
AI is weaving itself deeply into healthcare. Patients are generally bullish, and clinicians are ready to use it, in the right context. But context matters. A lot. People are most interested in AI advances that make their lives easier, and a lot less interested in the ones that erase our shared humanity.
Contributors: Tim Stewart, Emme Nelson Baxter
Image Credit: Shannon Threadgill