The Big Story: A.I. Is Making Doctors Answer a Question: What Are They Really Good For?
“‘Patients with serious diseases need a human connection. In the end you want to look someone in the eye,’ he said, and explain that the patient has 10 years to live, or just six months.”
Go touch grass
4-minute read
At the end of Foundation and Earth, the chronological conclusion to Isaac Asimov’s genre-defining Foundation series, readers discover (spoiler alert) that a benevolent robot has been guiding – some might say manipulating – the arc of human history for 10,000 or so years. It’s deus ex machina on a galactic scale.
For some, it’s an AI fever dream. Wouldn’t it be great, they might ask, to sit back and pass off responsibility for, well, everything? It’s not that Skynet seizes control; it’s that we hand control over. We willingly give our agency away under the guise of efficiency.
For all the fear about AI encroachment, we may have already met the real enemy, so to speak: It’s us. At some point, efficiency becomes inhumane by stripping away the quirks and unpredictability of two humans interacting.
Fortunately, we’re not there yet. Even as we test the boundaries, debating how far along the machines really are, how we should interact with them and how much we should hand over to them, we humans are still on the hook.
Reports from the front
Four recent stories stand out in the rapidly evolving world of AI, highlighting both the promise and the peril:
First, X’s AI agent Grok was used to generate millions of inappropriate images in under two weeks. Most people were horrified. Some joked about toasters.
Days later, after the deaths of Renee Good and Alex Pretti at the hands of ICE agents, misinformation spread like wildfire. As the New York Times reported, “A.I. fakes of the victims spread, genuine videos were viewed with suspicion, a Democratic lawmaker displayed an altered image on the Senate floor and online sleuths misidentified random people as being the agents involved in the shootings. The federal government spread an altered image and backed provably false narratives.”
Then came Moltbook, a chatroom supposedly exclusively for AI agents taking the next steps towards consciousness and maybe plotting our overthrow. The Atlantic’s Reece Rogers snuck in to check it out.
Instead of sentient scheming – much less plans for a moon base from which humanity could be puppeteered – Rogers found something “overblown and nonsensical.” It turned out that Moltbook isn’t evidence of AI’s next evolution. Humans are still doing a lot of directing and just having some fun.
Finally, in a trend that anyone reading this is well aware of, AI tools keep getting better at helping physicians prep for tough conversations and at supporting – or even leading – clinical decisions.
Across these examples, technology of varying sophistication is being used to shape human relationships and to make us question our own value. Not by the machines themselves, like some all-powerful Asimov creation, but by the people directing them. The responsibility doesn’t belong with the machines.
Getting real
So far, it seems we broadly haven’t lost sight of who’s pulling the strings. Elon Musk is taking flak for Grok, and politicians face backlash for posting misinformation. Even so, there’s a growing undercurrent of anxiety – “What are they going to do to us?” – as if AI, or any technology, has agency over humans.
Technology is addictive, fun, useful and generally very damn cool. It’s easy to get lost in that and let technology distort our perspective on what’s real and who’s in control.
It takes effort to ask the more important question: “What will we choose to do?”
A simple first step is to walk away from the screen. To physically separate ourselves from the exhaustion of endlessly deciding whether something is real and worth our time. As dramatic as it sounds, to use our own senses to experience the world.
Breaking the spell reminds us that there are actual people, not avatars, on the other side of our decisions. And that only other humans can truly connect – for good or ill. Technology can assist, not replace. That goes for both relationships and decision-making. As The New York Times put it, we, not chatbots, understand how “to read subtle signs and synthesize information that is difficult to make explicit.”
So maybe it’s time to pursue the real thing. More actual face time – or at least FaceTime. Coffee with a friend, or tea if you must. Ask questions. Not “questions” posed to a hyper-personalized algorithm in search of affirmation but, y’know, the real thing where you seek out new information and a new perspective. All of that makes it harder to want to harm others and easier to want to help and connect. Which in turn helps us feel more human.
In Foundation, Asimov’s robot operates under the Three Laws of Robotics – plus the critical Zeroth Law. These fictional concepts are so powerful they’ve shaped real-world AI ethics debates. They’re a nesting doll of rules to prevent sci-fi’s synthetic life from harming humanity.
We humans don’t have that same programming. We have agency. We get to make eye contact, shake hands, sit in silence with other people. That’s what’s real.
At least until the robots start looking just like us.
Contributors: David Jarrard, Emme Nelson Baxter
Image Credit: Shannon Threadgill