
The Big Story: What 570 Experts Predict the Future of Work Will Look Like

“The question is thus not, ‘What will the future of work be like?’ but rather, ‘What do we want the future to be like?’ This reframes the future-of-work question as an arena for values, politics, ideology, and imagination.”

The quiet before

By Alex Hunter and David Shifrin

4-minute read

Two years ago, author Gal Beckerman published The Quiet Before: On the Unexpected Origins of Radical Ideas. In it, he sought to show how explosive moments in history or major movements that seemed to spring onto the scene overnight were, in fact, the product of tiny moments bubbling to the surface.

We – society at large and healthcare – seem to be in the early stages of another movement. Artificial Intelligence. It’s not quiet, per se, thanks to massive funding and endless headlines and even congressional hearings. But Beckerman’s underlying idea still holds. AI might revolutionize the way healthcare is managed and delivered. It probably will. But, despite the hype, it’s a journey of a thousand miles, and we’re just now taking those first steps.

How do you take those early steps carefully, putting one foot in front of the other and moving through the quiet before towards the defining moment of technological revolution? Here are a few somewhat unconventional hints.

Distinguish between hype and reality.

AI in healthcare today is largely bringing advantages to businesses’ bottom lines – corporate advantages, not the consumer or even caregiver advantages that excite, worry and make headlines. Those things are coming but aren’t yet fully baked.

There’s much more happening in the back office – ranging from revenue cycle automation to helping leaders operate their organizations more efficiently. Nearly half of hospitals are already using AI for rev cycle, and three-quarters have some form of automation in place – AI or otherwise.

So, keep an eye on the headlines and ensure your tech team is working towards successful implementation of AI across your enterprise. Engage with internal stakeholders on what’s happening. But know that the biggest immediate opportunity and value may be behind the scenes – which has some advantages for how you communicate about the work.

View consumers as secondary.

Sounds controversial, doesn’t it?

Early last year, Pew found that “60 percent of Americans would be uncomfortable if their own healthcare provider relied on artificial intelligence” to make clinical decisions. That skepticism hasn’t improved much in the intervening 18 months. A national consumer survey we fielded in late August shows that 57 percent of U.S. adults have little or no trust in the use of AI by clinicians for medical care. Slightly more – 62 percent – say the same about payers’ use of AI for prior authorization.

Remember, 15 years ago consumers were skeptical about pulling up a ride-share app and getting into a stranger’s car. As much as consumers – patients – are at the center of care delivery, in some areas they are more of a lagging indicator. And recall the famous (but apocryphal) Henry Ford quote: “If I had asked people what they wanted, they would have said faster horses.”

If you sense your patients are concerned about your use of AI – back office or otherwise – talk to them. Bring them along, but don’t try to force the issue.

Be clear about where AI is being used for decisions that affect their care, such as assisting radiologists. The need to build trust among the public is real and important, but it’s a long-term play, not an urgent campaign.

Talk to your nurses and docs.

We also surveyed physicians and nurses about the use of AI and found 56 percent of nurses and 39 percent of physicians have little or no trust in AI for administrative tasks. When it comes to using AI for clinical decision support, about six in 10 nurses and about a third of physicians express little or no trust.

Here, we’re getting closer to the end user. Physicians are more likely to be using AI directly. There’s a massive push today towards ambient listening tools designed to handle notetaking, ultimately reducing burnout and increasing retention. That is undoubtedly a good, necessary thing, but the road from point A to point B is still long.

The immediate aim should be to spend time with your caregivers, listening (not ambiently) and learning about the tasks that get in the way of patient care, why they do or don’t trust technological solutions and how they would design something to improve their work. Nurses, in particular, are concerned about AI becoming a cheaper, less safe shortcut. It is vital to work with them to find a path forward as part of the long game.

Hear their concerns and work together to discern what they would do instead or what could allay those fears.

Above all, take responsibility as an organization for making active decisions. Show them that the organization is looking to make the best decision for everyone and not simply transfer jobs from people to an algorithm. Tech writer Brian Merchant wrote in 2019 that “robots are not coming for your job – management is.” His point? That even if the rise of AI is inevitable, it is still people making decisions about how it happens. Using the “robots are coming” framing gives leaders a way to duck responsibility. Don’t.

Everyone: Start with the problem.

All of this comes down to being very clear about why your provider organization needs to use AI. Assure your team that you won’t be making snap decisions or be distracted by the newest and shiniest objects that could cause long-term problems.

You’re being told that AI can solve every problem you have. Where do you start?

Here’s where: Take a step back and work with your team – in partnership with docs, nurses and staff – to identify your top two or three problems. Is it burnout? Patient acquisition? Billing? Then, define success. Only then can you know what tools you actually need and work backward to the right solution.

Having a clear process for solving a priority problem and executing on it also plays that long game of trust-building, specifically among your workforce and healthcare consumers. It gives you a clear “why” and “how,” both necessary to reduce skepticism and increase buy-in.

Meanwhile, a note for the innovative digital health companies developing AI tools: It’s not just about building a better mousetrap or picking an interesting challenge to solve. It’s about starting with a clear problem that is fully defined and completely vetted with potential customers – the hospitals, health systems and medical groups that will be using it and doing the hard change management work with their people to implement it.