AI Governance Isn't Optional in Healthcare

Build an AI Governance Foundation for Healthcare Agents

Key Takeaways

  • Build your governance structure before you start deploying AI, not after.
  • Backend processes like scheduling, coding, and revenue cycle are the right place to start with autonomous AI. Clinical workflows still need a physician in the loop.
  • Clinical staff adopt AI when they see it working for colleagues, not because leadership tells them to.
  • AI needs its own dedicated team. A team juggling AI alongside everything else will always put it last.
  • Every AI initiative requires a defined ROI before it gets approved. If the case is soft, it goes to the C-suite for a deliberate call rather than getting quietly shelved.

When an AI agent at Beacon Health System began automatically ordering Cologuard colon cancer screenings, it reached patients who had fallen through the cracks of traditional outreach. Of the 7,000 screenings ordered, about 40% came back, leading to 250 additional colonoscopies. One returned a positive result, caught early. The patient had a resection and survived.

Beacon didn't get there because they had the most sophisticated AI. They got there because they built the governance foundation first.

In a recent AI Explained session, Dr. Stacey Johnston, CIO and Digital Executive Officer at Beacon Health System, walked through what it actually takes to deploy AI responsibly in a clinical environment where a bad output isn't just a bad user experience. She came to the role by way of medicine, having worked as a hospitalist and Chief Medical Information Officer before moving into technology leadership, and that background shapes how she approaches every deployment decision.

The Mistake Most Health Systems Make

The instinct in most health systems is to find the most promising use case and deploy. The governance comes later, and that sequence tends to backfire.

When she joined, there was no formal IS governance structure. Tickets were submitted and disappeared. Before any AI got approved, she stood up an executive steering committee, eight advisory levels, multiple workgroups, and a dedicated AI Council, forming what amounts to a formal AI governance framework for the organization. 

Here's what that foundation actually looks like in practice:

  • AI Council and policies: Two policies govern every AI initiative: one defines how AI gets approved, the other defines what's permitted and what isn't. Entering PHI into unapproved systems falls squarely in the prohibited column.
  • Vendor risk assessment: Every vendor pursuing deployment completes a review covering data modeling, LLM selection, guardrails for drift and bias monitoring, and data storage practices.
  • AI literacy training: Required for all managers and above, so the people closest to day-to-day operations can recognize issues that AI agent monitoring tools might flag, rather than waiting for a central team to catch them.
  • Department of AI and Transformational Technologies: A dedicated team that isn't competing with break-fix tickets, code upgrades, and order set maintenance. A team pulled in too many directions will always deprioritize AI when things get busy.
  • ROI requirement: Nothing gets approved without one. If the ROI is soft, it goes to the C-suite rather than getting quietly shelved. "Finances are tight for all of us," Dr. Johnston said. "You should be looking for ROI first with your AI."

Where the ROI Is Actually Showing Up

Beacon has deployed agentic AI across multiple operational domains:

  • Documentation efficiency: Ambient documentation reduced note completion time from 7.5 minutes to 2.5 minutes per encounter, with adoption rates between 70 and 80% among providers in ambulatory clinics.
  • Revenue capture: Improved documentation quality generated approximately $10,000 in additional revenue per physician over twelve months by capturing charges that would otherwise have been missed. "When we first deployed it at another organization, it was, gosh, I don't know if we can afford to do this," Dr. Johnston said. Now it's a no-brainer to keep rolling it out.
  • Preventive care outreach: This is the Cologuard program described above: 7,000 screenings ordered automatically for every patient who met inclusion criteria, a 40% return rate, 250 additional colonoscopies, and one cancer caught early enough for a curative resection. That's what fully autonomous AI looks like when the criteria are well-defined and the governance foundation is solid.
  • Scheduling: When Beacon acquired four hospitals and had two weeks to back-load 100,000 appointments, an autonomous agent completed the work in three weeks. The alternative was hiring 40 contractors.

Beacon isn't deploying agents to cut headcount. Like most health systems, it's already dealing with nursing and physician shortages, and open positions are being filled with expensive contractors. "It's not about reducing your FTEs," Dr. Johnston said. "It's just about filling the positions we have open or reducing our contractor spend."

Trust in the Clinical Space Builds Differently

Revenue cycle, scheduling, and administrative workflows can run autonomously without much hand-holding, but clinical AI is a different story. In the clinical space, AI should function as a copilot, supporting physician judgment rather than replacing it, at least for now.

Clinical trust has to be earned one use case at a time. The ambient documentation tool works because providers can still review and sign off on every note before it's finalized. "You've got to move the needle a little bit, have them trust that one component, then move the needle again," Dr. Johnston said. Even with adoption rates between 70 and 80%, she isn't pushing for full autonomy yet.

Trust also doesn't come from the top down. When she spoke at a physician meeting, she opened the floor for ideas and committed to building agents for requests with clear ROI. The physicians who saw early value started bringing colleagues along. For other healthcare AI leaders, the sequence she'd recommend is straightforward: governance structure first, a clear permitted and prohibited use policy, a dedicated AI team, and a small number of confident use cases before expanding.

Watch the full AI Explained session with Dr. Johnston for her perspective on AI observability and governance in healthcare, ROI, and what it takes to build clinical trust in agentic systems.

AI Explained: Lessons from a Physician-CIO on AI Governance

This post draws on a conversation with Dr. Stacey Johnston, CIO and Digital Executive Officer at Beacon Health System, from Fiddler's AI Explained AMA series.