In our last article, we looked at the growing debate over prior authorization in Medicare Advantage – how often it’s used, whether it delays care, and what regulators are doing to rein it in.
That debate is still unfolding. But alongside it is a related and increasingly important question that extends well beyond Medicare Advantage: how prior authorization and coverage decisions are made in the first place, and what role artificial intelligence plays in that process.
This second conversation is less about volume and more about decision-making. And while Medicare Advantage has been the focal point of recent headlines, similar tools and workflows are used across individual and family plans, employer-sponsored coverage, and public programs. That makes it worth stepping back and looking at the broader system.
When people hear that insurers are using AI, it’s easy to imagine computers making coverage decisions on their own. In practice, federal guidance makes clear that algorithms and AI are tools used to assist human reviewers, not to replace them.
In the context of health plans, “AI” typically refers to algorithm-driven tools and software that help organize or flag information in administrative and clinical workflows, such as utilization management and prior authorization reviews, rather than acting as autonomous decision-makers.
According to CMS guidance, these tools may provide support functions such as flagging prior authorization requests that fall outside normal utilization patterns, identifying outliers or patterns that merit clinician attention, or helping clinicians apply coverage criteria consistently. But they are not allowed to make final coverage decisions independently; the plan remains responsible for ensuring determinations comply with applicable rules.
Health plans process millions of authorization requests each year, many of which are routine or follow well-established clinical guidelines. The growing use of AI reflects an effort to manage that volume more efficiently.
As McKinsey & Company explains, AI technology can help plans manage this workload.
In theory, better tools should make prior authorization less disruptive and more efficient for everyone involved.
At the same time, provider groups and patient advocates have raised valid concerns.
Organizations like the American Medical Association have warned that poorly designed or opaque algorithms could reinforce rigid decision-making, increase inappropriate denials, or make it harder for providers to understand why a request was rejected.
There have also been lawsuits and regulatory actions alleging that algorithm-assisted systems were used too aggressively or without adequate individualized clinical review. While many of these cases remain unresolved, they underscore an important point: the risk isn’t the technology itself; it’s how it’s implemented and monitored.
Regulators have been actively shaping how AI and algorithm-assisted tools can be used in prior authorization. In addition to issuing guidance emphasizing that coverage decisions must align with established rules and include appropriate clinical oversight, CMS has made clear that technology may support utilization management but cannot replace individualized clinical judgment.
CMS has also taken further steps by launching pilot programs to test technology-supported prior authorization in traditional Medicare. One such initiative is the WISeR model, which will begin in 2026 and use advanced technology – including AI – to help review certain services vulnerable to fraud or misuse, while still requiring clinician involvement in decision-making. The fact that these pilots have drawn criticism from providers and lawmakers underscores how sensitive the topic has become, and how closely regulators are watching.
For health plan members, the biggest risk isn’t AI itself; it’s misunderstanding.
AI does not mean that computers are independently deciding who gets care. What it does mean is that algorithm-driven tools may help organize, flag, or prioritize requests, while the final determination remains the responsibility of the plan and its human reviewers. And many denials are still overturned through appeals, especially when additional medical context is provided.
If clients ask about AI and coverage decisions, a few practical explanations usually help: AI tools assist reviewers rather than make final decisions on their own; coverage determinations must still follow the plan's established rules and include appropriate clinical oversight; and if a request is denied, members still have the right to appeal.
The goal isn’t to eliminate technology; it’s to ensure it’s used responsibly, transparently, and with appropriate oversight.
The conversation around prior authorization continues to evolve.
One side of the debate focuses on how often prior authorization is used. Another increasingly important side focuses on how decisions are made and what role technology should play.
AI is becoming part of the infrastructure behind coverage decisions across the health system, even if most members never see it directly. Agents who understand that distinction (and can explain it clearly) are better positioned to address client concerns and set realistic expectations.
As with prior authorization itself, the focus now is on how these tools are used, where guardrails apply, and how accountability is enforced. That discussion will continue for the foreseeable future.