Penn Medicine faculty members are raising urgent concerns about significant gaps in federal regulation of artificial intelligence in healthcare settings, highlighting fundamental flaws in current FDA approval processes that could compromise patient safety.
In a recent analysis, experts from the Perelman School of Medicine identified critical shortcomings in the 510(k) regulatory pathway, through which approximately 98% of AI-enabled medical devices are currently cleared with limited evidence requirements.
The Challenge of AI Drift
Dr. Gary Weissman, assistant professor of pulmonary and critical care medicine, emphasized a particularly concerning issue known as “AI drift” or “shift”: the phenomenon in which an AI system’s performance changes over time and varies between deployment locations.
“The current regulatory process lacks good infrastructure to ensure an AI system is functioning properly at the location where it is deployed,” Weissman explained, noting that existing approval pathways “were developed decades ago for traditional medical devices, and don’t apply well to modern AI-type devices.”
Proposed Solutions
The Penn Medicine team proposes several key reforms:
Graduated Autonomy Framework: Dr. Eric Bressman, assistant professor in the Division of Hospital Medicine, advocates for a supervisory model that “starts to look a lot more like what a human would go through as they approach independent practice.” This would require AI systems to undergo monitored periods before receiving full clearance.
Institutional Governance: Dr. Kaustav Shah, an internist at the medical school and a National Clinician Scholars Program fellow, argues that healthcare institutions should develop their own AI governance processes to determine “the right level of evaluation, rigor, and transparency” regardless of federal requirements.
Federal-State Partnership: Bressman supports an FDA-centered framework that includes partnerships with state medical boards, recognizing that “the federal government is not necessarily well suited to be in a position to deal with every little update or new thing that comes to practice or the market.”
Legislative Response
Pennsylvania is taking early action with House Bill 1925, which would require insurers, hospitals, and clinicians to provide transparency when AI is used in patient care. The legislation mandates that an “ultimate individualized assessment” be made by a human and requires evidence that bias and discrimination have been minimized.
While Weissman views this as “a good start,” he notes that it falls short of comprehensive device regulation, particularly for AI used in clinical decision support.
Implications for Healthcare
The regulatory gaps identified by Penn Medicine experts reflect broader challenges as AI deployment accelerates across healthcare systems. Current approval processes may inadequately address the dynamic nature of AI systems, which can evolve and adapt in ways traditional medical devices cannot.
The complexity of AI devices makes it increasingly difficult to assess their quality and safety using frameworks designed for static medical technologies, potentially creating blind spots in patient protection.
Looking Forward
These concerns arrive at a critical juncture as healthcare institutions nationwide implement AI systems for clinical decision support, diagnostic assistance, and treatment recommendations. The Penn Medicine analysis suggests that proactive regulatory reform, rather than reactive measures following adverse events, represents the most prudent path forward.
The call for enhanced regulatory frameworks reflects growing recognition that artificial intelligence in healthcare requires governance structures as sophisticated as the technology itself.
