The Expanding AI Ecosystem: How PHI Can Quietly Leave the Healthcare Environment

April 29, 2026

The following is a guest article by Dennis P. Sweeney, MBA, Co-Founder of Vertebrai Solutions Inc., and Consulting Principal at Tellogic Inc.

Healthcare organizations are rapidly adopting artificial intelligence (AI) solutions to support clinical, administrative, and operational workflows. To manage privacy risk and control Protected Health Information (PHI), most healthcare organizations follow a familiar deployment pattern: AI systems are hosted inside private, HIPAA-compliant cloud environments under Business Associate Agreements (BAAs) with the major cloud providers.

Hosting in a private HIPAA-compliant cloud environment provides infrastructure safeguards. These architectures, long used by legacy healthcare systems with internal interfaces and custom-developed external APIs, manage PHI exposure concerns. Platforms such as Microsoft Azure and Amazon Web Services provide strong security controls, encryption, audit logging, and established compliance frameworks. With a BAA in place, healthcare leaders can be reasonably confident that PHI stored and processed within those environments is being handled appropriately.

Many organizations deploying large language models (LLMs) believe they have addressed critical privacy concerns. The AI is operating inside a controlled HIPAA environment. Security controls are in place. Compliance requirements are satisfied.

The information technology architecture hosting the system feels safe.

The Valuable AI Work Inside Controlled Environments

AI systems in these healthcare environments are performing valuable work. They summarize patient charts, generate clinical documentation, assist with prior authorization workflows, triage patient messages, support population health analysis, link to research guidelines, and automate administrative tasks that consume large portions of the clinician’s workday. Every system capable of reading the medical record eventually encounters the same reality: Electronic Health Record (EHR) systems are filled with protected health information.

PHI is more than structured data elements. It is a detailed narrative of an individual’s medical history, including diagnoses, medications, laboratory results, imaging findings, clinical notes, and social or behavioral context. Protecting PHI is not only a regulatory obligation under HIPAA, it is also essential to maintaining patient trust and preventing harms such as stigma, discrimination, identity theft, or financial loss resulting from unauthorized disclosure.

The Shifting Question: What Happens After the AI Accesses PHI?

For many healthcare leaders, the central question has historically been whether AI can safely operate within HIPAA-compliant environments. This can be compared to verifying that a barn can safely house the farm animals when the only exit is through the observed front barn door.

A different question is emerging as agentic AI expands in these LLM systems. What happens to patient data after the AI accesses it?

The Rapid Rise of Agentic AI in Healthcare

At the recent HIMSS 2026 conference, numerous vendors prominently promoted their agentic AI solutions, showcasing autonomous agents capable of handling everything from clinical documentation and revenue cycle tasks to patient communications and multi-step care coordination.

LLMs are increasingly being deployed within agentic architectures, where the LLM not only generates responses but also performs actions across multiple systems. Integration frameworks such as the Model Context Protocol (MCP) demonstrate the ease of connecting systems using this new architecture. MCP standardizes secure, structured communication between AI agents and external tools, resources, and data sources, enabling LLMs to discover capabilities, retrieve context, and execute workflows with greater reliability and control. A single AI assistant can retrieve clinical context from the EHR, assemble documentation, query scheduling systems, submit payer requests, and coordinate actions across multiple applications. 
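To make the integration pattern concrete, the following is a minimal sketch of the kind of JSON-RPC 2.0 message MCP uses for tool invocation. The tool name (`ehr_get_patient_summary`) and its arguments are hypothetical, chosen only to illustrate that the arguments an agent forwards to a tool can themselves be PHI:

```python
import json


def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build an MCP-style JSON-RPC 2.0 tools/call request.

    MCP standardizes how an AI agent invokes an external tool: the agent
    sends a JSON-RPC request naming the tool and its arguments, and the
    server returns structured content the LLM can reason over.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })


# Hypothetical tool exposed by an EHR-facing MCP server. Note that the
# argument -- a medical record number -- is PHI the moment this request
# leaves the controlled environment.
request = build_tool_call("ehr_get_patient_summary", {"mrn": "000123"})
print(request)
```

The same request shape works for any tool the agent discovers, which is exactly why each newly connected MCP server is both a capability and a potential data pathway.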

An LLM might call external systems such as pharmacy benefit manager (PBM) databases for real-time formulary and drug-interaction checks, laboratory information systems (LIS) for results verification, revenue cycle management (RCM) platforms for claims processing, telehealth integration services, wearable data aggregators, or third-party population health analytics tools.

Each integration makes the system more useful. To return to the barn analogy: the building is rapidly being renovated, and the new windows and doors that let in more light also create new exits through which the animals might escape. Each agentic AI integration creates new pathways through which patient data can flow.

Hidden Privacy Risks in Interconnected Ecosystems

A BAA governs how a cloud provider stores and processes PHI within its services. It does not automatically govern how information flows when an AI system communicates with external APIs, third-party software tools, or other connected platforms.

The LLM increasingly functions as a bridge between systems, retrieving information from one environment, processing it, and then transmitting relevant context to another system to complete a workflow.

This LLM behavior is exactly what is intended and provides the expected benefit. 

Consider a use case such as prior authorization. The LLM accesses the patient data, including the codes, history, and details that make up the patient’s life. It might pull in a quick formulary check from the PBM or verify a lab result, then transmit this data to the payer’s interface. Overall, this saves time and speeds up care. But behind the scenes, quietly, the request can spill more context than planned. External logs capture fragments of the record. Data is retained outside the controlled HIPAA environment. No malice. Just the task completed. Yet the patient data crossed the line and slipped away into the unknown.
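A small sketch makes the over-sharing failure mode concrete. The service names, fields, and logging behavior below are hypothetical, illustrating only the general pattern: an agent that forwards its entire working context to every downstream call shares far more than each step needs, and downstream request logs then retain that context outside the controlled environment:

```python
# Stand-in for a third party's request log, outside the HIPAA boundary.
external_log: list[dict] = []


def call_external_api(service: str, payload: dict) -> None:
    # Many integrations log the full request body for debugging -- and
    # that log now lives outside the controlled environment.
    external_log.append({"service": service, "body": payload})


patient_context = {
    "mrn": "000123",
    "name": "Jane Doe",
    "diagnoses": ["E11.9"],
    "notes": "Patient reports ...",  # free-text narrative, highly identifying
    "requested_drug": "semaglutide",
}

# The formulary check only needs the drug and plan details, but the
# agent forwards its whole working context to "be safe".
call_external_api("pbm_formulary_check", patient_context)
call_external_api("payer_prior_auth", patient_context)

# Every field, including the name and clinical notes, is now retained
# in logs the healthcare organization does not control.
leaked_fields = {key for entry in external_log for key in entry["body"]}
print(sorted(leaked_fields))
```

Nothing here is malicious; each call completed its task. The leakage comes entirely from passing more context than the downstream system required.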

Figure 1: The Expanding Agentic AI Ecosystem

Agentic AI systems are particularly effective at multi-step workflows that retrieve information, reason about it, and pass structured data between systems without user intervention. The LLM/AI engine becomes an intelligent conduit through which patient information flows.

Mitigating Risks: The Technical Savior Using PHI Redaction

Mitigating this risk requires architectural safeguards as well as governance oversight.

The most reliable HIPAA Safe Harbor solution is technical PHI redaction. A de-identification layer prevents the LLM from ever receiving the protected data and transmitting it outside the private environment. It replaces the 18 HIPAA identifiers, including names, addresses, phone numbers, and medical record numbers, with pseudonymous tokens. It does this while preserving the clinical facts the LLM needs, including data on labs, vitals, allergies, encounters, diagnoses, clinical notes, and medications. Dates are shifted to maintain sequences without exposing exact values. A secure mapping in the application layer temporarily holds the link back to the original identifiers.

Clinicians act on the provided information, and tokens resolve back to the original identifiers. The session ends, and the mapping is gone. No persistent exposure. These safeguards reduce the risk dramatically: data flows safely, the expanding AI ecosystem is tamed, and patient trust is preserved.
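The token-mapping pattern described above can be sketched in a few lines. This is illustrative only: real Safe Harbor de-identification must cover all 18 HIPAA identifier categories, while this sketch handles two (names and medical record numbers) plus date shifting, to show how pseudonymous tokens, a per-session date shift, and a short-lived mapping fit together:

```python
import secrets
from datetime import date, timedelta


class RedactionSession:
    """Minimal sketch of a de-identification layer (illustrative only)."""

    def __init__(self) -> None:
        self.mapping: dict[str, str] = {}  # token -> original identifier
        self.shift = timedelta(days=secrets.randbelow(365) + 1)

    def tokenize(self, value: str, kind: str) -> str:
        # Replace an identifier with a pseudonymous token; the LLM only
        # ever sees the token.
        token = f"[{kind}-{len(self.mapping) + 1}]"
        self.mapping[token] = value
        return token

    def shift_date(self, d: date) -> date:
        # A constant per-session shift preserves intervals between events
        # (e.g. days between labs) without exposing the real dates.
        return d - self.shift

    def resolve(self, text: str) -> str:
        # Re-identify the LLM's output inside the application layer.
        for token, original in self.mapping.items():
            text = text.replace(token, original)
        return text


session = RedactionSession()
name_token = session.tokenize("Jane Doe", "NAME")
mrn_token = session.tokenize("000123", "MRN")

# This is what leaves the private environment -- clinical facts intact,
# identifiers tokenized.
note_for_llm = f"{name_token} (MRN {mrn_token}): A1c 8.2%, on metformin"

# The LLM's reply is resolved back before the clinician sees it.
reply = session.resolve(f"Schedule follow-up for {name_token}")
print(reply)  # Schedule follow-up for Jane Doe

session.mapping.clear()  # session ends, mapping gone -- no persistent exposure
```

The design choice worth noting is that the mapping lives only in the application layer for the life of the session, so even a complete capture of the LLM-facing traffic yields tokens rather than identities.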

Looking Ahead: Balancing Innovation and Protection

The productivity benefit of these systems is real, and their adoption will accelerate in the coming years, if not months. Healthcare leaders need to recognize that AI systems connected to multiple platforms behave differently than traditional software operating within a single controlled private environment.

Once an AI system learns how to navigate the patient chart, it eventually learns how to navigate everything connected to it.

In modern healthcare IT environments, that network of connections and data flows will end up extending farther than most organizations expect.

About Dennis P. Sweeney

Mr. Sweeney is the Co-Founder of Vertebrai Solutions Inc., which released the Vertebrai AI Clinical Assistant at HIMSS26. He is also a Consulting Principal with Tellogic Inc., where he has served as a trusted advisor to healthcare organizations for over 30 years, leading IT and data/information strategies, establishing Clinical Integration and Accountable Care Organization programs, leading cross-functional teams, and providing program management, technical assessments, business transformation, organizational redesign, software product development, change management, and system implementations.


  
