Which ChatGPT plan does your organization use? Not the plan the IT department approved. The plan your clinical staff actually uses. The one a medical assistant discovered through a colleague. The one a billing specialist adopted to draft prior authorization appeals. The one processing patient symptoms, medication histories, and diagnostic codes through an interface with no Business Associate Agreement, no data retention controls, and no audit trail.
OpenAI does not sign a BAA for ChatGPT Free, Plus, or Team plans. Data on the Free and Plus tiers is used for model training by default, and no tier below Enterprise offers a BAA regardless of payment. Every patient message processed through an unsanctioned ChatGPT account constitutes an unauthorized PHI disclosure [HIPAA 164.502(a)]. At up to $50,000 per violation under current enforcement guidelines, 200 patient messages create up to $10 million in potential liability.
The compliance answer depends entirely on the plan, the middleware connecting ChatGPT to your systems, and the configuration settings most deployments skip. ChatGPT Enterprise and the OpenAI API support HIPAA compliance. Everything below that tier does not.
ChatGPT Free, Plus, and Team plans are not HIPAA compliant because OpenAI does not sign a BAA for these tiers, and Free and Plus data trains the model by default. ChatGPT Enterprise and the OpenAI API support HIPAA compliance only with a signed BAA and Zero Data Retention configured. Every vendor in the data flow chain touching PHI needs a separate BAA [HIPAA 164.502(e)].
The ChatGPT Plan Compliance Matrix
HIPAA requires a Business Associate Agreement with every vendor creating, receiving, maintaining, or transmitting Protected Health Information on behalf of a covered entity [HIPAA 164.502(e)]. A BAA is not optional. Without it, the vendor has no legal obligation to protect your patient data, and your organization bears full liability for the disclosure.
Plan-by-Plan Breakdown
| ChatGPT Plan | BAA Available | HIPAA Status |
|---|---|---|
| Free / Plus ($0-$20/mo) | No | Not compliant. Data trains model [OpenAI Terms of Use] |
| Team ($25/user/mo) | No | Not compliant. No BAA offered regardless of payment |
| Enterprise (custom pricing) | Yes | Compliant with signed BAA and configuration |
| OpenAI API (usage-based) | Yes | Compliant with signed BAA and Zero Data Retention |
The Team plan is the most dangerous tier. Organizations purchase it assuming a paid business plan equals HIPAA compliance. It does not. OpenAI explicitly excludes Team plans from BAA eligibility. Practices running patient-facing ChatGPT workflows on Team plans operate with zero HIPAA protection regardless of how they configured the workspace [OpenAI Enterprise Privacy Policy].
Verify which ChatGPT plan your organization uses for any workflow touching patient data. If running Free, Plus, or Team: stop processing PHI through those accounts immediately. For compliant deployment, purchase ChatGPT Enterprise or configure the OpenAI API with a signed BAA. Document the BAA execution date, the specific API endpoints covered, and the Zero Data Retention configuration in your vendor management file. A paid plan without a BAA provides no HIPAA protection.
Why Does HIPAA-Compliant ChatGPT Require Middleware BAAs?
A signed BAA with OpenAI covers only the OpenAI infrastructure. It does not cover the middleware connecting your EHR to the API. If patient data passes through Zapier, Make.com, or any integration platform between your system and OpenAI, each platform in the chain requires its own BAA [HIPAA 164.502(e)].
The Chain of Custody Problem
Your EHR is HIPAA compliant. Your OpenAI Enterprise account has a signed BAA. The Zapier account connecting the two operates on a $20/month Standard plan. Zapier does not sign BAAs for Standard, Professional, or Team plans. BAA eligibility requires the Zapier Company tier (custom pricing). Patient data transiting through the non-compliant middleware constitutes an unauthorized disclosure at the Zapier processing step, regardless of the security at both endpoints.
Make.com operates under EU data protection regulations and does not provide US HIPAA BAAs for standard accounts. Agencies building healthcare automations on these platforms rarely verify middleware compliance because the standard plans lack BAA options entirely.
HIPAA-Compliant Middleware Alternatives
- Microsoft Power Automate: Covered under the standard Microsoft 365 BAA for Business and Enterprise licenses. The default choice for organizations already using Microsoft 365.
- Keragon: Built specifically for healthcare automation. Signs BAAs for all users by design, not by tier.
- n8n (self-hosted): Open-source workflow automation deployed on your own HIPAA-compliant infrastructure (AWS, Azure, GCP). Data never leaves your environment.
- AWS Bedrock / Google Vertex AI: Enterprise AI platforms with native BAA support, requiring engineering resources to configure and maintain.
Map every vendor in your AI automation data flow from source (EHR) to destination (AI model). For each vendor in the chain, document: vendor name, tier/plan, BAA status (signed/not available/pending), data processed (PHI types), and encryption method (in transit and at rest). If any vendor in the chain lacks a BAA, the entire workflow is non-compliant. Replace non-compliant middleware with a BAA-eligible alternative before processing PHI. One unsecured link breaks the entire chain [HIPAA 164.308(b)(1)].
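The chain-of-custody rule above can be sketched as a mechanical check over your vendor map. A minimal sketch, assuming a hand-maintained vendor list; the `Vendor` fields and the example chain (EHR, Zapier Standard, OpenAI API) are illustrative, not a definitive audit tool.

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    """One hop in the PHI data flow, as documented in the vendor management file."""
    name: str
    tier: str
    baa_signed: bool   # is an executed BAA on file for this vendor?
    phi_types: tuple   # PHI categories this hop processes, e.g. ("diagnosis", "MRN")

def workflow_is_compliant(chain):
    """A workflow is non-compliant if ANY vendor in the chain lacks a BAA."""
    gaps = [v.name for v in chain if not v.baa_signed]
    return (len(gaps) == 0, gaps)

# Illustrative chain: compliant endpoints, non-compliant middleware in between.
chain = [
    Vendor("EHR", "Enterprise", True, ("diagnosis", "MRN")),
    Vendor("Zapier", "Standard", False, ("diagnosis", "MRN")),
    Vendor("OpenAI API", "API + ZDR", True, ("diagnosis", "MRN")),
]
ok, gaps = workflow_is_compliant(chain)
# One unsecured link (Zapier Standard) breaks the entire chain: ok is False.
```

The point of keeping the map in structured form is that "one unsecured link breaks the chain" becomes a single boolean you can re-run after every vendor or plan change.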
The Five-Question Agency Audit
AI automation agencies rarely employ compliance professionals. They build functional workflows connecting APIs without verifying the legal requirements for handling PHI, and at up to $50,000 per violation, the liability from an unsanctioned deployment compounds quickly. Before signing any contract with an AI automation vendor, ask five questions. Document the answers in your vendor management file.
The Questions
1. “Show me the signed BAA for every vendor in the data flow.” Not a screenshot of a “compliance” page. The executed BAA document for OpenAI, the middleware platform, and any logging or storage service touching PHI. If the agency uses their own OpenAI account (not yours), the BAA must cover their account and name your organization.
2. “Where do you store prompt and response logs?” Agencies log API calls for debugging. If those logs contain patient data stored in Google Sheets, Notion, or an unencrypted database, each storage location needs a BAA. Logging PHI to a tool without a BAA creates a new unauthorized disclosure [HIPAA 164.530(j)].
3. “Is the API configured for Zero Data Retention?” OpenAI retains API data for 30 days by default for abuse monitoring. Zero Data Retention (ZDR) eliminates this retention window. Without ZDR, OpenAI stores your patient data on their servers for a month after each API call, creating a PHI exposure window the BAA alone does not close.
4. “Whose API key runs this workflow?” Your organization must own the API key. Patient data routed through an agency’s shared API key means the agency processes PHI on their infrastructure under their account. The BAA with OpenAI covers the key owner. If the agency owns the key, your organization has no direct contractual protection with OpenAI.
5. “What is your human-in-the-loop validation process?” AI models hallucinate. A chatbot providing incorrect medical information to patients creates clinical liability beyond HIPAA. The agency must document how human review validates AI outputs before they reach patients. Automation without clinical oversight is a malpractice vector independent of compliance.
Create a Vendor AI Automation Assessment form with these five questions. Require written answers from every agency before contract execution. If the agency refuses to answer or provides verbal assurances without documentation, terminate the evaluation. Document the assessment in your vendor management records alongside the BAA and data flow diagram. Verbal promises provide zero protection during an OCR investigation [HIPAA 164.308(b)(1)].
Configuring OpenAI for HIPAA Compliance
Purchasing ChatGPT Enterprise or API access does not automatically activate HIPAA compliance. Two steps are required before any PHI touches OpenAI's platform: executing the BAA and configuring Zero Data Retention.
Step 1: Execute the BAA
For Enterprise accounts, sign the BAA through the Enterprise admin portal. For API-only usage, email OpenAI’s privacy team (baa@openai.com) to request the BAA addendum. The BAA must be executed before any PHI touches the platform. Processing PHI during a “trial period” without a signed BAA creates liability for every transaction.
Step 2: Configure Zero Data Retention
Enable Zero Data Retention on every API endpoint processing PHI. Without ZDR, OpenAI retains prompt and completion data for 30 days. The retention creates a 30-day window where patient data sits on OpenAI servers. ZDR eliminates this window entirely. Verify ZDR configuration in the API settings dashboard and document the setting with a screenshot in your compliance records.
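ZDR itself is arranged with OpenAI at the account level, not toggled per request. As a defensive belt-and-suspenders measure, requests can also decline per-request storage. A minimal sketch, assuming the Chat Completions API's `store` flag (present in the API at the time of writing); the model name and message are placeholders, and this flag is a safeguard, not a substitute for the account-level ZDR agreement.

```python
def build_phi_request(messages, model="gpt-4o"):
    """Construct request kwargs for a PHI-bearing API call.

    store=False asks the API not to persist this completion for later
    retrieval. It does NOT replace account-level Zero Data Retention,
    which must be arranged with OpenAI and verified in the dashboard.
    """
    return {
        "model": model,       # placeholder model name
        "messages": messages,
        "store": False,       # never opt PHI-bearing completions into storage
    }

kwargs = build_phi_request([{"role": "user", "content": "..."}])
# These kwargs would be passed to client.chat.completions.create(**kwargs).
```

Centralizing request construction in one helper also gives you a single place to audit: if every PHI workflow calls `build_phi_request`, no call path can silently drop the safeguard.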
Create a ChatGPT HIPAA Configuration Checklist documenting: (1) BAA execution date and signatory, (2) API endpoints covered under the BAA, (3) ZDR activation date and screenshot, (4) list of users with API access and their roles, and (5) data flow diagram showing PHI path from source to OpenAI and back. Store this checklist alongside the executed BAA in your vendor management file. Review the configuration quarterly and after any API endpoint changes. An expired or misconfigured BAA provides no protection [HIPAA 164.308(b)(4)].
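The checklist above can be kept as structured data so the quarterly review is mechanical rather than ad hoc. A sketch under stated assumptions: the field names, the 92-day review window, and the example values are all illustrative, not a prescribed record format.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class HipaaConfigRecord:
    """One vendor-file record mirroring the five checklist items."""
    baa_executed: date
    signatory: str
    endpoints_covered: list
    zdr_enabled_on: date
    api_users: list
    last_reviewed: date

    def review_overdue(self, today, max_age_days=92):
        """Quarterly review: flag records older than roughly one quarter."""
        return (today - self.last_reviewed) > timedelta(days=max_age_days)

record = HipaaConfigRecord(
    baa_executed=date(2025, 1, 15),
    signatory="Compliance Officer",          # illustrative
    endpoints_covered=["/v1/chat/completions"],
    zdr_enabled_on=date(2025, 1, 20),
    api_users=["intake-bot"],                # illustrative
    last_reviewed=date(2025, 2, 1),
)
record.review_overdue(date(2025, 7, 1))  # True: more than a quarter old
```

Running the overdue check on a schedule turns "review quarterly" from a policy statement into an alert.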
De-identification: Not the Shortcut You Think
The tempting shortcut goes like this: remove the patient's name before pasting into ChatGPT, and the data is no longer PHI. This approach fails for two reasons.
The 18 Identifier Rule
HIPAA defines 18 categories of identifiers constituting PHI [HIPAA 164.514(b)(2)]. Names are one. The other 17 include dates (birth, admission, discharge), geographic data smaller than a state, phone numbers, email addresses, Social Security numbers, medical record numbers, health plan beneficiary numbers, account numbers, certificate/license numbers, device identifiers, web URLs, IP addresses, biometric identifiers, full-face photographs, and any other unique identifying number. Removing a patient’s name while leaving their date of birth, zip code, and diagnosis in the prompt still constitutes PHI transmission.
Safe Harbor de-identification under HIPAA requires removing all 18 identifier categories and having no actual knowledge the remaining information identifies the individual [HIPAA 164.514(b)(2)]. Manual de-identification performed by clinical staff before pasting into ChatGPT introduces human error at every step. One missed identifier per thousand prompts creates one violation per thousand prompts.
Do not rely on manual de-identification as your primary HIPAA control for AI tool usage. If your workflow requires sending clinical data to an AI model, use a BAA-covered platform (Enterprise or API with ZDR). If de-identification is unavoidable, implement automated de-identification software validated against all 18 HIPAA identifiers, not manual review by staff. Document the de-identification method, the software used, and the validation results. Manual de-identification fails at scale and introduces uncontrolled human error [HIPAA 164.514(b)(2)].
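To see why name removal alone fails, consider a toy scan over a prompt. This is an illustration, not a validated de-identification tool: the regex patterns below cover only a handful of the 18 identifier categories, and a real Safe Harbor implementation must cover all 18 with validated software.

```python
import re

# Illustrative patterns for a FEW of HIPAA's 18 identifier categories.
# A production tool must cover all 18 and be validated; this sketch
# exists only to show that a name-free prompt can still carry PHI.
PATTERNS = {
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "zip": re.compile(r"\b\d{5}(?:-\d{4})?\b"),
}

def flag_identifiers(prompt):
    """Return the identifier categories detected in a prompt, sorted."""
    return sorted(k for k, rx in PATTERNS.items() if rx.search(prompt))

# Name already removed, yet the prompt still transmits PHI.
prompt = "Patient, DOB 03/14/1961, zip 90210, reports chest pain."
flag_identifiers(prompt)  # detects the date and zip categories
```

Even this crude scan catches two identifiers a hurried staff member left behind, which is exactly the failure mode manual de-identification cannot control at scale.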
ChatGPT is not inherently HIPAA compliant or non-compliant. The compliance status depends entirely on the plan (Enterprise or API only), the BAA (executed before PHI processing), the configuration (Zero Data Retention enabled), and the middleware (every vendor in the chain covered by its own BAA). The most dangerous scenario in 2026: healthcare practices hiring AI automation agencies building patient-facing workflows on ChatGPT Team plans routed through standard Zapier accounts. Every component in the data flow chain touching PHI needs a BAA. One gap in the chain makes the entire workflow non-compliant.
Frequently Asked Questions
Is ChatGPT HIPAA compliant?
ChatGPT Free, Plus, and Team plans are not HIPAA compliant. OpenAI does not offer BAAs for these tiers. ChatGPT Enterprise and the OpenAI API support HIPAA compliance only with a signed BAA and Zero Data Retention configured. The plan alone does not create compliance: the BAA execution and ZDR configuration are required steps [HIPAA 164.502(e)].
Is Microsoft Copilot HIPAA compliant?
Microsoft Copilot is HIPAA compliant under Microsoft 365 Business and Enterprise commercial licenses with a signed Microsoft BAA. The consumer “Pro” and “Personal” versions of Copilot are not covered under BAAs and are not compliant for PHI processing. Verify your Microsoft license tier and BAA status before deploying Copilot in clinical workflows.
Is Claude (Anthropic) HIPAA compliant?
Anthropic offers HIPAA compliance for their commercial API and Enterprise products with a signed BAA. The free consumer tier of Claude does not support BAAs. Organizations evaluating Claude for healthcare workflows should review the full BAA requirements for Claude AI before deployment. The same rule applies: any AI platform touching PHI requires an executed BAA with the covered entity regardless of the vendor’s general security posture [HIPAA 164.502(e)].
Does removing patient names make ChatGPT safe for PHI?
Removing names alone does not de-identify data under HIPAA. The Safe Harbor method requires removing all 18 categories of identifiers: names, dates, geographic data below state level, phone numbers, email addresses, Social Security numbers, medical record numbers, and 11 additional categories [HIPAA 164.514(b)(2)]. A prompt containing a patient’s birth date, zip code, and diagnosis still constitutes PHI even without the name.
What is Zero Data Retention and why does it matter?
Zero Data Retention (ZDR) is an OpenAI API configuration eliminating the default 30-day data retention period. Without ZDR, OpenAI stores prompt and completion data for 30 days for abuse monitoring. With PHI in those prompts, the 30-day retention creates a window where patient data sits on OpenAI servers. ZDR closes this window. Enable ZDR on every API endpoint processing PHI and document the configuration.
What middleware is HIPAA compliant for ChatGPT integrations?
Microsoft Power Automate (with Microsoft 365 BAA), Keragon (healthcare-native, BAA for all users), and self-hosted n8n (data stays on your infrastructure) support HIPAA-compliant workflows. Zapier requires the Company tier for BAA eligibility. Standard Zapier and Make.com plans do not offer BAAs. Verify the middleware BAA independently from the AI platform BAA [HIPAA 164.308(b)(1)].
What happens if my AI automation vendor violates HIPAA?
The covered entity (your organization) bears primary responsibility for unauthorized PHI disclosures, even when a vendor causes the breach. Without a BAA, the vendor has no contractual obligation to report breaches, cooperate with investigations, or share liability. OCR penalties range from $100 to $50,000 per violation based on the level of negligence, with annual maximums of $2,067,813 per violation category [HIPAA 160.404]. A BAA shifts contractual responsibility to the vendor and establishes breach notification requirements.