Can Your Medical Practice Use AI Without Breaking HIPAA?
If you run a medical practice, you’ve probably noticed every EHR vendor, charting app, and scheduling tool is suddenly “powered by AI.” Your staff may already be using ChatGPT to draft patient communications. Your billing partner might be testing AI coding. And somewhere in the back of your mind, you’re wondering: is any of this actually allowed under HIPAA?
Short answer: it depends, and the rules haven’t changed just because AI showed up. This is the practical guide we give our medical clients across Central Florida when they ask about AI and HIPAA compliance.
What HHS and legal guidance actually say in 2026
There is no separate “HIPAA for AI” rule. Federal guidance from HHS and multiple legal commentaries in 2026 make this clear: existing HIPAA Privacy and Security Rules still govern any AI tool that touches protected health information (PHI). That covers patient names, dates of service, diagnoses, billing codes, dictated notes — anything that could identify a patient.
Three practical consequences for your practice:
- If an AI tool handles PHI, the vendor is a Business Associate. You need a signed BAA before a single patient record touches their system. Sharing PHI without one is itself a violation, even if the tool never leaks anything.
- Consumer AI tools are not HIPAA-compliant by default. The free version of ChatGPT. Free transcription apps. The AI features built into your personal email. None of these come with a BAA. Staff pasting patient information into them is the most common avoidable violation we see in 2026.
- You must still document a risk analysis. The 2026 HIPAA Security Rule updates, which take effect later this year, specifically call out AI-powered tools as something that belongs in your annual risk assessment. If you don’t already have a current assessment, our 2026 HIPAA compliance checklist is a good starting point.
Where small Central Florida practices actually get in trouble
Across our clients in Orlando, Kissimmee, Lakeland, and surrounding counties, the HIPAA AI exposure pattern is the same:
- A front-desk employee uses a free AI scheduling assistant that logs patient names.
- A physician dictates into a new voice-to-text app that stores audio in the cloud.
- A billing coordinator pastes an entire claim — with diagnosis codes — into a general-purpose chatbot to “ask a quick question.”
No one is acting in bad faith. They’re trying to get work done faster. But HHS doesn’t grade on intent. A breach notification can be triggered by something as small as a single unprotected patient record exposed through an unauthorized tool.
What safe AI use looks like for a small practice
You don’t have to ban AI. You just have to use tools that are built for healthcare:
- Choose AI features inside platforms that already have your BAA — your EHR, Microsoft 365 with a HIPAA-aligned configuration, or healthcare-specific AI vendors who will sign a BAA.
- Keep sensitive processing on infrastructure you control. Self-hosted AI models are now realistic for small practices. Patient data never leaves your network, which solves most of the compliance question upfront.
- Train your staff on what not to paste where. A 15-minute refresh this quarter is cheaper than a breach notification.
- Document every AI tool in your risk assessment — even the ones you think are benign. This is where most of our IT compliance engagements start.
5 questions to ask any AI vendor before you say yes
Before you sign up for a new AI tool — even a free trial — run it through these five questions. If the vendor hesitates on any of them, move on.
- Will you sign a Business Associate Agreement? If the answer isn’t an immediate “yes, here’s the link,” the tool is off the table for any PHI workflow. This is non-negotiable. The HHS sample BAA is a useful baseline to compare against.
- Where is the data processed and stored? You want specifics: which cloud region, whether it ever leaves the US, whether subcontractors (like OpenAI or Anthropic) have their own BAAs signed with the vendor. “In the cloud” is not an answer.
- Is my data used to train your models? For most HIPAA use cases the answer must be no, and the setting that turns training off should be the default. Ask for that in writing.
- What happens to my data if I cancel? A compliant vendor will delete PHI on request and provide a certificate of destruction. If the contract is silent on this, that’s a red flag.
- Can you provide a recent SOC 2 Type II report or HITRUST certification? You don’t need to read the 120-page PDF. You need to confirm one exists and was issued in the last 12 months.
The realistic HIPAA AI stack for a small Central Florida practice in 2026
Most of the practices we work with in Osceola, Polk, and Orange counties don’t need cutting-edge models. They need to stop burning time on charting, scheduling, and billing follow-up. Here’s what a compliant AI stack for a small practice typically looks like today:
- Inside your EHR: AI scribe and summary features from your existing vendor (Athena, eClinicalWorks, DrChrono, etc.) — already covered under your EHR BAA. Start here before adding anything new.
- Microsoft 365 with HIPAA configuration: Copilot inside a properly locked-down M365 tenant, with BAA in place, can handle internal drafting, summarization, and scheduling without patient data leaving your Microsoft tenant.
- Healthcare-specific AI vendors: Companies like Abridge, DeepScribe, and Suki are purpose-built for clinical documentation and will sign a BAA. Always confirm the current BAA language — don’t assume.
- Self-hosted open models for sensitive workflows: For billing letters, appeals, or narrative summaries, running a local model (Llama, Mistral, or similar) on hardware you control keeps PHI on your network entirely. This used to require a data scientist. It no longer does — our AI consulting engagements for medical practices increasingly start here. A minimal sketch of what this looks like follows this list.
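To make the self-hosted option concrete, here is a minimal sketch of drafting an appeal letter against a locally hosted model. It assumes an Ollama server running at its default port on hardware you control, with a model pulled as llama3; both names are illustrative assumptions, not product recommendations. The key property is the endpoint: localhost, not a third-party cloud.

```python
# Minimal sketch: draft a billing appeal letter with a locally hosted model.
# Assumes an Ollama server on this machine (default port 11434) and a model
# pulled as "llama3" -- both are assumptions; adjust to whatever you run.
# Because the request targets localhost, PHI never leaves your network.
import requests

def draft_appeal_letter(claim_summary: str) -> str:
    """Ask the local model for a first-draft insurance appeal letter."""
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",  # local model name (assumption)
            "prompt": (
                "Draft a professional insurance appeal letter "
                f"for this denied claim:\n\n{claim_summary}"
            ),
            "stream": False,  # return one complete response, not a stream
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    # Illustrative input only -- in practice this comes from your billing system.
    print(draft_appeal_letter(
        "Claim 12345: CPT 99214 denied as not medically necessary; "
        "visit documentation supports moderate-complexity E/M."
    ))
```

The same pattern covers narrative summaries and billing letters: swap the prompt, and the data still never leaves your network.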
Frequently asked questions about HIPAA and AI
Is ChatGPT HIPAA-compliant?
The free and personal versions are not. ChatGPT Enterprise and the OpenAI API can be HIPAA-compliant only if you’ve signed a BAA with OpenAI and configured the workspace correctly. No BAA = not compliant, regardless of plan name.
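For context, the code side of an API integration is unremarkable; the compliance work lives in the contract and the account configuration, not the code. Here is a minimal sketch, assuming a signed BAA with OpenAI and zero-data-retention configured on the account (both are prerequisites you must arrange, not defaults), with the model name as an illustrative assumption:

```python
# Minimal sketch of an OpenAI API call, assuming (1) a signed BAA with OpenAI
# and (2) zero-data-retention configured on the account. Without both, no code
# change makes this appropriate for PHI.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4o",  # model name is an assumption; use whatever your BAA covers
    messages=[
        {"role": "system", "content": "You draft patient-friendly visit summaries."},
        {"role": "user", "content": "Summarize: routine follow-up, BP controlled, refill issued."},
    ],
)
print(completion.choices[0].message.content)
```

Notice that nothing in the snippet is visibly “HIPAA mode.” If the BAA and retention settings aren’t in place, identical code is a violation.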
Does Microsoft Copilot require a separate BAA for HIPAA?
If you already have a Microsoft 365 BAA, Copilot is covered when used inside that tenant with compliant configuration. Don’t assume this — verify with your Microsoft partner that your BAA explicitly covers Copilot services and that tenant settings are locked down.
What’s the HIPAA fine for misusing an AI tool?
HHS OCR penalties in 2026 range from roughly $137 per violation for unknowing violations up to $68,928 per violation for willful neglect, capped at about $2.07M per category per year. Even a single unauthorized AI tool exposing one patient’s record can trigger breach notification costs well north of $50,000 when you add investigation, notification, and remediation.
Do my staff need special AI training for HIPAA compliance?
Yes. Your annual HIPAA training should now explicitly cover which AI tools are approved, what data can and can’t be pasted into them, and how to report an accidental exposure. A 15-minute update this quarter closes the most common exposure path we see.
Can AI be used for patient communication without violating HIPAA?
Yes, if the tool is BAA-covered and the patient has consented to electronic communication. Sending AI-drafted text messages through an unapproved tool is a violation even if the message itself contains no PHI — metadata and logs count.
What to do this week
Make a list of every AI-touching tool anyone on your staff is using, even casually. Confirm which of those vendors have a signed BAA with your practice. For the ones that don’t, either get one, replace the tool, or lock it down so it never touches PHI.
If that sounds like a heavier lift than you have time for, that’s exactly the kind of project we handle for medical practices across Central Florida. Reach out and we’ll walk your practice through a practical HIPAA AI review — no long pitch, just a clear plan.