Artificial intelligence tools like ChatGPT, Claude, and Gemini have become indispensable productivity boosters in healthcare settings. From drafting referral letters to summarizing clinical notes, staff across medical, dental, and therapy practices are using these tools daily. But every time an employee pastes patient information into an AI chatbot, your practice may be violating HIPAA.
This guide breaks down what you need to know about HIPAA compliance when your team uses AI tools, and provides practical steps to protect patient data without killing productivity.
Why AI Tools Create HIPAA Risk
HIPAA's Privacy Rule protects Protected Health Information (PHI), which includes any individually identifiable health data such as patient names, dates of birth, Social Security numbers, medical record numbers, and clinical details. When an employee copies patient information into ChatGPT or a similar AI tool, that data is transmitted to a third-party server that your practice does not control.
This creates several specific HIPAA violations:
- Unauthorized disclosure: Sharing PHI with a third party (the AI provider) without a Business Associate Agreement (BAA) in place
- Lack of safeguards: Most AI tools do not provide the encryption, access controls, and audit logs that HIPAA requires for PHI
- Training data risk: Some AI providers may use input data to train their models, meaning patient information could be retained and potentially exposed to other users
- No breach notification capability: If patient data leaks through an AI tool, you may never know, making it impossible to fulfill breach notification obligations
A 2025 study by Metomic Research found that 15% of employees have shared sensitive data with AI tools at work. In healthcare settings, that number rises to 22% when staff do not have clear AI usage policies in place.
What HIPAA Actually Requires
The HIPAA Security Rule does not specifically mention AI tools (the regulation predates modern AI). However, its requirements are technology-neutral: they apply to AI tools just as they do to any other system that touches PHI:
Administrative Safeguards
- Risk assessment that includes AI tool usage
- Written policies governing how staff may (or may not) use AI tools with patient data
- Workforce training on acceptable AI usage
- Sanctions for policy violations
Technical Safeguards
- Access controls that prevent unauthorized sharing of PHI
- Audit controls that log when and how PHI is accessed or transmitted (see the sketch after this list)
- Transmission security for any data sent over the internet
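To make the audit-control requirement concrete, here is a minimal sketch of how outbound AI activity could be recorded. It is an illustration only: the `audit.log` path, the event fields, and the `log_ai_event` helper are all hypothetical, and a production log would need tamper-evident storage and a documented retention period.

```python
import json
import getpass
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical append-only log file; a real deployment needs
# tamper-evident storage and a documented retention period.
AUDIT_LOG = Path("audit.log")

def log_ai_event(destination: str, action: str, detail: str = "") -> None:
    """Record one outbound-AI event as a JSON line.

    destination: the AI platform involved (e.g. "chat.openai.com")
    action:      what happened ("allowed", "warned", "blocked")
    detail:      the *category* of data matched, never the PHI itself
    """
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": getpass.getuser(),
        "destination": destination,
        "action": action,
        "detail": detail,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

# Example: record that a paste containing an SSN-like pattern was blocked.
log_ai_event("chat.openai.com", "blocked", "ssn_pattern")
```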
Business Associate Agreements
If your practice intends to use an AI tool for any purpose that involves PHI, you need a signed BAA with the AI provider. As of early 2026, most consumer AI tools (including ChatGPT's free tier, Claude's free tier, and Gemini) do not offer BAAs. Some enterprise tiers do. This is critical to verify before allowing any PHI-adjacent AI usage.
Practical Steps for Your Practice
1. Create a Written AI Usage Policy
Every healthcare practice needs a clear, written policy that specifies what employees can and cannot do with AI tools. This policy should list approved AI tools, prohibited data types (PHI, insurance information, contact details), and the consequences of violations. Download our free AI policy template to get started.
2. Train Your Staff
A policy is only effective if your team understands it. Conduct training sessions that use real examples relevant to your practice. Show staff what a HIPAA violation looks like in the context of AI, such as pasting a patient's chart notes into ChatGPT to generate a summary letter. Make the training practical and role-specific.
3. Implement Technical Controls
Policies and training reduce risk, but they rely on human behavior. Technical controls provide an automated safety net. Browser-based monitoring tools like Marshall can detect when employees attempt to paste PHI into AI platforms and either warn them or block the submission in real time. This is the most reliable way to prevent accidental data leaks.
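Marshall's detection is its own implementation; the sketch below only illustrates the general technique of pattern-matching text before it reaches an AI platform. The patterns shown (SSN, MRN-style identifiers, dates of birth) are deliberately simplistic examples, and real-world detection needs context, patient-name dictionaries, and tuning to keep false positives manageable.

```python
import re

# Simplistic example patterns; production detection needs far more
# context (patient-name dictionaries, clinical vocabulary, tuning).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "date_of_birth": re.compile(r"\b(?:DOB|date of birth)[:\s]+\S+", re.IGNORECASE),
    "mrn": re.compile(r"\bMRN[:\s#]*\d{6,10}\b", re.IGNORECASE),
}

def detect_phi(text: str) -> list[str]:
    """Return the categories of PHI-like patterns found in the text."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

paste = "Summarize this chart: John Doe, DOB: 04/12/1961, MRN 84321907."
matches = detect_phi(paste)
if matches:
    # In a real control this would warn the user or block the submission.
    print(f"Potential PHI detected ({', '.join(matches)}); blocking paste.")
```

Pattern matching alone will miss free-text clinical details, which is why it works best layered on top of the policies and training described above rather than as a replacement for them.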
4. Audit and Monitor
HIPAA's audit-control standard requires mechanisms that record and examine activity in systems handling PHI, which means you need visibility into how AI tools are being used across your practice. Track which AI platforms employees visit, how often potential violations are detected, and whether your policies are being followed. Monthly compliance reports help demonstrate due diligence during audits.
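As an illustration of what that tracking can look like, the sketch below aggregates the hypothetical JSON-lines audit log from the earlier example into per-platform counts for one month. The format is an assumption, not a prescribed report.

```python
import json
from collections import Counter
from pathlib import Path

def monthly_summary(log_path: Path, month: str) -> dict:
    """Count audit events per (destination, action) pair for one month.

    month: "YYYY-MM", matched against each event's ISO timestamp.
    """
    counts: Counter = Counter()
    with log_path.open(encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)
            if event["timestamp"].startswith(month):
                counts[(event["destination"], event["action"])] += 1
    return dict(counts)

# Example: how often was each platform warned or blocked in June 2026?
for (destination, action), n in monthly_summary(Path("audit.log"), "2026-06").items():
    print(f"{destination}: {action} x{n}")
```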
5. Review AI Provider Terms
Before approving any AI tool for use in your practice, review the provider's terms of service and data handling practices. Key questions: Does the provider offer a BAA? Is data encrypted in transit and at rest? Will the provider use your inputs to train their models? Where is data stored, and for how long?
Common Mistakes to Avoid
- Banning AI entirely: This approach sounds safe but usually fails. Staff will use AI tools anyway, just without any safeguards. A controlled approach with monitoring is far more effective than prohibition.
- Relying on "honor system" policies: Verbal policies without monitoring are not sufficient for HIPAA compliance. You need documentation and enforcement.
- Ignoring the problem: If you have not addressed AI usage in your practice, you are likely already at risk. The question is not whether employees are using AI, but whether they are doing so safely.
The Bottom Line
AI tools are not going away, and your staff should not have to choose between productivity and compliance. The key is building a framework with clear policies, meaningful training, and automated technical controls that catch mistakes before they become breaches.
HIPAA penalties for unauthorized PHI disclosure range from $100 to $50,000 per violation, with an annual maximum of $1.5 million per violation category (both figures are adjusted annually for inflation). Beyond fines, breaches damage patient trust and your practice's reputation. Investing in AI data protection costs a fraction of a single penalty and provides peace of mind.
Protect Patient Data from AI Leaks
Marshall detects and blocks PHI before it reaches AI platforms. Set up in 10 minutes, no IT team required.
Start Free 14-Day Trial