
ZDR vs BAA: Which one actually matters for your firm?

Zero Data Retention and Business Associate Agreements solve different problems. Most firms need both; some need only one. A vertical-by-vertical guide for family offices, law firms, private equity, and venture capital.

Start the Discovery Questionnaire

Most firms need both, for different reasons; some need only one. Zero Data Retention is the contractual control that governs what an AI vendor does with your prompts after the inference call ends. A Business Associate Agreement is the HIPAA-required contract that allocates liability when Protected Health Information passes through a service provider. ZDR is about persistence and training; BAA is about HIPAA. Confusing them, or treating them as interchangeable, is the most common AI-vendor contracting mistake we see.

If your firm handles client medical bills, principal medical records, or any portfolio company in healthcare, you need a BAA on the AI tier touching that data. If your firm puts client material into any cloud LLM, you need a ZDR addendum on top of your standard data processing agreement. Healthcare-adjacent firms (a wealth manager serving physician families, a law firm with life-sciences clients, a private equity firm with a healthcare portco) need both. The rest of the page works through the cases.

What ZDR actually means (in your firm's contract)

A Zero Data Retention amendment, in the AI vendor context, is a contractual commitment that the vendor will not retain the content of your prompts and completions beyond the immediate inference call, will not log them for human review, and will not use them to train future models. It is an addendum to the master agreement and the data processing agreement, not the default posture for any major commercial AI vendor.

The default posture, even on enterprise tiers, is that the vendor retains prompts and completions for some period (often thirty days, sometimes ninety) to support abuse monitoring, safety review, and incident response. ZDR turns that off. It is the contractual layer that lets a regulated firm tell its general counsel, its insurance carrier, or an LP, "the vendor cannot read what we sent them, because the data is not there to read."

What ZDR does NOT do

ZDR is a useful control. It is also frequently misunderstood. The misunderstanding usually runs in two directions.

First, ZDR does not make a prompt forget itself in transit. The prompt still travels over the network to the vendor's inference servers, is decrypted at those servers for the model to read, and produces a completion. During the inference call, the data is in memory on the vendor's infrastructure. ZDR governs what happens after the call, not during it. If your data class cannot tolerate the data ever existing in plaintext on a third party's infrastructure (for some highest-sensitivity matters or trust documents, this is the rule), ZDR is not the control you are looking for. A private LLM deployment is.

Second, ZDR does not cover everything you might intuitively call "data." Most ZDR amendments specifically govern the content of prompts and completions. Metadata (your organization ID, your usage volume, the timestamps, the model selected, sometimes the API key fingerprint) is typically retained for billing, abuse, and operational reasons even under ZDR. That is fine for most firms. It is not fine for the small number of firms whose threat model includes a vendor's billing log being subpoenaed. Read the addendum.


What a BAA actually means

A Business Associate Agreement is a contract required by the HIPAA Privacy and Security Rules between a covered entity (a healthcare provider, a health plan, or a healthcare clearinghouse) and any service provider that creates, receives, maintains, or transmits Protected Health Information on the covered entity's behalf. The Health Information Technology for Economic and Clinical Health Act (HITECH) extended BAA obligations to subcontractors, which is why an AI vendor processing PHI is in scope.

The BAA does specific work that a generic data processing agreement does not. It binds the vendor (the Business Associate) to use and disclose PHI only as the BAA permits. It requires safeguards consistent with the HIPAA Security Rule. It requires breach notification within sixty days of discovery. And it allocates the regulatory and financial liability for a breach in a way that neither a DPA nor a confidentiality clause does.

If your firm is not a covered entity and does not handle PHI, a BAA is not a control you need. If your firm is a covered entity, or is a Business Associate of a covered entity (which catches more wealth managers, family offices serving medical-professional principals, and law firms representing healthcare clients than people realize), the BAA is not optional. The presence or absence of a BAA determines where the liability for a breach lands.

Who qualifies as a Business Associate

The HHS Office for Civil Rights has been clear that the question is functional, not titular. If a service provider, including an AI vendor, processes PHI on behalf of a covered entity, that vendor is a Business Associate even if no one has called them one. The exposure exists at the moment PHI is shared, regardless of whether a contract is in place.

Where this catches firms by surprise: a wealth advisor uploads a client's hospital invoice to ChatGPT to extract line items into a spreadsheet. The advisor is a Business Associate of the client's covered providers in that moment. The AI vendor is a Business Associate of the advisor. If neither relationship has a BAA, the entire chain is non-compliant the instant the upload completes.

The DPA layer underneath both

A Data Processing Agreement is the foundation. It is the GDPR-derived (and increasingly state-privacy-law-derived) contract that governs how a processor handles personal data on behalf of a controller. Every commercial AI vendor at scale has one. ZDR and BAA are both extensions on top of the DPA: they govern AI-specific or HIPAA-specific obligations that the standard DPA does not address.

Reading order for any AI vendor evaluation: the master agreement, then the DPA, then the ZDR addendum if applicable, then the BAA if applicable. Each layer says something the prior layer does not, and assumptions made at one layer do not survive to the next.

When you need each, by vertical

The four verticals we deploy secure AI for diverge sharply on which contractual controls actually apply.

Family office

A family office almost always needs ZDR on every cloud AI tool it uses. Trust documents, beneficiary correspondence, the principal's calendar, succession memoranda, prenups, and the principal's daily life are the data classes that pass through a family office's AI usage. None of that should be retained by a vendor, used for training, or available for human reviewer access.

A family office only needs a BAA if family medical data flows through the AI. The trigger is real: a controller summarizes the principal's medical bills for the family bookkeeper, an executive assistant uses an AI tool to schedule a medical procedure and shares discharge instructions in the prompt, the office processes long-term care documentation. In any of those cases, the AI tier handling that data needs to be on a BAA. Most family offices do not realize they are functionally a Business Associate of their principal's providers the moment they touch a medical bill.

Law firm

ZDR is table stakes for any law firm using a commercial AI tool against client material. Privilege concerns sit on top of the contractual posture: a vendor who can read your prompts is a vendor who could be subpoenaed, served, or breached, and a malpractice carrier or opposing counsel will eventually ask about the contractual posture. ABA Formal Opinion 512 makes the duty of competence on AI tooling an ethics matter, and a written ZDR is one of the cleanest artifacts for demonstrating that competence.

A BAA is only relevant when the matter touches healthcare. Life sciences clients, pharmaceutical clients, hospital clients, and any matter involving an individual's medical records will pull the firm into Business Associate territory. The conservative posture is to maintain a BAA-eligible AI tier as a separate environment for healthcare-touching matters, gated by matter classification, rather than to put the whole firm on a BAA tier and pay for capability the rest of the practice does not need.

Private equity and venture capital

ZDR is critical for PE and VC firms because deal-team prompts contain MNPI, target financial data, and the kind of competitive intelligence whose retention by a third party is a problem under the SEC's adopted rules and most LPA confidentiality language. The MNPI handling problem is separate from the contractual layer: even with ZDR, MNPI on a public-company target may require a private deployment, not just a contract.

A BAA only becomes relevant when a portfolio company is in healthcare. The portco brings the exposure. A mid-market PE fund with a single healthcare-services portco has a BAA problem the rest of the portfolio does not. The clean separation is to deploy the BAA-eligible tier at the portco level, not at the GP level, and to keep the GP's tooling on the standard ZDR tier the rest of the deal team uses.

Healthcare-adjacent firms

Insurance brokers, wealth managers serving physician families, family offices with medical-professional principals, and any advisory firm whose work product regularly summarizes medical material need both. ZDR is the floor; the BAA is the regulatory layer that makes the work lawful. We see this pattern most often at single-family offices serving healthcare-industry principals, where the office's AI usage moves between non-medical work (clearly ZDR-only) and medical work (BAA required) on a daily basis. The deployment architecture has to make that distinction enforceable.

What "ZDR" looks like in the real contracts

The major commercial AI vendors all have a path to ZDR or its functional equivalent, but each calls it something different and gates it differently. This is the current state of the market:

Anthropic Zero Data Retention amendment

Available on the Commercial tier as a contracted addendum. Covers Claude API workloads. Requires sales engagement; not self-serve.

OpenAI Enterprise data retention controls

ChatGPT Enterprise, Team, and Edu do not train on customer data by default. A separate Zero Data Retention configuration is available for the API on Enterprise contracts and has to be explicitly requested and configured per organization.

Microsoft Copilot enterprise data protection

Microsoft 365 Copilot inherits the Microsoft 365 service boundary, and Microsoft is contractually committed not to use customer prompts to train foundation models. There is no separate ZDR SKU in Microsoft's product naming; the protection is built into the enterprise tier.

Google Gemini for Workspace and Vertex AI

Workspace-tier Gemini is contractually scoped to the Workspace data boundary. Vertex AI's enterprise contracts include a no-training commitment by default. Customer-managed encryption keys and additional controls are available on enterprise contracts.

The point is not to choose one vendor over another. The point is that the words "ZDR" and "no training" mean different things on each platform, are gated differently, and require different configuration steps. The contract you signed three months ago for ChatGPT Enterprise probably does not include ZDR unless you specifically asked for it.

Common failures we see

Three failure modes account for most of the AI-vendor contractual problems that surface in our work.

The firm assumes ZDR is on by default

It is not. On every major vendor, the default tier (even the enterprise tier) retains some content for some window. ZDR is an addendum that has to be requested, signed, and configured on the account. Firms regularly tell us they "have ChatGPT Enterprise, so we are fine," and the contract on file does not include the ZDR amendment.

The firm signs a DPA but never asks about training use

The standard DPA, on most vendors, addresses how the vendor handles personal data under privacy law. It does not necessarily prohibit using your prompts to train future models. That prohibition lives in a separate place: sometimes in the master agreement, sometimes in a product-specific addendum, sometimes in the ZDR amendment. The default behavior on consumer and prosumer tiers is that prompts may be used for training unless you opt out. Firms read the DPA, see standard data-processing language, and assume the training question is covered. It usually is not.

A BAA-required workload is deployed on a non-BAA tier

This is the failure mode with the highest regulatory consequence. A wealth manager handling a physician-client's medical bill summary on a non-BAA AI tier. A law firm summarizing a hospital client's discharge logs on the standard enterprise tier rather than the BAA-eligible tier. A family office processing a principal's long-term care paperwork through a tool the office never registered for HIPAA. Each of these is a HIPAA violation at the moment of upload. The remediation is structural: the BAA-eligible tier has to exist, has to be the only tier authorized for medical data, and the policy has to make the routing enforceable.
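The "enforceable routing" requirement above can be sketched as a simple policy check that refuses to send a prompt to any tier whose signed contracts do not cover the data class. The tier names, data-class labels, and control flags below are illustrative assumptions, not any vendor's actual product names:

```python
# Minimal sketch of enforceable tier routing. Tier names, data classes,
# and control flags are illustrative assumptions, not vendor APIs.

# Which contractual controls each provisioned tier actually has on file.
TIERS = {
    "standard-zdr": {"zdr": True, "baa": False},
    "baa-eligible": {"zdr": True, "baa": True},
}

# Which signed controls each data class requires before a prompt may be sent.
REQUIRED = {
    "general": set(),
    "client-material": {"zdr"},
    "phi": {"zdr", "baa"},  # medical bills, discharge notes, LTC paperwork
}

def route(data_class: str) -> str:
    """Return the first tier whose signed controls cover the data class.

    Raises rather than silently downgrading to an uncovered tier.
    """
    needed = REQUIRED[data_class]
    for name, controls in TIERS.items():
        if all(controls.get(c, False) for c in needed):
            return name
    raise PermissionError(f"No provisioned tier covers {data_class!r}")

print(route("client-material"))  # -> standard-zdr
print(route("phi"))              # -> baa-eligible
```

The design point is the exception path: when no covered tier exists, the request fails loudly instead of falling through to the default tier, which is exactly the failure mode described above.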

What we audit when we're brought in

When a Trifident engagement gets to the architecture phase, the contractual layer is one of six categories we work through with a firm. Our Discovery Phase reads every active AI vendor agreement (the master agreement, the DPA, the ZDR addendum if present, the BAA if present) with the same eye we read an MSA in cyber due diligence. We map each data class your firm handles to the AI tools it can lawfully receive, and we tell you, in plain English, where the contracts you have signed do not cover the data you are sending. The output is a written deployment plan that closes the gaps before they become a regulatory or insurance problem, not after.
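The gap-mapping step described above, reduced to its logic, is a cross-check between what actually flows to each vendor and what is actually signed with that vendor. The vendor names, data classes, and contract labels here are hypothetical placeholders:

```python
# Sketch of the contract-gap mapping: observed data flows per vendor versus
# the amendments on file. All names are illustrative assumptions.

CONTRACTS_ON_FILE = {
    "vendor-a": {"dpa"},                 # DPA signed, no ZDR amendment
    "vendor-b": {"dpa", "zdr"},
    "vendor-c": {"dpa", "zdr", "baa"},
}

DATA_FLOWS = {                           # data classes observed going to each vendor
    "vendor-a": {"client-material"},
    "vendor-b": {"client-material", "phi"},
    "vendor-c": {"phi"},
}

REQUIRES = {
    "client-material": {"dpa", "zdr"},
    "phi": {"dpa", "zdr", "baa"},
}

def gaps():
    """List (vendor, data_class, missing_controls) for every uncovered flow."""
    out = []
    for vendor, classes in DATA_FLOWS.items():
        for dc in sorted(classes):
            missing = REQUIRES[dc] - CONTRACTS_ON_FILE[vendor]
            if missing:
                out.append((vendor, dc, sorted(missing)))
    return out

for row in gaps():
    print(row)
# ('vendor-a', 'client-material', ['zdr'])
# ('vendor-b', 'phi', ['baa'])
```

Each row of the output is one plain-English finding: this vendor receives this data class without this signed control.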

Answers to the questions general counsel ask first

Does ChatGPT Enterprise have ZDR by default?
No. ChatGPT Enterprise, Team, and Edu do not train on customer data by default and have a stronger retention posture than the consumer tiers, but Zero Data Retention is a separate configuration that has to be requested and provisioned on the organization's account. Enterprise without the ZDR amendment retains content for an abuse-monitoring window. If you are unsure whether your account has ZDR, the answer is in the order form, not in the product UI.
Do I need a BAA if I'm a family office and not a healthcare provider?
Only if family medical data flows through the AI tool. The trigger is functional: if any prompt the office sends contains the principal's medical records, hospital bills, prescription information, insurance correspondence, or long-term care documentation, the AI vendor processing that prompt is acting as a Business Associate. The conservative approach for most family offices is to maintain one BAA-eligible AI tier specifically for medical-adjacent work and to keep the rest of the office's AI usage on the standard ZDR tier.
What's the difference between a DPA and a ZDR?
A Data Processing Agreement is the foundational privacy-law contract between you and the vendor; it governs how personal data is handled under GDPR and similar regimes. A ZDR amendment is an AI-specific addendum that constrains what the vendor does with prompts and completions after the inference call: no retention, no training, no human review. The DPA is the floor; ZDR is an extension. A vendor can have a strong DPA and still retain your prompts for thirty days unless ZDR is signed.
Will my Microsoft 365 E5 license cover Copilot ZDR?
Not exactly. Microsoft does not sell a product called "Copilot ZDR." The Microsoft 365 Copilot service is contractually committed not to use customer prompts and responses to train the underlying foundation models, and Copilot inherits the data boundary of your Microsoft 365 tenant. The protection is built into the enterprise product, not added as a separate amendment. The questions to confirm: which Copilot product (consumer Copilot is governed differently), which DPA you have signed, and your tenant's audit configuration.
If we don't have ZDR, can the AI vendor train on our prompts?
On consumer and prosumer tiers, often yes, unless you have explicitly opted out. On enterprise tiers, the no-training commitment is increasingly the default, but retention for abuse monitoring is still in place without ZDR. Without an explicit ZDR amendment, your prompts may live in vendor logs for thirty to ninety days, accessible to vendor staff under the vendor's internal review processes. Whether that is a problem depends on your data classes; for client material, privileged matter, or MNPI, it almost always is.
How is ZDR different from a no-logging policy?
A no-logging policy is a vendor's stated practice; a Zero Data Retention amendment is a contractual obligation. The distinction matters when the question is asked by a regulator, an LP, or an insurance carrier. A stated practice can change. A signed amendment is a control with legal force. We always recommend the contractual version, even when the stated practice is identical, because the artifact is what survives an audit.
We use multiple AI vendors. Do we need this for each one?
Yes, for each vendor that processes data your firm classifies as sensitive. ZDR and BAA are vendor-specific contracts; one vendor's amendment does not cover another's. Most firms with serious AI usage end up with two or three vendor relationships at a time (a primary chat tool, a Microsoft Copilot deployment, sometimes a specialized vendor for code or research). The contractual layer has to cover each tool that touches the data class in question.

Two ways to start. They lead to the same place.

If you want a clearer picture of where your firm actually stands on AI contracts, deployment, and the data classes flowing through them, take the Discovery Questionnaire. If you want to talk through a specific situation first, schedule a confidential briefing with a founding partner.

If you want a clearer picture first

Take the Discovery Questionnaire. It takes 60 to 90 minutes, walks your team through the same diagnostic we use on every engagement, and leaves you with a more honest internal answer than most firms have today, whether you hire us or not.

Start the Discovery Questionnaire

If you want to talk through a specific situation

Schedule a confidential briefing with a founding partner. Thirty minutes. We will tell you within those thirty minutes whether we are the right firm for what you are trying to do. If we are not, we will say so.

You can also reach us directly at [email protected]. Founding partners read the inbox.
