Can my firm use ChatGPT with confidential client documents?
It depends on which ChatGPT, configured how. The free, Plus, and
Pro tiers retain prompts for up to thirty days and may train
future models on your inputs unless you opt out. ChatGPT
Enterprise, Team, and Edu do not train on your data and offer SOC
2 Type 2 attestation along with a Zero Data Retention (ZDR) option
for API workloads. Whether those tiers are appropriate for
privileged matter, MNPI, or trust documents is a separate question
of vendor risk, contractual posture, and your firm's data
classification, which is what we work through in discovery.
Is Microsoft Copilot safe for a 60-attorney firm?
Microsoft 365 Copilot inherits the data boundary of your Microsoft
365 tenant, which is a strong starting point. Whether it is safe
in your firm depends on tenant configuration (E5 versus E3,
Microsoft Information Protection labels, Conditional Access, audit
retention), license tier, the terms of your DPA, and what you let
Copilot index. With the right configuration it can be appropriate
for matter work; with the default configuration it can leak across
information barriers. We help firms reach the right configuration
before deployment, not after a near-miss.
What's the difference between a ZDR and a BAA?
A Zero Data Retention agreement (ZDR), in the AI context, is a
contractual commitment from the vendor that your prompts and
completions are not retained beyond the immediate inference call
and are not used to train future models. A Business Associate
Agreement (BAA) is a HIPAA-required contract between a covered
entity and a service provider that handles Protected Health
Information. They serve different purposes: ZDR governs
persistence and training; BAA governs HIPAA liability allocation.
Many AI vendors offer one and not the other, and which you need
depends on the data classes your firm actually handles.
Can my private equity firm use AI on data-room MNPI?
The honest answer: only with a deployment architecture designed
for it. Public consumer LLMs are not appropriate for MNPI under
any normal configuration. An enterprise-tier API with ZDR and a
DPA that prohibits training and persistence can be appropriate for
some MNPI workflows. A private LLM deployment, where the model and
prompts both stay on infrastructure you control, is appropriate
for the most sensitive deal-stage MNPI on a public-company target.
The harder question is auditability: can you prove to the SEC and
your LPs that the MNPI did not leak? That audit posture is what we
build.
Does Microsoft 365 Copilot use my data to train models?
Microsoft commits contractually that Microsoft 365 Copilot
prompts and responses are not used to train the underlying
foundation models, that customer data stays inside the Microsoft
365 service boundary, and that grounding content is fetched at
query time. The protection depends on which Copilot product (the
consumer version is different), which license tier, and which DPA
you have signed. For regulated firms, we pair Copilot with Purview
labels and Conditional Access rather than rely on the contract
alone.
What is ABA Formal Opinion 512 and how does it affect our AI use?
Formal Opinion 512, issued by the ABA Standing Committee on Ethics
and Professional Responsibility in July 2024, addresses generative
AI under the Model Rules. In short: lawyers owe a duty of
competence that extends to the AI tools they use (Rule 1.1), must protect
client confidentiality including in prompt content (Rule 1.6),
must communicate with clients about AI use that materially affects
representation (Rule 1.4), and must supervise AI tools as they
would supervise non-lawyer staff (Rule 5.3). Most firms now need a
written AI policy, a client-disclosure approach, and a vetted
approved-tools list to credibly comply.
Should LP DDQs include AI governance questions?
Increasingly, yes. ILPA's diligence template and many large LPs
have added AI governance questions to standard DDQs and side-letter
requests through 2025. Common asks include a written AI policy, an
approved-tools list, an acceptable-use program, vendor risk
process for AI vendors, audit log retention, and oversight at the
portfolio company level. A well-prepared GP has the artifacts
ready before the question is asked.
How do we prevent associates and analysts from leaking client data
into ChatGPT?
Three layers, in order of effectiveness. First, identity and DLP:
Microsoft Defender for Cloud Apps, Purview Endpoint DLP, or a
dedicated AI gateway can block uploads to unsanctioned LLMs at the
network or endpoint layer. Second, an approved alternative: when
you give the team a sanctioned tool that genuinely works for their
job, unauthorized usage drops sharply. Third, policy and training.
Tools without policy, or policy without tools, both fail.
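To make the first layer concrete, here is a minimal Python sketch of the decision an AI gateway applies at egress. The domain list, data classes, and verdicts are hypothetical illustrations, not our production policy; real gateways enforce this at the network or endpoint layer rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical policy: sanctioned AI endpoints and the data classes
# each is cleared for. A real deployment pulls this from the firm's
# approved-tools list, not a hard-coded dict.
SANCTIONED = {
    "api.openai.com": {"public", "internal"},           # e.g. Enterprise API with ZDR
    "copilot.microsoft.com": {"public", "internal", "confidential"},
}

def gateway_verdict(url: str, data_class: str) -> str:
    """Return 'allow' or 'block' for an outbound AI request."""
    host = urlparse(url).hostname or ""
    cleared = SANCTIONED.get(host)
    if cleared is None:
        return "block"   # unsanctioned LLM endpoint
    if data_class not in cleared:
        return "block"   # sanctioned tool, wrong data class
    return "allow"

# A consumer ChatGPT upload of privileged material is blocked;
# the sanctioned tool on cleared data is allowed.
print(gateway_verdict("https://chat.openai.com/backend-api", "privileged"))  # block
print(gateway_verdict("https://copilot.microsoft.com/c/chat", "internal"))   # allow
```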
What is a "private LLM deployment" and when do we need one?
A private LLM deployment is an instance of a language model that
runs in infrastructure you control, with no shared inference and
no data leaving your environment. Practically this looks like
running an open-weight model (Llama, Mistral, Qwen) on a server in
your office, your data center, or a single-tenant cloud, sometimes
inside a Confidential Computing Trusted Execution Environment
(TEE) for an additional hardware-level boundary. You need one when
your data has properties no third-party SaaS posture can satisfy:
trust documents at the highest sensitivity, attorney-client
privilege on bet-the-firm matters, deal-stage MNPI on
public-company targets, or contractual prohibitions on third-party
processors. Most firms do not need a private deployment for
everything, only for one or two specific data classes.
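As a concrete illustration, here is a minimal Python sketch of what "prompts stay on your infrastructure" means in practice, assuming an open-weight model is already served locally behind an OpenAI-compatible endpoint (vLLM and Ollama both expose one). The port, model name, and prompt are illustrative.

```python
import json
import urllib.request

# Hypothetical local endpoint: vLLM's default OpenAI-compatible
# server. Nothing in this call leaves hardware you control.
ENDPOINT = "http://localhost:8000/v1/chat/completions"
MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative open-weight model

def private_completion(prompt: str) -> str:
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        ENDPOINT, data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# The prompt, the weights, and the completion all stay inside your
# environment; there is no third-party processor to contract around.
print(private_completion("Summarize the key terms of this trust instrument."))
```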
Should we run AI on-prem or in Azure?
Both can be appropriate. On-prem (your data center, your office, a
single-tenant rack) gives the strongest answer to "where did this
prompt go?" and is the only option for some contractual postures.
Azure, AWS, or Google Cloud with a private deployment,
customer-managed encryption keys (BYOK), and a tenant
configuration that pins data residency can come close to that
posture with less operational burden. The choice depends on your
existing infrastructure, your team's operational maturity, the
data classes you intend to run, and your auditor's view.
How do you audit what an LLM saw?
Every AI deployment we build produces an immutable audit log:
which user (by identity, not by IP), which model, against which
data class, on which date, from which device, under which policy
version. For Microsoft 365 Copilot, that log lives in the Purview
audit pipeline. For OpenAI Enterprise and Anthropic Commercial, it
lives in the vendor's audit endpoint, and we configure ingestion
into your SIEM. For private deployments, we instrument the
inference layer directly.
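To show the shape of such a record, here is a minimal Python sketch of a hash-chained, append-only audit entry. The field names and values are illustrative; in production the log lives in Purview, the vendor's audit endpoint, or your SIEM as described above.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_record(log: list, *, user: str, model: str,
                        data_class: str, device: str,
                        policy_version: str) -> dict:
    """Append one tamper-evident record: each entry hashes its predecessor."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "user": user,                    # identity, not IP
        "model": model,
        "data_class": data_class,
        "device": device,
        "policy_version": policy_version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

log = []
append_audit_record(log, user="a.smith@firm.example", model="copilot-m365",
                    data_class="matter-confidential", device="FIRM-LT-042",
                    policy_version="2025.1")
# Altering any earlier record breaks every later prev_hash link,
# which is what makes the log immutable in the audit sense.
```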
How long does a Trifident secure-AI engagement take?
Discovery, including the questionnaire and a working session with
a partner, takes two to three weeks. The architecture phase, which
produces the written deployment plan, takes four to six weeks.
Implementation timing scales with the size of your environment;
firms of ten to fifty people typically reach a deployed and
validated state in six to ten weeks. We then move to a quarterly
operating cadence.
Do we have to abandon the AI tools our team is already using?
Almost never. The first step in discovery is inventorying what is
already in use. The goal is to bring productive shadow-AI usage
into a sanctioned posture, not to ban it. Sometimes the answer is
that your team is already using the right tool, configured wrong.
Sometimes it is that a tool is fine for low-sensitivity work but
not for matter files, MNPI, or principal-grade material. We come out
of discovery with a tool-by-tool, data-class-by-data-class verdict
that reflects how your firm actually works.
How is this different from hiring a Big 4 firm to advise on AI?
Three differences. We are cybersecurity advisors first, not
management consultants who recently added an AI practice. The four
verticals on this page are our practice, not a horizontal we are
testing. And the deliverable is a working deployment plan executed
alongside your IT team, not a slide deck. The person who scopes
your engagement is the person who does the work.
Is the initial briefing actually confidential?
Yes. We do not publish client lists, do not name engagements in
marketing, and do not require an NDA before a first call; several
clients go further and engage us through their counsel for
additional privilege protection. Anything you share in the briefing
is held to the same confidentiality standard as a paid engagement.
JMAP, S/MIME, Signal-grade voice, or a courier are available if
your situation calls for them.