Using AI internally is a productivity question. Using AI in work you hand to clients is a legal and regulatory one. The bar is meaningfully higher, the rules vary by jurisdiction and industry, and the gap between what businesses do and what they're required to disclose is widening. If your firm is delivering work product that touches AI in any way — drafts, analysis, communications, decisions — this is worth taking seriously before it becomes a problem.
The disclosure question is not optional anymore
Whether you have to tell a client that AI touched their work depends on jurisdiction, industry, and the type of work. In some regulated fields the answer is clearly yes. In others it's evolving fast. The EU AI Act has set the most aggressive baseline globally: high-risk uses require documentation, risk assessment, and transparency to affected parties. Colorado's AI Act and similar state-level rules in the U.S. are following the same playbook, with effective dates rolling in over the next eighteen months. The practical advice: assume disclosure will be required wherever your clients are, and design your engagement letters and workflows accordingly. It's much easier to disclose proactively than to retrofit disclosure after the fact.
Sector-specific constraints are tighter than the general rules
If you're in legal, healthcare, financial services, or any regulated profession, your professional bodies have already issued guidance. Legal ethics opinions in most U.S. states now address AI use — privilege, citation accuracy, supervisory duty, and client confidentiality are all in play. Healthcare has HIPAA constraints on what models you can send patient information to. Financial advisors have fiduciary and audit-trail expectations that don't go away because the work was AI-assisted. The general AI laws are a floor, not a ceiling. Your sector rules will usually be stricter.
Three things you need on paper
If you deliver work to clients and AI touches it anywhere in the process, three things should exist in writing somewhere in your business. First, an internal AI use policy that defines what tools are approved, what data can go into them, and who reviews the output before it goes out. Second, language in your engagement letters or terms of service that addresses AI use — even a short paragraph is dramatically better than silence. Third, a basic audit trail showing what was AI-generated, what was human-reviewed, and when. None of this needs to be elaborate. It needs to exist.
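To make the third item concrete, here is a minimal sketch of what an audit-trail entry could look like, assuming a simple append-only CSV kept per engagement. The field names and tooling are illustrative assumptions, not anything a regulation prescribes; a shared spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import csv

@dataclass
class AIUseRecord:
    """One row per client deliverable that AI touched. Fields are illustrative."""
    deliverable: str       # what went to the client, e.g. "Q3 market analysis memo"
    client: str            # engagement or client identifier
    tool: str              # which approved tool was used
    ai_contribution: str   # what the model produced ("first draft", "summary", ...)
    reviewer: str          # the human who reviewed the output before delivery
    reviewed_at: str       # ISO 8601 timestamp of the reviewer's sign-off

def log_ai_use(record: AIUseRecord, path: str = "ai_audit_log.csv") -> None:
    """Append one record to a CSV log, writing the header row on first use."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(record)))
        if f.tell() == 0:  # empty file: emit the header first
            writer.writeheader()
        writer.writerow(asdict(record))

# Example entry, recorded at the moment of human sign-off.
log_ai_use(AIUseRecord(
    deliverable="Q3 market analysis memo",
    client="Acme Co.",
    tool="approved drafting assistant",
    ai_contribution="first draft of sections 2 and 3",
    reviewer="J. Smith",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
))
```

The format matters far less than the habit: every deliverable gets a row answering what was generated, which tool produced it, and who signed off before it went out.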
The honest caveat
Rules are still moving. What's required today may be tightened or expanded in the next twelve months, and what's adequate documentation now may not pass review in two years. This isn't a one-time fix — it's an ongoing operational discipline. The firms that handle it best treat it like any other compliance topic: assigned owner, quarterly review, updates pushed through the team. The firms that handle it worst treat it as something to deal with when a client complains. Don't be the second kind.
One takeaway
If you can't answer the question "how does your firm use AI on client work?" in writing, fix that this quarter. Whether the answer is "we don't" or "we do, here's the policy," the answer needs to exist and be defensible. That's the floor.