The Amendments in Plain Terms
The ABA's Commission on Ethics and Professional Responsibility released draft amendments this month that would, for the first time, explicitly address AI in the Model Rules of Professional Conduct.
Rule 1.1 (Competence): The proposed amendment adds commentary clarifying that competence in AI-assisted legal work includes the ability to understand the capabilities and limitations of AI tools used, verify AI-generated outputs, and identify when AI use creates ethical risks.
Rule 1.6 (Confidentiality): The proposed amendment explicitly categorizes the use of AI tools that process client information as a potential confidentiality disclosure requiring client consent or a finding that the use is "necessary to carry out the representation."
Key Takeaways:
- AI competency will become an explicit professional responsibility obligation.
- Unauthorized AI processing of client data may constitute a confidentiality breach.
- State bars will adopt or adapt the Model Rules; implementation will vary by state.
What Changes in Practice
The competency amendment is largely a codification of what the best-run practices already do. But it transforms what was good practice into a disciplinary minimum. Lawyers who use AI tools without an adequate understanding of their limitations will not merely be negligent; they will potentially be in violation of their professional responsibility obligations.
The confidentiality amendment is more disruptive. Many law firms have deployed AI tools without explicit client consent, operating under a general implied authorization theory — AI is just another tool, like word processing software, that clients implicitly authorize when they engage counsel.
The amendment, if adopted, undermines that theory. It signals that AI processing of client information belongs in a different category from passive software tools, closer to engaging a third-party service provider.
The State Bar Response
The Model Rules are not binding law — they require adoption by state bars. As of this writing, fourteen state bars have active AI ethics working groups, and five (California, New York, Texas, Florida, and Illinois) have released preliminary guidance that previews how their adaptations of the Model Rules amendments are likely to land.
California's preliminary guidance is the most aggressive: it proposes an affirmative duty to disclose AI use in any matter, not just where confidential information is processed. If finalized, this would be the most far-reaching AI disclosure obligation for lawyers in any US jurisdiction.
New York and Illinois are taking a more measured approach, focusing on competency obligations and limiting the confidentiality amendment to situations where the AI vendor has access to identifiable client data.
How to Get Ahead of This
The amendments, in some form, are coming. The question is when and in which state. The practical steps firms and corporate legal departments should be taking now:
Document your AI tool inventory with data flow maps showing what client information each tool processes. This is already a compliance requirement under GDPR-family regimes, and it will be the foundation of any client consent or disclosure process.
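The inventory and data flow map can start as something very simple: a structured record per tool, plus a rule that flags which tools plausibly require client consent. The sketch below is purely illustrative; the schema fields, tool names, and the consent heuristic are assumptions for this example, not a statement of what the proposed Rule 1.6 amendment requires.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in a firm's AI tool inventory (illustrative schema)."""
    tool_name: str
    vendor: str
    client_data_categories: list[str]  # e.g. ["privileged communications", "PII"]
    data_leaves_firm: bool             # does client data reach the vendor's systems?
    vendor_trains_on_data: bool        # is client data used for model training?
    consent_basis: str                 # e.g. "engagement letter clause", "none yet"

def needs_client_consent(record: AIToolRecord) -> bool:
    """Flag tools whose data flows plausibly trigger a confidentiality analysis.
    This heuristic is an assumption for illustration, not legal advice."""
    return bool(record.client_data_categories) and record.data_leaves_firm

# Hypothetical inventory entries for illustration.
inventory = [
    AIToolRecord("DraftAssist", "ExampleVendor", ["privileged communications"],
                 data_leaves_firm=True, vendor_trains_on_data=False,
                 consent_basis="engagement letter clause"),
    AIToolRecord("LocalSummarizer", "in-house", [],
                 data_leaves_firm=False, vendor_trains_on_data=False,
                 consent_basis="n/a"),
]

flagged = [r.tool_name for r in inventory if needs_client_consent(r)]
print(flagged)  # → ['DraftAssist']
```

Even a spreadsheet with these columns would serve; the point is that the record of what data each tool touches exists before a regulator or client asks for it.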
Update your engagement letters. Adding a clause authorizing AI-assisted work is a short-term fix that preserves optionality while the rules firm up. Several BigLaw firms added AI clauses to their standard engagement letters in 2024.
Train your lawyers. When competency becomes a professional responsibility standard, training becomes compliance, not best practice.
