Ethical Use of Chatbot AI for Attorneys

Practical Guidance Under the ABA Model Rules

Chatbot-style generative AI (GAI) can speed up drafting, summarize large records, and improve internal knowledge retrieval. It can also fabricate case citations, leak confidential information, and create billing and supervision problems—fast. The ABA’s position is not “don’t use AI.” It’s “use it competently, securely, and with lawyer-level oversight.” (American Bar Association)

Below is a direct, practice-oriented guide to using chatbot AI ethically, grounded in the ABA Model Rules and ABA Formal Opinion 512 (July 2024), with a focus on day-to-day law practice.


The core rule: the lawyer remains responsible

AI is not a co-counsel. Under ABA Formal Opinion 512, lawyers must maintain competence in the tools they use, protect confidentiality, supervise use within the firm, communicate appropriately with clients, ensure candor to tribunals, and charge reasonable fees. (American Bar Association)

This is the through-line: you can use AI to assist your work, but you cannot outsource professional judgment or professional responsibility to it. (California’s guidance states this explicitly: “A lawyer’s professional judgment cannot be delegated to generative AI.”) (The State Bar of California)

1) Competence: know what the tool does, where it fails, and how to control it

What the ABA requires. Model Rule 1.1 requires competent representation, and Comment 8 makes clear that competence includes keeping abreast of “the benefits and risks associated with relevant technology.” (American Bar Association)
ABA Formal Opinion 512 adds that lawyers do not need to become AI experts, but must have a reasonable understanding of capabilities and limitations, and can achieve competence via self-study or consulting qualified experts. (American Bar Association)

Best practices (operational).

  • Define approved use cases (e.g., first-draft emails, issue spotting checklists, summarizing non-confidential text, generating deposition outlines from sanitized facts).
  • Define prohibited use cases (e.g., final legal research citations without verification, unsupervised client advice, any use that requires pasting sensitive client data into a public tool).
  • Build a verification workflow: treat AI output as a draft prepared by a junior assistant who is sometimes overconfident and occasionally invents things.
  • Train the team on: hallucinations (fabricated facts/citations), prompt injection/data exfiltration risks, confidentiality constraints, and tool-specific settings and retention.

2) Confidentiality: assume prompts can breach confidentiality unless you can prove they won’t

What the ABA requires. Model Rule 1.6 covers confidentiality, and Rule 1.6(c) requires “reasonable efforts” to prevent unauthorized access or disclosure. (American Bar Association)
ABA Formal Opinion 512 states that before inputting client information into a GAI tool, lawyers must evaluate disclosure/access risks inside and outside the firm, and that the analysis is fact-driven (client, matter, task, tool). (American Bar Association)

When informed consent is required

ABA Formal Opinion 512 is explicit: for many self-learning tools—where inputs can influence future outputs—client informed consent is required before inputting information relating to the representation because the tool’s output could lead directly or indirectly to disclosure. (American Bar Association)
It also cautions that boilerplate engagement-letter language is not sufficient for informed consent in these scenarios. (American Bar Association)

Best practices (confidentiality controls).

  • Data minimization: do not paste client names, unique facts, account numbers, medical details, trade secrets, nonpublic deal terms, or anything that would be harmful if disclosed.
  • Use sanitized prompts: replace identifiers with placeholders (“Client A,” “Hospital X,” “Counterparty Y”), and remove dates/amounts if they are identifying.
  • Prefer private / enterprise / on-prem or dedicated-environment tools where you can control retention, training use, access controls, audit logs, and vendor obligations (contractually enforceable).
  • Read the Terms of Use, privacy policy, and data handling terms for any tool and confirm who can access inputs/outputs and how they are retained; ABA Formal Opinion 512 calls this a baseline requirement (or consult an internal/external expert). (American Bar Association)
  • Assess reasonableness using Rule 1.6 factors (sensitivity, likelihood of disclosure, cost/difficulty of safeguards, impact on representation). (American Bar Association)
  • Document your decision for higher-risk matters (what tool, what settings, what data categories are allowed, and why protections are reasonable).
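To make the sanitized-prompt idea concrete, here is a minimal Python sketch of a pre-prompt scrubber. The identifier mappings and regex patterns are illustrative assumptions, not a complete redaction tool; a human still reviews every prompt before it leaves the firm.

```python
# Minimal sketch of prompt sanitization: swap known identifiers for
# placeholders, then strip common identifying patterns as a backstop.
# The mappings and regexes are illustrative assumptions, not a full tool.
import re

def sanitize_prompt(text: str, identifiers: dict[str, str]) -> str:
    """Replace known client identifiers with neutral placeholders."""
    for real, placeholder in identifiers.items():
        text = re.sub(re.escape(real), placeholder, text, flags=re.IGNORECASE)
    # Backstop: mask dates and dollar amounts, which are often identifying.
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)
    text = re.sub(r"\$[\d,]+(?:\.\d{2})?", "[AMOUNT]", text)
    return text

prompt = sanitize_prompt(
    "Summarize Acme Corp's dispute with Mercy Hospital over the "
    "$1,250,000 invoice dated 3/14/2024.",
    {"Acme Corp": "Client A", "Mercy Hospital": "Hospital X"},
)
# prompt no longer contains the client name, counterparty, amount, or date
```

A mapping-based scrubber like this only catches identifiers you list, which is exactly why the data-minimization bullet comes first: the safest identifier is the one you never typed.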

A simple rule that prevents most incidents:
If you would not email the text to a third-party vendor without a confidentiality agreement, you should not paste it into a consumer chatbot.

3) Communication: disclose AI use when it is material to the representation or client expectations

What the ABA requires. Model Rule 1.4 requires reasonable consultation about the means of achieving client objectives and enough explanation for informed decisions. (American Bar Association)
ABA Formal Opinion 512 says it’s not possible to list every scenario, but lawyers should consider whether circumstances warrant client consultation about GAI use—particularly based on client needs/expectations, representation scope, information sensitivity, and how the tool processes client information. (American Bar Association)

It also notes a client may reasonably want to know whether the lawyer is exercising independent judgment or deferring to a GAI tool for important decisions. (American Bar Association)

Best practices (client communication).

  • Include an “AI tools” paragraph in engagement letters describing permissible uses, confidentiality posture, and any client restrictions—ABA Formal Opinion 512 identifies the engagement agreement as a logical place for these disclosures. (American Bar Association)
  • Disclose and discuss when:
    • AI will process sensitive client info,
    • AI output will materially influence strategy/advice,
    • the client hired you specifically for specialized judgment that the client expects you (not a tool) to apply,
    • the client or outside counsel guidelines restrict AI use.
  • Be prepared to answer directly if a client asks whether AI was used and how. (American Bar Association)

4) Candor to the tribunal: verify everything—especially citations

What the ABA requires. Model Rule 3.3 prohibits false statements of fact or law to a tribunal and requires correction of material false statements. (American Bar Association)
ABA Formal Opinion 512 notes real-world failures: nonexistent opinions, inaccurate authority analysis, and misleading arguments—and states outputs must be carefully reviewed to ensure submissions are not false. (American Bar Association)

Best practices (court-facing workflow).

  • No AI-generated citation goes to court without validation in primary sources (Westlaw/Lexis/official reporters/court websites as appropriate).
  • Check quotations against the cited authority; do not assume AI copied accurately.
  • Confirm controlling adverse authority is not omitted due to AI’s incomplete retrieval or bias.
  • Check local court rules and standing orders on AI use and disclosure; ABA Formal Opinion 512 flags this explicitly. (American Bar Association)
  • Maintain a human-authored “verification note” in the file: what was checked, by whom, and when (especially for briefs and affidavits).
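The citation-check step can be partially automated: a short script can pull citation-like strings out of a draft so each one gets looked up in a primary source. The reporter abbreviations below are illustrative assumptions; a pattern like this supplements, never replaces, the lawyer's own read of the brief.

```python
# Minimal sketch: extract citation-like strings from a draft to build a
# verification checklist. The reporter list is an illustrative assumption
# and will miss formats it does not know about.
import re

REPORTERS = r"(?:U\.S\.|S\. Ct\.|F\.[234]d|F\. Supp\. [23]d|P\.[23]d|N\.E\.[23]d)"
CITATION = re.compile(rf"\b\d+\s{REPORTERS}\s\d+\b")

def citations_to_verify(draft: str) -> list[str]:
    """Return every citation-like string found in the draft, in order."""
    return CITATION.findall(draft)

draft = "See Smith v. Jones, 123 F.3d 456 (9th Cir. 1997); accord 512 U.S. 622."
for cite in citations_to_verify(draft):
    print(f"VERIFY in primary source: {cite}")
```

The output is a to-do list, not a validation: a fabricated citation in a real reporter format will pass the regex and fail only when a person checks it in Westlaw, Lexis, or the official reporter.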

5) Supervision and firm governance: treat AI like a powerful nonlawyer assistant (and govern it)

What the ABA requires. Model Rules 5.1 and 5.3 impose managerial and supervisory duties over lawyers and nonlawyers to ensure conduct conforms to professional obligations. (American Bar Association)
ABA Formal Opinion 512 applies these duties to GAI: firms should establish clear policies for permissible use, ensure training, and supervise both internal users and external providers. It also suggests labeling AI-produced materials in client/firm files so future users understand potential fallibility. (American Bar Association)

Best practices (governance controls).

  • Create an AI use policy with:
    • approved tools/models,
    • approved use cases,
    • prohibited data categories,
    • required review steps for work product tiers (internal memo vs. client-facing vs. court filing),
    • retention and audit requirements.
  • Implement role-based access control: limit who can use which tools and for what matters.
  • Require training and periodic refreshers, including “how to fail safely” (what to do when output seems wrong or includes suspicious citations).
  • Vendor due diligence (especially if AI is provided by third parties): ABA Formal Opinion 512 emphasizes checking vendor credentials, security practices, confidentiality agreements, conflicts screening, and enforceable obligations/notice of breach. (American Bar Association)
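One way to make a policy like this operational is to express the tool-to-data rules as data with a gate function, so the "prohibited data categories" bullet is enforced before a prompt is sent. The tool names and data categories below are illustrative assumptions, not a recommended taxonomy.

```python
# Minimal sketch of an AI-use policy as data plus a gate function.
# Tool names, categories, and the default-deny rule are illustrative
# assumptions -- the real policy lives in the firm's governance documents.
POLICY = {
    "public-chatbot": {"allowed_data": {"public"}},
    "enterprise-llm": {"allowed_data": {"public", "internal", "confidential"}},
}

def tool_permitted(tool: str, data_category: str) -> bool:
    """Gate a prompt: is this data category allowed into this tool?"""
    rules = POLICY.get(tool)
    # Unknown tools are denied by default.
    return rules is not None and data_category in rules["allowed_data"]

assert tool_permitted("enterprise-llm", "confidential")
assert not tool_permitted("public-chatbot", "confidential")  # blocked
assert not tool_permitted("unknown-tool", "public")          # default deny
```

The default-deny behavior is the point: a new tool is prohibited until someone with governance authority adds it to the policy.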

6) Fees and billing: bill for actual lawyer time and communicate AI-related charges

What the ABA requires. Model Rule 1.5 prohibits unreasonable fees. (American Bar Association)
ABA Formal Opinion 512 provides concrete guidance:

  • If billing hourly, you must bill for actual time spent, including prompt input and review/editing time—not “time saved.” (American Bar Association)
  • Rule 1.5(b) requires communicating the basis for fees/expenses; before charging for GAI tools or services, explain the basis, preferably in writing. (American Bar Association)
  • If AI makes tasks much faster, charging the same flat fee may be unreasonable in some circumstances; a fee for little or no work performed is unreasonable. (American Bar Association)

Best practices (billing hygiene).

  • Time entries should reflect real work: “Drafted/edited motion; verified authorities; revised arguments” (and the time actually spent).
  • Disclose AI tool costs if they will be passed through as expenses, and align with the fee agreement.
  • Do not charge for “AI thinking time.” Charge for lawyer work: prompt engineering as a drafting method, validation, legal analysis, and final professional judgment.

A practical “ethical AI” workflow you can implement immediately

  1. Classify the task: internal draft, client-facing advice, or court filing.
  2. Classify the data: public, internal, confidential, highly sensitive.
  3. Pick the right tool: use the lowest-risk tool that can do the job; avoid public/self-learning tools for confidential inputs unless you have informed consent and defensible protections. (American Bar Association)
  4. Minimize inputs: sanitize and limit what you provide.
  5. Generate output, then verify:
    • facts against the record,
    • law and citations against primary sources,
    • quotations against originals. (American Bar Association)
  6. Document: note tool used, high-level purpose, and validation steps (especially for high-stakes outputs).
  7. Communicate when material: disclose use where client expectations, sensitivity, or engagement terms make it important. (American Bar Association)
  8. Bill ethically: actual time, transparent basis for any AI charges. (American Bar Association)
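As a sketch, the classification steps above can drive a review checklist: classify the work product and the data, then emit the verification obligations before anything ships. The tiers and obligations below are illustrative assumptions, not ABA requirements.

```python
# Minimal sketch of the workflow above as a checklist generator.
# Tiers and review steps are illustrative assumptions.
REVIEW_STEPS = {
    "internal draft": ["verify facts against the record"],
    "client advice":  ["verify facts against the record",
                       "verify law in primary sources",
                       "supervising-lawyer review"],
    "court filing":   ["verify facts against the record",
                       "verify every citation and quotation in primary sources",
                       "supervising-lawyer review",
                       "check local court rules and standing orders on AI",
                       "add a verification note to the file"],
}

def checklist(task_type: str, data_category: str) -> list[str]:
    """Return the review steps owed before this output goes out."""
    steps = list(REVIEW_STEPS[task_type])
    if data_category in {"confidential", "highly sensitive"}:
        # Confidential inputs trigger the sanitize-or-consent step first.
        steps.insert(0, "sanitize inputs or obtain informed consent")
    return steps

for step in checklist("court filing", "confidential"):
    print("-", step)
```

The value of writing it down this way is consistency: every court filing gets the same verification obligations regardless of which lawyer (or which tool) produced the first draft.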

Final note: ABA Model Rules are a baseline, not the whole map

Most jurisdictions adopt versions of the ABA Model Rules with variations, and courts may impose AI-specific disclosure or certification requirements. ABA Formal Opinion 512 explicitly advises consulting applicable court rules. (American Bar Association) A conservative approach is to treat ABA 512 as the minimum standard and layer on jurisdiction- and client-specific requirements.

This post is general information, not legal advice.