Circular 230 and AI: §10.22, §10.35, §10.37 mapped onto AI
Circular 230 binds your license; AI is what your license is now exposed to. §10.22 due diligence, §10.34 positions, §10.35 competence, §10.36 firm procedures, §10.37 written advice, and §10.51 disreputable conduct — mapped onto AI-assisted tax practice with verbatim regulatory text, the eight-case federal canon, and a defensible-workflow checklist.
Step 1
§10.22 due diligence
Returns, representations to Treasury, representations to clients
Step 2
§10.34 / §10.37
Substantial authority on positions; written-advice six-factor test
Step 3
§10.35 + REG-116610-20
Competence today; technological competency on the way
Step 4
§10.36 firm procedures
Wadsworth template — written policy + verification + supervision
Step 5
§10.51(a)(13) gross incompetence
Gross indifference, grossly inadequate preparation, consistent failure
The 30-second answer
In a hurry? Jump straight to the documentation trail →. Otherwise, read the seven facts an EA, CPA, or AFSP-credentialed solo needs in order to know whether their AI workflow survives a §10.22 review:
- §10.22 is already the operative rule. §10.22(a) requires due diligence in preparing returns, in representations to Treasury, and in representations to clients. AI assists. It does not discharge.
- §10.22(b) is a safe harbor written for human delegates. “Reasonable care in engaging, supervising, training, and evaluating the person” reads as a junior-associate framework. Whether an LLM counts as a “person” under the regulation is unsettled — the practitioner-canonical safer position is that it does not, which puts the AI-using practitioner under §10.22(a) directly.
- §10.34(d) maps onto AI cleanly. A practitioner may rely “in good faith without verification” on furnished information but “must make reasonable inquiries” where the information appears incorrect, inconsistent, or incomplete. Substitute “AI output” for “client information” and the rule is the same.
- §10.35 currently says nothing about technology. The proposed REG-116610-20 revision adds explicit technological-competency language. Comment closed February 24, 2025; public hearing March 6, 2025; not finalized as of May 2026.
- §10.37 attaches the moment AI-assisted output leaves your screen as advice. Six requirements per §10.37(a)(2). A hallucinated citation fails (i), (iii), and (v) on its face.
- §10.51(a)(13) is the disciplinary hook. “Gross incompetence” is defined to include “gross indifference, preparation which is grossly inadequate under the circumstances, and a consistent failure to perform obligations to the client” — language drafted for human conduct that reads as if it were written for the AI-without-verification practice pattern.
- OPR has published zero AI-tied sanctions through May 2026. The IRB 2026-7 and IRB 2026-18 semi-annual lists name §10.51(a)(2) and (a)(10) grounds. AI is not cited as a sanction basis in either announcement. The first published action will reset the field; it has not landed yet.
The Tool #1 Circular 230 §10.22 checklist operationalizes the workflow below into a pre-file score. The text that follows is the why.
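The pre-file scoring idea can be sketched in a few lines of code. This is a hypothetical illustration only — it is not the actual Tool #1 implementation, and every field name below is invented for the sketch:

```python
from dataclasses import dataclass, fields

@dataclass
class PreFileReview:
    """Hypothetical pre-file checks mirroring the §10.22 workflow.

    Each flag is True only when the practitioner has actually completed
    the step; field names are illustrative, not from any published tool."""
    every_citation_verified: bool       # §10.22(a)/§10.34(d): each cite run through a primary source
    output_reviewed_line_by_line: bool  # §10.22(a)(1): practitioner read what the AI produced
    client_facts_reconciled: bool       # §10.37(a)(2)(ii): output checked against known facts
    firm_ai_policy_followed: bool       # §10.36: written policy exists and was applied
    verification_documented: bool       # audit trail retained for a later examiner

def prefile_score(review: PreFileReview) -> float:
    """Fraction of checks passed; anything below 1.0 flags unfinished diligence."""
    checks = [getattr(review, f.name) for f in fields(review)]
    return sum(checks) / len(checks)

review = PreFileReview(True, True, True, False, True)
print(f"{prefile_score(review):.0%}")  # one open item -> 80%
```

The point of the sketch is the pass/fail framing: a score below 100 percent is not a risk rating to weigh, it is a list of diligence steps still owed before the signature.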
Who Circular 230 actually binds
31 CFR §10.0 is the scope provision. The Part governs “the recognition of attorneys, certified public accountants, enrolled agents, enrolled retirement plan agents, registered tax return preparers, and other persons representing taxpayers before the Internal Revenue Service.” §10.3 lists the six categories of credentialed practitioner authorized to practice — attorneys, CPAs, EAs, enrolled actuaries, ERPAs, and registered tax return preparers (now functionally vacated).
Loving v. IRS, 742 F.3d 1013 (D.C. Cir. 2014), held that the IRS lacked statutory authority to impose mandatory regulation on non-credentialed return preparers as Part 10 practitioners. The Registered Tax Return Preparer program was struck down; AFSP replaced it as a voluntary record-of-completion regime that conditions participation on consent to be bound by §10.51 standards.
The asymmetry matters for the AI analysis. A CPA, EA, or attorney using AI is bound by every Part 10 section that applies to practice — §10.22, §10.34, §10.35, §10.36, §10.37, §10.51. An AFSP participant consents to §10.51 conduct standards but is otherwise outside the formal Part 10 frame. A non-credentialed PTIN-only preparer outside AFSP remains exposed to the IRC stack — §6694 preparer penalties, §6695 due-diligence penalties on EITC/CTC/AOTC/HoH, §6713 civil disclosure, §7216 criminal disclosure, §7407 injunctions — none of which depend on Part 10 standing. The Circular 230 culture is universal; the regulatory load is asymmetric.
§10.22 — due diligence as to accuracy
This is the load-bearing section. The verbatim text of §10.22(a):
A practitioner must exercise due diligence — (1) In preparing or assisting in the preparation of, approving, and filing tax returns, documents, affidavits, and other papers relating to Internal Revenue Service matters; (2) In determining the correctness of oral or written representations made by the practitioner to the Department of the Treasury; and (3) In determining the correctness of oral or written representations made by the practitioner to clients with reference to any matter administered by the Internal Revenue Service.
Three obligations, one practitioner. AI sits inside all three.
§10.22(a)(1) attaches at the signature. The practitioner signs Form 8879; the AI does not. §6695 ties preparer-penalty exposure to the signature, and §10.22(a)(1) ties due diligence to the same act. (a)(2) sweeps in everything sent to Treasury — CP2000 responses, PPS scripts, Form 2848 narratives, Tax Court pleadings. Transmitting AI’s hallucinated citation to Treasury is an (a)(2) breach. (a)(3) covers everything sent to the client — planning memos, K-1 explainers, year-end recommendations. A practitioner who delivers AI output to a client without verification has breached (a)(3) regardless of whether the IRS ever sees it.
The reliance safe harbor at §10.22(b) was added effective June 12, 2014 by T.D. 9668:
Except as modified by §§ 10.34 and 10.37, a practitioner will be presumed to have exercised due diligence for purposes of this section if the practitioner relies on the work product of another person and the practitioner used reasonable care in engaging, supervising, training, and evaluating the person, taking proper account of the nature of the relationship between the practitioner and the person.
The 2014 drafters had supervised juniors in mind — the senior who relies on staff work product after engaging, supervising, training, and evaluating that staff. There are two readings of “person” once AI shows up.
Reading A is analogical. Practitioners engage an LLM (procurement), supervise it (output review), train it (prompts and model selection), and evaluate it (quality judgment). The §10.22(b) presumption attaches.
Reading B is literal. “Person” in the legal-personhood sense; LLMs are tools, not persons. §10.22(b) is unavailable and the practitioner is evaluated under §10.22(a) directly. Reading B is the practitioner-canonical safer position. AICPA SSTS Section 1.4.4 reinforces it: “Use of a tool does not absolve the member of professional obligations under AICPA or other applicable ethical standards.” Until the IRS publishes guidance or the proposed §10.35 is finalized, the defensible posture is to treat AI as the practitioner’s instrument and to satisfy §10.22(a) directly, not to seek shelter under (b).
The “AI as junior associate” trope is the dominant practitioner mental model — verbatim in the Tax Adviser’s February 2024 ethics piece and recurring across r/taxpros. The trope works at the workflow level. It hides one specific failure mode that practitioner discourse rarely names: when the AI is positioned as the second pair of eyes, the silent reviewer that catches the human’s mistakes — the second-most-stated AI wish in the field — the roles invert. The AI is now evaluating the human’s work rather than being supervised by it, and §10.22(b)’s presumption assumes the opposite. The supervisor cannot delegate the supervisory obligation.
§10.34 — standards when a model drafted the position
§10.34(a)(1) prohibits a practitioner from willfully, recklessly, or through gross incompetence signing a return that the practitioner knows or reasonably should know contains a position that lacks a reasonable basis, is an unreasonable position under §6694(a)(2), or reflects a willful understatement or reckless disregard under §6694(b)(2). §10.34(a)(2) names a pattern of conduct as a factor in the willfulness analysis.
The §6694 cross-reference does the substantive work. A position has substantial authority if the supporting weight of authorities is substantial relative to contrary authority under Treas. Reg. §1.6662-4(d)(3)(ii). A position has reasonable basis if it is “reasonably based on one or more of the [tax] authorities” under §1.6662-4(d)(3)(iii). A disclosed position needs reasonable basis; an undisclosed position needs substantial authority.
The list of recognized authorities under Treas. Reg. §1.6662-4(d)(3)(iii) is finite: the IRC, Treasury Regulations, Revenue Rulings, Revenue Procedures, Notices, PLRs (with caveats), court decisions, congressional committee reports, GCMs, and IRS publication guidance. Blue J, TaxGPT, ChatGPT, Hive Tax AI, Thomson Reuters CoCounsel, and CCH Axcess Expert AI are not on the list. AI is not authority. If AI surfaces a real case that supports the position, the practitioner has authority — sourced through the tool but resting on the case itself. If AI cites a fabricated case, there is no authority, and signing the return is a §10.34(a)(1)(i)(A) “lacks a reasonable basis” violation at minimum.
§10.34(d) is the regulatory parallel to “AI can’t replace professional judgment”:
A practitioner advising a client to take a position on a tax return…generally may rely in good faith without verification upon information furnished by the client. The practitioner may not, however, ignore the implications of information furnished to, or actually known by, the practitioner, and must make reasonable inquiries if the information as furnished appears to be incorrect, inconsistent with an important fact or another factual assumption, or incomplete.
Read it twice with “AI output” substituted for “client information.” It works. Good-faith reliance is permitted where the AI’s output is internally consistent and facially correct; the practitioner may not ignore implications of that output; the practitioner must make reasonable inquiries where the output appears incorrect, inconsistent, or incomplete. Post-Mata, the working presumption a senior practitioner should carry is that an LLM-supplied citation is inconsistent with the body of verified case law until the cite is run through Westlaw, Bloomberg Tax, Checkpoint, or Cornell LII. The “verify every cite” pattern recurring on r/taxpros and TaxProTalk is §10.22 due diligence and §10.34(d) inquiry made operational.
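The “verify every cite” pattern amounts to a hard gate: no citation leaves the draft until it has been confirmed against a primary source the practitioner actually opened. A minimal sketch, assuming a practitioner-maintained set of confirmed authorities standing in for Westlaw, Checkpoint, or any other primary-source check — all names and citations here are hypothetical or drawn from the text above:

```python
# Hypothetical sketch: treat every AI-supplied citation as unverified
# until it matches a source the practitioner has independently confirmed.

VERIFIED_AUTHORITIES = {
    # Populated only from primary sources actually opened by the practitioner:
    # Westlaw, Bloomberg Tax, Checkpoint, Cornell LII, etc.
    "Treas. Reg. §1.6662-4(d)(3)(ii)",
    "Loving v. IRS, 742 F.3d 1013 (D.C. Cir. 2014)",
}

def gate_citations(ai_cited: list[str]) -> tuple[list[str], list[str]]:
    """Split AI output into cites confirmed against a primary source
    and cites that must be independently pulled before any use."""
    confirmed = [c for c in ai_cited if c in VERIFIED_AUTHORITIES]
    unverified = [c for c in ai_cited if c not in VERIFIED_AUTHORITIES]
    return confirmed, unverified

confirmed, unverified = gate_citations([
    "Loving v. IRS, 742 F.3d 1013 (D.C. Cir. 2014)",
    "Smith v. Commissioner, 999 T.C. 123 (2099)",  # plausible-looking, never checked
])
assert unverified  # anything unconfirmed blocks transmission under §10.22(a)(2)
```

The design choice worth noticing: the default state is unverified. The gate never asks whether a cite looks plausible — plausibility is exactly what hallucinated citations are optimized for.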
§10.35 — competence, current and proposed
The current §10.35 text, effective June 12, 2014 under T.D. 9668:
(a) A practitioner must possess the necessary competence to engage in practice before the Internal Revenue Service. Competent practice requires the appropriate level of knowledge, skill, thoroughness, and preparation necessary for the matter for which the practitioner is engaged.
That is the entire substantive paragraph. The current rule does not name technology. It does not name AI. It binds the practitioner to the level of competence required for the matter engaged — and reads as if competence on AI workflow is implicitly part of any matter where AI is in the loop.
REG-116610-20, published December 26, 2024, makes the implicit explicit. The operative proposed addition — quoted in Current Federal Tax Developments (December 22, 2024) and corroborated by CPA Practice Advisor:
Competency includes understanding the benefits and risks associated with relevant technology that is used by the practitioner to provide services to clients or to store and transmit tax return and other confidential information.
The preamble does not use the word “artificial intelligence.” Per CPA Trendlines’ January 17, 2025 analysis, the operative phrase is: “Increasingly, competence also includes maintaining familiarity with technological tools used to represent a client.” The standard is technology-neutral by design. The drafter took the ABA Model Rule 1.1 Comment 8 framing — the legal-ethics technology-competence rule adopted by roughly forty state bars since 2012 — and ported it onto Circular 230. AI is one instantiation among cloud storage, encryption, e-signature platforms, document-management systems, and multi-factor authentication.
The procedural state as of May 2026. Comment period closed February 24, 2025. Public hearing held March 6, 2025, at the IRS Building auditorium in Washington. AICPA submitted a comment letter on February 20, 2025 generally supporting the technological-competency expansion while pushing back on the proposed §10.33 mental-fitness assessment. NASBA filed February 20, 2025. NSTP and NATP also commented. The final rule has not issued. A reasonable forecast given the scale of revision is late 2026 or 2027.
The interim regulatory reality is that practitioners operate under current §10.22, §10.34, §10.35, and §10.37 — not under the proposed technological-competency rule. The proposed §10.35 signals direction of travel without creating an immediate enforcement pathway. The direction is unambiguous.
§10.36 — firm procedures, the AI-policy hook
§10.36 obligates “any individual subject to the provisions of this part who has (or individuals who have or share) principal authority and responsibility for overseeing a firm’s practice of providing advice concerning Federal tax matters” to “take reasonable steps to ensure that the firm has adequate procedures in effect for all members, associates, and employees for purposes of complying with subparts A, B, and C of this part, as applicable.” Discipline attaches under §10.36(b) where the principal-authority individual through willfulness, recklessness, or gross incompetence fails to ensure adequate procedures, or knows of a pattern of noncompliance and fails to act.
This is where the firm AI policy lives. A managing partner whose staff drafts CP2000 responses via ChatGPT without verification, with no written firm policy on AI use, no approved-vendor list, no §7216 consent process, and no required secondary review on AI-generated positions is exposed under §10.36(b)(3) — the “should-have-known” pattern provision.
The federal-court template arrived in February 2025. In Wadsworth v. Walmart Inc. (D. Wyo. February 24, 2025), three Morgan & Morgan attorneys cited eight non-existent cases in a motion in limine, sourced from the firm’s in-house AI research tool. The drafter lost his pro hac vice admission and paid $3,000; the supervising attorneys paid $1,000 each. The firm escaped sanction because, per Magistrate Judge Rankin, it had “already taken steps to ensure that its lawyers in the future independently verify any AI-generated information before relying on it.” Chief Transformation Officer Yath Ithayakumar’s firm-wide directive — warning 1,000+ attorneys that “blindly relying on AI could result in disciplinary action, including termination” — is the load-bearing comparable for §10.36 procedures.
Per the Blue J / CPA.com AI Tax Research Solution Outlook Report (September 2025), 54.4 percent of practitioners personally use AI while only 33.1 percent of firms have formally adopted AI tools. That twenty-one-point gap describes the population of firms operationally exposed under §10.36 today. Adequate procedures do not require perfection; they require the procedures actually existing, in writing, with staff acknowledgment.
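What “procedures actually existing, in writing, with staff acknowledgment” might look like as a record is sketched below. This is a hypothetical illustration, not a template endorsed by OPR or any carrier; every field name is invented:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FirmAIPolicy:
    """Hypothetical record of the §10.36 elements discussed above."""
    effective: date
    approved_vendors: list[str]      # the approved-vendor list
    requires_7216_consent: bool      # §7216 consent process in place
    secondary_review_required: bool  # human review of AI-generated positions
    acknowledgments: dict[str, date] = field(default_factory=dict)  # staff sign-offs

    def gaps_for(self, staff: list[str]) -> list[str]:
        """Staff members who have not acknowledged the written policy."""
        return [s for s in staff if s not in self.acknowledgments]

policy = FirmAIPolicy(
    effective=date(2026, 1, 1),
    approved_vendors=["(firm-vetted tax research tool)"],
    requires_7216_consent=True,
    secondary_review_required=True,
    acknowledgments={"preparer_a": date(2026, 1, 5)},
)
print(policy.gaps_for(["preparer_a", "preparer_b"]))  # ['preparer_b']
```

The `gaps_for` check is the operative part: under §10.36(b)(3) a managing partner’s exposure turns on whether the policy reached and bound the staff, not merely on whether a document exists somewhere.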
§10.37 — when an AI-assisted answer becomes “written advice”
§10.37(a)(2) imposes six requirements on written advice on a Federal tax matter. §10.37(c)(1) measures compliance against “a reasonable practitioner standard, considering all facts and circumstances, including, but not limited to, the scope of the engagement and the type and specificity of the advice sought by the client.”
| § | Requirement (verbatim) | How AI output fails |
|---|---|---|
| (i) | Base the written advice on reasonable factual and legal assumptions (including assumptions as to future events). | A hallucinated case is not a “reasonable legal assumption.” |
| (ii) | Reasonably consider all relevant facts and circumstances that the practitioner knows or reasonably should know. | Training-cutoff gaps on post-OBBBA provisions (NCTI for GILTI, FDDEI for FDII, §174A for §174, tiered §1202) are failures here. |
| (iii) | Use reasonable efforts to identify and ascertain the facts relevant to written advice on each Federal tax matter. | Querying an LLM is not “reasonable efforts to identify and ascertain facts.” |
| (iv) | Not rely upon representations, statements, findings, or agreements (including projections, financial forecasts, or appraisals) of the taxpayer or any other person if reliance on them would be unreasonable. | AI’s confident fabrications are exactly the kind of representation where reliance would be unreasonable. |
| (v) | Relate applicable law and authorities to facts. | An AI-cited nonexistent regulation is not “applicable law and authorities.” |
| (vi) | Not, in evaluating a Federal tax matter, take into account the possibility that a tax return will not be audited or that a matter will not be raised on audit. | Advice shaped by an AI prompt asking “will the IRS catch this?” violates (vi) on its face. |
The threshold question is when AI output becomes “written advice.” The dividing line is internal versus client-facing or IRS-facing. AI’s prompt-output in a research workspace, AI drafts the practitioner is still editing, and AI-generated internal memos sit on the internal side. The moment the output crosses to a client memo, a portal-delivered opinion, a planning recommendation, or an IRS-correspondence draft, §10.37(a)(2) attaches to every word.
AI output is also not a “factual assumption.” The factual assumption is what the practitioner concludes after verification. Skip verification and there is no factual assumption — only an unverified guess that fails (a)(2)(i) at the threshold. §10.37(b) governs reliance “on the advice of another person” and requires reasonableness, good faith, and that the advisor is not known to lack competence. The §10.37(b)(2) competence test is structurally hard to meet for consumer-tier LLMs that are pattern-matching systems, not legal practitioners. Tax-specific paid tools — Blue J, Hive Tax, CPA Pilot, CoCounsel, CCH Axcess Expert AI — sit closer to the line because they market tax-domain training and primary-authority retrieval. §10.37(b)’s good-faith standard still requires verifying the underlying authority. AI output is not a senior partner’s signed opinion.
§10.51 — when AI failure becomes disreputable conduct
§10.51(a) lists eighteen grounds for which a practitioner may be censured, suspended, or disbarred under §10.50. Four subsections intersect with AI use.
§10.51(a)(4) — false or misleading information to Treasury — captures AI-fabricated citations transmitted to Treasury without verification once the practitioner knew or recklessly disregarded the hallucination risk. §10.51(a)(7) — willful assistance in tax violations — applies where an AI-generated tax-shelter pitch is forwarded to a client. §10.51(a)(15) — willful unauthorized disclosure or use of tax return information — is the §7216 overlay; the same act that violates §7216 separately violates §10.51(a)(15) at the OPR level.
§10.51(a)(13) is the load-bearing subsection. The regulatory text:
Giving a false opinion, knowingly, recklessly, or through gross incompetence, including an opinion which is intentionally or recklessly misleading, or engaging in a pattern of providing incompetent opinions on questions arising under the Federal tax laws.
The regulation defines “gross incompetence” in language drafted for human conduct that reads as if it were written for the AI-without-verification practice pattern: “Gross incompetence includes conduct that reflects gross indifference, preparation which is grossly inadequate under the circumstances, and a consistent failure to perform obligations to the client.” Gross indifference (using consumer ChatGPT for tax research), grossly inadequate preparation (no verification step), consistent failure to perform (the Mata / Thomas pattern repeated across multiple matters). The three-element test maps cleanly.
The §10.50 sanction menu — censure, suspension, disbarment, monetary penalty up to the gross income derived from the conduct — is structurally available now. The first published OPR action specifically tied to AI use has not landed. The empty set is itself a finding, not a defense.
Mata, Thomas, and the federal-court parallel
The federal-court canon arrived on June 22, 2023. In Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y.), Judge Castel sanctioned Levidow, Levidow & Oberman attorneys Steven Schwartz and Peter LoDuca $5,000 jointly for filing a brief citing six federal cases that did not exist. When opposing counsel could not locate them, Schwartz returned to ChatGPT, which confirmed the cases “indeed exist” and “can be found in reputable legal databases such as LexisNexis and Westlaw.” Schwartz submitted the screenshots without independent verification. Castel’s load-bearing passage:
Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.
That sentence — quoted in every post-Mata analysis, including the Volokh Conspiracy coverage the same day — is the cleanest federal-court explication of what §10.22 requires of a tax practitioner relying on AI. The “existing rules” are the practitioner’s substantive obligations. The “gatekeeping role” is §10.22 in attorney-ethics form.
The tax-specific analog arrived October 23, 2024. In Gary Thomas v. Commissioner, No. 10795-22 (T.C.), the Tax Court struck a pretrial memorandum where, per Jeremy Wells’s summary, “the names of the cases cited were real, but they were not at the locations given, and the actual cases were unrelated to the petitioner’s issue.” Three of four citations were fabricated. The judge observed the document had “the hallmarks of a document prepared with the assistance of a large language model” and declined monetary sanctions, citing the court’s posture that “the court is not in the business of dictating to attorneys the extent to which they can or should rely on advancing technology.” The remedy was the strikethrough. The order is the canonical Tax Court AI-fabrication case and is the closest tax-specific analog to Mata.
The 2026 case practitioners should also be tracking is United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 10, 2026; written opinion Feb. 17, 2026). Defendant Bradley Heppner entered attorney-discussed information into a consumer-tier AI platform without direction from counsel; the FBI seized 31 AI-generated documents during his arrest. Judge Rakoff held the defendant “had no reasonable expectation of confidentiality in his communications” with the AI tool. Attorney-client privilege did not attach; work-product protection did not attach. The overarching principle quoted by Gibson Dunn: “AI’s ‘novelty’ does not mean its use ‘is not subject to longstanding legal principles.’”
Proskauer’s tax-practice analysis draws the §7525 implication directly. A practitioner who pastes a client’s audit-strategy email into consumer Claude or consumer ChatGPT has transmitted the communication to a third party under terms that disclose training and third-party use. The §7525 federally-authorized-tax-practitioner privilege that would otherwise attach is plausibly waived at the moment of the paste. Heppner layers a separate doctrinal trap on top of §10.22 and §7216 — three regulatory failures from a single act.
Damien Charlotin’s AI Hallucination Cases Database logs 1,428 cases as of May 10, 2026 in which a court found that a party relied on hallucinated content — 984 US cases, 543 lawyers among the sanctioned parties, geometric growth from single digits at Mata’s mid-2023 baseline. The predicted post-Mata chilling effect did not materialize.
The eight-case map below is the analysis the field currently does not have in published form. Every case in the canon would map onto a §10.22(b) supervision-and-evaluation failure for a tax practitioner under Circular 230.
| Case | Conduct | §10.22 failure | §10.35 / §10.51 hook |
|---|---|---|---|
| Mata v. Avianca (SDNY 2023) | 6 ChatGPT-fabricated cases; reconfirmed with ChatGPT | (a)(1)/(2) due diligence; (b) reliance | §10.35 tech competency; §10.51(a)(13) gross incompetence |
| Gary Thomas v. Commissioner (T.C. 2024) | 3 of 4 fabricated cites in pretrial memo | (a)(2) Treasury representation | §10.51(a)(13) plausible; (a)(4) misleading |
| United States v. Heppner (SDNY 2026) | Consumer AI pasted; privilege + work product waived | §7525 analog; §10.22(b) supervisor failure | §10.35 tech competency |
| Park v. Kim (2d Cir. 2024) | Single fabricated case in reply brief | (a) “no inquiry, much less the reasonable inquiry required” | §10.51(a)(13) plausible |
| Wadsworth v. Walmart / Morgan & Morgan (D. Wyo. 2025) | 8 fabricated cases via in-house AI tool | (a) + (b) failed for drafter & signers | §10.51(a)(13) plausible; firm escaped via §10.36 procedures |
| Coomer v. Lindell / MyPillow (D. Colo. 2025–2026) | ~30 defective cites; repeat conduct after first sanction | (a) due diligence; (b) supervision | §10.51(a)(13) gross incompetence on second incident |
| Dehghani v. Castro (D.N.M. 2025) | Bought-brief AI use; no review pre-filing | (a)(1); (b) failed contractor supervision | First bar-self-reporting sanction; §10.51(a)(13) plausible |
| Delano Crossing v. Wright (Minn. Tax Ct. 2025) | 5 fabricated AI cites; “honesty, trustworthiness, fitness” question | (a) Rule 11.02(b) analog | LPRB referral = state-bar analog to OPR |
Eight cases. Eight Circular 230 §10.22(b) supervision-and-evaluation failures if the conduct had been tax practice. Zero of the eight has been classified at the §10.51(a)(13) gross-incompetence level by OPR. The doctrinal hook exists; the published action has not arrived.
OPR’s stated position — what’s said and what isn’t
The 2024 IRS Nationwide Tax Forum presentation titled “Circular 230: Professional Responsibility in Today’s Tax Practice” topically references “use of artificial intelligence tools” alongside remote work, cybersecurity competency, and social-media use. The OPR has published 2025-3 on due-process procedures and 2025-4 on in-house tax professionals and Circular 230; neither appears to address AI specifically based on the available metadata.
The semi-annual disciplined-practitioner announcements are the empirical record. IRB 2026-7, Announcement 2026-5 (February 9, 2026) lists suspensions citing §10.51(a)(2) admitted violations and §10.51(a)(10) admitted violations — conventional grounds, no AI cited. IRB 2026-18, Announcement 2026-9 (April 27, 2026) repeats the pattern. The four most-recent OPR cycles back to IRB 2024-19 read the same way. AI does not appear as a sanction basis in any of them. Sharyn Fisk’s September 2025 dual role as OPR Director and Acting RPO Director is the operational news; the AI-specific case is not.
The AICPA SSTS framework is the parallel professional-ethics overlay for AICPA-member CPAs. The revised SSTSs effective January 1, 2024 added Section 1.4 on Reliance on Tools — the only US tax-ethics standard that names “artificial intelligence” by word. Per Holets’ September 2025 Tax Adviser explainer, SSTS 1.4.2 defines a tool to include “tax preparation software, tax research publications…tax-related calculation aids, tax planning software, state and local tax aids, online data search engines, data analytics, statistical models, artificial intelligence, and relevant professional publications and resources.” SSTS 1.4.4 reinforces the §10.22(a) reading: “Use of a tool does not absolve the member of professional obligations under AICPA or other applicable ethical standards.” SSTS 1.4.8: “Tools should be used to enhance or improve the member’s understanding of a tax issue, not to supplant the member’s professional judgment.” The Tax Adviser’s February 2024 generative-AI ethics piece is the practitioner-canonical synthesis: “the tax professional retains all professional obligations whether or not the professional uses a GAI system.”
AFSP participants and non-credentialed PTIN-only preparers are not bound by SSTS but face the same §10.22 + §10.51(a)(15) overlay through AFSP consent terms or — for non-credentialed preparers outside AFSP — the §6694 / §6695 / §6713 / §7216 stack. The asymmetry runs through the rest of Part 10; the disclosure exposure does not.
The documentation trail — what to retain when an examiner asks
A §10.22 audit trail does three things. It demonstrates the practitioner’s supervision-and-evaluation work on the AI output, it supports the §10.34(d) reasonable-inquiry record, and it preserves the §10.37 advice-foundation in writing. The closest published precedent is the §7216 audit-trail discipline covered in our companion article on §7216 and AI consent; the §10.22 extension covers the supervisory layer.
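One way to make that three-part trail concrete is a per-engagement log entry capturing exactly what an examiner would ask for. A hypothetical structure under stated assumptions — the regulation prescribes none of these fields, and every name below is illustrative:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AIUseRecord:
    """Hypothetical §10.22 audit-trail entry; all field names are illustrative."""
    matter_id: str
    tool: str                      # which AI tool produced the output
    prompt_summary: str            # what was asked, client identifiers removed
    output_retained: bool          # raw output preserved as received
    cites_verified_via: list[str]  # §10.34(d) inquiry record: sources actually opened
    corrections_made: str          # what the practitioner changed before use
    reviewer: str                  # who signed off: supervision-and-evaluation evidence
    review_date: str

record = AIUseRecord(
    matter_id="2026-0417",
    tool="(tax-specific research tool)",
    prompt_summary="§199A aggregation question, facts anonymized",
    output_retained=True,
    cites_verified_via=["Checkpoint", "Cornell LII"],
    corrections_made="replaced one miscited Rev. Proc.; tightened holding summary",
    reviewer="signing practitioner",
    review_date="2026-04-17",
)
print(json.dumps(asdict(record), indent=2))  # written record, retained with the workpapers
```

The three §10.22 functions map directly: `output_retained` plus `corrections_made` evidence the supervision-and-evaluation work, `cites_verified_via` is the §10.34(d) inquiry record, and the entry as a whole preserves the §10.37 advice foundation in writing.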
CAMICO — the largest mutual-only E&O carrier serving CPAs — has not issued an AI-specific exclusion or endorsement as of May 2026. Its advisory-hotline guidance recommends “AI governance frameworks” covering security, compliance, and ethical AI use, and distinguishes AI assisting practitioners “under their supervision and review” from AI “autonomously generat[ing] advice.” Wiley Rein’s 2026 state-AI-bills tracker frames the direction: “Claims professionals and underwriters should be aware of these novel pathways to liability” and “Underwriters should consider adding certain AI-specific policy terms.” The renewal-application AI question is reported across r/taxpros and NAEA Member Connect but not yet documented at the carrier-policy level. The first formal AI exclusion endorsement is plausibly 2026-2027.
How §10.22 connects to §7216 and SSTS 1.4
The three rules read as concurrent floors rather than alternatives. §7216 governs whether the disclosure to the AI vendor occurs lawfully. §10.22 governs whether the practitioner accountably stands behind the output. SSTS 1.4 governs how the AICPA-member CPA explains tool reliance to the client and within the practice. None displaces the others.
A single act of pasting a client K-1 narrative into a third-party AI to summarize the partner allocation can trigger §7216 disclosure under Treas. Reg. §301.7216-1(b)(5); §7216 use under (b)(4)(i); §10.22(a)(2) or (a)(3) due-diligence failure if the summary is transmitted; §10.34(d) inquiry failure if the summary is facially inconsistent; §10.37(a)(2) written-advice failure once the summary lands in a client memo; §10.51(a)(13) gross-incompetence exposure where the pattern repeats; §10.51(a)(15) unauthorized-disclosure exposure on the same conduct; and SSTS 1.4.4 / 1.4.8 violations for AICPA-member CPAs. Most practitioners think they have a §7216 problem. They have a seven-rule stack on the same conduct. Tom Gorczynski’s September 2024 §7216 + AI analysis is the practitioner-canonical treatment on the disclosure side; the §10.22 layer compounds the exposure rather than replaces it.
FAQ
Does §10.22 bind a non-credentialed PTIN-only preparer who is not an AFSP participant?
Not as a Part 10 practitioner — Loving v. IRS (D.C. Cir. 2014) closed that route. The preparer remains exposed to §6694, §6695, §6713, §7216, and §7407 — none of which depend on Part 10 standing. AFSP participants consent to §10.51 conduct standards by joining the program. CPAs, EAs, and attorneys are bound by the full Part 10 stack.
My tax engine bundles AI — Intuit Tax Assist, Drake’s AI features, UltraTax CS Assistant, CCH Axcess Expert AI. Does §10.22 apply differently?
No. §10.22 binds the practitioner regardless of where the AI sits. The vendor-bundled framing tends to satisfy §7216 because the AI module is plausibly inside the permitted-purpose exception under the master tax-engine agreement — covered in the §7216 article. §10.22(a) is unchanged. You still verify. You still sign.
I only use AI for client emails, not the return itself. Does §10.22 attach?
§10.22(a)(3) covers “representations made by the practitioner to clients with reference to any matter administered by the Internal Revenue Service.” A client email explaining a §199A position is a representation to the client on an IRS matter. The (a)(3) obligation attaches. §10.37 attaches separately once the email crosses into written advice on a tax matter.
Does anonymizing the prompt fix the §10.22 problem?
It addresses the §7216 disclosure side only incompletely (depending on whether structural facts remain identifying) and does not address §10.22 at all. §10.22(a) asks whether the practitioner verified the AI output before signing or transmitting. Anonymization does not change the verification obligation.
The proposed §10.35 hasn’t been finalized. Can I wait?
The proposed rule signals direction; the current §10.22 already does the substantive work. Mata, Thomas, and the eight-case canon are all §10.22(b) supervision-and-evaluation failures translated into federal-court form. The “wait for §10.35 to land” posture is structurally similar to the “wait for native AI in my tax engine” reflex — neither is a defense.
What’s the consequence if my AI hallucinates a citation and I sign the return?
§10.34(a)(1)(i)(A) — “lacks a reasonable basis” — at minimum. §6694 preparer-penalty exposure runs parallel. If the pattern repeats, §10.51(a)(13) gross-incompetence exposure attaches. AICPA-member CPAs face SSTS 1.4 violations. Mata and Thomas are the federal-court templates; OPR has not yet brought the first §10.51 analog.
Does my state CPA board have AI-specific rules?
Not as of May 2026, across California, New York, Texas, and Florida. State boards have updated ethics CPE to reference AI, data privacy, and technological competency without yet promulgating AI-specific disciplinary rules. The direction follows the ABA Model Rule 1.1 Comment 8 cascade — state-bar adoption (2014-2024), then Treasury proposed §10.35 (December 2024), with state CPA-board adoption plausibly 2026-2028.
Related reading on this site
- Section 7216 AI consent: the rules and the template — the companion regulatory article on disclosure rather than due diligence. The §7216 audit-trail discipline anchors the §10.22 documentation extension.
- Tool #1 — Circular 230 §10.22 due-diligence checklist — the pre-file verification score for AI-assisted return preparation.
- Circular 230 regulation reference — the full Part 10 status page with the proposed §10.35 modernization tracking.
- §10.22 glossary entry — definition, statute citation, scope.
- §10.34 glossary entry — standards-for-positions overview.
- §10.35 glossary entry — competence and the proposed technological-competency overlay.
- §10.37 glossary entry — written-advice rule scope.
- Office of Professional Responsibility glossary entry — OPR scope and recent direction.
Sources and citations
All URLs verified live as of 2026-05-12. The article was assembled from primary regulatory, case-law, and practitioner-canonical analysis layers — every load-bearing claim is sourced inline. 63 external URLs total: 60 verified live (HTTP 200/202) on direct curl; 3 bot-blocked but human-readable (CAMICO, Fast Company, Tax Notes — canonical sources). Categories below.
Federal regulations and statutes:
- 31 CFR §10.0 — Scope
- 31 CFR §10.3 — Who may practice
- 31 CFR §10.22 — Diligence as to accuracy
- 31 CFR §10.34 — Standards with respect to tax returns and documents
- 31 CFR §10.35 — Competence
- 31 CFR §10.36 — Procedures to ensure compliance
- 31 CFR §10.37 — Requirements for written advice
- 31 CFR §10.50 — Sanctions
- 31 CFR §10.51 — Incompetence and disreputable conduct
- 31 CFR Part 10 — Circular 230 on eCFR
- IRS Circular 230 PDF (June 2014 revision)
- IRS Pub 947 — Practice Before the IRS
- IRC §6694, §6695, §6713, §7216, §7407, §7525
Proposed Circular 230 modernization (REG-116610-20):
- Federal Register NPRM, December 26, 2024
- IRS press release on the proposal
- Current Federal Tax Developments analysis, December 22, 2024
- CPA Practice Advisor analysis, December 23, 2024
- CPA Trendlines, “Major Changes to Circular 230,” January 17, 2025
- AICPA comment letter announcement, February 20, 2025
- NASBA comment letter, February 20, 2025
2014 T.D. 9668 baseline:
OPR public materials:
- OPR landing page
- OPR 2024 National Tax Forum presentation
- Announcements of Disciplinary Sanctions hub
- IRB 2026-7, Announcement 2026-5 (February 9, 2026)
- IRB 2026-18, Announcement 2026-9 (April 27, 2026)
AICPA standards and AI:
- Revised SSTSs No. 1-4, effective January 1, 2024
- Holets, “Technology and tax standards: Understanding new SSTS Section 1.4 — Reliance on Tools,” The Tax Adviser, September 2025
- Jenkins and Sansone, “Tax ethics and use of generative AI systems,” The Tax Adviser, February 2024
- AICPA AI Tax Resource Center
Case law:
- Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. June 22, 2023)
- Mata v. Avianca overview (Wikipedia)
- Volokh / Reason coverage, June 22, 2023
- Gary Thomas v. Commissioner, No. 10795-22 (T.C. October 23, 2024) — Jeremy Wells summary
- United States v. Heppner — Gibson Dunn analysis
- Proskauer Tax Talks on Heppner and §7525
- Park v. Kim, 91 F.4th 610 (2d Cir. 2024)
- Volokh / Reason on Park v. Kim referral, January 30, 2024
- Wadsworth v. Walmart / Morgan & Morgan — LawSites coverage
- Morgan & Morgan directive — Fast Company
- Coomer v. Lindell / MyPillow — Colorado Sun, July 7, 2025
- Coomer v. Lindell supplemental sanction — Volokh, May 9, 2026
- Dehghani v. Castro — Volokh, May 16, 2025
- Delano Crossing v. Wright — Volokh, June 3, 2025
Empirical aggregate:
Practitioner-canonical analysis:
- Tom Gorczynski, “AI and the §7216 Disclosure and Use Rules,” Tom Talks Taxes
- Bloomberg Tax, “Don’t Trust AI, Always Verify. Tax Law Still Needs Humans”
Practitioner adoption data:
OPR Director update:
Insurance and professional liability:
- CAMICO Generative AI FAQ
- Wiley Rein, “2026 State AI Bills That Could Expand Liability, Insurance Risk”
Cross-reference:
- ABA Model Rule 1.1 Comment 8 — technology competence
- Rev. Proc. 2013-14 — §7216 consent format procedure
- IRS Pub 4557 — Safeguarding Taxpayer Data
Last reviewed: 2026-05-12. Published by AI Tax Practitioner Editorial. A reference, not legal advice. Your firm’s compliance posture should be confirmed with counsel and updated as IRS guidance and state-specific rules evolve.
Notice an outdated citation or broken link? Email [email protected] — every source is reviewed at minimum quarterly and updated when underlying authority changes.