SEC AI Oversight: What Investment Advisors Should Prepare for in 2026
At a Glance
- Existing securities laws — the Advisers Act, the Marketing Rule, antifraud provisions — already govern how firms use and describe AI
- The SEC charged two advisors in 2024 for misleading AI claims; enforcement is active, with precedent established
- Employee use of unauthorized AI tools is a documented supervision risk — contractors included
- Marketing language about AI functions as a regulated disclosure; compliance review is required
AI Oversight Is Now an Examination Category
AI governance is an active examination category in 2026 — and the SEC has the enforcement record to back it up. For firms navigating SEC AI regulation 2026, the direction is clear: existing securities laws already define expectations for AI compliance in investment advisors.
The agency charged two investment advisors in 2024 for misrepresenting AI capabilities in marketing materials and regulatory filings, establishing that existing securities law applies fully to how firms describe their technology. The Advisers Act, the Marketing Rule, and antifraud provisions were sufficient. No new AI-specific rules were required.
The SEC’s 2026 Examination Priorities make the current direction explicit: examiners will scrutinize how firms represent AI capabilities, whether policies exist to monitor AI use across fraud prevention, back-office operations, AML, and trading functions, and how firms are integrating regulatory technology. That mandate covers general-purpose tools like ChatGPT alongside purpose-built investment platforms.
The posture from regulators includes a collaborative dimension. In February 2026, Brian Daly, Director of the SEC’s Division of Investment Management, described AI as a transformative opportunity for investment management and invited firms to engage directly with the Division on deploying new technologies while preserving investor protections. That openness runs parallel to active examination activity. Governance infrastructure has to come first.
What SEC Examiners Will Ask Investment Advisors About AI
The exam process is becoming more predictable, and that’s useful information for firms building toward RIA AI policy requirements.
At the Future Proof Citywide conference in March 2026, Alec Crawford, founder and CEO of AI risk management platform Verapath, was direct: the first thing the SEC will request during an AI-focused examination is a firm’s AI policy. The second is proof that the firm actually implemented it. A written policy that diverges from operational reality compounds the problem — it adds a misrepresentation layer on top of the underlying gap.
Examiners are also likely to probe:
- Data sources and vendor relationships — including controls around material non-public information
- Conflict of interest disclosures — particularly where AI influences recommendations or client-facing outputs
- Marketing and communications review processes — specifically whether AI claims in external materials went through compliance
Why Employee AI Use Is a Compliance Risk for RIAs
The highest-frequency, lowest-visibility risk comes from AI tools employees brought in themselves — separate from anything the firm deployed intentionally.
Thomas Stewart, founder and CEO of compliance software firm Hadrius, has flagged “BYOAI” — bring your own AI — as a primary governance gap. Employees introducing unauthorized tools create liability for the firm, and that liability extends to contractors: if a contractor enters client data into a public AI model, the RIA that hired them is responsible.
Crawford reinforced the practical standard: firms should require employees and contractors to access AI through a sanctioned portal that tracks all activity. Firms that lack documented visibility into AI use across the organization leave themselves exposed on supervision.
A single employee using an unsanctioned tool to draft a client communication — with no firm-level record — qualifies as a supervision failure under rules that already exist.
AI Washing Enforcement: Why Marketing Claims Are Disclosure Statements
For many firms, the fastest path to regulatory scrutiny runs through communications, and the standard is clear.
The SEC’s AI washing enforcement actions established direct precedent: AI-related language in websites, pitch decks, investor presentations, and regulatory filings is treated as a regulated disclosure. It must be accurate, substantiated, and consistent with how AI actually functions inside the firm.
Stewart put it plainly at Future Proof: firms must be accurate and transparent in how they report AI use — with clients and with the market broadly — or they risk misrepresentation exposure. Claims about AI-driven research, portfolio construction, or operational efficiency require the same review rigor as performance disclosures or fee descriptions. Marketing and compliance need shared accountability here.
AI Compliance Checklist for Investment Advisors
For firms still building out AI governance, the priority sequence matters:
- Inventory AI tools and use cases across the firm. Capture the tools in use, the functions they touch, and the internal owners responsible for each. Governance requires visibility first.
- Write an AI policy — and operationalize it. Cover approved tools, data restrictions, human review requirements, and contractor access. The policy is a starting point; implementation is what regulators will verify.
- Audit external AI claims. Every reference to AI in marketing materials, websites, and pitch decks functions as a disclosure statement. Review it accordingly.
- Control employee AI access. Implement sanctioned portals or equivalent oversight mechanisms. Document the controls.
- Align compliance, technology, and marketing. AI governance requires shared accountability across functions — compliance ownership alone leaves gaps.
Crawford’s framing at Future Proof is worth keeping in mind: demonstrating a proactive, documented approach — even a policy still in development — signals to regulators that the firm is engaged rather than reactive. That distinction matters in examinations.
Credibility Is the Other Compliance Risk
Governance is the foundation, but how firms talk about AI — to clients, prospects, and regulators — is increasingly a credibility variable. Firms that articulate their AI use accurately, explain their oversight processes clearly, and align their external narratives with internal reality are better positioned across examinations, client conversations, and competitive contexts alike.
That alignment between technology practice and communications strategy is where the reputational and regulatory dimensions of AI governance converge.
Investment managers navigating AI oversight face a communications challenge as much as a compliance one. MBC Strategic helps firms align their regulatory messaging, investor-facing content, and marketing language with the governance standards examiners expect. If your external narrative and internal reality need to be in closer alignment, that’s the work we do.
FAQ: SEC AI Compliance for Investment Advisors
1. What are the SEC’s 2026 AI examination priorities for investment advisors?
A: Examiners are focusing on three areas central to SEC AI compliance: how firms represent AI capabilities in disclosures and marketing, whether written policies govern AI use across trading, fraud prevention, and back-office functions, and how firms are integrating regulatory technology under evolving SEC AI regulation 2026 expectations.
2. What is AI washing and why has the SEC taken enforcement action?
A: AI washing is the practice of misrepresenting how artificial intelligence is used within a firm’s investment process or operations. The SEC charged two advisors in 2024 under existing securities law, reinforcing that AI disclosure requirements for investment advisors fall under the Advisers Act and antifraud provisions.
3. What should an investment advisor’s AI policy include?
A: A compliant framework aligned with RIA AI policy requirements should include approved tools, data handling restrictions, human review requirements, contractor access controls, and documentation procedures. Examiners will request both the policy and evidence that it has been implemented as part of broader AI governance for RIAs.
4. What is BYOAI and why does it create compliance risk?
A: BYOAI — “bring your own AI” — refers to employees or contractors using unauthorized AI tools for work tasks. This creates BYOAI compliance risk, as firms retain supervisory responsibility regardless of whether the tool was sanctioned, including for contractor use involving client data.
5. Do AI references in marketing materials require compliance review?
A: Yes. Under AI marketing compliance SEC standards, AI claims in websites, pitch decks, and investor presentations are treated as regulated disclosures. These statements must meet the same accuracy and substantiation standards that govern broader AI compliance for investment advisors.
MBC Strategic advises investment managers and advisory firms on communications strategy, regulatory messaging, and investor-facing content. As AI becomes embedded in investment operations and oversight expectations continue to rise, the firms best positioned are those that govern and communicate with equal precision.