In early 2023, JPMorgan Chase restricted employee use of ChatGPT. Goldman Sachs, Citigroup, Bank of America, Deutsche Bank, and Wells Fargo soon followed with restrictions of their own. No sector moved faster than financial services to restrict AI tools. The reasons go beyond the data leakage concerns that affect every industry and touch the foundations of market integrity.
The Unique Risk Profile of Financial Services
Every industry faces data leakage risk from AI tools. Financial services faces that risk plus several categories of exposure that are specific to the sector:
- Material Nonpublic Information (MNPI). Employees at investment banks, asset managers, and broker-dealers routinely handle information that could move markets if disclosed. Earnings previews, merger discussions, trading positions, and regulatory findings are all MNPI. Submitting any of this to an external AI tool is a potential securities law violation, regardless of whether anyone actually trades on it.
- Client confidentiality. Financial institutions hold detailed information about client portfolios, transactions, financial positions, and strategic intentions. This data is protected by contractual obligations, regulatory requirements, and professional standards that predate AI by decades.
- Trading strategies. Proprietary trading algorithms, quantitative models, risk parameters, and position sizing logic represent significant competitive advantages. These are precisely the types of content that a quantitative analyst might paste into an AI tool seeking optimization suggestions.
- Regulatory communications. Draft regulatory filings, compliance assessments, and internal audit findings are sensitive documents that could trigger market reactions or regulatory consequences if disclosed prematurely.
What Regulators Are Watching
Financial regulators have been notably attentive to AI-related risks. The SEC, FINRA, OCC, and their international counterparts have all signaled increased scrutiny of how financial institutions manage AI adoption.
The regulatory concern is not limited to data leakage. Regulators are examining whether AI-generated content in customer communications, research reports, and regulatory filings meets existing accuracy and disclosure standards. They are also assessing whether AI tool usage creates record-keeping obligations under existing securities regulations.
For broker-dealers subject to SEC and FINRA oversight, all business communications are subject to retention and review requirements. If an employee uses an AI tool to draft a client communication, the AI interaction itself may constitute a business record that must be preserved. Most organizations are not capturing these records.
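To make that gap concrete, here is a minimal sketch of what capturing an AI interaction as a preservable record might look like at a proxy or gateway layer. The function name and storage path are hypothetical, and a production deployment would write to WORM-compliant storage (or an audit-trail system of the kind permitted under the amended SEC Rule 17a-4) rather than a local directory.

```python
import hashlib
import json
import time
from pathlib import Path

# Hypothetical archive location; a real system would target WORM-compliant
# storage to satisfy broker-dealer retention requirements.
ARCHIVE_DIR = Path("/var/archive/ai-interactions")

def archive_interaction(user_id: str, tool: str, prompt: str, response: str) -> Path:
    """Persist one AI prompt/response pair as a timestamped, hash-stamped record."""
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    record = {
        "user_id": user_id,
        "tool": tool,
        "prompt": prompt,
        "response": response,
        "captured_at": time.strftime("%Y%m%dT%H%M%SZ", time.gmtime()),
    }
    # The content hash provides tamper evidence and a collision-resistant filename.
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    record["sha256"] = digest
    out_path = ARCHIVE_DIR / f"{record['captured_at']}-{digest[:12]}.json"
    out_path.write_text(json.dumps(record, sort_keys=True, indent=2))
    return out_path
```

Even a sketch this small surfaces the hard questions: which identity maps to user_id when usage happens in a personal browser, and whether the archived record then falls under the same supervision and review workflow as email and chat.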
Why Financial Institutions Chose Restriction
The financial sector’s rapid move to restrict AI tools reflects a rational calculation. The regulatory exposure from a single incident involving MNPI or client data could result in enforcement actions, fines, and reputational damage that far exceeds any productivity gains from AI adoption.
At a technology company, a leaked snippet of code is primarily an intellectual property concern. At a financial institution, an employee who shares pre-announcement earnings data with an AI tool creates potential securities fraud exposure. The risk is not just data loss. It is market integrity, and the enforcement mechanisms are correspondingly severe.
The restriction approach, however, carries the same limitations described in our earlier article on why blocking AI is not a sustainable answer. Financial institutions report the same pattern: employees find workarounds, usage moves to personal devices, and the organization loses visibility into exactly the interactions it most needs to monitor.
The Path Forward for Financial Services
Leading financial institutions are now moving beyond blanket restrictions toward governed AI adoption. This typically involves several parallel workstreams:
- Approved AI environments. Deploying internal or enterprise-tier AI tools with contractual protections, data residency guarantees, and no-training clauses that address the most acute data handling concerns.
- Content monitoring. Implementing AI DLP solutions that can detect MNPI patterns, client identifiers, and financial data in AI interactions, providing the visibility that regulators increasingly expect (a minimal detection sketch follows this list).
- Record keeping. Extending communication archiving systems to capture AI interactions that constitute business records under securities regulations.
- Policy frameworks. Developing AI acceptable use policies that are specific to financial services obligations, including explicit guidance on MNPI, client data, and trading-related information.
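As a concrete illustration of the content-monitoring workstream, the sketch below scans an outbound AI prompt against pattern rules. The rule names, identifier formats, and keywords are assumptions made up for illustration, not any vendor's actual detectors; production AI DLP combines exact-match client lists, document fingerprinting, and ML classifiers alongside rules like these.

```python
import re
from dataclasses import dataclass

# Illustrative patterns only. The account-number format is an assumed
# internal convention, not a real standard.
PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b[A-Z]{2}\d{8,12}\b"),
    "mnpi_keywords": re.compile(
        r"\b(pre-announcement|unannounced (merger|acquisition)|earnings preview)\b",
        re.IGNORECASE,
    ),
}

@dataclass
class Finding:
    rule: str
    snippet: str

def scan_prompt(text: str) -> list[Finding]:
    """Scan an outbound AI prompt and return any policy-relevant matches."""
    findings = []
    for rule, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append(Finding(rule=rule, snippet=match.group(0)))
    return findings

if __name__ == "__main__":
    prompt = "Summarize the earnings preview for account GB123456789."
    for f in scan_prompt(prompt):
        print(f"BLOCKED: {f.rule} -> {f.snippet!r}")
```

The design point is placement: scanning has to happen at the proxy or browser layer, before the prompt leaves the firm's perimeter, or the finding arrives too late to prevent the disclosure it was meant to catch.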
The Competitive Pressure
Financial institutions face a tension that will only intensify. AI tools offer genuine advantages in research analysis, document review, code development, risk modeling, and client service. Firms that figure out how to harness these capabilities within a compliant framework will have a meaningful edge over those that maintain indefinite bans.
The institutions that restricted AI first were right to recognize the risk. The institutions that will win long-term are those that solve the governance problem rather than avoiding it.