CFTC Staff Takes a “Measured First Step” in Artificial Intelligence

22 December 2024

Introduction

On 5 December 2024, the Commodity Futures Trading Commission (CFTC) Divisions of Clearing and Risk, Data, Market Oversight, and Market Participants issued a staff advisory on the use of artificial intelligence (AI) by CFTC-regulated entities (the Advisory).2 The Advisory comes nearly a year after CFTC staff (Staff) issued a Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets, which garnered 26 responses and helped to inform the CFTC’s guidance.3  

Importantly, the Advisory does not create any new compliance obligations for derivatives market participants who use AI solutions. Instead, consistent with the CFTC’s “technology neutral” approach, Staff took this opportunity to remind registered entities that they must continue to satisfy existing compliance obligations, whether they use AI or any other technology, either directly or through a third-party service provider. The Advisory highlights a number of AI use cases by derivatives market participants and identifies the Commodity Exchange Act (CEA) and CFTC regulatory requirements that may be implicated by each use case.

Chairman Rostin Behnam, in what is likely to be one of his last key acts as head of the agency, remarked that the Advisory is the CFTC’s first step in engaging with market participants on the topic of AI. However, as the Advisory itself notes, there is likely more to come: as AI technology evolves and as derivatives market participants develop other innovative use cases, the CFTC may issue future rulemakings or guidance.

Below, we set forth an overview of the key elements of the Advisory.

AI Risk Assessment

In the Advisory, Staff explicitly states its expectation that all CFTC-regulated entities will assess the risks of using AI and update their policies, procedures, controls, and systems as appropriate under applicable CEA and CFTC regulatory requirements. Whether it develops its own AI solutions or uses a third-party AI offering, a regulated entity remains responsible for compliance with existing laws and regulations. Although Staff articulates this expectation with respect to entities registered with the CFTC in some capacity, all market participants should consider adhering to this standard, i.e., performing a risk assessment and following generally accepted standards for the development, operation, reliability, capacity, and security of systems that use AI technology. As AI usage evolves and as existing AI tools are materially updated, market participants should consider conducting a fresh risk assessment.

AI Use Cases

Staff articulated use cases for which various registration categories may consider deploying AI technology and identified the core principles and regulatory obligations that each use could implicate. We consider a number of these below.

  1. Order Processing and Trade Matching: Designated contract markets (DCMs) may use AI’s analytic and predictive capabilities to anticipate trades before they are entered, reducing post-trade message latencies and optimizing system resources. Staff notes that DCMs must continue to provide open and competitive markets and protect their price discovery function.
  2. Market Surveillance: DCMs and swap execution facilities (SEFs) could use AI to investigate rule violations, detect abusive trade practices, and perform real-time market monitoring. According to Staff, the use of AI is not a substitute for adequate compliance staff and resources.
  3. System Safeguards: When using AI, DCMs, SEFs, and swap data repositories (SDRs) should continue to develop and maintain appropriate controls across (1) enterprise risk management and governance, (2) information security, (3) business continuity and disaster recovery, (4) capacity and performance planning, (5) systems operations, (6) systems development and quality assurance, and (7) physical security and environmental controls. Staff notes that derivatives clearing organizations (DCOs) may use AI to identify cyber vulnerabilities and enhance defenses or to update computer code, but these uses could be applied by all market participants. Staff reminds market participants to consider whether they continue to comply with system safeguard regulations when using AI in these manners.
  4. Notifications: The use of AI does not eliminate notification requirements. When a DCO, DCM, SEF, or SDR makes a material planned change to an automated system that could impact the system’s reliability, security, or adequate scalable capacity, it must provide Staff timely advance notice of such change. In addition to this notice, a DCO must also provide Staff with notice of all material changes to its risk analysis and oversight program.
  5. Member Assessment and Interaction: While Staff identifies DCOs as using AI to assess clearing members’ compliance with rules and AI chatbots to communicate with members (especially for nonintermediated clearing), these uses could be applied by all market participants. Staff explains that DCOs must continue to comply with participant admission and continuing eligibility requirements and to monitor credit exposure.
  6. Settlement: DCOs may use AI to validate data, detect data anomalies before settlement, and identify failed trades. These uses could facilitate netting or position offset. Staff explains that DCOs must continue to timely complete settlement and limit their exposure to settlement bank risks.
  7. Risk Assessment and Risk Management: Staff anticipates that swap dealers may use AI to calculate and collect margin for uncleared swaps. In this case, Staff notes that swap dealers would need to manage the risk associated with any such system. 
  8. Compliance and Recordkeeping: Other registrants—such as swap dealers, futures commission merchants, commodity pool operators, commodity trading advisors, introducing brokers, retail forex dealers, and associated persons—might use AI to support the accuracy and timeliness of financial information and risk disclosures provided to the CFTC, National Futures Association, or their customers. Staff reminds these registrants that relevant compliance obligations will continue to apply even if AI is used to satisfy those obligations. 
  9. Customer Protection: If futures commission merchants use AI to account for customer segregated funds, Staff confirms that they must continue to satisfy the applicable regulatory requirements.

Statement of Commissioner Johnson

Commissioner Kristin N. Johnson, who has long been an advocate for enhanced oversight and protective measures related to AI, issued a statement concurrent with the publication of the Advisory. In it, she described her vision for an AI Fraud Task Force within the Division of Enforcement and increased enforcement resources to effectively supervise market participants. She also called for a formal policy of enhanced penalties on those who use AI to engage in fraud or other illegal activities, especially when they lure vulnerable investors using AI (including the use of so-called “deepfakes”). Finally, Commissioner Johnson advocated for an interagency task force focused on AI and an open dialogue to gather information about market participants’ use and adoption of AI technologies.

Key Takeaways

The risks of AI technology have been on the CFTC’s radar and will continue to be a priority, even under the new administration. CFTC-regulated entities should anticipate continued engagement by the CFTC on this topic and should take seriously the expectations Staff sets forth in the Advisory, even though it is not formal CFTC guidance or a rulemaking. In light of the Advisory, market participants may consider documenting each of their AI use cases, any risk assessments that have been performed, and how policies and procedures were updated to reflect the risks of the AI technology in use. Any market participant contemplating a new AI tool should also consider its existing compliance obligations and whether the technology implicates any of them.