On January 27, 2026, the Federal Trade Commission (FTC) signaled a reduced appetite for regulating artificial intelligence. At the Privacy State of the Union Conference in Washington, DC, FTC Bureau of Consumer Protection Director Chris Mufarrige stated there is “no appetite for anything AI-related” in the FTC’s rulemaking pipeline, while adding that the agency has other rule ideas in development. Mufarrige’s statement follows the FTC’s December 2025 decision to reopen and set aside a 2024 consent order involving AI writing assistant Rytr, which had barred the company from providing AI-enabled services alleged to help users write false or misleading product reviews.
This shift aligns with the current federal administration’s broader deregulatory stance on AI, which emphasizes removing barriers to innovation rather than expanding agency-made rules. The FTC specifically cited President Trump’s AI Action Plan as part of its rationale for revisiting the Rytr matter, pointing to a policy preference for rolling back rules and decisions viewed as obstacles to AI development. Mufarrige also indicated that the Commission will pursue more “sparing” rulemaking than the Biden-era FTC, suggesting the agency may lean more heavily on selective enforcement priorities and existing legal authorities instead of launching new AI-specific regulations.
Importantly, the FTC is not stepping back from privacy enforcement altogether. Mufarrige emphasized that protecting children’s privacy online will “play a big role” in the coming year’s enforcement docket, with particular focus on how age verification interacts with the Children’s Online Privacy Protection Act (COPPA), including any “tension between the two” and how it might be resolved. The agency’s recent COPPA track record, including a $10 million settlement with Walt Disney Co., reflects what Mufarrige described as a consistent theme: ensuring “that parents have control over their kids’ data.”