FTC to Review AI Chatbot Risks With Focus on Privacy Harms

The U.S. Federal Trade Commission (FTC) is preparing a major review of AI chatbot risks, focusing on privacy harms and dangers to children. The study is expected to compel tech giants such as OpenAI, Google, and Meta to disclose how their chatbots store and share user data.

According to people familiar with the matter, the commission's review will focus on potential harms to children and examine how companies like OpenAI, Google, and Meta handle sensitive information collected through their chatbot services, a move that could reshape how tech giants manage data privacy and user safety.

Study to Examine Privacy Harms

The planned study will examine the risks of interacting with artificial intelligence chatbots, particularly privacy harms and dangers to young users. Regulators will look at how these services store, use, and share personal data, and will also assess threats such as emotional manipulation and exposure to harmful content.

Although the FTC has not yet made a formal announcement, the move reflects growing pressure on regulators to respond to concerns about the safety of fast-growing AI technologies.

White House Stance on AI Oversight

A White House spokesperson declined to comment directly on the FTC study but emphasized that the administration prioritizes safety while supporting innovation. The statement highlighted President Trump’s pledge to strengthen America’s leadership in artificial intelligence, cryptocurrency, and other emerging technologies without compromising user well-being.

The administration is hosting an AI-focused event with industry leaders, including executives from Meta, Apple, OpenAI, and Microsoft, underscoring the heightened national attention on AI regulation.

Rising Scrutiny of Chatbot Safety

The surge in AI adoption has triggered mounting concerns about how well developers are safeguarding users. Just last week, OpenAI faced a lawsuit from the parents of a California student who, they say, used ChatGPT before taking his own life. The case has amplified worries about chatbots' influence on vulnerable users.

More broadly, experts note that AI chatbot risks include encouraging unsafe behavior, providing dangerous instructions, and eroding traditional privacy protections. In response, companies such as Meta and Alphabet have already pledged to strengthen safeguards, particularly around interactions with minors.

Regulatory Pressure Builds

The FTC’s move comes despite earlier White House guidance urging regulators to show restraint to avoid stifling innovation. Regulators counter that protecting children and addressing privacy harms must take precedence.

The agency plans to use its Section 6(b) authority, which allows it to compel companies to hand over information, to gather data from nine of the largest chatbot providers, including OpenAI, maker of ChatGPT, and Google, maker of Gemini.

FTC Commissioner Melissa Holyoak has previously called for a review of AI services, warning of “alarming” risks such as chatbots encouraging crime, self-harm, or inappropriate interactions with children.

Industry Response and Next Steps

AI companies have so far responded cautiously. OpenAI and Meta declined to comment on the review but pointed to recent safety improvements, while Alphabet has yet to issue a statement.

The FTC has conducted similar studies in the past, including investigations into drug pricing and big tech’s investments in AI startups. Typically, the agency issues a report once it has reviewed company data and analyzed the risks.

As the review unfolds, the outcome could significantly shape future regulations around AI. For now, experts agree that addressing AI chatbot risks will be crucial in balancing innovation with public safety.