
Nonprofit files FTC complaint against OpenAI's GPT-4

It says children may be in danger (among other things).
By Mike Pearl
The OpenAI logo on a phone with economics-related imagery in the background
Credit: Photo Illustration by Omar Marques / SOPA Images / LightRocket via Getty Images

On Thursday, the Center for AI and Digital Policy (CAIDP), an advocacy nonprofit, filed a complaint with the Federal Trade Commission (FTC) targeting OpenAI. The complaint argues that the company's latest large language model, GPT-4, which can be used to power ChatGPT, violates FTC rules against deception and unfairness. This comes on the heels of an open letter signed by major figures in AI, including Elon Musk, calling for a six-month pause on the training of systems more powerful than GPT-4.

The complaint asks the Commission "to initiate an investigation into OpenAI and find that the commercial release of GPT-4 violates Section 5 of the FTC Act." That section prohibits unfair and deceptive practices, and the FTC, the complaint explains, has issued guidance about AI that outlines the "emerging norms for the governance of AI that the United States government has formally endorsed."

What's so scary about GPT-4, according to this complaint? It is allegedly "biased, deceptive, and a risk to privacy and public safety." The complaint also says that GPT-4 makes unproven claims and has not been sufficiently tested.

The CAIDP also points out — using quotes from past reports written by OpenAI itself — that OpenAI knows about the potential to bring about, or worsen, "disinformation and influence operations," and that the company has expressed concerns about "proliferation of conventional and unconventional weapons" thanks in part to AI. OpenAI has also, the complaint notes, warned the public that AI systems could "reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement."

The complaint also rips into OpenAI for apparently not conducting safety checks aimed at protecting children during GPT-4's testing period. And it quotes Ursula Pachl, Deputy Director of the European Consumer Organization (BEUC), who said, "public authorities must reassert control over [AI algorithms] if a company doesn't take remedial action."

By quoting Pachl, the CAIDP is clearly invoking — if not directly calling for — major government moves aimed at regulating AI. European regulators are already weighing a much more heavy-handed, rules-based approach to this technology. And this comes as companies are looking to make money in the generative AI space. Microsoft Bing's GPT-4-powered chatbot, for instance, is now generating ad revenue. Such companies are no doubt eagerly awaiting the FTC's response.
