U.S. AI policy group sues OpenAI, saying its GPT-4 model threatens public safety
On March 31, a U.S. AI policy organization accused OpenAI of violating consumer protection rules by releasing the GPT-4 model and asked the U.S. Federal Trade Commission to immediately prevent OpenAI from launching new GPT models and to conduct independent evaluations.
The organization, the Center for AI and Digital Policy (CAIDP), filed a complaint today arguing that the AI text generation tool launched by OpenAI is "biased, deceptive and risky to public safety," in breach of consumer protection rules.
IT House notes that a high-profile open letter previously called for a moratorium on large-scale generative AI experiments. CAIDP chairman Marc Rotenberg is among the signatories of that letter, alongside a number of AI researchers and OpenAI co-founder Elon Musk. Like the letter, the complaint calls for slowing the development of generative AI models and imposing stricter government regulation.
CAIDP points to possible threats posed by OpenAI's GPT-4 generative text model, announced in mid-March. These include the potential for GPT-4 to generate malicious code, as well as the possibility that biased training data leads it to perpetuate stereotypes or discriminate unfairly by race and gender in areas such as hiring. The complaint also points to serious privacy issues with OpenAI's product interface, such as a recently discovered vulnerability that exposed ChatGPT users' chat histories and possibly payment information to other users.
OpenAI has previously acknowledged publicly the potential threats posed by AI text generation, but CAIDP argues that GPT-4 crosses the line into causing consumer harm and should trigger regulatory action. CAIDP seeks to hold OpenAI accountable for violating Section 5 of the Federal Trade Commission Act, which prohibits unfair and deceptive trade practices. The complaint alleges that "OpenAI made GPT-4 available to the public for commercial purposes with full knowledge of these risks." CAIDP also characterizes generative models' tendency to confidently fabricate nonexistent facts as a form of deception.
In the complaint, CAIDP asks the FTC to halt any further commercial deployment of the GPT model and to require an independent evaluation before any future model rollout. It also calls for a publicly accessible reporting tool, similar to the one consumers use to submit fraud complaints, and seeks clarity from the FTC on rules for generative AI systems, building on the agency's ongoing but still relatively informal research and evaluation of AI tools.
The FTC has previously expressed interest in regulating AI tools, warning that biased AI systems could trigger enforcement action. At an event this week hosted jointly with the Justice Department, FTC Chairwoman Lina Khan said the agency will look for signs that large, established technology companies are trying to squeeze out competition.