


Complaint seeks a ban on GPT-4: OpenAI meets none of the FTC's artificial intelligence standards
A few days ago, Musk, Yoshua Bengio, and others jointly signed an open letter calling on all AI laboratories to immediately suspend the training of AI models more powerful than GPT-4. Now, someone wants to stop the already-released GPT-4 as well.
This time it is the non-profit Center for Artificial Intelligence and Digital Policy (CAIDP) taking aim at GPT-4. CAIDP has asked the U.S. Federal Trade Commission (FTC) to investigate OpenAI and to prohibit the company from further commercial releases of GPT-4.
File address: https://cdn.arstechnica.net/wp-content/uploads/2023/03/CAIDP-FTC-Complaint-OpenAI-GPT-033023.pdf
CAIDP filed the complaint with the FTC because it believes that "the consumer product GPT-4 released by OpenAI is biased, deceptive, and a risk to privacy and public safety. The model's outputs cannot be proven or reproduced, and no independent assessment was undertaken prior to deployment."
CAIDP calls for independent oversight and evaluation of all commercial artificial intelligence products offered in the United States, and for the necessary "safeguards" to be established to protect consumers, businesses, and the marketplace.
The FTC has previously stated that the use of artificial intelligence should be "transparent, explainable, fair, and empirically sound, while fostering accountability." CAIDP argues that "OpenAI's GPT-4 satisfies none of these requirements."
It has only been two weeks since GPT-4 was released, and people are already deeply divided over this kind of powerful AI model. On one side, those who want to halt models like GPT-4 believe they pose growing risks to information security and even to human society; on the other, some believe this is precisely the moment for AI to flourish and that technological progress should be accelerated rather than paused.
Interestingly, OpenAI CEO Sam Altman posted a new tweet, "Stay calm in the center of the storm," which may be his response to the recent wave of calls to suspend research on GPT-4 and similar models.
On the question of risk, OpenAI said at release time that it had asked external experts to assess the potential dangers posed by GPT-4. CAIDP, however, spells out in its FTC filing exactly which rules it believes GPT-4 violates.
CAIDP argues that GPT-4 seriously undermines commercial fairness: "The commercial release of GPT-4 violates Section 5 of the FTC Act, the FTC's well-established guidance to businesses on the use and advertising of AI products, and the emerging norms for the governance of artificial intelligence."
In addition, OpenAI has not disclosed any technical details of GPT-4, which is another reason CAIDP brought the matter to the FTC.
CAIDP said: "OpenAI did not disclose details about the architecture, model size, hardware, computing resources, training techniques, dataset construction or training methods, and it is common practice in the research community to document Training data and training technology for large language models, but OpenAI chose not to do these things for GPT-4. In particular, generative artificial intelligence models are not ordinary consumer products, because they may exhibit some abnormal behaviors during use, These behaviors may not have been discovered by the issuing company before."
Complaint details
Specifically, CAIDP's complaint targets GPT-4 and the related ChatGPT, pointing out a range of potential risks.
For example, OpenAI itself states in the "GPT-4 System Card" that GPT-4 may reinforce and reproduce specific biases and worldviews, including harmful stereotypes and demeaning associations for certain marginalized groups. CAIDP also cites an OpenAI blog post acknowledging that ChatGPT, a related large model, sometimes responds to harmful instructions or exhibits biased behavior.
In the complaint submitted to the FTC, CAIDP states that "OpenAI released GPT-4 to the public for commercial use with full knowledge of these risks." The complaint also alleges that "the GPT-4 System Card provides no details about the safety checks OpenAI conducted during testing, nor about any steps OpenAI takes to protect children, which raises concerns about children's use of GPT-4."
CAIDP also cites a concern raised by the European consumer organization BEUC: if ChatGPT were used for consumer credit or insurance scoring, could it produce unfair and biased outcomes? BEUC's tweet is likewise cited in CAIDP's complaint.
In addition, on the cybersecurity front, ChatGPT can be used for phishing, producing fake text, or generating malicious code. On the privacy front, CAIDP noted reports this month that OpenAI had exposed users' private chats to other users.
In another case, an AI researcher described how ChatGPT could be exploited to take over someone's account, view their chat history, and access their billing information without their knowledge. OpenAI has since fixed the vulnerability.
CAIDP also notes that GPT-4 can produce text responses from image inputs, a capability with significant implications for privacy and personal autonomy, since it could allow users to link personal images to detailed personal data. OpenAI is understood to have suspended the image-to-text feature, though its actual status is hard to verify.
CAIDP believes the FTC should prohibit OpenAI from further commercial deployments of GPT, require independent assessment of GPT products before deployment and throughout the GPT AI lifecycle, require OpenAI to comply with the FTC's AI guidance, and establish a publicly accessible incident-reporting mechanism for GPT-4, similar to the FTC's mechanism for reporting consumer fraud.
CAIDP also urges the FTC to publish further standards to serve as a baseline for products in the generative AI market.
A debate in the AI community
In the past two days, thousands of people signed a petition calling for a pause on developing large AI models beyond GPT-4, and now CAIDP has asked the FTC to investigate OpenAI and bar further commercial releases of GPT-4. In just a day or two, the debate has exploded, with leading AI figures and experts publicly weighing in on both sides, some opposed and some in favor.
On the question of pausing development of large AI models beyond GPT-4, Turing Award winner Yoshua Bengio, Tesla CEO (and OpenAI co-founder) Elon Musk, New York University professor emeritus Gary Marcus, and UC Berkeley professor Stuart Russell are all in favor; each has signed the open letter calling for a pause on giant AI experiments. Notably, Marc Rotenberg, president and founder of CAIDP, also signed the open letter.
Open letter address: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
However, Yann LeCun, who has long been critical of ChatGPT, publicly stated that he would not sign the open letter and disagreed with its content.
Thomas G. Dietterich, professor emeritus at Oregon State University, said on Twitter: "I didn't sign it either. The letter is filled with frightening rhetoric and ineffective or nonexistent policy proposals. There are important technical and policy issues, and people are working on them." LeCun publicly replied, "I agree."
Andrew Ng also published a post opposing the petition, saying: "GPT-4 has many new applications in education, healthcare, food, and other areas that will help many people. Unless governments step in, enforcing a pause and preventing every team from scaling up LLMs is unrealistic. Moreover, asking governments to pause emerging technologies they don't understand is anti-competitive, sets a bad precedent, and is bad policy."
Yuandong Tian later backed Ng's view, saying he would not sign the moratorium either: once this kind of thing starts, there is no way to stop or reverse the trend; it is the inevitability of evolution. Rather than pausing, he argued, we should look forward from a different perspective, better understand LLMs, adapt to them and harness their power, and "feel the heat."
Yi Tay, who recently announced his departure from Google Brain, where he was a senior researcher, quipped: "I'll sign if people who randomly discuss LLMs on the Internet are also banned for six months."