


The first government review of ChatGPT may come from the US Federal Trade Commission; OpenAI: GPT-5 not yet in training
produced by Big Data Digest
Following an open letter signed by hundreds of well-known artificial intelligence experts, technology entrepreneurs, and scientists, ChatGPT has become a target of public criticism.
On March 30, the U.S. Federal Trade Commission (FTC) received a complaint from the Center for Artificial Intelligence and Digital Policy (CAIDP) requesting an investigation into OpenAI and its product GPT-4.
The complaint notes that the FTC has stated that the use of artificial intelligence should be “transparent, explainable, fair, and empirically sound, while promoting accountability,” but argues that OpenAI’s GPT-4 “meets none of these requirements” and is “biased, deceptive, and a risk to privacy and public safety.”
CAIDP is an independent, nonprofit research organization based in Washington, D.C., that specializes in “assessing national AI policies and practices, training AI policy leaders, and promoting democratic values in AI.”
ChatGPT’s first government review may come from the FTC
Legal experts say that, given the pace at which artificial intelligence is growing and developing, the FTC may formulate rules on artificial intelligence in 2023.
A December 2022 article revealed that the FTC may be preparing federal regulations on artificial intelligence, although AI-focused bills filed in Congress have yet to gain significant support.
The article stated: “In recent years, the Federal Trade Commission has issued two publications indicating that its focus on artificial intelligence regulation will further increase.” At the same time, the FTC has developed AI expertise through enforcing various regulations, such as the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and the FTC Act.
In addition to the regulations already in force and the AI rules still to come, articles published by the FTC also signal the direction of AI oversight.
Ten days ago, the FTC published a business blog post titled “Chatbots, deepfakes, and voice clones: AI deception for sale,” written by Michael Atleson, an attorney in the FTC’s Division of Advertising Practices.
The FTC Act’s “prohibition on deceptive or unfair conduct can apply to the making, selling, or use of a tool that is effectively designed to deceive, even if that is not its intended or sole purpose,” the blog post said. Businesses should even consider whether they should make or sell such an AI tool at all, and whether they can effectively mitigate its risks.
“If you decide to make or offer such a product, you should take all reasonable precautions before it hits the market,” the blog post states. “The FTC has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury.”
In another post in February, “Keep your AI claims in check,” Atleson also wrote that the FTC may “question” whether a company advertising an AI product is aware of the risks.
“Before launching it on the market, you need to understand the reasonably foreseeable risks and impact of your AI product. If something goes wrong (maybe it fails or yields biased results), you can’t simply blame the third-party developer of the technology. And you can’t say you’re not responsible because that technology is a ‘black box’ you can’t understand or didn’t know how to test.”
More than a hundred technology tycoons “denounce” ChatGPT; OpenAI: GPT-5 not yet in training
Federal scrutiny may not come soon enough, but the tech community has already voiced concentrated concern about the rapidly growing ChatGPT.
Recently, an open letter signed by hundreds of well-known artificial intelligence experts, technology entrepreneurs, and scientists called for a moratorium on the development and testing of artificial intelligence technologies more powerful than OpenAI’s language model GPT-4, so that the risks they may pose can be properly studied.
The letter warns that language models like GPT-4, which can already compete with humans on a growing range of tasks, could be used to automate jobs and spread misinformation, and even raises the concern that artificial intelligence systems could replace humans and reshape civilization.
“We call on all artificial intelligence laboratories to immediately pause, for at least 6 months, the training of artificial intelligence systems more powerful than GPT-4 (including the GPT-5 currently being trained),” the letter reads.
Signatories of the letter include Yoshua Bengio, a professor at the University of Montreal and one of the founders of modern AI; Jaan Tallinn, co-founder of Skype; and the famous Elon Musk.
The letter was written by the Future of Life Institute, an organization focused on technological risks to humanity. It adds that the pause in research should be “public and verifiable” and should include everyone working on advanced AI models like GPT-4.
Microsoft and Google did not respond to requests for comment on the letter. The signatories appear to include employees of several tech companies building advanced language models, including Microsoft and Google.
In response, OpenAI spokesperson Hannah Wong said that the company spent more than six months working on the safety and alignment of GPT-4 after training the model.
Hannah Wong added that OpenAI is not yet training GPT-5.
Related reports:
https://www.php.cn/link/9ad97add7f3d9f29cd262159d4540c96