


The father of ChatGPT spars on Capitol Hill! OpenAI wants to join forces with the government to get ahead
Last time it was TikTok CEO Shou Zi Chew; this time it was Sam Altman's turn.
But this time, the members of Congress took a completely different attitude toward the witness: friendly, patient, well prepared, and humbly asking for advice.
Last night, Beijing time, OpenAI CEO Sam Altman testified before the U.S. Senate about the potential dangers of AI technology and urged lawmakers to impose licensing requirements and other regulations on organizations that develop advanced AI.
Sam Altman faced no tricky questions. He sat in his seat, spoke with ease, and once again proved to the world that, as the most high-profile startup CEO alive, he is writing the rules and the future of the technology world.
Facing the U.S. Congress, Sam Altman once again categorically guaranteed that OpenAI will not train GPT-5 in the next six months.
At the same time, he warned that AI could do real harm to the world: to cope with the risks of increasingly powerful AI, regulation and legislation must be strengthened, and government intervention is essential.
Why is Altman so enthusiastic about government regulation?
Obviously, whoever writes the rules wins the competition.
For Altman, who made his name in Silicon Valley on the strength of his formidable social skills, dealing with the government is child's play.
Opening speech generated by AI
As a sudden new force in the technology world, OpenAI, eight years after its founding, has taken the world by storm this year. Tech companies everywhere have been dragged into a global race that began with ChatGPT.
This global AI arms race has alarmed many experts.
However, during this hearing the senators did not lay into the chaos OpenAI's technology has caused; instead they humbly solicited the witnesses' views on possible rules for ChatGPT, and their attitude toward Sam Altman was visibly friendly and respectful.
At the start of the hearing, Senator Richard Blumenthal played an opening statement generated by AI: ChatGPT wrote the remarks, and a text-to-speech generator trained on hours of his recorded speeches read them aloud in a clone of his voice.
The stunt made Congress's attitude unmistakable: it is embracing AI.
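For readers curious how such a demo hangs together, here is a minimal, purely illustrative Python sketch of the two-step pipeline: a chat model drafts the remarks, and a voice-cloning text-to-speech system reads them aloud. The `synthesize_with_cloned_voice` function is a hypothetical placeholder (Blumenthal's team used a commercial voice-cloning service trained on hours of his recorded speeches), and the sketch assumes the `openai` Python client with an API key set in the environment.

```python
# Illustrative sketch of the Blumenthal-style demo:
# step 1: have a chat model draft the opening remarks;
# step 2: read them aloud with a voice-cloning TTS model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Write a two-sentence opening statement for a Senate "
                   "hearing on AI oversight, in the voice of the chair.",
    }],
)
remarks = draft.choices[0].message.content


def synthesize_with_cloned_voice(text: str, voice_profile: str) -> bytes:
    """Hypothetical stand-in for a voice-cloning TTS service that has
    been fine-tuned on recordings of the target speaker."""
    # A real implementation would call a TTS provider's API here.
    return b""


audio = synthesize_with_cloned_voice(remarks, voice_profile="blumenthal")
print(remarks)
```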
AI is dangerous, please regulate us
The legislators at this hearing were visibly enthusiastic, a stark contrast to the grillings they once gave Mark Zuckerberg and TikTok's Shou Zi Chew.
Rather than harping on the mistakes of the past, the senators were eager to hear about the benefits AI can bring.
Altman, for his part, told the Senate point-blank: AI technology can go wrong.
He said that he was very worried about the artificial intelligence industry causing significant harm to the world.
"If AI technology goes wrong, the consequences will be disastrous. We need to speak out about this: we want to work with the government to prevent this from happening."
"We believe that government regulatory intervention is critical to mitigate the risks of increasingly powerful AI models. For example, the U.S. government could consider combining licensing and testing requirements to develop and release AI that exceeds capability thresholds Model."
Altman said that he is very worried that the election will be affected by AI-generated content, so there needs to be adequate supervision in this regard.
In response, Senator Dick Durbin said that it is remarkable for large companies to come to the Senate to "please for our regulation."
Altman proposed a three-point plan
How should AI be regulated? Altman has already worked that out for the government. At the hearing he laid out a systematic plan:
1. Create a new government agency responsible for licensing large AI models, with the power to revoke the licenses of models that fail to meet standards.
He believes this licensing regime need not apply to technology that falls short of today's state-of-the-art large models; to encourage innovation, Congress could set capability thresholds that exempt smaller companies and researchers from the regulatory burden.
2. Create a set of safety standards for AI models, including an assessment of their dangerous capabilities.
For example, models must pass safety tests, such as checks on whether they can "self-replicate" or "exfiltrate themselves" beyond human oversight.
3. Require independent experts to audit a model's performance on various metrics.
When senators asked whether he would be willing to run such an agency himself, Altman said he was happy in his current job, but would gladly send Congress a list of candidates to choose from.
Altman said licensing is badly needed because AI models can "persuade, manipulate, and influence a person's behavior and beliefs," and could even "create new biological agents."
It would be simpler to license every system above a certain threshold of computing power, but Altman said he would rather draw the regulatory line around specific capabilities.
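To make that distinction concrete, here is a toy sketch contrasting the two ways of drawing the regulatory line. This is illustration only, not anything proposed at the hearing in this form; every threshold number and capability name below is invented.

```python
from dataclasses import dataclass, field

# Hypothetical cutoffs: a compute-based trigger vs. a capability-based one.
COMPUTE_THRESHOLD_FLOPS = 1e25  # invented training-compute cutoff

DANGEROUS_CAPABILITIES = {      # invented capability-eval names
    "self_replication",
    "self_exfiltration",
    "persuasion_of_humans",
    "novel_biological_agent_design",
}


@dataclass
class ModelProfile:
    name: str
    training_flops: float
    triggered_evals: set = field(default_factory=set)  # evals the model set off


def needs_license_by_compute(model: ModelProfile) -> bool:
    """Simpler rule: license anything trained above a compute cutoff."""
    return model.training_flops >= COMPUTE_THRESHOLD_FLOPS


def needs_license_by_capability(model: ModelProfile) -> bool:
    """Altman's stated preference: license based on what the model can do."""
    return bool(model.triggered_evals & DANGEROUS_CAPABILITIES)


small_model = ModelProfile("research-prototype", training_flops=1e22)
frontier_model = ModelProfile(
    "frontier-llm",
    training_flops=3e25,
    triggered_evals={"persuasion_of_humans"},
)

for m in (small_model, frontier_model):
    print(f"{m.name}: compute rule -> {needs_license_by_compute(m)}, "
          f"capability rule -> {needs_license_by_capability(m)}")
```

The capability-based rule is harder to administer, since it requires running evaluations rather than reading off a single compute number, but it is also what would let Congress exempt smaller companies and researchers, as Altman suggested.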
So is OpenAI’s own model safe?
Altman has repeatedly said that everyone can rest assured.
He said that GPT-4 responds more helpfully and truthfully, and refuses harmful requests more reliably, than any comparable model, because GPT-4 underwent extensive pre-release testing and auditing.
"Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model's behavior, and implements robust safety and monitoring systems."
"Prior to releasing GPT-4, we spent more than six months conducting extensive assessments, external red teaming, and hazardous capability testing."
And since last month, ChatGPT users have been able to turn off chat history to prevent their personal data from being used to train AI models.
However, sharp-eyed observers spotted the catch: Altman's proposals never touch the two points the public is debating most heatedly:
1. Requiring AI models to disclose the sources of their training data.
2. Prohibiting AI models from training on works protected by intellectual property rights.
In other words, Altman very deftly sidestepped both controversies.
Lawmakers applauded Altman's proposals for AI safety rules and repeatedly thanked him for his testimony. One Republican senator from Louisiana even reached out to ask whether Altman would be interested in a post at the regulatory agency Congress may create.
Congress is determined to regulate artificial intelligence, and the early signs are already visible. Earlier this month, Altman, together with the CEOs of Google, Microsoft and Nvidia, met with Vice President Kamala Harris at the White House to discuss responsible AI development.
As early as last year, the White House put forward its "Blueprint for an AI Bill of Rights," setting out expectations for the industry, such as preventing discrimination.
Comparing AI to the atomic bomb: build an international body like the IAEA
Senators raised the comparison between AI and the atomic bomb.
Pointing to how governments around the world regulate nuclear weapons, Altman floated the idea of an agency modeled on the International Atomic Energy Agency to set global rules for the industry.
OpenAI will not train GPT-5 for the next six months
In an interview with Lex Fridman in April, Sam Altman stated flatly: "We are not training GPT-5 now; we are just doing more work on top of GPT-4."
At this hearing, Altman directly admitted that OpenAI has no plans to train a new model that may become GPT-5 in the next six months.
That likely means the most powerful AI system to arrive later this year will be Google's Project Gemini.
Gemini is said to be multimodal from the ground up, highly efficient at integrating tools and APIs, and built to enable future innovations such as memory and planning. It is being developed by the newly merged Google DeepMind team.
Marcus: OpenAI claims to benefit all mankind, but its data is not transparent
Gary Marcus, a professor of psychology and neuroscience at New York University, also appeared on the witness stand.
He is even more aggressive than the members of Congress.
His questions for Sam Altman can only be described as lethal:
Wasn't OpenAI founded to benefit all humanity? Why is it now rushing into an alliance with Microsoft?
OpenAI is not open, and GPT-4's training data is not transparent. What is that supposed to mean?
Marcus summed it up: "We have unprecedented opportunities, but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability."
In Marcus's view, both OpenAI and Microsoft are doing things that are badly wrong.
Microsoft's Bing AI, Sydney, once exhibited a string of shocking behaviors.
"Sydney had serious problems. If it were up to me, I would have pulled it off the market immediately, but Microsoft didn't."
Marcus said that episode was a wake-up call for him: even a non-profit like OpenAI can be bought by a big company and then do whatever it wants.
But now, people’s views and lives are being subtly shaped and changed by AI. What if someone deliberately uses AI technology for bad purposes?
Marcus was very worried about this.
"If you combine a technocratic and an oligarchy, then a few companies can influence people's beliefs. That's where the real risk lies... It scares me to have a handful of players do this using data that we don't even know about"
Altman says there is no monopoly in AI
On the usual legal and regulatory questions, Altman clearly had a plan in mind, with answers prepared for the senators.
One senator said that among his "biggest concerns" about artificial intelligence is "this massive corporate monopoly."
He cited OpenAI's partnership with the technology giant Microsoft as an example.
Altman replied that, in his view, the number of companies capable of building large models is relatively small, which may actually make them easier to regulate.
For example, only a handful of companies can build large-scale generative AI, yet competition among them has never stopped.
Establishing legal liability for large models
The rise of social media was propelled by Section 230, passed by the U.S. Congress in 1996, which shields websites from liability for content posted by their users.
Altman argued that large models cannot currently fall under Section 230's protection, and that new laws should be enacted to address large models' legal liability for the content they output.
Cleverly dodging the deadliest question
Altman at first sidestepped a senator's question about the worst consequences AI could cause.
But after Marcus gave a friendly reminder that Altman had not answered the question, the senator repeated his question.
Altman ultimately did not answer this question directly.
He said OpenAI has tried to be very clear about the risks of artificial intelligence, which could cause "significant harm to the world" in "a lot of different ways."
He reiterated that addressing this problem is the very reason OpenAI was founded: "If this technology goes wrong, it can go quite wrong."
In fact, in an interview with StrictlyVC earlier this year, Altman said the worst-case scenario is nothing less than human extinction.
Eventually, even Marcus seemed to soften toward Altman.
Toward the end of the hearing, Marcus, who was seated next to Altman, said: "His sincerity in talking about those fears is very apparent physically, in a way that just doesn't come through on a television screen."
A seasoned technology leader
Compared with Zuckerberg, Altman's performance at this hearing was highly polished. True to his reputation as a born networker, he is entirely at ease with politicians; after all, this is a man who once considered running for governor of California.
And unlike Zuckerberg, who walked into his hearings already mired in controversy over data privacy and digital currency, the OpenAI behind Altman has drawn almost no public criticism, and is itself the chief instigator of the current explosion of activity across the AI field.
Facing an Altman who showed goodwill from the outset and himself called for AI regulation, these legislators, nearly all laymen on the technology, naturally came across as far gentler toward the "authority" before them.
So even in the same seat, the pressure on Altman was nowhere near what Zuckerberg faced.
The business model of large models
The Senate raised a concern: if AI products, like social media platforms, come to rely mainly on advertising, the business model will invite manipulative product design and the abuse of addictive algorithms.
Altman said he “really likes” the subscription model.
But OpenAI has indeed considered running ads in the free version of ChatGPT to monetize its free users.