


The father of ChatGPT spars on Capitol Hill! OpenAI wants to join forces with the government to gain the upper hand
Last time it was TikTok CEO Shou Zi Chew; this time it was Sam Altman's turn.
But this time, the members of Congress showed him a completely different attitude: they were friendly, patient, well prepared, and humbly asked for his advice.
Last night Beijing time, OpenAI CEO Sam Altman testified before the U.S. Senate about the potential dangers of AI technology and urged lawmakers to impose licensing requirements and other regulations on organizations that build advanced AI.
Sam Altman faced no tricky questions. He sat in his seat and spoke with ease, once again proving to the world that, as the most high-profile startup CEO alive, he is writing the rules and the future of the technology world.
Facing the U.S. Congress, Sam Altman once again flatly promised that OpenAI will not train GPT-5 in the next six months.
At the same time, he warned the world: AI may cause real harm. To deal with the risks of increasingly powerful AI, we need stronger oversight and legislation, and government intervention is essential.
Why is Altman so active in government regulation?
Obviously, whoever becomes the rule-maker wins the whole competition.
For Altman, who made his name in Silicon Valley on the strength of his social skills, dealing with the government is as easy as reaching into his own pocket.
Opening speech generated by AI
As a sudden new force in the technology world, OpenAI, eight years after its founding, has stirred up the whole world at lightning speed this year. Technology companies everywhere have been forced into a global race that began with ChatGPT.
This global AI arms race has alarmed many experts.
During this hearing, however, the senators did not criticize the disruption caused by OpenAI's technology; instead they humbly solicited the witnesses' opinions on potential rules for ChatGPT, and their attitude toward Sam Altman was visibly friendly and respectful.
At the start of the hearing, Senator Richard Blumenthal played opening remarks that ChatGPT had written in his style, read aloud by voice-cloning software: a text-to-speech generator trained on hours of his own speeches.
This move proves that Congress has a clear-cut attitude towards "embracing AI".
AI is dangerous, please regulate us
At this hearing the legislators were clearly enthusiastic, in stark contrast to the harsh grilling they once gave Mark Zuckerberg and Shou Zi Chew.
Rather than harping on the mistakes of the past, senators are eager for the benefits that AI can bring.
And Altman told the Senate straight to the point: AI technology can go wrong.
He said that he was very worried about the artificial intelligence industry causing significant harm to the world.
"If AI technology goes wrong, the consequences will be disastrous. We need to speak out about this: we want to work with the government to prevent this from happening."
"We believe that government regulatory intervention is critical to mitigating the risks of increasingly powerful AI models. For example, the U.S. government could consider combining licensing and testing requirements for the development and release of AI models that exceed a capability threshold."
Altman said that he is very worried that the election will be affected by AI-generated content, so there needs to be adequate supervision in this regard.
In response, Senator Dick Durbin remarked that it was extraordinary for large companies to come to the Senate and "plead with us to regulate them."
Altman proposed a three-point plan:
How to regulate? Altman has already thought it through for the government.
At the hearing, he proposed a systematic plan.
1. Create a new government agency responsible for licensing large AI models and revoking licenses for models that do not meet standards.
He believes that there is no need to implement this licensing regulatory system for technologies that cannot reach the level of state-of-the-art large-scale models. To encourage innovation, Congress could set competency thresholds to shield smaller companies and researchers from regulatory burdens.
2. Create a set of safety standards for AI models, including an assessment of their dangerous capabilities.
For example, models must pass safety tests, such as checks on whether they can "self-replicate" or "exfiltrate themselves beyond human oversight."
3. Require independent experts to conduct an independent audit of the model’s performance on various metrics.
When a senator asked whether he would be willing to take on that role himself, Altman said he was happy with his current job, but would gladly provide Congress with a list of candidates to choose from.
Altman said licensing is badly needed because AI models can "persuade, manipulate, and influence a person's behavior and beliefs," and could even "create novel biological agents."
It would be simpler to license all systems above a certain threshold of computing power, but Altman said he prefers to draw regulatory lines based on specific capabilities.
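To make the contrast between the two regulatory approaches concrete, here is a purely illustrative Python sketch; the threshold value and the capability names are hypothetical, invented for demonstration, and do not come from any actual proposal:

```python
# Illustrative sketch of two hypothetical ways a regulator could decide
# whether an AI model needs a license. All thresholds and capability
# names below are made up for demonstration purposes only.

COMPUTE_THRESHOLD_FLOPS = 1e25  # hypothetical training-compute cutoff

RESTRICTED_CAPABILITIES = {     # hypothetical dangerous-capability list
    "self_replication",
    "persuasion_at_scale",
    "novel_biological_agent_design",
}


def needs_license_by_compute(training_flops: float) -> bool:
    """Simpler rule: license anything trained above a compute threshold."""
    return training_flops >= COMPUTE_THRESHOLD_FLOPS


def needs_license_by_capability(demonstrated: set[str]) -> bool:
    """Altman's preferred rule: draw the line at what the model can do."""
    return bool(demonstrated & RESTRICTED_CAPABILITIES)
```

Under the first rule, a harmless model trained with enormous compute would still need a license; under the second, a small but dangerous model would be caught even though it falls below the compute cutoff.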
So is OpenAI’s own model safe?
Altman has repeatedly said that everyone can rest assured.
He said that GPT-4 responds more helpfully and truthfully, and refuses harmful requests more reliably, than any other model of comparable capability, because GPT-4 went through extensive pre-release testing and auditing.
"Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model's behavior, and implements robust safety and monitoring systems."
"Prior to releasing GPT-4, we spent more than six months conducting extensive assessments, external red teaming, and hazardous capability testing."
And since last month, ChatGPT users have been able to turn off chat history to prevent their personal data from being used to train AI models.
However, sharp-eyed observers spotted the catch: Altman's proposal says nothing about two points the public is hotly debating:
1. Requiring AI models to disclose the sources of their training data.
2. Prohibiting AI models from training on works protected by intellectual property.
In other words, Altman very cleverly sidestepped both controversial points.
Lawmakers applauded Altman's proposals for AI safety rules and repeatedly thanked him for his testimony. Sen. John Kennedy (R-LA) even asked whether Altman himself would be interested in working at the regulatory agency Congress might create.
There are early signs that Washington is determined to regulate artificial intelligence. Earlier this month, Altman, along with the CEOs of Google, Microsoft, and Anthropic, met with Vice President Kamala Harris at the White House to discuss the development of responsible AI.
As early as last year, the White House proposed a "Blueprint for an AI Bill of Rights," setting out requirements for the industry such as preventing discrimination.
Comparing AI to the atomic bomb: a proposal for a body like the International Atomic Energy Agency
Senators at the hearing compared AI to the atomic bomb.
Referring to the practices of governments around the world in regulating nuclear weapons, Altman proposed the idea of forming an agency similar to the International Atomic Energy Agency to formulate global rules for the industry.
OpenAI will not train GPT-5 for the next six months
In his April conversation with Lex Fridman, Sam Altman stated flatly: "We are not training GPT-5 now; we are just doing more work on top of GPT-4."
At this hearing, Altman directly admitted that OpenAI has no plans to train a new model that may become GPT-5 in the next six months.
Meanwhile, later this year Google is expected to unveil its most powerful artificial intelligence system yet: Project Gemini.
Gemini is reportedly designed for future innovations such as memory and planning; it is multimodal from the start and highly efficient at integrating tools and APIs. It is currently being developed by the newly merged Google DeepMind team.
Marcus: OpenAI claims to be for all mankind, but the data is not transparent
Gary Marcus, a professor of psychology and neuroscience at New York University, also appeared on the witness stand.
He is even more aggressive than the members of Congress.
His questions for Sam Altman can only be described as lethal.
Wasn't OpenAI founded to benefit all mankind? Why is it now allying itself with Microsoft?
OpenAI is not open, and the training data of GPT-4 is not transparent. What is going on?
Marcus concluded: "We have unprecedented opportunities, but we also face the dire risks of corporate irresponsibility, widespread deployment, lack of proper regulation, and unreliability."
In Marcus's view, both OpenAI and Microsoft are doing something very wrong.
Microsoft's Bing AI, Sydney, once exhibited a series of shocking behaviors.
"Sydney has a big problem. If it were me, I would take it off the market immediately, but Microsoft didn't."
Marcus said this incident was a wake-up call for him: even a nonprofit like OpenAI can be bought by a large company, which can then do whatever it wants.
But now, people’s views and lives are being subtly shaped and changed by AI. What if someone deliberately uses AI technology for bad purposes?
Marcus was very worried about this.
"If you combine a technocracy with an oligarchy, a handful of companies can shape people's beliefs. That's where the real risk lies... It scares me that a handful of players are doing this with data we don't even know about."
Altman: there is no monopoly in the AI field
On the common legal and regulatory questions, it was clear that Altman already had a plan in mind and had answers ready for the senators.
The senator said one of his "biggest concerns" about artificial intelligence is "this massive corporate monopoly."
He cited OpenAI’s collaboration with technology giant Microsoft as an example.
Altman said that he believes that the number of companies that can manufacture large models is relatively small, which may make it easier to regulate.
For example, only a few companies can manufacture large-scale generative AI, but competition has always existed.
Establishing legal liability for large models
The rise of social media was enabled by Section 230, passed by the U.S. Congress in 1996, which shields websites from liability for content posted by their users.
Altman believes that large models currently have no way to shelter under Section 230, and that new laws should be enacted to define legal liability for the content large models output.
Cleverly avoiding the most deadly question
Altman at first sidestepped a senator's question about the most serious harm AI might cause.
But after Marcus gave a friendly reminder that Altman had not answered the question, the senator repeated his question.
Altman ultimately did not answer this question directly.
He said OpenAI has tried to be very clear about the risks of artificial intelligence, which could cause "significant harm to the world" in "a lot of different ways."
He reiterated that addressing this problem is why OpenAI was founded: "If something goes wrong with this technology, it could go terribly wrong."
In fact, in an interview with StrictlyVC earlier this year, Altman said that the worst-case scenario is human extinction.
In the end, even Marcus seemed to soften toward Altman.
Toward the end of the hearing, Marcus, seated next to Altman, said: "His sincerity when he talks about those fears is very apparent in person; it doesn't come through on a television screen."
An experienced technology leader
Compared with Mark Zuckerberg, Altman's performance at this hearing was highly polished. A natural social operator, he is already at ease dealing with politicians; after all, Altman once considered running for governor of California.
And unlike Zuckerberg, who arrived at his hearings already mired in trouble over data privacy and his currency project, the OpenAI behind Altman has drawn almost no public criticism, and is the main instigator of everything now happening in the AI field.
Facing an Altman who showed goodwill from the start and himself called for AI regulation, these legislators, almost all laymen in technology, naturally appeared much gentler toward the "authority" in front of them.
So even in the same seat, the pressure on Altman was nowhere near what Zuckerberg faced.
Business model of large model
The Senate raised one concern: if AI products, like social media platforms, come to rely mainly on advertising, that business model will invite manipulative product design and the abuse of addictive algorithms.
Altman said he “really likes” the subscription model.
But OpenAI did consider the possibility of running ads in the free version of ChatGPT to make money from its free users.
The above is the detailed content of The father of ChatGPT is arguing on Capitol Hill! OpenAI wants to join forces with the government to gain power. For more information, please follow other related articles on the PHP Chinese website!
