
Fei-Fei Li personally writes an article, and dozens of scientists sign a joint letter, opposing California's AI restriction bill

WBOY | Original | 2024-08-08

Is AI really dangerous enough to warrant such regulation?


In Silicon Valley, the epicenter of innovation, AI scientists such as Fei-Fei Li and Andrew Ng are locked in a tug-of-war with regulators over safety versus innovation.
At the center of this tug-of-war is a bill called SB-1047, formally the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act", which attempts to establish clear safety standards for high-risk AI models to prevent them from being abused or causing catastrophic harm.

The bill was introduced in the California State Senate in February this year and has drawn considerable controversy since. Many scientists believe its provisions are unreasonable and would have a devastating impact on technological innovation.
Bill link: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

Specifically, the bill aims to regulate artificial intelligence at the model level, targeting models trained above certain compute and cost thresholds.

The model coverage is as follows:

1. AI models trained using more than 10^26 integer or floating-point operations of computing power, at a cost exceeding one hundred million U.S. dollars ($100,000,000). Cost is calculated from the developer's reasonable assessment of the average market price of cloud compute at the start of training.

2. AI models created by fine-tuning an in-scope model using computing power equal to or greater than 3 × 10^25 integer or floating-point operations.

This range essentially covers today's mainstream large models. If the bill passes, these models will be classified as "potentially dangerous" or subject to additional oversight.
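To give a rough sense of where these thresholds sit, here is a minimal back-of-the-envelope sketch in Python. The two FLOP thresholds and the $100 million figure come from the bill; the 6 × parameters × tokens estimate of training compute and the model sizes are illustrative assumptions, not part of the bill.

```python
# Back-of-the-envelope check against SB-1047's coverage thresholds.
# Training compute is estimated with the common rule of thumb
# FLOPs ~= 6 * parameters * training tokens (an assumption, not from the bill).

TRAIN_FLOP_THRESHOLD = 1e26       # compute threshold for newly trained models (from the bill)
FINETUNE_FLOP_THRESHOLD = 3e25    # compute threshold for fine-tuned derivatives (from the bill)
COST_THRESHOLD_USD = 100_000_000  # $100M training-cost threshold (from the bill)


def training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate via the 6 * N * D approximation."""
    return 6 * parameters * tokens


def is_covered(flops: float, cost_usd: float) -> bool:
    """A newly trained model is in scope if it exceeds both thresholds."""
    return flops > TRAIN_FLOP_THRESHOLD and cost_usd > COST_THRESHOLD_USD


# Hypothetical frontier run: 1.8 trillion parameters on 15 trillion tokens.
flops = training_flops(1.8e12, 1.5e13)  # ~1.6e26 FLOPs
print(f"{flops:.2e} FLOPs, covered: {is_covered(flops, 150_000_000)}")  # True

# A typical fine-tune (70B parameters, 2B tokens) sits far below 3e25 FLOPs.
ft_flops = training_flops(7e10, 2e9)    # ~8.4e20 FLOPs
print(f"{ft_flops:.2e} FLOPs, fine-tune covered: {ft_flops >= FINETUNE_FLOP_THRESHOLD}")  # False
```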

The bill would also make model developers legally responsible for downstream use or modification of their models. Before training begins, developers would need to certify that their models will not enable or provide "dangerous capabilities" and implement a series of safeguards to prevent such use. This would hinder the development of the open source community.

Enforcement would fall to a newly created oversight body, the "Frontier Model Division", which would set safety standards and advise on AI legislation. Misrepresenting a model's capabilities to the division could land developers in jail for perjury.

The bill also adds whistleblower protections for employees within AI development entities, ensuring they can report corporate non-compliance without fear of retaliation.

If the legislature passes the bill, a single signature from Governor Gavin Newsom would make it California law. And if it becomes law in California, it would set a precedent for other states and ripple across the U.S. and abroad, essentially a huge butterfly effect on the state of innovation, says a16z General Partner Anjney Midha.

A hearing on the bill is scheduled for the morning of August 7, PDT, leaving scientists little time to protest. Fei-Fei Li therefore personally wrote an article laying out the bill's harms, while a group of scientists is signing an open letter to stop it from passing.

Fei-Fei Li's article criticizing SB-1047

Fei-Fei Li writes in the article: "California's SB-1047 will have significant and unintended consequences. If passed into law, SB-1047 will harm the budding AI ecosystem. SB-1047 will unnecessarily penalize developers, stifle the open source community, and hinder academic AI research, all while failing to address the real problems it was designed to solve." She continues:
First, SB-1047 will unduly punish developers and stifle innovation. SB-1047 holds liable not only the party responsible for misuse but also the model's original developer. It is impossible for every AI developer, especially budding programmers and entrepreneurs, to predict every possible use of their models. SB-1047 will force developers to pull back and act defensively, which is exactly what we are trying to avoid.

Second, SB-1047 will shackle open source development. SB-1047 requires all models above a certain threshold to include a "kill switch", a mechanism that can shut the program down at any time. If developers worry that the programs they download and build on can be deleted, they will be far more hesitant to write code and collaborate. This kill switch will devastate the open source community, the source of countless innovations not only in artificial intelligence but in everything from GPS to MRI to the Internet itself.

Third, SB-1047 will cripple public-sector and academic AI research. Open source development matters to the private sector, but it is also vital to academia, which cannot advance without collaboration and access to model data. How will we train the next generation of AI leaders if our institutions lack access to appropriate models and data? A kill switch would further undermine the efforts of students and researchers, who are already at a data and compute disadvantage relative to big tech companies. SB-1047 will sound the death knell for academic AI at the very moment we should be doubling down on public-sector AI investment.

Most worryingly, the bill does nothing to address the actual potential harms of AI progress, including bias and deepfakes. Instead, SB-1047 sets an arbitrary threshold, regulating models that use a certain amount of computing power or cost $100 million to train. Far from providing a safeguard, this measure will simply restrict innovation across sectors, including academia. Today's academic AI models fall below this threshold, but if we ever rebalance investment between private- and public-sector AI, academia will come under SB-1047's regulation, and our AI ecosystem will be the worse for it.

SB-1047's restrictions are too arbitrary; we must do the opposite.

I am not against AI governance. Legislation is critical for the safe and effective development of artificial intelligence. But AI policy must empower open source development, propose unified and reasonable rules, and build consumer confidence. SB-1047 does not meet these standards.

Dozens of scientists sign a joint letter in opposition

In response to SB-1047, and in addition to Fei-Fei Li, faculty and students from seven University of California campuses, along with researchers from more than twenty other institutions, are also taking action. They co-authored and signed an open letter opposing SB-1047, outlining from a researcher's perspective the harm the bill would do to California's AI research and education goals.
The joint letter argues that SB-1047 is unreasonable on the following grounds:

1. The bill would have a "chilling effect" on the release of open source models, harming research

The bill's demands for "safety audits" and the ability to "fully shut down" "frontier models" could seriously hinder the release of open source and open-weight models. Such strict rules may be feasible for a proprietary model controlled by a private entity, but far harder for an open model maintained by a nonprofit or a consortium of universities. The bill's provisions for safety demonstrations and audits are not specific enough, relying on tests that may not yet exist and may lack scientific rigor. The potential cost of such audits may be easily affordable for commercial entities with profitable products, but this may not be the case for open scientific releases, whether a commercial entity's open models such as Meta's LLaMA series or models trained by nonprofits or university consortiums.

Because of these onerous restrictions, developers of open source models may choose to build their systems outside California or the United States and release them without accepting liability. In that case, private actors unconcerned with compliance could quietly use these models, while academic researchers, constrained by the public nature of their work, would be shut out, pressured to change research topics or move to jurisdictions that do not infringe on their academic freedom. The availability of open models is critical to modern academic AI research: it lets academics explore how models work, what capabilities emerge during training, and how models can be improved and attacked.

2. AI risk prediction and "capability" assessment are unscientific

As experts in artificial intelligence, machine learning, and natural language processing, the researchers stress that the approach to assessing model risk proposed in SB-1047 is highly questionable. There is no scientific consensus on whether, or how, language models or other frontier AI systems pose a threat to the public.

3. Insufficient protection for open source models

Although the bill mentions that special carve-outs may be provided for open source models in the future, the rapid growth in parameter counts and the falling cost of compute mean that existing protections may be hard to sustain. Without strong protections in place, the consequences for these models could appear quickly. Moreover, a small model of comparable performance requires more training compute than a large one. The bill's amendments are therefore unlikely to mitigate the negative impact on open source releases, while its strict reporting and review requirements will needlessly burden research activity.

4. Concerns about students' career prospects

SB-1047 may deter students interested in artificial intelligence from pursuing the field, and may even keep new talent out of computer science and other key areas. Moreover, as the center of gravity in the tech industry shifts from large companies to startups, added regulatory hurdles would favor bigger, better-resourced players and cripple emerging innovators, narrowing students' career paths.
Beyond the open letter, some researchers have chosen to speak out on social media. One systems biologist, for instance, observed that SB-1047 is like activating the inflammatory response before we even know what the pathogen is, when it will infect us, or where the infection will occur.

Before this, Andrew Ng had also weighed in on the matter several times. He believes regulators should regulate applications rather than technology. The electric motor, for example, is a technology. When we put it into a blender, an electric car, a dialysis machine, or a guided bomb, it becomes an application. Imagine if the law held motor manufacturers liable whenever anyone used a motor in a harmful way: the manufacturer would either stop production or make motors so small that they were useless for most applications. If we passed laws like that, we might stop people from building bombs, but we would also lose blenders, electric cars, and dialysis machines. Focusing on specific applications instead lets us assess risks more rationally and judge how to keep them safe, or even ban certain classes of application.
Is AI really dangerous enough to warrant this kind of regulation? What do you think?

Reference links:
https://a16z.com/sb-1047-what-you-need-to-know-with-anjney-midha/
https://drive.google.com/file/d/1E2yDGXryPhhlwS4OdkzMpNeaG5r6_Jxa/view
https://fortune.com/2024/08/06/godmother-of-ai-says-californias-ai-bill-will-harm-us-ecosystem-tech-politics/

