First, SB-1047 will unduly punish developers and stifle innovation. SB-1047 holds responsible parties and a model's original developers accountable if the model is misused. It is impossible for every AI developer (especially budding programmers and entrepreneurs) to predict every possible use of their models. SB-1047 will force developers to pull back and act defensively, exactly what we are trying to avoid.
Second, SB-1047 will restrict open source development. SB-1047 requires all models that exceed a certain threshold to include a "kill switch," a mechanism that can shut the program down at any time. If developers worry that the programs they download and build on can be shut off at any moment, they will be far more hesitant to write code and collaborate. This kill switch would devastate the open source community, the source of countless innovations, and the impact would not be limited to artificial intelligence but would reach everything from GPS to MRI to the Internet itself.
Third, SB-1047 will cripple public sector and academic AI research. Open source development is important for the private sector, but it's also crucial for academia. Academics cannot advance without collaboration and access to model data. How will we train the next generation of AI leaders if our institutions don’t have access to appropriate models and data? A kill switch would even further undermine the efforts of students and researchers, who are already at a data and computing disadvantage compared to big tech companies. SB-1047 will sound the death knell for academic AI at a time when we should be doubling down on public sector AI investments.
Most worryingly, the bill does not address the actual potential harms of advances in artificial intelligence, such as bias and deepfakes. Instead, SB-1047 sets an arbitrary threshold, regulating any model trained with more than a certain amount of computing power or at a cost above $100 million. Far from providing safeguards, this measure will only limit innovation across sectors, including academia. Today's academic AI models fall below this threshold, but if we succeed in rebalancing investment between private and public sector AI, academic models will eventually cross it and fall under SB-1047's regulation. Our AI ecosystem will be worse off as a result.
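To make the threshold concrete, here is a minimal sketch in Python of the coverage test as it is commonly summarized: a model is covered if its training run exceeds a fixed compute budget (widely reported as 10^26 operations) or the $100 million cost figure above. The exact constants and the either/or logic are illustrative assumptions drawn from public summaries of the bill, not quotes from the statutory text.

# Illustrative sketch of SB-1047's "covered model" test, as commonly
# summarized in public coverage. The 1e26-operation compute threshold and
# the either/or logic are assumptions for illustration, not bill text;
# the $100M figure is the one cited in the article above.

COMPUTE_THRESHOLD_OPS = 1e26        # assumed training-compute threshold
COST_THRESHOLD_USD = 100_000_000    # $100 million training-cost figure

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Return True if a training run would fall within the bill's scope."""
    return (training_ops > COMPUTE_THRESHOLD_OPS
            or training_cost_usd > COST_THRESHOLD_USD)

# A typical academic-scale run today sits far below the line...
print(is_covered_model(training_ops=1e23, training_cost_usd=2_000_000))    # False
# ...while a frontier-scale run crosses it.
print(is_covered_model(training_ops=3e26, training_cost_usd=150_000_000))  # True

The critics' point is that both numbers are fixed while the cost of computation keeps falling, so the line such a test draws would sweep in more and more ordinary research over time.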
SB-1047's restrictions are too arbitrary; we must take the opposite approach.
I am not against AI governance. Legislation is critical for the safe and effective development of artificial intelligence. But AI policy must empower open source development, propose unified and reasonable rules, and build consumer confidence. SB-1047 does not meet these standards.
Dozens of scientists have signed a petition in opposition.
In response to SB-1047, beyond Fei-Fei Li, a group of faculty and students from seven University of California campuses, joined by researchers from more than 20 other institutions, is also taking action. They co-authored and signed an open letter opposing SB-1047, outlining from a researcher's perspective the harm the bill would do to California's AI research and education goals. The letter argues that SB-1047 is unreasonable on the following grounds:
1. The bill will have a "chilling effect" on the release of open source models, harming research. The bill's requirements that "frontier models" undergo "safety audits" and support a "full shutdown" could seriously hinder the release of open-source and open-weight models. These strict rules may be relatively easy to implement for a proprietary model controlled by a private entity, but far harder for an open model used by a nonprofit organization or a consortium of universities. The bill's provisions on safety demonstrations and audits are not specific enough, relying on tests that may not yet exist and may lack scientific rigor. The cost of such audits may be easily affordable for commercial entities with profitable products, but that may not be the case for scientific open releases by commercial entities, such as Meta's LLaMA series, or for open models trained by nonprofits or university consortia. Under these onerous restrictions, developers of open-source models may choose to build their systems outside California or the United States and release them without accepting liability. Private actors with no regard for compliance could then use these models covertly, while academic researchers, whose work is public by nature, would be shut out, forced to change research topics or move to jurisdictions that do not infringe on their academic freedom. The availability of open models is critical to modern academic AI research: it lets academics probe how models work, what capabilities emerge during training, and how models can be improved and attacked.
2. AI risk forecasting and "capability" evaluation are unscientific. As experts in artificial intelligence, machine learning, and natural language processing, the researchers stress that the approach SB-1047 proposes for assessing model risk is highly questionable. There is no scientific consensus on whether, or how, language models or other frontier AI systems pose a threat to the public.
3. The bill's protections for open-source models are insufficient. Although the bill contemplates special carve-outs for open-source models in the future, the rapid growth in parameter counts and the falling cost of computation may make existing protections hard to sustain, and without strong protections the consequences for these models could become apparent quickly. Moreover, a small model matching a larger model's performance can require more training compute than the large model itself. The bill's amendments are therefore not expected to mitigate the negative impact on open-source releases, while its strict reporting and review requirements will needlessly burden research.
4. Concerns about students' job placement and career outcomes. SB-1047 may deter students interested in artificial intelligence from pursuing the field, and may even keep new talent out of computer science and other critical areas. Moreover, as the center of gravity of the tech industry shifts from large companies to startups, additional regulatory hurdles that favor bigger, better-resourced businesses could cripple emerging innovators and narrow students' career paths.
Beyond the open letter, some researchers have also chosen to speak out on social media. Among them, one systems biologist observed that SB-1047 is like activating the inflammatory response before we even know what the pathogen is, when it will infect us, or where the infection will occur.
Before this, Andrew Ng had also spoken out on the matter several times. He believes regulators should regulate applications rather than technology. The electric motor, for example, is a technology. When we put it into a blender, an electric car, a dialysis machine, or a guided bomb, it becomes an application. Imagine if the law held motor manufacturers liable whenever anyone used a motor in a harmful way: the manufacturer would either stop production or make motors so small that they are useless for most applications. If we passed laws like this, we might stop people from building bombs, but we would also lose blenders, electric cars, and dialysis machines. If we instead focus on specific applications, we can assess their risks more rationally, judge how to keep them safe, and even ban certain types of applications outright.
Is AI really dangerous enough to warrant this kind of regulation? What do you think?
Reference links:
https://a16z.com/sb-1047-what-you-need-to-know-with-anjney-midha/
https://drive.google.com/file/d/1E2yDGXryPhhlwS4OdkzMpNeaG5r6_Jxa/view
https://fortune.com/2024/08/06/godmother-of-ai-says-californias-ai-bill-will-harm-us-ecosystem-tech-politics/