
AI's new world challenges: What happened to security and privacy?

王林 | 2024-03-31 18:46:37

The rapid development of generative AI has created unprecedented challenges in privacy and security, triggering urgent calls for regulatory intervention.


Last week, I had the opportunity to discuss the security-related impacts of AI with some members of Congress and their staff in Washington, D.C.

Today’s generative AI reminds me of the Internet of the late 1980s: basic research, latent potential, and academic uses, but not yet ready for the public. This time, unchecked vendor ambition, fueled by minor-league venture capital and inspired by Twitter echo chambers, is fast-tracking AI’s “brave new world.”

The "public" base model is flawed and unsuitable for consumer and commercial use; privacy abstractions, if present, leak like a sieve; and security structures are important because attack surfaces and threat vectors are still being understood. As for the illusory guardrails, the less said the better.

So how did we get here? What happened to security and privacy?

“Compromised” foundation models

"Open" mode sometimes has limitations. Different vendors advertise their degree of openness through open mode weights, access to documentation or testing. Still, none of the major vendors provide anything close to the training dataset or its manifest or lineage information to be able to replicate and reproduce their models.

If you want to use one or more of these models, then you as a consumer or as an organization have no ability to verify or validate the extent of the data pollution, whether with respect to intellectual property, copyright, and the like, or with respect to illegal content.

Crucially, without a manifest of the training dataset, there is no way to verify or confirm that malicious content is absent. Malicious actors, including state-sponsored ones, plant Trojan content across the web; if that content is ingested during model training, it produces unpredictable and potentially malicious side effects at inference time.
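To make the manifest idea concrete, here is a minimal sketch in Python of what verifiable training-data lineage could look like: every file in a corpus gets a content hash plus its claimed provenance, so a consumer could later audit what went in. All names, fields, and paths here are illustrative assumptions, not any vendor's actual tooling.

```python
# Hypothetical sketch: building a verifiable manifest for a training corpus.
# Names and fields are illustrative; no vendor publishes such a manifest today.
import hashlib
import json
from pathlib import Path

def build_manifest(corpus_dir: str, source: str, license_id: str) -> list[dict]:
    """Record a content hash, size, and provenance claim for every corpus file."""
    entries = []
    for path in sorted(Path(corpus_dir).rglob("*")):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        entries.append({
            "path": str(path),
            "sha256": digest,            # content hash for later verification
            "bytes": path.stat().st_size,
            "source": source,            # where the data was collected from
            "license": license_id,       # claimed license of the material
        })
    return entries

if __name__ == "__main__":
    manifest = build_manifest("corpus/", source="crawl-2024-03", license_id="unknown")
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```

With such a record, anyone could re-hash the published corpus and confirm that nothing was added, removed, or swapped after the fact; without it, the claims in the two paragraphs above are simply unverifiable.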

Remember, once a model is compromised, there is no way to make it forget; the only option is to destroy it.

“Pervasive” security issues

Generative AI models are the ultimate security honeypot because “all” data is ingested into one container. New categories of attack vectors have emerged in the AI era; the industry has yet to understand either how to protect these models from cyber threats or how these models can be wielded as tools by cyber threat actors.

Malicious prompt-injection techniques may be used to pollute indexes; data poisoning may be used to corrupt weights; embedding attacks, including inversion techniques, may be used to pull rich data out of embeddings; and membership inference may be used to determine whether particular data was in the training set. And that is just the tip of the iceberg.
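As one concrete illustration of the last item, the simplest membership-inference signal is a loss threshold (in the spirit of Yeom et al., 2018): if a model's loss on a sample is suspiciously low, the sample was plausibly memorized during training. The sketch below is a hedged illustration, not a real attack toolkit; `model_loss` is an assumed interface standing in for whatever per-example loss an attacker can compute or estimate through queries.

```python
# Illustrative sketch of loss-threshold membership inference.
# `model_loss` is an assumed callable, not a real library API.
from typing import Callable, Iterable

def calibrate_threshold(model_loss: Callable[[str], float],
                        known_nonmembers: Iterable[str]) -> float:
    """Average loss on data known to be outside the training set gives a baseline."""
    losses = [model_loss(x) for x in known_nonmembers]
    return sum(losses) / len(losses)

def is_probable_member(model_loss: Callable[[str], float],
                       candidate: str,
                       threshold: float) -> bool:
    """Guess membership: loss well below the baseline suggests memorization."""
    return model_loss(candidate) < threshold
```

The point is how little the attacker needs: query access and a handful of reference samples suffice to start leaking facts about what a model was trained on.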

Threat actors may gain access to confidential data through model inversion and programmatic querying; they may corrupt or otherwise influence a model's latent behavior; and, as mentioned earlier, uncontrolled ingestion of data at scale can result in embedded threats from state-sponsored cyber activity, such as Trojan horses.

"Leaked" Privacy

AI models are useful only because of the datasets they are trained on; indiscriminate ingestion of data at scale creates unprecedented privacy risks for individuals and for the public at large. In the AI era, privacy has become a societal concern; regulations that primarily address individual data rights are inadequate.

Beyond static data, prompts in dynamic conversations must also be protected and treated as intellectual property. If you are a consumer involved in co-creating an artifact with a model, you expect the prompts you use to direct that creative activity not to be used to train the model or to be shared with other consumers of the model.

If you are an employee using a model to achieve business outcomes, your employer expects your prompts to remain confidential; furthermore, prompts and responses need a secure audit trail in case liability issues arise for either party. This is chiefly due to the stochastic nature of these models and the variability of their responses over time.
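As a sketch of what such an audit trail might look like (my illustration, not a prescribed design), each prompt/response record can be chained to the previous one by hash, so any after-the-fact edit breaks the chain. The class and field names below are assumptions for the example.

```python
# Hypothetical tamper-evident audit trail for prompts and responses.
# Each record embeds the previous record's hash; editing any record
# invalidates every hash that follows it.
import hashlib
import json
import time

class PromptAuditLog:
    def __init__(self) -> None:
        self.records: list[dict] = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def append(self, user: str, prompt: str, response: str) -> dict:
        record = {
            "ts": time.time(),
            "user": user,
            "prompt": prompt,
            "response": response,
            "prev": self._prev_hash,  # link to the preceding record
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = record["hash"]
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the whole chain; any tampered record breaks it."""
        prev = "0" * 64
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

Because model responses are stochastic and drift over time, a verifiable record of exactly what was asked and answered, and when, is the only defensible evidence if liability is later disputed.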

What happens next?

We are dealing with an unprecedented technology that is unique in our computing history in that it exhibits emergent and latent behavior at scale; the methods used in the past for security, privacy, and confidentiality are no longer adequate.

Industry leaders threw caution to the wind, leaving regulators and policymakers with no choice but to step in.

