
More than 370 people, including LeCun and Andrew Ng, signed a joint letter: Strict control of AI is dangerous, and openness is the antidote.

WBOY
2023-11-03 14:49:04

In recent days, the debate over how to regulate AI has grown increasingly heated, and leading figures in the field hold sharply different views.

For example, the three Turing Award winners Geoffrey Hinton, Yoshua Bengio, and Yann LeCun have split into two camps. Hinton and Bengio are on one side: they strongly call for stronger regulation of AI, warning that otherwise it could lead to the risk of "AI exterminating humanity." LeCun does not share their view. He believes that strict regulation of AI will inevitably lead to monopolies by the tech giants, with the result that only a handful of companies control AI research and development.

To make their positions known, many have taken to signing open letters. Just days ago, Bengio, Hinton, and others issued a joint letter, "Managing AI Risks in an Era of Rapid Progress," calling on researchers to take urgent governance measures before developing AI systems further.

More recently, an open letter titled "Joint Statement on AI Safety and Openness" has caused an uproar on social media.


Please click the following link to view the open letter: https://open.mozilla.org/letter/

As of now, more than 370 people have signed the open letter, including LeCun, one of the three giants of deep learning, and Andrew Ng, professor of computer science at Stanford University. The list is still being updated.

LeCun said: "Openness, transparency, and broad access make software platforms safer and more reliable. I have also signed this open letter from the Mozilla Foundation, which makes the case for open AI platforms and systems."



The following is the full text of the open letter:

We are at a critical juncture in the governance of artificial intelligence. To mitigate the current and future harms of AI systems, we need to embrace openness, transparency, and broad access. This needs to be a global priority.

Indeed, openly available models come with risks and vulnerabilities: AI models can be abused by malicious actors or deployed by ill-equipped developers. Yet we have seen time and time again that the same is true of proprietary technologies of all kinds, and that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight, proprietary control of foundational AI models is the only way to protect us from harm at scale is naive at best and dangerous at worst.

Furthermore, history shows that rushing toward the wrong kind of regulation can lead to concentrations of power that hurt competition and innovation. Open models can inform public debate and improve policymaking. If our goals are safety, security, and accountability, then openness and transparency are essential ingredients for achieving them.

We are in the midst of a dynamic discussion about what "openness" means in the era of artificial intelligence. This important debate should not slow us down. Rather, it should speed us up, encouraging us to experiment, learn, and develop new ways to leverage openness in the race for AI safety.

We need to establish approaches that promote open source and openness. These approaches can serve as the foundation for:

1. Accelerating our understanding of the risks and harms of AI capabilities through independent research, collaboration, and knowledge sharing.

2. Increasing public scrutiny and accountability by helping regulators adopt tools to monitor large-scale AI systems.

3. Lowering the barriers to entry for new players focused on creating responsible AI.

As signatories of this open letter, we are a diverse group from all walks of life, including scientists, policymakers, engineers, activists, entrepreneurs, educators, and journalists. We represent different perspectives on how open-source AI should be managed and released. However, we strongly agree on one point: in the era of artificial intelligence, an open, responsible, and transparent approach is crucial to keeping us safe.

In the field of artificial intelligence safety, openness is a solution to the problem, not an aggravation of it.

Signed by:

  • Arthur Mensch, co-founder and CEO of French startup Mistral AI
  • Andrew Ng, founder of DeepLearning.AI, founder and CEO of Landing AI, and professor of computer science at Stanford University
  • Yann LeCun, Turing Award winner and Chief AI Scientist at Meta
  • Julien Chaumond, CTO of Hugging Face
  • Brian Behlendorf, founding member of the Apache Software Foundation and CTO of the OpenSSF
  • Eric von Hippel, American economist and professor at the MIT Sloan School of Management
  • ……

Although it has been public for only a few days, the open letter continues to gain momentum and has attracted considerable attention and discussion in the international AI community. The list of signatories is still being updated.

If you agree with the views of this joint letter, you can also add your signature.

Whether strict regulation of AI will do more harm than good, or more good than harm, is a question that only time will answer.

