16 top scholars debate AGI! Marcus, the father of LSTM, and a MacArthur "Genius Grant" winner gather together

After a one-year hiatus, the annual Artificial Intelligence Debate organized by Montreal.AI and New York University Professor Emeritus Gary Marcus returned last Friday night and was once again held as an online conference like in 2020.

This year’s debate – AI Debate 3: The AGI Debate – focused on the concept of artificial general intelligence, i.e., machines capable of integrating a wide range of near-human-level reasoning abilities.


## Video link: https://www.youtube.com/watch?v=JGiLz_Jx9uI&t=7s

This discussion lasted for three and a half hours, focusing on five topics related to artificial intelligence: Cognition and Neuroscience, Common Sense, Architecture, Ethics and Morality, and Policy and Contribution.

The 16 participants included many big names in computer science, as well as experts from other fields such as computational neuroscientist Konrad Kording.

This article briefly summarizes the views of five of the participants. Interested readers can watch the full video via the link above.

Moderator: Marcus

As a well-known critic, Marcus quoted his New Yorker article "Is 'Deep Learning' a Revolution in Artificial Intelligence?" and once again poured cold water on the hype around AI.

Marcus said that despite the decade-long wave of enthusiasm for artificial intelligence that followed the release of ImageNet by Fei-Fei Li's team, the "wish" to build omnipotent machines has not been realized.


DeepMind neuroscientist Dileep George

Dileep George, a neuroscientist from Google DeepMind, took up the concept of "innateness".

Simply put, this refers to certain ideas that are "built in" to the human mind.

So for artificial intelligence, should we pay more attention to innateness?

In this regard, George said that any kind of growth and development from an initial state to a certain stable state involves three factors.

The first is the internal structure of the initial state, the second is the input data, and the third is universal natural laws.

"It turns out that innate structures play an extraordinary role in every area we discover."

Even for what is considered the classic example of learning, acquiring a language, once you start to break it down, you realize that the data has almost no impact on it.

Language has not changed since the dawn of man, as evidenced by the fact that any child in any culture can master it.

George believes that language will become the core of artificial intelligence, giving us the opportunity to figure out what makes humans such a unique species.

University of Washington professor Yejin Choi

Yejin Choi, a professor of computer science at the University of Washington, predicts that the performance of AI will become increasingly amazing in the next few years.

But because we don't really understand what happens in the depths of these networks, they will continue to make mistakes on adversarial and corner cases.


"For machines, the dark matter of language and intelligence may be common sense."

Of course, the "dark matter" mentioned here refers to things that are easy for humans but difficult for machines.

Jürgen Schmidhuber, the father of LSTM

Marcus said that while we can now obtain a large amount of knowledge from large language models, this paradigm in fact needs to change, because language models are "deprived" of many other types of input.

Jürgen Schmidhuber, director of the Swiss AI lab IDSIA and the father of LSTM, responded that "most of what we are discussing today was, at least in principle, solved many years ago by general-purpose neural networks", although such systems remain "less than human".


Schmidhuber said that as computing power becomes cheaper every few years, the "old theories" are coming back: "We can do a lot of things with these old algorithms that we couldn't do at the time."

Then IBM researcher Francesca Rossi asked Schmidhuber: "If that is so, why do we still see systems lacking the capabilities we want? Why have those techniques still not made it into current systems?"

In this regard, Schmidhuber believes that the current main issue is computing cost:

Recurrent networks can run arbitrary algorithms, and one of their most beautiful aspects is that they can also learn learning algorithms. The big question is which learning algorithms they can learn; we may need better options for improving learning algorithms.

The first such system appeared in 1992, when I wrote my first paper on it. There was little we could do with it at the time; today we can have millions and billions of weights.

Recent work with my students has shown that these old concepts, with a few improvements here and there, suddenly work so well that you can learn new learning algorithms that are better than backpropagation.

Jeff Clune, associate professor at the University of British Columbia

The topic discussed by Jeff Clune, associate professor of computer science at the University of British Columbia, is "AI Generating Algorithms: The Fastest Path to AGI."

Clune said that today’s artificial intelligence is taking a "manual path", meaning that the various learning rules and objective functions must be hand-designed by humans.

In this regard, he believes that in future practice, manual design methods will eventually give way to automatic generation.


Subsequently, Clune proposed "three pillars" to advance AI: meta-learned architectures, meta-learned algorithms, and automatically generated effective learning environments and data.

Here, Clune suggests adding a "fourth pillar", which is "utilizing human data." For example, models running in the Minecraft environment can achieve "huge improvements" by learning from videos of humans playing the game.

Finally, Clune predicts that we have a 30% chance of achieving AGI by 2030, and that doesn’t require a new paradigm.

It is worth noting that AGI is defined here as "the ability to complete more than 50% of economically valuable human work."

To summarize

At the end of the discussion, Marcus gave each participant 30 seconds to answer one question: "If you could give students one piece of advice, for example, which problem in artificial intelligence most needs to be studied now, or how to prepare for a world in which AI is increasingly mainstream and central, what would it be?"

Choi said: "We The alignment of AI with human values ​​has to be addressed, particularly with an emphasis on diversity; I think that's one of the really key challenges we face, more broadly, addressing challenges like robustness, generalization and explainability."

George gave advice from the perspective of research direction: "First decide whether you want to engage in large-scale research or basic research, because they have different trajectories."

Clune: "AGI is coming. So, for researchers developing AI, I encourage you to engage in technologies based on engineering, algorithms, meta-learning, end-to-end learning, etc., because these are most likely to be absorbed into our AGIs are being created. Perhaps the most important for non-AI researchers is the question of governance. For example, what are the rules when developing AGIs? Who decides the rules? And how do we get researchers around the world to follow them? Set rules?"

At the end of the evening, Marcus recalled his remark from the previous debate: "It takes a village to raise artificial intelligence."

"I think that's even more true now," he said. "AI used to be a child, but now it's a bit like a rambunctious teenager who has not yet fully developed mature judgment."

He concluded: "This moment is both exciting and dangerous. 》


Statement: This article is reproduced from 51CTO.COM. In case of infringement, please contact admin@php.cn for deletion.