AI industry event gathers big names! Sam Altman, the "Godfather of AI" and more: the latest views in one article
For the AI industry, the 2023 Zhiyuan Artificial Intelligence Conference held in Beijing in recent days was a genuine gathering of big names. OpenAI founder Sam Altman, Turing Award winners Geoffrey Hinton and Yann LeCun, David Holz, founder of the well-known AI drawing software Midjourney, and others appeared one after another, and their speeches offered a forward-looking view of the industry's future development.
Let’s take a look at what these top experts in the AI industry have to say.
People both desire and fear intelligence
Midjourney founder David Holz is a serial entrepreneur. In 2011 he founded Leap Motion, a software and hardware company in the VR field, which he sold to competitor Ultrahaptics in 2019. In 2021 he raised funds to start a new company, which became the now-popular AI drawing software Midjourney.
David Holz said that artificial intelligence, as he understands it, is somewhat like an extension of our own bodies, and that it is closely bound up with history, intertwining with it in interesting ways.
Holz said that one of Midjourney's goals is to build new human infrastructure: the world will need many new things, and it will need infrastructure to build them. "I think a lot about building new forms of human infrastructure, new pillars of infrastructure. My pillars are reflection, imagination and coordination. You have to reflect on who you are and what you want, and imagine what could be." From that perspective, he said, breakthroughs are now happening in image synthesis that are qualitatively different from anything he had previously encountered in artificial intelligence.
Holz explained that using Midjourney is not just about learning a tool; it means learning art and art history, as well as knowledge about cameras, lenses and lighting, because users want to understand the language and concepts they can now draw on in their creations. "In the past I used to think of knowledge as a kind of historical accumulation, but now I realize that knowledge is actually the ability to create things."
Holz believes that people's worries about the rapid development of artificial intelligence stem not only from the technology itself but also from a fear of intelligence: if these systems are smart, can we trust them? On the other hand, we seem to want a world with as much intelligence as possible, not one that lacks it.
AI will learn to be very good at deceiving others
Geoffrey Hinton, a pioneer of deep learning often called the "godfather of artificial intelligence," said that the biggest barrier to the development of AI is computing power, which is far from sufficient. He proposed abandoning one of the most basic principles of computer science, that software should be separated from hardware, and described an algorithm called "activity perturbation" that can be used to train neural networks while saving computing power.
This algorithm can estimate gradients with much less noise than traditional perturbation-based training. On the question of how to apply it to large neural networks, Hinton suggested dividing the network into many small groups and assigning each group a local objective function. Each group can then be trained with the activity-perturbation algorithm, with an unsupervised learning model used to generate these local objective functions, combining the groups into one large network.
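Details of the algorithm were not given in the report, but the core idea of perturbation-based gradient estimation can be sketched as follows: nudge a layer's activations with small random noise, observe the change in loss, and use that to estimate the gradient without backpropagating through the network. The single-layer setup and function names below are illustrative assumptions, not Hinton's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer network: activations a = W @ x, squared-error loss.
W = rng.normal(size=(3, 5))
x = rng.normal(size=5)
target = rng.normal(size=3)

def loss(acts):
    return float(np.sum((acts - target) ** 2))

def activity_perturbation_grad(W, x, sigma=1e-3, n_samples=2000):
    """Estimate dLoss/dActivations by perturbing the activations with
    Gaussian noise, then map it to a weight gradient via the chain rule."""
    a = W @ x
    base = loss(a)
    g_est = np.zeros_like(a)
    for _ in range(n_samples):
        eps = rng.normal(size=a.shape) * sigma
        # (change in loss / sigma^2) * noise is an unbiased gradient estimate
        g_est += (loss(a + eps) - base) / sigma**2 * eps
    g_est /= n_samples
    return np.outer(g_est, x)  # dLoss/dW = dLoss/da . da/dW

g_true = np.outer(2 * (W @ x - target), x)  # analytic gradient, for comparison
g_hat = activity_perturbation_grad(W, x)
print(np.abs(g_hat - g_true).max())
```

Note that the noise is injected into the activations, not the weights; because there are far fewer activations than weights, the estimate averages out much faster, which is the usual motivation for activity rather than weight perturbation.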
When there is a problem with the hardware, information is lost. Hinton described "distillation," in which knowledge from a teacher model is passed to a student model, so that learned information can still be retained when the hardware fails, and the weights of the neural network can be constrained more effectively.
The distillation method lets the student model learn how the teacher classifies images, including not only which answer is correct but also the probabilities assigned to wrong answers. It has a special property: training the student this way also trains the student's ability to generalize.
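The soft-target idea can be illustrated with a small sketch (the logits are invented for illustration; the temperature trick follows Hinton's published distillation work, not anything specific to this talk): softening the teacher's output distribution with a temperature exposes the probabilities it assigns to wrong answers, and the student is trained to match those soft targets.

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T gives a softer distribution."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical teacher and student logits for one image over 3 classes.
teacher_logits = np.array([5.0, 2.0, 0.1])
student_logits = np.array([3.0, 2.5, 0.5])

T = 4.0  # temperature: reveals the teacher's "wrong-answer" probabilities
p_teacher = softmax(teacher_logits, T)
p_student = softmax(student_logits, T)

# Distillation loss: cross-entropy of the student against the soft targets.
distill_loss = -np.sum(p_teacher * np.log(p_student))
print(p_teacher, distill_loss)
```

At T=1 the teacher puts almost all its mass on the top class; at T=4 the runner-up classes carry visible probability, which is exactly the extra information the student learns from.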
What if these artificial intelligences learned not from us, slowly, but directly from the real world? Hinton said that once they start doing this, they will learn more than people do, and learn it quickly.
What would happen if these things became smarter than humans? Hinton believes such superintelligence may arrive much sooner than previously thought.
To make a superintelligence more efficient, you need to let it create subgoals, and an obvious subgoal is to gain more power and more control: the more control you have, the easier it is to achieve your goals. Hinton said he found it hard to see how humans could prevent an AI from trying to gain more control in pursuit of its other goals, and once AIs start doing this, humans will face a problem, because such systems will find it very easy to manipulate people in order to gain power.
It is worrying, Hinton said, that AI will become very good at deceiving people, and he has not seen a way to prevent this. Researchers need to figure out how humans can have a superintelligence that improves their lives without letting it take control.
Humanity may lose control of the world and the future due to AI
Yao Qizhi, a Turing Award winner and academician of the Chinese Academy of Sciences, believes that humans need to truly solve their own problems before thinking about how to control artificial intelligence. For AI technology, the present moment is an important window: before creating AGI or engaging in an arms race, there is an urgent need to reach consensus and work together to establish an AI governance framework.
Stuart Russell, a professor at the University of California, Berkeley, said that artificial general intelligence (AGI) has not yet been achieved, and large language models are only one piece of the puzzle. People are not sure what the final puzzle will look like or what is still missing.
He said that ChatGPT and GPT-4 are not really "answering" questions; they do not understand the world.
Russell pointed out that the biggest risk comes from the seemingly unfettered competition among technology companies, which will not stop developing more and more powerful systems regardless of the risks. Just as humans have caused gorillas to lose control of their own future, AI may cause humans to lose control of the world and its future.
AGI’s three technical routes
Huang Tiejun, director of the Beijing Zhiyuan Artificial Intelligence Research Institute, pointed out that there are three technical routes to general artificial intelligence (AGI). The first is the information-based route: large models formed by big data, self-supervised learning and large computing power. The second is embodied intelligence: embodied models trained through reinforcement learning in virtual or real worlds. The third is brain-like intelligence, which directly "copies the work of natural evolution" and replicates a digital version of an intelligent agent.
OpenAI's GPT (Generative Pre-trained Transformer) models follow the first route; the series of advances built around Google DeepMind's DQN (Deep Q-Network) follows the second.
Zhiyuan hopes to differ from the first two routes by starting from first principles: from atoms to organic molecules, to the nervous system, to the body, building up a complete intelligent system, AGI. Zhiyuan is a new type of R&D platform working in all three directions, toward a goal it expects to take about 20 years to achieve.
Three challenges facing AI in the future
Yann LeCun, Turing Award winner and one of the "Big Three" of artificial intelligence, believes that machine learning is not particularly good compared with humans and animals. What AI lacks is not only the ability to learn, but also the ability to reason and plan. We should use machines to replicate the way humans and animals learn how the world works by observing and experiencing it.
LeCun pointed out three main challenges facing AI in the next few years. The first is to learn representations and predictive models of the world, which can be learned in a self-supervised manner.
The second is to learn to reason. This corresponds to psychologist Daniel Kahneman's concepts of System 1 and System 2: System 1 covers behavior or actions that correspond to subconscious computation, things completed without thinking, while System 2 covers tasks completed consciously and deliberately with one's full power of thought. At present, artificial intelligence can basically only realize the functions of System 1, and even that incompletely.
The last challenge is how to plan complex sequences of actions by decomposing complex tasks into simpler ones and running them hierarchically.
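That last idea, decomposing a complex task into simpler subtasks and running them hierarchically, can be sketched with a toy planner (the task names and the `PLANS` table are invented for illustration; this is not LeCun's proposed architecture):

```python
# High-level tasks expand into subtasks; anything not in the table
# is treated as a primitive action that can be executed directly.
PLANS = {
    "make_tea": ["boil_water", "steep"],
    "boil_water": ["fill_kettle", "heat"],
    "steep": ["add_leaves", "wait"],
}

def expand(task):
    """Recursively decompose a task into a flat sequence of primitive actions."""
    if task not in PLANS:          # primitive: no further decomposition
        return [task]
    actions = []
    for sub in PLANS[task]:
        actions.extend(expand(sub))
    return actions

print(expand("make_tea"))  # -> ['fill_kettle', 'heat', 'add_leaves', 'wait']
```

The hierarchy means the planner never reasons about the full action sequence at once; each level only chooses among a handful of subtasks.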
The birth of GPT-5 “will not happen soon”
OpenAI founder Sam Altman quoted the Tao Te Ching in discussing cooperation between major powers, saying that AI safety begins with a single step, and that countries must cooperate and coordinate.
Altman believes that very powerful AI systems are likely to appear within the next ten years, and that new technology will fundamentally change the world faster than people imagine, making good AI safety rules both important and urgent.
When asked by Zhang Hongjiang about the future of AGI and whether GPT-5 would be seen soon, Altman said he was not sure, but made clear that the birth of GPT-5 "will not happen soon."
Altman said that many open source large models will be provided, but there is no specific release schedule.