What are your thoughts on artificial intelligence after 2021?
Under the guidance of the China Association for Science and Technology, the Chinese Academy of Sciences, the Chinese Academy of Engineering, the Zhejiang Provincial People's Government, the Hangzhou Municipal People's Government, and the Zhejiang Provincial Artificial Intelligence Development Expert Committee, the 2020 Global Artificial Intelligence Technology Conference, sponsored by the Chinese Association for Artificial Intelligence and the People's Government of Yuhang District, Hangzhou, Zhejiang Province, and hosted by the Hangzhou Future Science and Technology City Management Committee, was successfully held in Hangzhou, the "digital capital." In the keynote session on the 25th, Dai Qionghai, Chairman of the Chinese Association for Artificial Intelligence, Counselor of the State Council, Academician of the Chinese Academy of Engineering, and Dean of the School of Information Science and Technology at Tsinghua University, delivered a speech titled "Some Thoughts on Artificial Intelligence."
Dai Qionghai, Chairman of the Chinese Association for Artificial Intelligence, Counselor of the State Council, Academician of the Chinese Academy of Engineering, Dean of the School of Information Science and Technology, Tsinghua University
I want to share with you some of my thoughts on artificial intelligence, including some issues worth discussing. In primitive society thousands of years ago, people relied on stone tools to work; in the agricultural age, tools were upgraded; the steam engine of the Industrial Revolution further raised productivity; and the electrical revolution greatly improved the efficiency of human production. In today's information age, the birth of the electronic computer has extended our brainpower and broadened our horizons and thinking. Marx said, "Economic eras are distinguished not by what is produced, but by how it is produced and with what means of labor. The means of labor best display the decisive characteristics of a social era of production."
The information age produced a series of representative inventions: the Internet, electronic computers, communication networks, space technology, bioengineering, and atomic energy technology. The birth of the Internet and the electronic computer in particular has expanded the scope of human activity and the boundaries of interaction between people.
Now the era of artificial intelligence has arrived. With the emergence of deep neural networks, of representative industry figures such as Elon Musk, and of new technologies and products such as unmanned systems, nanotechnology, quantum computing, and the Internet of Things, people's work and lives have undergone earth-shaking changes.
Interdisciplinary intersection is a defining feature of the artificial intelligence era. For example, the cognitive vision and cognitive expression mentioned by Academician Pan Yunhe are typical interdisciplinary research topics. Artificial intelligence technology spans many areas, such as computer vision, natural language understanding, robotics, and logical reasoning, and has played a huge role in industries such as medicine, electronics, and finance. Next, I will briefly analyze several issues of the artificial intelligence era at three levels: first, computing power; second, algorithms; and third, how people and AI get along.
First of all, computing power. In 1958, Rosenblatt's perceptron contained only 512 computing units, yet it could already classify data. For a long time afterwards the development of artificial intelligence was held back by computing power, until Gordon Moore observed that the number of transistors on an integrated circuit chip doubles roughly every 18 months, which set the direction of chip technology for the following decades. In 1999, NVIDIA released the GPU for parallel data processing, letting artificial intelligence expand into broader fields. In 2012, Alex Krizhevsky's GPU-accelerated AlexNet pioneered the application of deep networks. Then came Google's well-known AlphaGo which, equipped with some 5,000 GPUs and after 40 days of training, could defeat the world's best players. All this shows the important role parallel computing and special-purpose chips play in advancing artificial intelligence.
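To make the perceptron idea concrete, here is a minimal sketch of Rosenblatt's learning rule on a toy two-dimensional dataset. The data, learning schedule, and dimensions are invented for illustration; this is a sketch of the rule, not a reconstruction of the original 512-unit machine.

```python
import numpy as np

# Toy perceptron: learns a linear decision boundary for 2-D points.
# Data and pass count are invented for illustration only.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])               # target labels
w = np.zeros(2)                            # weights
b = 0.0                                    # bias
for _ in range(10):                        # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (np.dot(w, xi) + b) <= 0:  # misclassified?
            w += yi * xi                   # Rosenblatt's update rule
            b += yi

print(np.sign(X @ w + b))                  # [1, 1, -1, -1]: all points correct
```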
Let's look at the development of existing technology. Streaming video accounts for 58% of global Internet downstream traffic, and the number of Internet terminals in China exceeded 2 billion in August 2019. Such data require enormous computing power. Smart healthcare, smart manufacturing, and driverless vehicles are all pursuing smaller, faster, smarter products. The booming development of artificial intelligence therefore demands computing power above all else, and computing power has become an essential support for artificial intelligence.
But computing power no longer grows at the pace of Moore's Law. From the first computers through the following decades, chip computing power basically tracked Moore's Law, but over time the growth in transistor density on chips has fallen behind it. In other words, the growth of chip computing power can no longer meet the development needs of artificial intelligence. As a result, international technology giants have begun designing special-purpose chips for neural networks to increase computing power, for example Google's TPU and, in China, Horizon Robotics and Cambricon. However, these chips are specialized and cannot meet the needs of general artificial intelligence.
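For a sense of scale, a doubling every 18 months compounds to roughly a hundredfold gain per decade; the quick arithmetic below, with illustrative figures only, works that out.

```python
# Transistor-count growth under the classic 18-month doubling assumption.
years = 10
doublings = years * 12 / 18                # doublings per decade
moore_gain = 2 ** doublings
print(f"Moore's-Law gain over {years} years: ~{moore_gain:.0f}x")  # ~102x
```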
I have said before: logical thinking, engineering practice. So what do we ask of physics? Things like quantum mechanics and quantum computing. As we all know, Intel and Google have found that quantum computers far outpace current computers on certain specific tasks, and as the number of effective qubits keeps increasing they hope (Google especially) to dominate quantum computing. But in reality, physicists' analyses show that many problems remain unsolved. How to keep qubits coherent for long enough is one important issue; performing enough ultra-high-precision quantum logic operations within that coherence time is another hard problem. So relying entirely on quantum computing to increase computing power is not feasible for the foreseeable future. People have therefore proposed architectures that integrate storage and computation, hoping to break through the "memory wall" and improve computing power. This is why I say artificial intelligence has entered a crossover era. Besides asking physics for computing power, we must also ask brain science: brain-inspired projects hope to improve computing power by simulating mechanisms found in the brain. And we must also ask the boundary between physics and optoelectronics: computing power from optoelectronic computing and from architectures integrating storage and computation.
Next, let me introduce the computing power that optoelectronic computing can offer. A professor at Princeton University has done a theoretical analysis of a neural-network computing architecture: in theory it can increase computing power by three orders of magnitude while reducing power consumption by six orders of magnitude. Power consumption is itself now an important consideration when raising computing power, and optoelectronic computing brings huge benefits on both counts; research in this area has already begun. Optoelectronic computing is not new. Like artificial intelligence, it was born in the 1950s, but silicon-based electronic chips alone met the computing demand of the time, so researchers gradually scaled back this line of work. Notably, in 1990 Bell Labs built an optical switch from gallium arsenide to control a prototype computer, but since the demand for computing power was still small, electronic chips sufficed. Now that artificial intelligence places extreme demands on chips, many institutions made important contributions to optoelectronic computing between 2017 and 2019, such as controlled diffractive propagation in three dimensions and fully parallel light-speed computation, with which text can be recognized rapidly. Because such optical computation propagates through a controllable high-dimensional light field without consuming electrical power along the way, it can achieve high-speed, efficient parallel computing. Building an optoelectronic computing architecture has therefore become an important research direction for solving the computing-power problem.
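To make "computing as light propagates" concrete, below is a minimal numerical sketch of one building block of a diffractive optical network: a phase mask followed by free-space propagation, simulated with the standard angular spectrum method. The wavelength, pixel pitch, and distance are invented illustrative values; this is not the design of any specific chip mentioned above.

```python
import numpy as np

def propagate(field, wavelength, pitch, z):
    """Free-space propagation by distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)        # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# One "diffractive layer": a phase mask, then propagation to a detector plane.
n, wavelength, pitch, z = 256, 532e-9, 8e-6, 0.05   # illustrative values
rng = np.random.default_rng(0)
phase_mask = rng.uniform(0, 2 * np.pi, (n, n))      # would be learned offline
field_in = np.ones((n, n), dtype=complex)           # plane-wave input
field_out = propagate(field_in * np.exp(1j * phase_mask), wavelength, pitch, z)
intensity = np.abs(field_out) ** 2                  # what a detector would read
print(intensity.shape, intensity.mean())
```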
As a new computing approach, the most important changes optoelectronic computing brings are, first, a change of paradigm; second, greater computing power; and third, lower power consumption. Because of these advantages, many research institutions at home and abroad have taken up related work. Internationally there are currently three main contributions: MIT's interference-based neural network architecture is very good; the University of Münster and Cambridge are using phase-change photonic materials to build spiking (pulse-based) architectures; and Tsinghua University is pursuing a diffractive neural network architecture. Each of the three approaches has advantages and disadvantages, so there is room for progress in balancing computing power in the future. Imagine what three extra orders of magnitude of computing power could enable: ultra-small intelligent 5G devices, intelligent robots, micro-repair robots, and especially the autonomous driving we are currently researching, where optoelectronic intelligent driving will push the field forward. Optoelectronic computing thus makes unmanned systems faster, smaller, and smarter. This direction has aroused wide interest in international academia and industry, many institutions are already working on it, and I hope you will pay attention to it.
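For a flavor of the interference-based approach, its basic unit is the Mach-Zehnder interferometer: two 50:50 beam splitters around tunable phase shifters realize a programmable 2x2 unitary, and meshes of such units compose larger matrix multiplications. A toy numerical check follows; phase conventions vary across papers, and this is just one common choice.

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)      # ideal 50:50 beam splitter

def mzi(theta, phi):
    """2x2 unitary of a Mach-Zehnder interferometer with internal phase
    theta and input phase phi (one common convention)."""
    return BS @ np.diag([np.exp(1j * theta), 1]) @ BS @ np.diag([np.exp(1j * phi), 1])

U = mzi(0.7, 1.3)
print(np.allclose(U @ U.conj().T, np.eye(2)))       # True: U is unitary
```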
What are the most important features of optoelectronic smart chips? First, they miniaturize huge computing centers. Our current computing centers consume a great deal of electricity; with optoelectronic computing much of it could be saved. Second, nanosecond-level target perception and recognition, which is extremely fast. Today a camera must convert light into electrical signals before computing; imagine if computation happened directly on the light entering the camera, then the speed would become very high. Optoelectronic smart chips will therefore play an important supporting role in the industrial Internet, computer vision, big data analysis, and optical communications within the new infrastructure. This is one line of thinking about computing power, and your criticism is welcome.
The second topic is algorithms. Algorithms are the heart of artificial intelligence, and researchers study them intensively. So where do these algorithms come from? Existing artificial intelligence implements only simple, low-level visual perception; as Academician Pan just mentioned, many problems in the "no-man's land" remain unsolved. Between low-level perceptual processing and high-level cognitive intelligence, machine performance is far inferior to the human brain, which can learn physical regularities and abstract from data. Some scholars believe deep learning faces a serious crisis: the BP (backpropagation) algorithm has great limitations, may need to be overturned and rebuilt, and inspiration must again be sought in the brain's cognitive mechanisms. The picture on the right shows that problems hard for humans can be easy for machines, while seemingly simple problems are often hard. Hinton's demo suggests deep networks are in crisis, so we should learn from the nervous system's multi-modal data representations, transformations, learning rules, and feedback mechanisms. Cognitive computing will drive the transformation of artificial intelligence. What are the questions everyone keeps asking about artificial intelligence? How to make it efficient; deep networks are uninterpretable, so how to make them interpretable; and they are not robust, so how to make them robust.
The new generation of cognitive intelligence is now the main point of international convergence in algorithms. As we all know, the prototype of the BP algorithm came from control in 1969, arising out of optimal control theory. Cognitive scientists and neuroscientists later introduced the BP algorithm into multi-layer neural networks for the first time and constructed cognitive computing models; by 1989 the convolutional neural network was born, followed by further computational models up to 2015. The BP algorithm is thus the most widely used method in deep learning, but it still has many problems.
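For readers who want the mechanics, the BP algorithm is the chain rule applied layer by layer. Below is a minimal sketch of a two-layer sigmoid network trained on XOR with hand-written gradients; the hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)       # XOR targets
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)         # 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                          # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)               # backward pass (chain rule)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * h.T @ d_out;  b2 -= 1.0 * d_out.sum(0)
    W1 -= 1.0 * X.T @ d_h;    b1 -= 1.0 * d_h.sum(0)

print(out.round(2).ravel())    # should approach [0, 1, 1, 0] for this seed
```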
Looking back at the convolutional neural network, whose roots reach back to 1958: Hubel and Wiesel, who won the 1981 Nobel Prize, discovered that human vision is hierarchical, with higher visual layers, and that the visual system has convolution-like characteristics. In 1980 the Japanese scholar Fukushima, drawing on the notion of simple and complex cells, proposed the Neocognitron. David Marr's computational studies of how humans represent and process visual information drew important conclusions about the relationship between vision and perception. In 2007, Tomaso Poggio proposed the H-MAX model. Alex Krizhevsky's contribution in 2012 then ushered in the golden age of artificial intelligence, with wide application. This is the historical lineage of our algorithms; through historical analysis, the future can be foreseen.
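The simple-cell/complex-cell idea maps directly onto the convolution and pooling stages of modern networks: convolution plays the role of the orientation-selective simple cell, and max pooling the role of the position-tolerant complex cell. A minimal sketch, with an invented input and filter:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution (correlation form), the simple-cell stage."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling, the complex-cell stage."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.zeros((8, 8)); img[:, 3] = 1.0              # a vertical bar
edge = np.array([[-1.0, 0.0, 1.0]] * 3)              # vertical-edge detector
print(max_pool(conv2d_valid(img, edge)).shape)       # (3, 3) feature map
```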
What can you see from this slide? It shows brain scientists' analyses of the nervous system, which inspire brain-like computing. The upper part is an analysis of the nervous system; the lower part shows the hope of realizing brain-like ideas, moving from brain science to the question of whether artificial intelligence can conduct brain-inspired research. Several institutions have made breakthroughs recently: Professor Shi Luping's report in 2019, and Wu Huaqiang's 2020 report on brain-inspired compute-in-memory chips, have both played major roles. China's research in this area should therefore be roughly on par internationally. The brain results shown above come from extensive studies of neuron activation, including the cat's visual perception and the ganglia of the brain.
On the theory side, we have drawn a correspondence and comparison: using brain mechanisms to inspire new theories of artificial intelligence is an important path for the development of the new generation of AI. What does this comparison tell us? Many artificial intelligence experts have borrowed mechanisms from brain science to answer how artificial intelligence should move forward. How, concretely, should the algorithm problem be solved?
The adult brain has 86 to 100 billion neurons, yet when its electrical signals are active its overall power consumption is very low, between roughly 10 and 23 watts. Working hard to complete a task draws at most about 25 watts; idling draws only about 10. The brain's power consumption is thus tiny, while the power consumption of artificial intelligence computers is enormous.
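As rough, purely illustrative arithmetic (round numbers, not measurements from the talk): a 20-watt brain running all day uses about half a kilowatt-hour, while a single accelerator card drawing around 300 watts uses many times that, before one even counts the thousands of cards in a computing center.

```python
brain_w, gpu_w = 20, 300     # illustrative power draws in watts
hours = 24
print(f"brain: {brain_w * hours / 1000:.2f} kWh/day")           # 0.48 kWh
print(f"one accelerator: {gpu_w * hours / 1000:.2f} kWh/day")   # 7.20 kWh
print(f"ratio: {gpu_w / brain_w:.0f}x per device")              # 15x per device
```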
Mayor Liu just spoke with me about building a large computing center in Hangzhou; its power consumption will be far higher still. So how can the needed computing power be supplied? Approaching this question from the perspective of cognitive science, we drew a bridge. As the figure shows, cognitive computing is a bridge between brain science and artificial intelligence. Why? Let us first ask what cognitive science does. One part is multi-modal circuit observation, that is, observing the workings of the brain.
The second part is multi-level cognitive models, covering vision, hearing, language, and touch, as Academician Pan said. Observing brain-science information and building models from it: that is cognitive science. We believe that going from brain science to artificial intelligence through cognitive-science research is another path, which we call "from brain science to artificial intelligence." It is a path of future hope, and a road toward new artificial intelligence algorithms.
Next, let us look back at some classic international contributions. The left side shows the contributions of brain science; their core question is how humans think. On the right are the Turing Award contributions; their core question is how machines think. A bridge is needed in the middle to connect the two, and we hope cognitive science is that bridge.
In 2016, the United States launched the US$100 million "Apollo project of the brain" (the MICrONS program), recording and measuring the activity and connections of 100,000 neurons. At the top of the rightmost diagram is a model of biological neural computation; at the bottom, a machine-learning computational model. Can brain data be used to build an analysis linking the two? By studying the brain's computing paradigms and constructing new models and methods of cognitive computing, we can build a bridge from human thinking to machine thinking, an important way to inspire new artificial intelligence theories and algorithms. This is a plan drawn up at Tsinghua University; it is not necessarily mature and is offered only for your reference.
The picture at the lower right shows the memory circuits of the biological mechanism: the external environment, the cerebral cortex, and the hippocampus. The picture at the lower left shows a principle of physical equilibrium. From these we look forward to building a "BMP" network algorithm, a network model combining brain science, mathematics, and physics. Above is the general framework of the new network model we built.
We are still researching the algorithm question and hope to offer experts a possible solution. Can artificial intelligence algorithms move from being knowledge-driven toward brain science? And on the data-driven side, what do we have? Large databases of large scenes and many objects. What have we built? Whether this "troika" of knowledge-driven, data-driven, and cognition-driven approaches can work together is the new algorithmic framework we have constructed. This is our thinking at the algorithm level, and I hope you will criticize and correct me.
Third, how do people and AI get along? As we all know, AI empowers humans; it does not become human, nor does it replace humans. As Turing said 50 years ago, the development of artificial intelligence is not to turn people into machines, nor to turn machines into people, but to build "a technical science that researches and develops theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence, to solve complex problems and serve humanity." For artificial intelligence and humans to develop harmoniously, we must therefore consider the issues of collaborative safety, privacy, and fairness between artificial intelligence and humans.
The ultimate goal is to put people first and serve humanity. We currently have a project led by Professors Sun Fuchun and Wu Fei researching the safety of future artificial intelligence and its cooperation with humans. When the research is complete, we will discuss the theme of a community with a shared future for mankind with the American and European artificial intelligence societies.
Putting people first and serving humanity involves four issues we must explore and cannot avoid: ethics, privacy, collaboration, and security. How do humans and AI cooperate? Humans must interact with machine AI, and humans must also interact with nature. What does extreme interaction mean? In dangerous scenes, we hope AI interacts with the scene and humans interact with AI, reaching what we cannot see, cannot see clearly, cannot hear, and cannot touch. We call this extreme interaction. AI interaction enables disruptive user experiences, improving human cognition and our ability to transform the world. That is the character of extreme interaction.
What matters most in interaction? The external form of AI is an AI-specific interface. There are now many: cars, multi-purpose robots, humanoid robots (including Microsoft XiaoIce), surgical robots, aerial robots, and universal AI interfaces such as phones and computers, as well as the virtual anchors and automated customer service we now see. What we want to discuss next is how we interact with AI, for example through an interface like virtual reality. As the figure shows, virtual/augmented reality and natural-interaction technology are a future way of obtaining information and interacting with it. They can extend human capabilities, change product forms and service models, drive changes in cognition, intelligence, culture, and art, and promote the development of a future society integrating humans, AI, and things. This is what we call its character.
All our meetings are now held online, and many organizations are developing online versions of offline experiences, as shown in "Kings Agent." This amounts to an extreme environment: online meetings that feel just like offline ones. I think such a system may be available by the end of 2020. This is what we call immersive AI interaction. We surveyed this year's courses: primary schools, secondary schools, and universities basically all moved online, and comparing class quality, the teaching quality of several Beijing schools declined. We adopted the online form, but the form alone did not bring better teaching results.
Even so, to change teaching effectiveness in the future, I think immersive interaction will bring a better user experience. Many universities and companies have done related research: Microsoft's three-dimensional modeling with 108 cameras, Facebook's three-dimensional modeling, and systems built by Google and Tsinghua University. Tsinghua is now also using a single camera for deep-learning-based modeling; once a person's model is built it can be placed anywhere, enabling virtual versions of offline scenes.
As you can see, holographic intelligent teaching can be achieved this way: intelligent, precise recommendation; ubiquitous online access; real-person holographic teaching; and immersive interactive courseware. According to this year's AI research and development, the lightest AR glasses have reached 50 grams; AR glasses used to be very heavy, which held them back. I think glasses will be an important trend and an important area of the virtual-offline world. In the future, AI-driven mixed reality will empower teaching, production, design, and communication, including industrial design. This is an important tool for future AI interaction and an important way for people and AI to interact.
The future is already here. I remember Academician Li giving a report five years ago saying the future is already here, meaning we should feel a sense of urgency. In the future there will be brain-computer interfaces, human-machine integration, and human-machine "symbiosis," including the idea of storing consciousness, whether it can live on in a robot or be stored somewhere. All of this is coming. Brain-computer interfaces are developing very fast, and we often speak of brain diseases such as Alzheimer's disease and epilepsy. If such pathological features are found, there are two repair approaches: if the types of neurons involved are known, biological repair can use other neurons to restore them; alternatively, metamaterials could substitute for the activity of those neurons. Done well, this could keep the brain highly clear, and extending human life by 50 years would then be unsurprising.
Intelligence drives the future: smarter "brains," more dexterous "hands," brighter "eyes," sharper "ears." Intelligent optoelectronic chips, together with knowledge-driven, data-driven, and cognition-driven methods, make up the great intelligence-driven future. From here we can see artificial intelligence gradually approaching human level: on a timeline starting in 2016 and extending to 2066, machine AI comes to perform all human tasks. Of course, this is a vision; it is a prediction resting on certain basic assumptions.
We speak of cognitive intelligence, but what is it? There was the Turing test: any algorithm you develop needs to be tested, so are there testing requirements? Let us start with the Turing test, which mainly tests whether a machine can exhibit intelligence equivalent to, or indistinguishable from, a human's. It was an imitation game, so in this last part I will also talk about testing.
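Purely as an illustration of the structure of the imitation game, the protocol can be sketched as a harness in which a judge exchanges messages with two unseen respondents and must say which is the machine. Everything below, including names and replies, is an invented stub.

```python
import random

def human_respondent(prompt):
    return "I'd have to think about that."       # stub standing in for a person

def machine_respondent(prompt):
    return "I'd have to think about that."       # stub standing in for a program

def imitation_game(judge, rounds=5):
    """Judge chats with respondents A and B, then guesses which is the machine."""
    a, b = random.sample([human_respondent, machine_respondent], 2)
    transcript = []
    for i in range(rounds):
        q = f"question {i}"                      # a real judge would ask freely
        transcript.append((q, a(q), b(q)))
    guess = judge(transcript)                    # returns 'A' or 'B'
    return (guess == 'A') == (a is machine_respondent)

caught = imitation_game(lambda t: random.choice('AB'))
print("machine identified:", caught)             # chance level for a random judge
```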
The Turing test itself keeps evolving. It was proposed in 1950, and early natural-language-processing programs were already being measured against it by 1986. In 2014, the "Eugene Goostman" program "passed" the Turing test for the first time; in 2015, an artificial intelligence system finally learned concepts much as humans do and passed a visual Turing test. But can these tests be improved? Researchers have since designed tests of machines' commonsense reasoning, tests of neural networks' abstract reasoning, and tests of general artificial intelligence (AGI), such as home elder-care scenarios (ECW). New models of artificial intelligence testing keep emerging, so the Turing test remains an important direction in the development of artificial intelligence.
What would a cognitive test within the Turing-test tradition look like? We need to think about this, and I hope everyone here will. What should we attend to? Cognitive testing and function identification, and cognitive decision-making and logical reasoning, are important issues that currently need study.
Returning to the knowledge-driven, brain-science, data-driven, and cognition-driven threads: can a new generation of artificial intelligence algorithms be developed from them, and can cognitive testing be established? This is what we call an important target topic, and these are some of our thoughts on the development of artificial intelligence.
Artificial intelligence is developing very rapidly within the historical process of industrial transformation. The information age is giving way to the current digital economy, that is, the era of artificial intelligence. Here you can see many representative artificial intelligence companies in the United States, Chinese companies including ByteDance and Horizon Robotics, and some European companies; artificial intelligence has become a core driving force of global economic development. Artificial intelligence is also new infrastructure and has become a very important national strategy. The Artificial Intelligence 2.0 advocated by Academician Pan a few years ago received great attention from the state. In 2020, the growth rate of China's artificial intelligence market far exceeded that of the global market. Our survey found applications in fields such as smart security, healthcare, finance, and education, for example the smart medical town in Yuhang District. New infrastructure is a very important task, and what Governor Gao and Mayor Liu just said is reflected in it.
A 2019 article in Nature focused on China's leading development in the field of artificial intelligence. More than a dozen teachers and students in our laboratory surveyed 44 artificial-intelligence-related policies promulgated in Zhejiang Province over the past 10 years. Within Zhejiang look at Hangzhou, and within Hangzhou look at Yuhang: Hangzhou offers unlimited imagination and unlimited space for building AI. We also thank Hangzhou Future Science and Technology City for supporting our Global Artificial Intelligence Technology Conference.
Finally, to summarize, I shared three topics with you today. The first is coexistence: higher work efficiency, quality of life, and safety, and interaction in extreme environments. What is an extreme environment? For example, holding a meeting while geographically far apart yet hoping to communicate face to face; that is one such limit. The second is algorithms: cognitive computing theories and methods closer to the original mechanisms of cognition; this is an important topic. The third is computing power: new computing paradigms and chip architectures offering order-of-magnitude performance improvements, which is the most important. I hope artificial intelligence develops along these three lines in the future, including multi-dimensional, multi-angle, in-depth cognitive testing.