
Turing Award winner Jack Dongarra: There is still a lot of room at the top of supercomputing

WBOY
2023-05-04

Supercomputers are the Olympic champions of scientific computing. Through numerical simulation, they enrich our understanding of the world: the stars light-years away, Earth's weather and climate, and how the human body works.

Jack Dongarra has been a driving force in high-performance computing for more than four decades. Earlier this year, the 2021 ACM A.M. Turing Award was awarded to Dongarra "for his pioneering contributions to numerical algorithms and tool libraries that have enabled high-performance computing software to keep pace with the exponential advances in hardware for more than four decades."

The author of this article, Bennie Mols, a science and technology writer based in Amsterdam, the Netherlands, met Dongarra at the 9th Heidelberg Laureate Forum in Germany in September and discussed the present and future of high-performance computing with him. Dongarra, 72, is a distinguished professor at the University of Tennessee and has been a distinguished researcher at the U.S. Department of Energy's Oak Ridge National Laboratory since 1989.

The following is the content of the interview:

Q1: What has been your motivation for conducting scientific research over the past few decades?

A: My main area of research is mathematics, especially numerical linear algebra; all of my work stems from that. For fields such as physics and chemistry that rely on computation, in particular on solving systems of linear equations, software that computes the answers is essential. At the same time, you have to make sure the software matches the architecture of the machine, so that you actually get the high performance the machine is capable of.
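As a concrete illustration of the kind of computation he describes, here is a minimal sketch that solves a dense system Ax = b with NumPy, whose numpy.linalg.solve routine dispatches to a LAPACK LU-based solver. This is just an illustrative example, not code from Dongarra's own libraries; the matrix size and random data are arbitrary.

```python
import numpy as np

# Set up a small dense linear system A x = b with a known solution.
rng = np.random.default_rng(seed=0)
n = 1000
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

# numpy.linalg.solve calls a LAPACK LU-based routine under the hood,
# the kind of dense linear-algebra kernel this interview is about.
x = np.linalg.solve(A, b)

# Relative residual as a quick accuracy check.
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```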

Q2: What are the most important requirements for software to run on supercomputers?

A: We want the software to produce accurate results. We want the scientific community to use and understand it, and even to contribute to its improvement. We want it to perform well and be portable across different machines. We want the code to be readable and reliable. Ultimately, we want software to make the people who use it more productive.

Developing software that meets all these requirements is a non-trivial process. Software at this level of engineering often runs to millions of lines of code, and roughly every ten years we see major changes in machine architecture, which forces us to refactor both the algorithms and the software that embodies them. Software follows hardware, and there is still a lot of room at the top of supercomputing to achieve better machine performance.

Q3: What are the current developments in high-performance computing that excite you?

A: Our high-performance supercomputers are built from off-the-shelf, third-party components. You and I could buy the same high-end chips; a supercomputer just needs a great many of them. We usually add accelerators in the form of GPUs, put multiple chip boards on a rack, and combine many of these racks into a supercomputer. We use third-party components because they are cheaper, but if you designed a chip specifically for scientific computing, you would get a supercomputer with better performance, which is an exciting thought.

In fact, this is exactly what companies like Amazon, Facebook, Google, Microsoft, Tencent, Baidu, and Alibaba are doing: they are building their own chips. They can do this because they have enormous resources, whereas universities have limited funding and therefore, unfortunately, have to use third-party products. This ties into another concern of mine: how do we keep talent in science, rather than watching it go to larger companies that pay better?

Q4: What other important developments are there for the future of high-performance computing?

A: There are indeed some important things. It’s clear that machine learning is already having a major impact on scientific computing, and this impact will only grow. I think of machine learning as a tool that helps solve problems that computational scientists want to solve.

This goes hand in hand with another important development. Traditionally, our hardware uses 64-bit floating-point arithmetic, so each number is represented in 64 bits. If you use fewer bits, say 32, 16, or even 8, the calculation gets faster, but accuracy is lost. It turns out, however, that AI calculations can often be done with fewer bits, 16 or even 8. This is an area that needs to be explored: we need to find out where reduced precision works well and where it does not.
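A quick way to see the trade-off he describes is to compare machine epsilon, the relative rounding error, across floating-point widths. The snippet below is a simple NumPy illustration, not tied to any particular machine or library mentioned in the interview.

```python
import numpy as np

# Fewer bits per number means a coarser grid of representable values.
# Machine epsilon is the gap between 1.0 and the next representable number.
for dtype in (np.float64, np.float32, np.float16):
    info = np.finfo(dtype)
    third = dtype(1.0) / dtype(3.0)
    print(f"{info.dtype}: {info.bits} bits, eps = {info.eps:.1e}, 1/3 ~ {third}")
```

Roughly speaking, 64-bit numbers carry about 16 decimal digits, 32-bit about 7, and 16-bit about 3 to 4, which is why reduced precision is often acceptable for AI workloads but has to be used with care in scientific simulation.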

Another area of research is how to start with a low-precision calculation, get an approximation, and then refine the result with higher-precision calculations.
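A minimal sketch of this idea is mixed-precision iterative refinement: factor the matrix once in single precision, then correct the answer using double-precision residuals. The code below uses SciPy's LU routines purely as an illustration; solve_with_refinement is a hypothetical helper, not a routine from any particular library.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_with_refinement(A, b, iters=5):
    """Illustrative mixed-precision solve: cheap float32 factorization,
    refined with float64 residuals (hypothetical helper, not a library API)."""
    lu32 = lu_factor(A.astype(np.float32))                # low-precision factorization, done once
    x = lu_solve(lu32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                      # residual computed in double precision
        d = lu_solve(lu32, r.astype(np.float32))           # cheap correction from float32 factors
        x += d.astype(np.float64)
    return x

rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true
x = solve_with_refinement(A, b)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # close to double-precision accuracy
```

On reasonably well-conditioned problems this recovers accuracy close to a full double-precision solve, while the expensive factorization is done only once, in the cheaper precision.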

Q5: What is the power consumption of supercomputers?

A: Today’s best-performing supercomputers consume 20 or 30 megawatts to reach exascale speeds. If everyone on Earth did one calculation every second, it would take more than four years to do what such a machine does in one second. Maybe within 20 years we will reach zettascale, that is, 10^21 floating-point operations per second. However, power consumption could be the limiting factor: you would need a 100 or 200 megawatt machine, which is currently far too energy-intensive.
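The comparison is easy to verify with back-of-the-envelope arithmetic; the figures below (10^18 operations per second for exascale, roughly 8 billion people) are assumptions for the purpose of the check, not numbers from the interview.

```python
# Rough sanity check of the "everyone on Earth, one calculation per second" comparison.
exascale_ops_per_sec = 1e18       # exascale: 10**18 floating-point operations per second
population = 8e9                  # approximate world population (assumption)
seconds_per_year = 365.25 * 24 * 3600

human_seconds = exascale_ops_per_sec / population   # time for humanity to match one machine-second
print(human_seconds / seconds_per_year)             # roughly 4 years
```

By the same yardstick, a zettascale machine (10^21 operations per second) would correspond to roughly 4,000 years of everyone on Earth computing at once.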

Q6: How do you see the role of quantum computing in future high-performance computing?

A: I think the problems that quantum computing can solve are limited. It will not solve the problems we typically use supercomputers for, such as the three-dimensional partial differential equations that arise in climate modeling.

In the future, we will build integrated machines containing different types of computing devices. We will have processors and accelerators, we will have tools to help with machine learning, we will most likely have devices that do neuromorphic computing in the manner of the brain, we will have optical computers, and, in addition, we will have quantum computers to solve specific problems.

