Revealing Nvidia’s “Huang’s Law” era: GPU AI inference performance increases 1,000 times in 10 years
While people debate whether Moore's Law has run its course, NVIDIA recently published an official technical article on "Huang's Law." The article no longer dwells on transistor counts; instead, it argues that the AI inference performance and efficiency of a single chip have improved by more than 1,000 times over the past ten years.
Moore's Law has long dominated the technology industry. NVIDIA CEO Jensen Huang has said many times that Moore's Law is "slowing down" and that the concept is becoming outdated. Although NVIDIA migrated its GPUs from the 28nm to the 5nm semiconductor node over the past decade, process scaling accounts for only about 2.5x of the total performance gain.
NVIDIA Chief Scientist Bill Dally made clear in the article that NVIDIA's approach to next-generation technology centers on "Huang's Law."
What is Huang's Law?
As for the origin of the so-called "Huang's Law," NVIDIA says it did not coin the term itself: it first appeared in an IEEE Spectrum report and was later picked up by many media outlets. The concept, which NVIDIA has since put into practice in its products, is intriguing and could be a key to the industry's future.
Bill Dally pointed out in his talk at Hot Chips 2023 that NVIDIA has achieved an astonishing 1,000-fold increase in computing chip performance over the past decade. In his view, such an improvement cannot be explained within the framework of Moore's Law, since process shrinks account for only a small share of that number. Asked how the feat was accomplished, his answer was that NVIDIA prioritized innovation across the entire "stack" rather than relying on process scaling alone.
To support this claim, NVIDIA's article points to the introduction of the Hopper architecture as a decisive factor behind the large performance figures, thanks to its use of 8-bit and 16-bit floating-point and integer math. The earlier Ampere architecture added support for structured sparsity, which roughly doubles the performance of compute workloads. Tying these technologies together, NVIDIA's NVLink interconnect helped complete the overall 1,000x breakthrough.
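To give a feel for why lower-precision math matters, here is a minimal, self-contained sketch (not NVIDIA's implementation, and independent of any GPU library) showing how an 8-bit integer matrix multiply can approximate a 32-bit floating-point one. Narrower formats like these let hardware pack more operations per cycle and move less data, which is the lever the article attributes to Hopper's 8-bit and 16-bit math.

```python
# Illustrative only: approximate an fp32 matmul with int8 quantized inputs.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization of a float32 array to int8."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float32)
b = rng.standard_normal((64, 64)).astype(np.float32)

# Reference result in 32-bit floating point.
ref = a @ b

# Quantize inputs, multiply in 32-bit integer accumulators, then rescale.
qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)
approx = (qa.astype(np.int32) @ qb.astype(np.int32)).astype(np.float32) * (sa * sb)

rel_err = np.abs(approx - ref).mean() / np.abs(ref).mean()
print(f"mean relative error of int8 matmul vs fp32: {rel_err:.3%}")
```

The accuracy loss is typically small relative to the throughput gained, which is why inference workloads in particular benefit from 8-bit formats.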
NVIDIA notes in the article that over the entire ten-year period, moving from the 28-nanometer to the 5-nanometer process contributed only about a 2.5x performance gain, far less than what Moore's Law, under which transistor counts roughly double every two years, would suggest on its own. Dally said that NVIDIA's future rests on "Huang's Law," which he believes will open up new opportunities for the industry to keep advancing.
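Taking the article's own numbers at face value, a quick back-of-the-envelope split (illustrative arithmetic only, not NVIDIA's published breakdown) shows how much of the claimed gain must come from sources other than process scaling:

```python
# Rough split of the gains cited in the article.
total_gain = 1000      # claimed single-chip AI inference gain over ten years
process_gain = 2.5     # portion attributed to the 28nm -> 5nm node moves

other_gain = total_gain / process_gain
print(f"Gain from architecture, number formats, software, etc.: ~{other_gain:.0f}x")
```

In other words, roughly 400x of the claimed improvement would have to come from architecture, number formats, and software rather than from the process node.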
"It's an interesting time to be a computer engineer," Daly said. "The industry situation really validates this fact. It can be said that the computer industry is at a decisive moment, and it all depends on how the company Looking at the development of chips and computing."