NVIDIA launches L40S GPU, AI inference performance exceeds A100 by about 1.2 times
According to an August 10 report from IT House, NVIDIA recently launched the new NVIDIA L40S GPU and OVX server systems equipped with it.
▲ Image source: NVIDIA official website
The L40S GPU, paired with the latest OVX servers, is aimed at AI large-model training and inference, 3D design, visualization, video processing, industrial digitalization, and other workloads. Compared with the A100 GPU, NVIDIA says this complementary L40S system can "enhance generative AI, graphics and video processing capabilities and meet the growing demand for computing power."
According to IT House's investigation, the L40S GPU is an upgraded version of NVIDIA's earlier L40 GPU, carrying 48GB of GDDR6 ECC video memory. The GPU is based on the Ada Lovelace architecture and is equipped with fourth-generation Tensor Cores and an FP8 Transformer Engine, making its computing speed five times that of the previous generation. However, NVLink is still not supported. According to NVIDIA, the L40S delivers 1.2x the A100's generative AI inference performance and 1.7x its training performance. NVIDIA believes that for "complex AI work with billions of parameters and multiple data modalities," the L40S's performance advantage is even more pronounced.
NVIDIA says the L40S GPU will launch this fall. Each NVIDIA OVX server system can support up to eight L40S accelerator cards, but NVIDIA has not announced the GPU's price.
NVIDIA also announced that ASUS, Dell, Gigabyte, HPE, Lenovo, QCT, and Supermicro will soon launch OVX server systems equipped with the L40S GPU.