The artificial intelligence boom has led to a shortage of computing power; Microsoft launches a new server rental plan
ChatGPT's continued popularity has led to a chip shortage: there is not enough computing power to handle ChatGPT's workload, and OpenAI's servers are overloaded. The last large-scale chip shortage was driven by frenzied cryptocurrency mining. This time, with demand for generative artificial intelligence still climbing, the computing power shortage may persist for quite some time.
Microsoft hopes to fill this gap with a new virtual machine product called ND H100 v5, which bundles large numbers of Nvidia's latest H100 GPUs (code-named Hopper) for generative artificial intelligence applications.
The idea is to give companies working on generative artificial intelligence higher computing speeds, so they can dig deeper into data, build relationships, reason, and predict answers. Generative AI is still in its early stages, but the popularity of applications like ChatGPT already demonstrates the technology's potential.
But the technology also demands enormous computing power, which Microsoft is now bringing to its Azure cloud services.
The virtual machine offering can be scaled to match the size of a generative AI application, up to thousands of H100 GPUs interconnected through Nvidia's Quantum-2 InfiniBand technology.
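To make that scaling model concrete, the sketch below (a minimal illustration, not Microsoft's or Nvidia's code) shows the kind of cross-GPU collective that NCCL runs over NVLink within a node and over InfiniBand between nodes. It assumes PyTorch with CUDA and a launcher such as torchrun that sets the RANK, WORLD_SIZE, and LOCAL_RANK environment variables.

```python
# Minimal sketch of the collective communication step behind multi-GPU
# training. Assumes PyTorch with CUDA and launch via torchrun, which
# provides RANK / WORLD_SIZE / LOCAL_RANK in the environment.
import os
import torch
import torch.distributed as dist

def main():
    # NCCL is the backend that uses NVLink within a node and
    # InfiniBand between nodes.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Every rank contributes a tensor; all_reduce sums them in place
    # across all GPUs -- the core communication step of data-parallel
    # training.
    grad = torch.ones(1024, device="cuda") * dist.get_rank()
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print(f"reduced first element: {grad[0].item()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Run with, say, torchrun --nproc_per_node=8 allreduce_demo.py to span the eight GPUs of a single VM; the same pattern extends across nodes, where the InfiniBand fabric determines how quickly the all_reduce completes.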
Microsoft did not immediately announce pricing for the H100 virtual machines. Virtual machine prices vary by configuration; a fully loaded A100 virtual machine with 96 CPU cores, 900GB of storage, and eight A100 GPUs costs nearly $20,000 per month.
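For a rough sense of what that figure implies, here is a back-of-the-envelope calculation; the per-hour and per-GPU numbers are derived from the monthly figure above, not published Azure prices.

```python
# Back-of-the-envelope sketch of cloud GPU cost, using the ~$20,000/month
# figure quoted above for an eight-A100 virtual machine. The per-hour and
# per-GPU-hour numbers are derived estimates, not published Azure prices.
MONTHLY_COST_USD = 20_000
GPUS_PER_VM = 8
HOURS_PER_MONTH = 730  # average hours in a month

per_hour = MONTHLY_COST_USD / HOURS_PER_MONTH
per_gpu_hour = per_hour / GPUS_PER_VM

print(f"VM cost per hour:  ${per_hour:,.2f}")       # ~ $27.40
print(f"Cost per GPU-hour: ${per_gpu_hour:,.2f}")   # ~ $3.42
```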
Nvidia GPUs faced a severe test when ChatGPT first launched last year: its computation runs on an OpenAI supercomputer built with Nvidia A100 GPUs.
But the servers were quickly overwhelmed by the explosive growth in demand for ChatGPT, and users complained that they could not respond to and process queries in time.
The H100 could close the speed gap for generative artificial intelligence, which is already being used in healthcare, robotics, and other industries. Development companies are also looking to close the last-mile gap by deploying interfaces that make AI as simple and usable as ChatGPT.
Nvidia and Microsoft are already building an artificial intelligence supercomputer with the H100. The GPU is designed to work best with applications written in CUDA, Nvidia's parallel programming framework. The offering also includes the Triton Inference Server, which helps deploy artificial intelligence models such as GPT-3 in its GPU environment.
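As an illustration of the serving side, the hedged sketch below queries a Triton Inference Server over HTTP using the official tritonclient package. The model name and the tensor names/shapes are placeholders; the real values come from the deployed model's configuration.

```python
# Hedged sketch of querying a Triton Inference Server over HTTP with the
# official tritonclient package (pip install tritonclient[http]). The
# model name "my_text_model" and the tensor names/shapes are placeholders
# that depend on the model's config.pbtxt.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Prepare an input tensor matching the model's declared input.
data = np.random.rand(1, 16).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

# Run inference on the GPU-backed server and read back the output.
result = client.infer(model_name="my_text_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```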
Microsoft has begun to embed artificial intelligence throughout its products: it has implemented a customized version of GPT-3.5, the large language model behind ChatGPT, in the Bing search engine. Microsoft is taking a DevOps-style iterative approach to Bing AI, in which the application is updated quickly by learning from users as they use the model. Microsoft 365 Copilot is the familiar Office suite with artificial intelligence embedded: software such as Word, PowerPoint, and Excel will change traditional ways of working with new capabilities. And none of this is possible without the supporting computing power.
The new Azure virtual machine's base configuration interconnects eight H100 Tensor Core GPUs via Nvidia's proprietary NVLink 4.0 interconnect, and can be scaled out to additional GPUs through the Quantum-2 interconnect. The servers use Intel's 4th-generation Xeon Scalable (Sapphire Rapids) processors, with data transferred to and from the GPUs over PCIe Gen5.
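As a rough illustration of the host-to-GPU path that PCIe Gen5 widens, the sketch below times a pinned-memory copy with PyTorch. This is an assumption-laden estimate, not a benchmark of these servers; measured bandwidth varies with buffer size, pinning, and the system under test.

```python
# Rough sketch of measuring host-to-GPU copy bandwidth, the path that
# PCIe Gen5 speeds up in these servers. Assumes PyTorch with CUDA;
# results vary with buffer size and whether host memory is pinned.
import torch

size_mb = 1024
buf = torch.empty(size_mb * 1024 * 1024, dtype=torch.uint8).pin_memory()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
gpu_buf = buf.to("cuda", non_blocking=True)  # async copy from pinned memory
end.record()
torch.cuda.synchronize()  # wait for the copy before reading the timer

ms = start.elapsed_time(end)
print(f"~{size_mb / (ms / 1000):,.0f} MB/s host-to-device")
```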