Microsoft is spending hundreds of millions of dollars to build a massive supercomputer that helps power OpenAI's ChatGPT chatbot. In two blog posts published on Monday, the company explained how it built the powerful Azure artificial intelligence infrastructure that OpenAI relies on, and how those systems are becoming even more capable.
To build the supercomputer that powers OpenAI's projects, Microsoft said it connected thousands of Nvidia graphics processing units (GPUs) on its Azure cloud computing platform. That, in turn, lets OpenAI train increasingly powerful models and "unlock AI capabilities" in tools like ChatGPT and Bing.
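The posts don't describe OpenAI's training stack in detail, but the general pattern behind "connecting thousands of GPUs for one training job" is data-parallel training. Below is a minimal, hypothetical PyTorch sketch using DistributedDataParallel over the NCCL backend (the collective-communication library that runs over fabrics like InfiniBand); the model, data, and step count are stand-ins, not anything from the actual Azure/OpenAI setup.

```python
# Hypothetical sketch of multi-GPU data-parallel training with PyTorch DDP.
# The model and data are placeholders for illustration only.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # A launcher such as torchrun sets RANK/WORLD_SIZE/LOCAL_RANK for every
    # process; NCCL then handles the GPU-to-GPU communication underneath.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in for a large model; DDP replicates it on every GPU.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        # Stand-in for a real data loader shard.
        batch = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()  # DDP all-reduces gradients across every GPU here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with something like `torchrun --nnodes=<N> --nproc_per_node=8 train.py` on each node, every process trains an identical replica on its own GPU while NCCL synchronizes gradients each step, which is what makes the interconnect between machines so important at scale.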
Scott Guthrie, the Microsoft executive vice president who oversees cloud and AI, said the company has spent hundreds of millions of dollars on the project. That sum may seem like a drop in the bucket next to the multi-year, multibillion-dollar investment in OpenAI that Microsoft recently extended, but it shows the company is willing to keep pouring money into AI. Microsoft is already working to make Azure's AI capabilities more powerful by launching new virtual machines that use Nvidia's H100 and A100 Tensor Core GPUs and Quantum-2 InfiniBand networking, a project the two companies teased last year. According to Microsoft, this should allow OpenAI and other companies that rely on Azure to train larger, more complex AI models.
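As a rough illustration of what "launching new virtual machines" looks like from a customer's side, the sketch below provisions a single ND-series GPU VM with Azure's Python SDK. Everything here is a hedged assumption: the resource group, NIC ID, image reference, and SSH key are placeholders, and `Standard_ND96asr_v4` (an ND A100 v4 size) is just one example; actual sizes, images, and quotas depend on your subscription and region.

```python
# Sketch: provisioning one GPU VM on Azure with the Python SDK.
# All names in <angle brackets> are placeholders you must replace.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"  # placeholder
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

poller = compute.virtual_machines.begin_create_or_update(
    "<resource-group>",      # hypothetical, pre-created resource group
    "gpu-train-node-0",      # hypothetical VM name
    {
        "location": "eastus",
        # Example ND-series size (8x A100 with InfiniBand); availability
        # and quota vary by region and subscription.
        "hardware_profile": {"vm_size": "Standard_ND96asr_v4"},
        "storage_profile": {
            # Placeholder image reference; pick a GPU/HPC-ready image.
            "image_reference": {
                "publisher": "<publisher>",
                "offer": "<offer>",
                "sku": "<sku>",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": "gpu-train-node-0",
            "admin_username": "azureuser",
            "linux_configuration": {
                "disable_password_authentication": True,
                "ssh": {
                    "public_keys": [{
                        "path": "/home/azureuser/.ssh/authorized_keys",
                        "key_data": "<ssh-public-key>",  # placeholder
                    }]
                },
            },
        },
        "network_profile": {
            "network_interfaces": [{"id": "<nic-resource-id>"}]  # placeholder
        },
    },
)
vm = poller.result()
print(vm.name, vm.provisioning_state)
```

In a real training cluster, many such nodes would be placed together so their InfiniBand fabric can carry the NCCL traffic shown in the earlier sketch.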
"We found that we needed to build special-purpose clusters focused on supporting large training workloads, and OpenAI was one of the early proof points for that," Eric Boyd, corporate vice president of Microsoft Azure AI, said in a statement. "We worked closely with them to learn what they were looking for and what they needed as they built out their training environments."