Microsoft explains how thousands of Nvidia GPUs built ChatGPT

Over the past six months, ChatGPT has become extremely popular online, but this wasn’t an overnight success. The startup that created ChatGPT, OpenAI, contacted Microsoft more than five years ago about developing AI infrastructure on thousands of Nvidia GPUs, according to a blog post that Microsoft published on Monday.

Recently, Microsoft’s $10 billion investment in the research team behind tools like ChatGPT and DALL-E 2 has brought attention to the partnership between OpenAI and Microsoft. However, Microsoft claims that the collaboration began much earlier. According to Bloomberg, Microsoft has now invested “several hundred million dollars” in building the foundation that underpins ChatGPT and initiatives like Bing Chat.

A large portion of that money went to Nvidia, which currently leads the market in hardware used to train AI models. Microsoft targeted Nvidia’s enterprise-grade GPUs, such as the A100 and H100, rather than the consumer gaming cards you’d find on a list of the best graphics cards.

Yet it’s not merely a matter of assembling graphics cards and developing a language model. “This is not something that you just buy a big bunch of GPUs, put them together, and they’ll start functioning together,” says Nidhi Chappell, Microsoft’s head of product for Azure. Achieving optimal performance requires a great deal of system-level optimization, which takes several generations of experience to get right.

Microsoft has put the necessary foundation in place and is now making its hardware available to others. In a separate blog post published on Monday, the company announced that it would deliver Nvidia H100 systems “on-demand in sizes ranging from eight to thousands of Nvidia H100 GPUs.”

Nvidia, which has spent years investing in AI through hardware and software, has seen a tremendous surge in popularity as a result of ChatGPT. AMD, Nvidia’s main rival in gaming graphics cards, has been seeking to gain traction in the AI market with accelerators like the Instinct MI300.

The development of ChatGPT would not have been possible without the power provided by Microsoft, according to Greg Brockman, president and co-founder of OpenAI: “Co-designing supercomputers with Azure has been crucial for scaling our demanding AI training needs, making our research and alignment work on systems like ChatGPT possible.”

Nvidia is anticipated to reveal more about its upcoming AI products during its GPU Technology Conference (GTC), which opens with a keynote address on March 21. Before that, Microsoft expands on its AI road map later this week with a March 16 presentation on the future of AI in the workplace.