LOS ANGELES — At this year’s SIGGRAPH conference in Los Angeles, Nvidia (nvidia.com) and Hugging Face announced a partnership that will put generative AI supercomputing at the fingertips of millions of developers building large language models (LLMs) and other advanced AI applications.
By giving developers access to Nvidia DGX Cloud AI supercomputing from within the Hugging Face platform, the partnership aims to accelerate enterprise adoption of generative AI, letting companies train and tune LLMs on their own business data for industry-specific applications.
“Researchers and developers are at the heart of generative AI that is transforming every industry,” explains Jensen Huang, founder and CEO of Nvidia. “Hugging Face and Nvidia are connecting the world’s largest AI community with Nvidia’s AI computing platform in the world’s leading clouds. Together, Nvidia AI computing is just a click away for the Hugging Face community.”
As part of the collaboration, Hugging Face will offer a new service — called Training Cluster as a Service — to simplify the creation of new and custom generative AI models for the enterprise. Powered by Nvidia DGX Cloud, the service will be available in the coming months.
“People around the world are making new connections and discoveries with generative AI tools, and we’re still only in the early days of this technology shift,” notes Clément Delangue, co-founder and CEO of Hugging Face. “Our collaboration will bring Nvidia’s most advanced AI supercomputing to Hugging Face to enable companies to take their AI destiny into their own hands with open source to help the open-source community easily access the software and speed they need to contribute to what’s coming next.”
The Hugging Face platform lets developers build, train and deploy AI models using open-source resources. Over 15,000 organizations use Hugging Face, and its community has shared over 250,000 models and 50,000 datasets.
The DGX Cloud integration with Hugging Face will bring one-click access to Nvidia’s multi-node AI supercomputing platform. Through DGX Cloud, Hugging Face users gain the software and infrastructure needed to rapidly train and tune foundation models on proprietary data, driving a new wave of enterprise LLM development. With Training Cluster as a Service, powered by DGX Cloud, companies will be able to use their own data within Hugging Face to quickly create efficient, custom models.
Each instance of DGX Cloud features eight Nvidia H100 or A100 80GB Tensor Core GPUs for a total of 640GB of GPU memory per node. Nvidia Networking provides a high-performance, low-latency fabric that ensures workloads can scale across clusters of interconnected systems to meet the performance requirements of advanced AI workloads.
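The per-node figure above works out as simple arithmetic: eight GPUs at 80GB each gives 640GB. The short sketch below illustrates that sizing and a rough, back-of-the-envelope bound on how large a model's weights alone could fit in that memory at half precision; the helper functions are hypothetical and purely illustrative, not part of any Nvidia or Hugging Face API.

```python
# Hypothetical sizing helpers for a DGX Cloud node as described:
# 8 GPUs x 80 GB each = 640 GB of GPU memory per node.

GPUS_PER_NODE = 8
GPU_MEMORY_GB = 80  # H100 or A100 80GB Tensor Core GPU


def node_gpu_memory_gb(gpus: int = GPUS_PER_NODE, mem_gb: int = GPU_MEMORY_GB) -> int:
    """Total GPU memory available on a single node, in gigabytes."""
    return gpus * mem_gb


def max_params_billion(total_gb: int, bytes_per_param: int = 2) -> float:
    """Rough upper bound on parameter count (in billions) whose weights fit
    in the given GPU memory at a given precision (2 bytes/param for FP16),
    ignoring activations, optimizer state, and framework overhead."""
    return total_gb / bytes_per_param  # GB / (bytes/param) = billions of params


total = node_gpu_memory_gb()
print(f"Per-node GPU memory: {total} GB")  # 640 GB
print(f"~{max_params_billion(total):.0f}B parameters fit at FP16 (weights only)")
```

Real training runs need far more headroom than weights alone (gradients, optimizer state, activations), which is one reason multi-node scaling over a low-latency fabric matters for the largest models.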
Support from Nvidia experts is included with DGX Cloud to help customers optimize their models and quickly resolve development challenges. DGX Cloud infrastructure is hosted by Nvidia cloud service provider partners. The Nvidia DGX Cloud integration with Hugging Face is expected to be available in the coming months.