NVIDIA is doubling down on AI, and generative AI in particular, with CEO Jensen Huang placing heavy emphasis on the company's advancements in his keynote address at GTC this week.

“The warp drive engine is accelerated computing, and the energy source is AI,” Huang said. “The impressive capabilities of generative AI have created a sense of urgency for companies to reimagine their products and business models.”

During the keynote he described the areas where the company has made strides. Here are some of the highlights from the event:

NVIDIA AI Foundations

The new service will enable companies to build AI applications and services. NeMo and Picasso are both part of AI Foundations, with NeMo covering large language models (LLMs) and Picasso covering images, video, and 3D.

NeMo enables companies to refine their LLMs by defining areas of focus, adding domain-specific knowledge, and teaching functional skills. 
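One way to picture the "adding domain-specific knowledge" step is grounding a general model's answers in company documents: retrieve the most relevant snippet for a query and prepend it to the prompt. The sketch below is purely illustrative — the function names, the toy relevance scoring, and the sample knowledge base are all hypothetical and are not NeMo's actual API:

```python
import re

# Hypothetical sketch of grounding a model in domain knowledge.
# None of the names below come from NVIDIA NeMo itself.

def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, snippet: str) -> int:
    """Toy relevance: count word overlap between query and snippet."""
    return len(tokens(query) & tokens(snippet))

def retrieve_context(query: str, knowledge_base: list[str]) -> str:
    """Pick the domain snippet most relevant to the query."""
    return max(knowledge_base, key=lambda s: score(query, s))

def build_prompt(query: str, knowledge_base: list[str]) -> str:
    """Prepend retrieved domain knowledge so the model answers in context."""
    context = retrieve_context(query, knowledge_base)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

# Hypothetical company knowledge base.
kb = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available by phone from 9am to 5pm on weekdays.",
]

print(build_prompt("What is the refund policy?", kb))
```

A production system would replace the word-overlap scoring with embedding-based retrieval and feed the assembled prompt to the customized model, but the shape of the workflow — retrieve domain context, then condition the model on it — is the same.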

Picasso can be used to build and deploy generative AI-based image, video, and 3D applications, including text-to-image, text-to-video, and text-to-3D capabilities.

The company already has partnerships in place to take advantage of these new solutions, including with Adobe, Getty Images, and Shutterstock. These collaborations will help advance productivity for creatives using AI.

New inference platforms for LLMs and generative AI workloads

There are four new inference platforms for building AI-powered applications. Each platform pairs purpose-built software with the latest NVIDIA Ada, Hopper, and Grace Hopper processors.

They include NVIDIA L4 for AI video, NVIDIA L40 for image generation, NVIDIA H100 NVL for large language model deployment, and NVIDIA Grace Hopper for recommendation models.

“The rise of generative AI is requiring more powerful inference computing platforms,” said Huang. “The number of applications for generative AI is infinite, limited only by human imagination. Arming developers with the most powerful and flexible inference computing platform will accelerate the creation of new services that will improve our lives in ways not yet imaginable.”

Partnering with over 100 MLOps companies

The range of offerings from these companies will help organizations optimize their AI workflows.

The MLOps companies that NVIDIA has partnered with include Canonical, ClearML, Dataiku, Domino Data Lab, Run:ai, and Weights & Biases.

The company also partnered with the major cloud providers, which can provide infrastructure for data processing, wrangling, training, and inference. These include AWS, Google Cloud, Azure, Oracle Cloud, and Alibaba Cloud.

NVIDIA makes supercomputing accessible to more companies

The company launched DGX Cloud, which enables access to an AI supercomputer right from a web browser. This removes the need to acquire, deploy, and manage on-premises infrastructure.

Companies can rent DGX Cloud clusters based on their current needs and easily scale as needs change. 

Oracle Cloud Infrastructure (OCI) is the first to provide hosting for DGX Cloud infrastructure, and according to NVIDIA, Microsoft Azure will offer hosting next quarter. Plans for Google Cloud and other providers are also in the works. 

Improving avatar creation

A new feature of the Omniverse Avatar Cloud Engine (ACE) platform will make it easier to develop interactive avatars, from interactive chatbots to intelligent digital humans.

For example, AT&T is using Omniverse ACE to create and deploy virtual assistants for its employee help desk. It is also working with one of NVIDIA's service delivery partners, Quantiphi, to provide support for local languages in different regions.

“Quantiphi and NVIDIA have been collaborating to make customer experience more immersive by combining the power of large language models, graphics and recommender systems,” said Siddharth Kotwal, global head of NVIDIA Practice at Quantiphi. “NVIDIA’s Tokkio framework has made it easier to build, deploy and personalize AI-powered digital assistants or avatars for our enterprise customers. The process of seamlessly integrating automatic speech recognition, conversational agents and information retrieval systems with real-time animation has been simplified.”