Nvidia and Llama 3.1 help enterprises build supercomputers

In this post:

  • Nvidia collaborates with Meta and integrates Llama 3.1 LLMs into AI Foundry and NIM.
  • The new service will enable enterprises and countries to create customized supercomputers.
  • Accenture is the first company to build custom Llama supermodels for giants like Aramco, Uber, and AT&T.

Nvidia, the multinational technology company, has announced a new service that helps enterprises and countries build customized, specialized supercomputers using Meta’s latest LLM family, Llama 3.1.

Nvidia released two new offerings aimed at enterprises and countries: a service within Nvidia AI Foundry and inference microservices within Nvidia NIM. Both leverage Meta’s latest open-source LLM family, Llama 3.1, and can be used to build generative AI supercomputers.

Nvidia AI Foundry will help enterprises and countries create “supermodel” LLMs customized for specific industry requirements, drawing on Llama 3.1 together with Nvidia’s software, hardware, and expertise. Enterprises and countries will be able to train these supermodels on proprietary data or on synthetic data generated with Llama 3.1 and the Nvidia Nemotron reward model, as sketched below.
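As a rough illustration of that synthetic-data loop (not Nvidia’s documented pipeline), the Python sketch below samples candidate answers from a Llama 3.1 endpoint and keeps only the answer a reward model scores highest. The endpoint URLs, model identifiers, and the way the reward score is read back are all assumptions and will differ by deployment.

```python
# Hypothetical sketch of synthetic-data generation with reward-model filtering.
# Endpoint URLs, model IDs, and the reward-score format are assumptions.
from openai import OpenAI

llama = OpenAI(base_url="http://localhost:8000/v1", api_key="none")   # assumed Llama 3.1 endpoint
reward = OpenAI(base_url="http://localhost:8001/v1", api_key="none")  # assumed reward-model endpoint


def generate_candidates(prompt: str, n: int = 4) -> list[str]:
    """Sample several candidate answers from the Llama 3.1 endpoint."""
    out = llama.chat.completions.create(
        model="meta/llama-3.1-405b-instruct",            # assumed model ID
        messages=[{"role": "user", "content": prompt}],
        n=n,
        temperature=0.9,
        max_tokens=256,
    )
    return [choice.message.content for choice in out.choices]


def score(prompt: str, answer: str) -> float:
    """Ask the reward model to rate a prompt/answer pair.
    We assume it replies with a single number; real reward-model
    deployments return scores in their own formats."""
    out = reward.chat.completions.create(
        model="nvidia/nemotron-reward",                  # assumed model ID
        messages=[
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": answer},
        ],
        max_tokens=8,
    )
    return float(out.choices[0].message.content.strip())


# Keep only the top-scored answer for each prompt as a fine-tuning example.
dataset = []
for prompt in ["Explain how a blockchain validator earns rewards."]:
    best = max(generate_candidates(prompt), key=lambda a: score(prompt, a))
    dataset.append({"prompt": prompt, "response": best})

print(f"Collected {len(dataset)} synthetic training examples")
```

The filtered prompt/response pairs would then serve as training data for the customized supermodel, alongside any proprietary data the enterprise supplies.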


Llama 3.1, released today, includes a model with 405 billion parameters and is positioned to compete with closed-source AI systems such as ChatGPT and Gemini. Meta continues to improve Llama by providing additional components that work alongside the model. Meta and Nvidia partnered to integrate Llama 3.1 into Nvidia’s services, making the solution available from day one. Nvidia CEO Jensen Huang said,

“…NVIDIA AI Foundry has integrated Llama 3.1 throughout and is ready to help enterprises build and deploy custom Llama supermodels.”  

Llama 3.1’s generative AI models were trained on more than 16,000 Nvidia H100 Tensor Core GPUs. They are also optimized for Nvidia’s accelerated computing hardware and software, which enables deployment in data centers, in the cloud, and on GPU-powered personal computers.


Many companies worldwide already have access to NIM microservices for Llama. Accenture is the first client to build custom Llama supermodels, doing so for Aramco, AT&T, and Uber, and these corporations will be the first to access NIM microservices using Llama 3.1. Once a custom model is ready, an enterprise can pair Nvidia’s microservices with an MLOps platform and a cloud platform to run it.
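For context on what accessing a NIM microservice looks like in practice, NIM containers expose an OpenAI-compatible HTTP API; the minimal sketch below queries a deployed Llama 3.1 microservice with the standard OpenAI Python client. The base URL, API key, and model identifier are assumptions and depend on how the microservice is hosted.

```python
# Minimal sketch: querying a deployed Llama 3.1 NIM microservice through its
# OpenAI-compatible API. The base URL, API key, and model ID are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # assumed address of a self-hosted NIM container
    api_key="not-needed-for-local-nim",    # hosted endpoints require a real key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-405b-instruct",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Summarize Nvidia AI Foundry in one sentence."}
    ],
    temperature=0.2,
    max_tokens=128,
)

print(response.choices[0].message.content)
```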

Last week, Mistral AI released a new 12B-parameter model named Mistral NeMo in collaboration with Nvidia; the model is available as an Nvidia NIM inference microservice. On the hardware side, a leaker claimed that Nvidia will release a next-generation RTX 5090D exclusively for the Chinese market, as the successor to the RTX 4090D.


