Nvidia Releases New AI Chip, H200, With 141 GB of High-Bandwidth Memory
Nvidia H200 advanced chips (photo: Twitter @nvidiadc)

JAKARTA - Nvidia on Monday, November 13 unveiled a new top-of-the-line chip for artificial intelligence, saying the new offering will begin rolling out next year with Amazon.com, Alphabet's Google, and Oracle.

The chip, called the H200, will overtake the current flagship H100, with one major upgrade: more high-bandwidth memory, one of the most expensive parts of the chip, which determines how quickly data can be processed.

Nvidia dominates the artificial intelligence chip market and powers OpenAI's ChatGPT service as well as many similar generative AI services that respond to questions with human-like writing. More high-bandwidth memory and a faster connection to the chip's processing elements mean such services can deliver answers more quickly.
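To see why memory bandwidth matters for response speed, here is a rough back-of-envelope sketch. It assumes, as a simplification, that generating each token requires streaming all model weights from memory once; the 70-billion-parameter model size and 16-bit precision are illustrative assumptions, and the bandwidth figures are approximate public specifications rather than numbers from this announcement.

```python
# Back-of-envelope: memory bandwidth as an upper bound on LLM text generation.
# Assumptions (illustrative, not figures from the Nvidia announcement):
#   - Generating one token streams all model weights from memory once.
#   - A 70B-parameter model at 16-bit precision occupies ~140 GB.
#   - Approximate public bandwidth specs: H100 ~3.35 TB/s, H200 ~4.8 TB/s.

PARAMS = 70e9          # number of model parameters
BYTES_PER_PARAM = 2    # 16-bit (FP16/BF16) precision
model_bytes = PARAMS * BYTES_PER_PARAM

for name, bandwidth_tb_s in [("H100", 3.35), ("H200", 4.8)]:
    bandwidth_bytes = bandwidth_tb_s * 1e12
    # Bandwidth-bound ceiling: bytes moved per second / bytes needed per token.
    tokens_per_sec = bandwidth_bytes / model_bytes
    print(f"{name}: ~{tokens_per_sec:.0f} tokens/s upper bound (bandwidth-bound)")
```

Under these assumptions the H200's higher bandwidth lifts the ceiling from roughly 24 to roughly 34 tokens per second, which is why more and faster memory translates directly into quicker answers.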

The H200 has 141 gigabytes of high-bandwidth memory, up from 80 gigabytes in the previous H100. Nvidia did not disclose the memory suppliers for the new chip, but Micron Technology said in September that it was working to become an Nvidia supplier.

Nvidia also buys memory from Korea's SK Hynix, which said last month that artificial intelligence chips were helping to revive its sales.

Nvidia also announced that Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure will be among the first cloud service providers to offer access to H200 chips, alongside specialty AI cloud providers such as CoreWeave, Lambda, and Vultr.

