NVIDIA (NASDAQ:NVDA) continued to show its dominance in the AI space Monday by introducing its latest AI system, the HGX H200 powered by the H200 GPU.
The HGX H200 platform features the NVIDIA H200 Tensor Core GPU, which pairs advanced memory with the compute needed to handle massive amounts of data for generative AI and high-performance computing workloads. The platform is based on NVIDIA's Hopper architecture.
The H200 is the first GPU to offer HBM3e — faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads. With HBM3e, the NVIDIA H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100.
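Those ratios can be sanity-checked against the baseline figures. The short sketch below assumes the commonly cited specs of the A100 80GB (80GB of memory at roughly 2.0 terabytes per second), which are not stated in the article; it simply divides the H200 numbers by that assumed baseline.

```python
# Rough check of the reported H200-vs-A100 comparison.
# Assumption: A100 80GB baseline of 80 GB memory at ~2.0 TB/s (not from the article).
a100_capacity_gb, a100_bandwidth_tbs = 80, 2.0
h200_capacity_gb, h200_bandwidth_tbs = 141, 4.8

print(f"Capacity ratio:  {h200_capacity_gb / a100_capacity_gb:.2f}x")    # ~1.76x, i.e. "nearly double"
print(f"Bandwidth ratio: {h200_bandwidth_tbs / a100_bandwidth_tbs:.1f}x")  # ~2.4x
```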
“To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory,” said Ian Buck, vice president of hyperscale and HPC at NVIDIA. “With NVIDIA H200, the industry’s leading end-to-end AI supercomputing platform just got faster to solve some of the world’s most important challenges.”
NVIDIA expects to start shipping H200-powered systems in the second quarter of 2024.