Nvidia’s Hopper H100 pictured, features 80GB HBM3 memory and an ambitious VRM


Bottom line: Nvidia took the wraps off its Hopper architecture at GTC 2022, announcing the H100 server accelerator but only showing off renders of it. Now we finally have some in-hand pictures of the SXM variant of the card, which carries a mind-boggling 700W TDP.

It has been a little over a month since Nvidia unveiled its H100 server accelerator based on the Hopper architecture, and so far, we have only seen renders of it. That changes today, as ServeTheHome has just shared pictures of the card in its SXM5 form factor.

The GH100 compute GPU is fabricated on TSMC’s N4 process node and has an 814 mm² die size. The SXM variant features 16,896 FP32 CUDA cores, 528 Tensor cores, and 80GB of HBM3 memory connected over a 5120-bit bus. As can be seen in the photos, there are six 16GB stacks of memory around the GPU, but one of them is disabled.
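The stack count, capacity, and bus width quoted above are consistent with each other, given that each HBM3 stack exposes a 1024-bit interface. A quick arithmetic sketch:

```python
# Six 16GB HBM3 stacks are physically present, but one is disabled,
# giving the advertised 80GB of usable capacity.
stacks_total = 6
stack_capacity_gb = 16
stacks_enabled = stacks_total - 1  # one stack disabled
usable_gb = stacks_enabled * stack_capacity_gb
print(usable_gb)  # 80

# Each enabled HBM3 stack contributes a 1024-bit interface,
# so five active stacks yield the quoted 5120-bit bus.
bus_bits = stacks_enabled * 1024
print(bus_bits)  # 5120
```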

Nvidia also quoted a staggering 700W TDP, 75% higher than its predecessor, so it’s no surprise that the card comes with an extremely impressive VRM solution. It features 29 inductors, each equipped with two power stages, plus a further three inductors with one power stage each. Cooling all of these tightly packed components will be a challenge.
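A back-of-the-envelope sketch of what that VRM layout implies (the per-stage load figure is a rough estimate that ignores efficiency losses and phase balancing, not an official Nvidia number):

```python
# Total power stages implied by the pictured VRM layout:
# 29 inductors paired with two power stages each, plus three
# inductors with a single power stage each.
stages = 29 * 2 + 3 * 1
print(stages)  # 61

# Rough average load per stage at the quoted 700W TDP.
watts_per_stage = 700 / stages
print(round(watts_per_stage, 1))  # 11.5
```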

Another noticeable change is the connector layout for SXM5. There is now a short and a long mezzanine connector, whereas previous generations featured two identically sized longer ones.

Nvidia will start shipping H100-equipped systems in Q3 of this year. It’s worth mentioning that the PCIe version of the H100 is currently listed in Japan for 4,745,950 yen (~$36,300) after taxes and shipping, even though it has fewer CUDA cores, downgraded HBM2e memory, and half the TDP of the SXM variant.
