NVIDIA HGX™ H100

The NVIDIA HGX H100 brings together the full power of NVIDIA H100 Tensor Core GPUs,
NVIDIA® NVLink® and NVSwitch technology, and NVIDIA Quantum-2 InfiniBand networking while
Cirrascale delivers it all to you via the cloud.

The NVIDIA Accelerated Server Platform for AI and High Performance Computing is Now Part of the Cirrascale AI Innovation Cloud

AI solves a wide array of business challenges using an equally wide array of neural networks. A great AI inference accelerator has to deliver not only the highest performance but also the versatility to accelerate these networks. The NVIDIA HGX H100, as offered by Cirrascale, combines eight NVIDIA H100 GPUs with a high-speed interconnect powered by NVLink and NVSwitch technology to enable the creation of the world’s most powerful scale-up servers. Leveraging the power of multi-precision Tensor Cores in H100, an eight-way HGX H100 provides over 32 petaFLOPS of FP8 deep learning compute performance. Additionally, Cirrascale offers large-scale NVIDIA HGX H100 clusters built on the NVIDIA Quantum-2 InfiniBand networking platform, so users can experience unmatched application performance across multiple servers.
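As a quick sanity check on that figure, the aggregate number follows from the commonly quoted peak of roughly 4 petaFLOPS of sparse FP8 per H100 SXM GPU. The snippet below is just that arithmetic, not a benchmark, and the per-GPU figure is an assumption taken from published spec sheets.

```python
# Back-of-envelope check of the eight-way FP8 figure.
# Assumes roughly 4 petaFLOPS of sparse FP8 per H100 SXM GPU (spec-sheet peak);
# sustained throughput depends on the workload.
per_gpu_fp8_pflops = 4.0
gpus = 8
print(f"Aggregate FP8 peak: ~{per_gpu_fp8_pflops * gpus:.0f} petaFLOPS")  # ~32 petaFLOPS
```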

H100 further extends NVIDIA’s market-leading position in inference with several advancements that accelerate inference by up to 30X and deliver the lowest latency. Fourth-generation Tensor Cores speed up all precisions, including FP64, TF32, FP32, FP16, INT8, and now FP8, to reduce memory usage and increase performance while still maintaining accuracy for large language models.
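For context, NVIDIA’s Transformer Engine library is the usual route to FP8 matrix math on H100 from PyTorch. The sketch below is a minimal, hedged example; the module and recipe names (transformer_engine.pytorch, DelayedScaling, fp8_autocast) follow the library’s documented API, but exact arguments and defaults vary by version.

```python
# Minimal sketch of FP8 execution on H100 via NVIDIA Transformer Engine.
# API details (recipe arguments, defaults) vary by Transformer Engine version.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

fp8_recipe = recipe.DelayedScaling()            # default FP8 scaling recipe
layer = te.Linear(4096, 4096, bias=True).cuda() # FP8-capable linear layer
x = torch.randn(64, 4096, device="cuda")

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)                                # matmul runs in FP8 on Tensor Cores
print(y.shape)
```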

Ready to Go?

Sign up to access the NVIDIA HGX H100 platform and experience the fastest GPUs available as a secure monthly cloud service.

SIGN UP TODAY

Data Sheet

Download the HGX H100 Cloud Service data sheet to learn more about our offering.

DOWNLOAD

NVIDIA HGX H100 in the Cloud with Cirrascale Cloud Services

We offer fully managed NVIDIA GPU-based clusters at a fraction of the cost of traditional cloud service providers. These bare-metal servers are completely dedicated to you, with no contention and no performance loss from virtualization overhead.

Our flat-rate, no-surprises billing model means we can provide you with a price that is up to 30% lower than other cloud service providers. We also don't nickel-and-dime you by charging to get your data into or out of our cloud. Instead, we charge no ingress or egress fees, so you never receive a supplemental bill.

NVIDIA HGX Platform

Unprecedented performance, scalability, and security for Enterprise AI / HPC

The NVIDIA HGX H100 represents the key building block of the new Hopper-generation GPU server. It hosts eight H100 Tensor Core GPUs and four third-generation NVSwitches. Each H100 GPU has multiple fourth-generation NVLink ports and connects to all four NVSwitches. Each NVSwitch is a fully non-blocking switch that fully connects all eight H100 Tensor Core GPUs.


High-level block diagram of HGX H100 8-GPU
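One simple way to see this fully connected topology from software is to query peer-to-peer access between every GPU pair on the node. The hedged PyTorch sketch below only checks reachability, not bandwidth.

```python
# Sketch: verify that every GPU pair on the node can reach each other directly.
# On an HGX H100 8-GPU baseboard, all pairs should report peer access over NVLink.
import torch

n = torch.cuda.device_count()   # expect 8 on an HGX H100 8-GPU system
for i in range(n):
    peers = [j for j in range(n) if j != i and torch.cuda.can_device_access_peer(i, j)]
    print(f"GPU {i} has direct peer access to GPUs: {peers}")
```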

This fully connected topology from NVSwitch enables any H100 to talk to any other H100 concurrently. Notably, this communication runs at the NVLink bidirectional speed of 900 gigabytes per second (GB/s), which is more than 14x the bandwidth of the current PCIe Gen4 x16 bus.
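That multiple is straightforward to reproduce, assuming the usual figure of about 64 GB/s of total bidirectional bandwidth for a PCIe Gen4 x16 link:

```python
# Rough arithmetic behind the "more than 14x" comparison.
# Assumes ~64 GB/s total bidirectional bandwidth for PCIe Gen4 x16.
nvlink_bidirectional_gbs = 900
pcie_gen4_x16_bidirectional_gbs = 64
print(f"~{nvlink_bidirectional_gbs / pcie_gen4_x16_bidirectional_gbs:.1f}x")  # ~14.1x
```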

The third-generation NVSwitch also provides new hardware acceleration for collective operations, with multicast and NVIDIA SHARP in-network reductions. Combined with the faster NVLink speed, the effective bandwidth for common AI collective operations like all-reduce increases by 3x compared to HGX A100. The NVSwitch acceleration of collectives also significantly reduces the load on the GPUs.
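In practice, these collectives are typically driven through NCCL, for example via PyTorch's distributed package. The sketch below is a minimal, hedged all-reduce example launched across the eight GPUs of one node; NCCL selects the NVLink/NVSwitch path automatically.

```python
# Minimal NCCL all-reduce sketch for an 8-GPU HGX H100 node.
# Launch with: torchrun --nproc_per_node=8 allreduce_sketch.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # 1 GiB of float32 per rank; NCCL routes the reduction over NVLink/NVSwitch.
    x = torch.ones(256 * 1024 * 1024, device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print("all-reduce result per element:", x[0].item())  # equals world size
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```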

Exascale High-Performance Computing

The NVIDIA data center platform consistently delivers performance gains beyond Moore’s Law. And H100’s new breakthrough AI capabilities further amplify the power of HPC+AI to accelerate time to discovery for scientists and researchers working on solving the world’s most important challenges.

H100 triples the floating-point operations per second (FLOPS) of double-precision Tensor Cores, delivering 60 teraFLOPS of FP64 computing for HPC. AI-fused HPC applications can leverage H100’s TF32 precision to achieve one petaFLOP of throughput for single-precision matrix-multiply operations, with zero code changes.
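The "zero code changes" point refers to TF32 being applied to FP32 matrix math inside the libraries. In a framework such as PyTorch it amounts to an opt-in flag rather than a model rewrite; the sketch below is illustrative, and the default values of these flags differ across framework versions.

```python
# Sketch: enabling TF32 for FP32 matmuls in PyTorch; no model code changes needed.
# Defaults for these flags differ across PyTorch versions.
import torch

torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

a = torch.randn(8192, 8192, device="cuda")
b = torch.randn(8192, 8192, device="cuda")
c = a @ b   # FP32 inputs and outputs, executed on Tensor Cores in TF32
```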

H100 also features DPX instructions that deliver 7X higher performance over NVIDIA A100 Tensor Core GPUs and 40X speedups over traditional dual-socket CPU-only servers on dynamic programming algorithms, such as Smith-Waterman for DNA sequence alignment.
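For reference, Smith-Waterman is a classic dynamic-programming recurrence; the plain-Python sketch below shows the algorithm class that DPX instructions accelerate in hardware. It is a readability sketch, not an optimized or GPU implementation.

```python
# Reference Smith-Waterman local-alignment score (the dynamic-programming
# recurrence that DPX instructions accelerate on H100). Illustrative only.
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman_score("GATTACA", "GCATGCT"))
```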

Projected HPC performance comparison (subject to change): 3D FFT (4K^3) throughput on an A100 cluster with HDR InfiniBand vs. an H100 cluster with NVLink Switch System and NDR InfiniBand; genome sequencing (Smith-Waterman) on 1x A100 vs. 1x H100.


NVIDIA Quantum-2 InfiniBand Networking

Cirrascale offers large-scale NVIDIA HGX H100 clusters built on the NVIDIA Quantum-2 InfiniBand networking platform, so users can experience unmatched application performance across multiple servers. Smart adapters and switches reduce latency, increase efficiency, enhance security, and simplify data center automation to accelerate end-to-end application performance.

The data center is the new unit of computing, and HPC networking plays an integral role in scaling application performance across the entire data center. NVIDIA InfiniBand is paving the way with software-defined networking, In-Network Computing acceleration, remote direct-memory access (RDMA), and the fastest speeds and feeds.

Sign Up for an NVIDIA HGX Instance

Sign up to access the NVIDIA HGX H100 platform and experience the fastest GPUs available as a secure monthly cloud service.