NVIDIA H100 Tensor Core GPU 80GB HBM2e

Tap into unprecedented performance, scalability, and security for every workload with the NVIDIA H100 Tensor Core GPU. With NVIDIA® NVLink® Switch System, up to 256 H100s can be connected to accelerate exascale workloads, along with a dedicated Transformer Engine to solve trillion-parameter language models. H100’s combined technology innovations can speed up large language models by an incredible 30X over the previous generation to deliver industry-leading conversational AI.

Specifications

GPU Features: NVIDIA H100 PCIe
GPU Memory: 80 GB HBM2e
Memory bandwidth: 2 TB/s
FP64 Tensor Core: 51 TFLOPS
TF32 Tensor Core: 756 TFLOPS
FP16 Tensor Core: 1,513 TFLOPS
FP8 Tensor Core: 3,026 TFLOPS
INT8 Tensor Core: 3,026 TOPS
Max thermal design power (TDP): 300–350 W (configurable)
Multi-Instance GPU (MIG): Up to 7 MIGs @ 10 GB each
NVLink: 2-way 2-slot or 3-slot bridge
Form factor: PCIe dual-slot air-cooled
Server options: Partner and NVIDIA-Certified Systems with 1–8 GPUs
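
As a quick sanity check on the figures above, here is a minimal sketch (not from the datasheet) that reads an installed card's name, memory size, and power limit through the NVML Python bindings (nvidia-ml-py); it assumes an NVIDIA driver and at least one GPU are present:

```python
# Query the first GPU's identity, memory, and power limit via NVML.
from pynvml import (
    nvmlInit,
    nvmlShutdown,
    nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetName,
    nvmlDeviceGetMemoryInfo,
    nvmlDeviceGetPowerManagementLimit,
)

nvmlInit()
try:
    handle = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    name = nvmlDeviceGetName(handle)        # e.g. "NVIDIA H100 PCIe" (bytes on older bindings)
    mem = nvmlDeviceGetMemoryInfo(handle)   # totals reported in bytes
    power_mw = nvmlDeviceGetPowerManagementLimit(handle)  # milliwatts
    print(f"{name}: {mem.total / 1e9:.0f} GB total, power limit {power_mw / 1000:.0f} W")
finally:
    nvmlShutdown()
```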

Explore the technology breakthroughs of NVIDIA Hopper

NVIDIA H100 Tensor Core GPU
Built with 80 billion transistors using a cutting-edge TSMC 4N process custom tailored for NVIDIA’s accelerated compute needs, H100 is the world’s most advanced chip. It features major advances to accelerate AI, HPC, memory bandwidth, interconnect, and communication at data center scale.
Transformer Engine
The Transformer Engine uses software and Hopper Tensor Core technology designed to accelerate training for models built from the world’s most important AI model building block, the transformer. Hopper Tensor Cores can apply mixed FP8 and FP16 precisions to dramatically accelerate AI calculations for transformers.
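
To make the mixed-precision idea concrete, here is a hedged sketch of an FP8 forward and backward pass using NVIDIA's open-source Transformer Engine library for PyTorch; the layer size, batch shape, and scaling-recipe settings are illustrative choices, not tuned values:

```python
# FP8 mixed precision on Hopper via Transformer Engine (pip install transformer-engine).
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# HYBRID recipe: E4M3 format in the forward pass, E5M2 in the backward pass.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

# Drop-in replacement for torch.nn.Linear; dims divisible by 16 as FP8 requires.
layer = te.Linear(4096, 4096, bias=True, params_dtype=torch.bfloat16).cuda()
x = torch.randn(8, 4096, device="cuda", dtype=torch.bfloat16, requires_grad=True)

# Inside fp8_autocast, supported ops execute in FP8 on Hopper Tensor Cores.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)
y.float().sum().backward()
```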
NVLink Switch System
The NVLink Switch System enables the scaling of multi-GPU input/output (IO) across multiple servers at 900 gigabytes per second (GB/s) bidirectional per GPU, over 7X the bandwidth of PCIe Gen5. The system supports clusters of up to 256 H100s and delivers 9X higher bandwidth than InfiniBand HDR on the NVIDIA Ampere architecture.
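
The "over 7X" figure is simple arithmetic against PCIe Gen5's nominal link rate, sketched below; these are nominal link rates, not measured throughput:

```python
# Back-of-envelope check of the NVLink-vs-PCIe Gen5 bandwidth claim.
nvlink_bidir_gb_s = 900        # fourth-gen NVLink, per GPU, bidirectional
pcie_gen5_bidir_gb_s = 2 * 64  # x16 link at ~64 GB/s per direction

print(f"NVLink vs PCIe Gen5: {nvlink_bidir_gb_s / pcie_gen5_bidir_gb_s:.1f}x")
# -> NVLink vs PCIe Gen5: 7.0x
```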
NVIDIA Confidential Computing
NVIDIA Confidential Computing is a built-in security feature of Hopper that makes NVIDIA H100 the world’s first accelerator with confidential computing capabilities. Users can protect the confidentiality and integrity of their data and applications in use while accessing the unsurpassed acceleration of H100 GPUs.
Second-Generation MIG
The Hopper architecture’s second-generation Multi-Instance GPU (MIG) supports multi-tenant, multi-user configurations in virtualized environments, securely partitioning the GPU into isolated, right-size instances to maximize quality of service (QoS) for 7X more secured tenants.
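
As a hedged sketch of what that partitioning looks like in practice, the script below shells out to nvidia-smi to split GPU 0 into seven instances. It requires root privileges and an idle GPU, and the 1g.10gb profile name matches the seven-way split of 80 GB H100 parts; other SKUs expose different profiles:

```python
# Carve an H100 into seven MIG instances using the nvidia-smi MIG commands.
import subprocess

def run(cmd: str) -> None:
    print(f"$ {cmd}")
    subprocess.run(cmd.split(), check=True)

run("nvidia-smi -i 0 -mig 1")   # enable MIG mode on GPU 0 (may need a GPU reset)
run("nvidia-smi mig -lgip")     # list the GPU instance profiles this card supports
# Create seven 1g.10gb GPU instances, each with a default compute instance (-C).
run("nvidia-smi mig -cgi " + ",".join(["1g.10gb"] * 7) + " -C")
run("nvidia-smi -L")            # show the resulting MIG device UUIDs
```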
DPX Instructions
Hopper’s DPX instructions accelerate dynamic programming algorithms by 40X compared to CPUs and 7X compared to NVIDIA Ampere architecture GPUs. This leads to dramatically faster times in disease diagnosis, real-time routing optimizations, and graph analytics.
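
For context, DPX targets the fused min/add (or max/add) inner step of recurrences such as the Smith-Waterman and Needleman-Wunsch sequence alignments. The plain-Python edit-distance routine below (CPU only, no DPX involved) illustrates exactly the pattern being accelerated:

```python
# Levenshtein edit distance: a classic dynamic-programming recurrence of the
# kind DPX instructions speed up in genomics and routing workloads.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))  # DP row for the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            # DPX accelerates this fused min/add step
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(edit_distance("GATTACA", "GACTATA"))  # -> 2
```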