NVIDIA A100 Tensor Core GPU 80GB HBM2e

NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere Architecture, A100 is the engine of the NVIDIA data center platform. A100 provides up to 20X higher performance over the prior generation and can be partitioned into seven GPU instances to dynamically adjust to shifting demands. Available in 40GB and 80GB memory versions, A100 80GB debuts the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s) to run the largest models and datasets.

Specifications

GPU Features                      NVIDIA A100 80GB
GPU Memory                        80 GB HBM2e
Memory Bandwidth                  1,935 GB/s
FP64 Tensor Core                  19.5 TFLOPS
Tensor Float 32 (TF32)            156 TFLOPS
BFLOAT16 Tensor Core              312 TFLOPS
FP16 Tensor Core                  312 TFLOPS
INT8 Tensor Core                  624 TOPS
Max Thermal Design Power (TDP)    300 W
Multi-Instance GPU (MIG)          Up to 7 MIG instances @ 10 GB
NVLink                            NVIDIA® NVLink® Bridge for 2 GPUs: 600 GB/s; PCIe Gen4: 64 GB/s
Form Factor                       PCIe, dual-slot air-cooled or single-slot liquid-cooled
Server Options                    Partner and NVIDIA-Certified Systems with 1–8 GPUs

Groundbreaking Innovations

NVIDIA AMPERE ARCHITECTURE
Whether using MIG to partition an A100 GPU into smaller instances or NVLink to connect multiple GPUs to speed large-scale workloads, A100 can readily handle different-sized acceleration needs, from the smallest job to the biggest multi-node workload. A100’s versatility means IT managers can maximize the utility of every GPU in their data center, around the clock.
THIRD-GENERATION TENSOR CORES
NVIDIA A100 delivers 312 teraFLOPS (TFLOPS) of deep learning performance. That’s 20X the Tensor floating-point operations per second (FLOPS) for deep learning training and 20X the Tensor tera operations per second (TOPS) for deep learning inference compared to NVIDIA Volta GPUs.
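These rates apply when work is actually dispatched to the Tensor Cores. As a minimal sketch (assuming a CUDA-enabled PyTorch build on an Ampere-class GPU; not part of this datasheet), the snippet below opts FP32 matrix multiplies into TF32 and runs the same multiply under BFLOAT16 autocast:

```python
import torch

# Assumption: an Ampere-class GPU (e.g. A100) and a CUDA-enabled PyTorch build.
# Allow FP32 matmuls to run as TF32 on the Tensor Cores.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")

# FP32 inputs, executed internally as TF32 Tensor Core operations.
c_tf32 = a @ b

# BFLOAT16 path: autocast routes eligible ops to bf16 Tensor Core kernels.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    c_bf16 = a @ b

print(c_tf32.dtype, c_bf16.dtype)  # torch.float32 torch.bfloat16
```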
NEXT-GENERATION NVLINK
NVIDIA NVLink in A100 delivers 2X higher throughput compared to the previous generation. When combined with NVIDIA NVSwitch™, up to 16 A100 GPUs can be interconnected at up to 600 gigabytes per second (GB/sec), unleashing the highest application performance possible on a single server. NVLink is available in A100 SXM GPUs via HGX A100 server boards and in PCIe GPUs via an NVLink Bridge for up to 2 GPUs.
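Whether peer-to-peer access is available between GPUs in a given system (over NVLink or, failing that, PCIe) can be checked from application code. A small sketch, assuming at least two visible CUDA devices and a CUDA-enabled PyTorch build; it reports only peer accessibility, not which interconnect carries the traffic:

```python
import torch

# Assumption: two or more CUDA devices are visible to this process.
# Peer access may be carried over NVLink or PCIe, depending on the system.
n = torch.cuda.device_count()
for src in range(n):
    for dst in range(n):
        if src != dst:
            ok = torch.cuda.can_device_access_peer(src, dst)
            print(f"GPU {src} -> GPU {dst}: peer access {'yes' if ok else 'no'}")
```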
MULTI-INSTANCE GPU (MIG)
An A100 GPU can be partitioned into as many as seven GPU instances, fully isolated at the hardware level with their own high-bandwidth memory, cache, and compute cores. MIG gives developers access to breakthrough acceleration for all their applications, and IT administrators can offer right-sized GPU acceleration for every job, optimizing utilization and expanding access to every user and application.
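Once an administrator has created MIG instances, a workload is pinned to one of them by exposing only that instance to the CUDA runtime. A minimal sketch, assuming MIG instances already exist; the UUID below is a hypothetical placeholder (real ones are listed by nvidia-smi -L), and the environment variable must be set before CUDA is initialized:

```python
import os

# Assumption: MIG mode is enabled and an instance has already been created.
# The UUID below is a placeholder; list the real MIG device UUIDs with `nvidia-smi -L`.
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-00000000-0000-0000-0000-000000000000"

import torch  # import (and CUDA initialization) only after restricting visibility

if torch.cuda.is_available():
    # The process now sees its single MIG instance as "cuda:0".
    print(torch.cuda.get_device_name(0), "- visible devices:", torch.cuda.device_count())
```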
HIGH-BANDWIDTH MEMORY (HBM2E)
With up to 80 gigabytes of HBM2e, A100 delivers the world’s fastest GPU memory bandwidth of over 2 TB/s, as well as a dynamic random-access memory (DRAM) utilization efficiency of 95%. A100 delivers 1.7X higher memory bandwidth over the previous generation.
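As a rough sanity check, peak DRAM bandwidth is the memory interface width times the per-pin data rate. A back-of-the-envelope sketch, assuming A100's 5,120-bit HBM2e interface and approximate per-pin data rates (both assumptions, not figures from this datasheet):

```python
# Peak bandwidth (GB/s) ≈ interface width (bits) × per-pin data rate (Gb/s) ÷ 8 bits per byte.
# Assumption: the A100 exposes a 5,120-bit HBM2e memory interface.
BUS_WIDTH_BITS = 5120

def peak_bandwidth_gbs(data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given per-pin data rate."""
    return BUS_WIDTH_BITS * data_rate_gbps / 8

# ~3.2 Gb/s per pin lands at the "over 2 TB/s" headline figure;
# a slightly lower data rate gives roughly the 1,935 GB/s listed in the table above.
print(peak_bandwidth_gbs(3.2))    # ≈ 2048 GB/s
print(peak_bandwidth_gbs(3.024))  # ≈ 1935 GB/s
```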
STRUCTURAL SPARSITY
AI networks have millions to billions of parameters. Not all of these parameters are needed for accurate predictions, and some can be converted to zeros, making the models “sparse” without compromising accuracy. Tensor Cores in A100 can provide up to 2X higher performance for sparse models. While the sparsity feature more readily benefits AI inference, it can also improve the performance of model training.
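The pattern these Tensor Cores accelerate is fine-grained 2:4 structured sparsity: in every group of four weights, at most two are non-zero. A small NumPy sketch of that pruning step, keeping the two largest-magnitude weights per group (an illustrative mask only, not NVIDIA's training or inference pipeline):

```python
import numpy as np

def prune_2_of_4(weights: np.ndarray) -> np.ndarray:
    """Zero the two smallest-magnitude values in each group of four (2:4 sparsity)."""
    w = weights.reshape(-1, 4).copy()
    # Indices of the two smallest-magnitude entries within each group of four.
    drop = np.argsort(np.abs(w), axis=1)[:, :2]
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

w = np.random.randn(8, 8).astype(np.float32)  # element count divisible by 4
w_sparse = prune_2_of_4(w)
print((w_sparse == 0).mean())  # ≈ 0.5: half the weights pruned to zero
```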