NVIDIA Tesla P100: the most advanced data center GPU
- Details
- Category: News
- Created on 19 October 2016
THE MOST ADVANCED DATA CENTER GPU EVER BUILT
Artificial intelligence for self-driving cars. Predicting our climate's future. A new drug to treat cancer. Some of the world's most important challenges need to be solved today, but require tremendous amounts of computing to become reality. Today's data centers rely on many interconnected commodity compute nodes, limiting the performance needed to drive important High Performance Computing (HPC) and hyperscale workloads.
NVIDIA Tesla® P100 GPU accelerators are the most advanced ever built for the data center. They tap into the new NVIDIA Pascal™ GPU architecture to deliver the world's fastest compute node with higher performance than hundreds of slower commodity nodes. Higher performance with fewer, lightning-fast nodes enables data centers to dramatically increase throughput while also saving money.
With over 400 HPC applications accelerated, including 9 of the top 10, as well as all major deep learning frameworks, every HPC customer can now deploy accelerators in their data centers.
NVIDIA Tesla P100 ACCELERATOR FEATURES AND BENEFITS
The Tesla P100 is reimagined from silicon to software, crafted with innovation at every level. Each groundbreaking technology delivers a dramatic jump in performance to inspire the creation of the world's fastest compute node.
The new NVIDIA Pascal™ architecture enables the Tesla P100 to deliver the highest absolute performance for HPC and hyperscale workloads. With more than 21 TeraFLOPS of FP16 performance, Pascal is optimized to drive exciting new possibilities in deep learning applications. Pascal also delivers over 5 TeraFLOPS of double-precision and 10 TeraFLOPS of single-precision performance for HPC workloads.
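The P100's FP16 rate is roughly double its FP32 rate because Pascal can pack two half-precision values into one 32-bit register and operate on both per instruction. A minimal CUDA sketch of this packed-half arithmetic (the kernel name and parameters are illustrative, not from NVIDIA's materials):

```cuda
#include <cuda_fp16.h>

// Illustrative half-precision AXPY: each __half2 holds two FP16 values,
// so one __hfma2 performs two fused multiply-adds at once.
__global__ void haxpy2(int n, __half2 a, const __half2 *x, __half2 *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = __hfma2(a, x[i], y[i]);  // y = a * x + y, two lanes per call
}
```

Compiled for compute capability 6.0 (`-arch=sm_60`), the `__hfma2` intrinsic maps to Pascal's native paired-FP16 path, which is where the 2x throughput over single precision comes from.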
The Tesla P100 tightly integrates compute and data on the same package by adding CoWoS® (Chip-on-Wafer-on-Substrate) with HBM2 technology to deliver 3X memory performance over the NVIDIA Maxwell™ architecture. This provides a generational leap in time-to-solution for data-intensive applications.
Page Migration Engine frees developers to focus more on tuning for computing performance and less on managing data movement. Applications can now scale beyond the GPU's physical memory size to a virtually limitless amount of memory.
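In practice this means a kernel can touch an allocation larger than the card's 16 GB of HBM2, with pages migrated to the device on demand. A minimal sketch using CUDA Unified Memory (sizes and names are illustrative, and it assumes the host has enough RAM to back the allocation):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *data, size_t n)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main()
{
    // 2^32 floats = 16 GiB, slightly more than the P100's 16 GB of HBM2.
    // With the Page Migration Engine, the kernel still runs: pages are
    // faulted onto the GPU as they are touched instead of failing at launch.
    size_t n = 1ull << 32;
    float *data;
    cudaMallocManaged(&data, n * sizeof(float));

    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;  // touched on the host first

    scale<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();

    printf("%f\n", data[0]);  // pages migrate back to the host on access
    cudaFree(data);
    return 0;
}
```

The same `cudaMallocManaged` pointer is valid on both host and device; before Pascal, managed allocations were limited to the GPU's physical memory size.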
NVIDIA TESLA P100 FOR MIXED-WORKLOAD HPC
Tesla P100 for PCIe enables mixed-workload HPC data centers to realize a dramatic jump in throughput while saving money. For example, a single GPU-accelerated node powered by four Tesla P100s interconnected with PCIe replaces up to 32 commodity CPU nodes for a variety of applications. Completing all the jobs with far fewer powerful nodes means that customers can save up to 70% in overall data center costs.
PERFORMANCE SPECIFICATION FOR NVIDIA TESLA P100 ACCELERATORS

| Specification | Tesla P100 for PCIe |
| --- | --- |
| Double-Precision Performance | 4.7 TeraFLOPS |
| Single-Precision Performance | 9.3 TeraFLOPS |
| Half-Precision Performance | 18.7 TeraFLOPS |
| NVIDIA NVLink™ Interconnect Bandwidth | - |
| PCIe x16 Interconnect Bandwidth | 32 GB/s |
| CoWoS HBM2 Stacked Memory Capacity | 16 GB or 12 GB |
| CoWoS HBM2 Stacked Memory Bandwidth | 720 GB/s or 540 GB/s |
| Enhanced Programmability with Page Migration Engine | ● |
| ECC Protection for Reliability | ● |
| Server-Optimized for Data Center Deployment | ● |