NVIDIA RTX™ A6000 48GB
Unlock the next generation of revolutionary designs, scientific breakthroughs, and immersive entertainment with the NVIDIA RTX™ A6000, the world’s most powerful visual computing GPU for desktop workstations. With cutting-edge performance and features, the RTX A6000 lets you work at the speed of inspiration—to tackle the urgent needs of today and meet the rapidly evolving, compute-intensive tasks of tomorrow.
| GPU Features | NVIDIA RTX™ A6000 |
| --- | --- |
| GPU Memory | 48 GB GDDR6 with ECC |
| CUDA Cores | 10,752 (Ampere architecture) |
| Tensor Cores | 336 (3rd generation) |
| RT Cores | 84 (2nd generation) |
| Display Ports | 4x DisplayPort 1.4 |
| Max Power Consumption | 300 W |
| Graphics Bus | PCI Express Gen 4 x16 |
| Form Factor | 4.4” (H) x 10.5” (L), dual slot |
| NVLink | 2-way, 2-slot or 3-slot bridge |
| vGPU Software Support | NVIDIA vPC/vApps, NVIDIA RTX Virtual Workstation, NVIDIA Virtual Compute Server |
| vGPU Profiles Supported | See the Virtual GPU Licensing Guide |
Multi-GPU Scalability - NVIDIA NVLink Bridge
NVLink enables professional applications to easily scale memory and performance with multi-GPU configurations. With a low-profile design that fits into a variety of systems, NVIDIA NVLink Bridges allow you to connect two RTX A6000s. This delivers up to 112 gigabytes per second (GB/s) of bandwidth and a combined 96 GB of GDDR6 memory to tackle the most memory-intensive workloads.
Exceptional Performance by Design
NVIDIA Ampere Architecture Based CUDA Cores
Double-speed processing for single-precision floating point (FP32) operations and improved power efficiency provide significant performance improvements for graphics and simulation workflows, such as complex 3D computer-aided design (CAD) and computer-aided engineering (CAE), on the desktop.
Second-Generation RT Cores
With up to 2X the throughput over the previous generation and the ability to concurrently run ray tracing with either shading or denoising capabilities, second-generation RT Cores deliver massive speedups for workloads like photorealistic rendering of movie content, architectural design evaluations, and virtual prototyping of product designs. This technology also speeds up the rendering of ray-traced motion blur for faster results with greater visual accuracy.
Third-Generation Tensor Cores
New Tensor Float 32 (TF32) precision provides up to 5X the training throughput over the previous generation to accelerate AI and data science model training without requiring any code changes. Hardware support for structural sparsity doubles the throughput for inferencing. Tensor Cores also bring AI to graphics with capabilities like DLSS, AI denoising, and enhanced editing for select applications.
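To make the TF32 claim concrete: TF32 keeps FP32's 8-bit exponent (so the numeric range is unchanged) but shortens the mantissa from 23 bits to 10, which is why existing FP32 code can run on it unmodified. A minimal sketch of that truncation, assuming only the published TF32 bit layout (the function name `to_tf32` is illustrative, not an NVIDIA API):

```python
import struct

def to_tf32(x: float) -> float:
    """Illustrative only: round an FP32 value toward zero to TF32 precision
    (8-bit exponent, 10-bit mantissa) by clearing the low 13 mantissa bits."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= 0xFFFFE000  # keep sign (1), exponent (8), top 10 mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(to_tf32(3.14159265))  # close to pi, but only ~3 decimal digits survive
```

Because the exponent field is untouched, overflow/underflow behavior matches FP32; only the fine mantissa detail is dropped, which Tensor Core accumulation in FP32 largely compensates for during training.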
48 Gigabytes (GB) of GPU Memory
Ultra-fast GDDR6 memory, scalable up to 96 GB with NVLink, gives data scientists, engineers, and creative professionals the large memory necessary to work with massive datasets and workloads like data science and simulation.
Virtualization-Ready
Support for NVIDIA virtual GPU (vGPU) software allows a personal workstation to be repurposed into multiple high-performance virtual workstation instances, enabling remote users to share resources to drive high-end design, AI, and compute workloads.
PCI Express Gen 4
Support for PCI Express Gen 4 provides double the bandwidth of PCIe Gen 3, improving data-transfer speeds from CPU memory for data-intensive tasks like AI and data science.
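The "double the bandwidth" figure follows directly from the link rates: Gen 3 signals at 8 GT/s per lane and Gen 4 at 16 GT/s, both with 128b/130b encoding. A back-of-the-envelope calculation for a x16 slot (the helper name is our own, not from any spec sheet):

```python
def pcie_bandwidth_gbps(gt_per_s: float, lanes: int = 16) -> float:
    """Approximate per-direction bandwidth in GB/s for a PCIe link,
    accounting only for 128b/130b line-encoding overhead."""
    return gt_per_s * lanes * (128 / 130) / 8  # bits -> bytes

gen3 = pcie_bandwidth_gbps(8.0)   # x16 Gen 3: roughly 15.75 GB/s
gen4 = pcie_bandwidth_gbps(16.0)  # x16 Gen 4: roughly 31.5 GB/s
print(gen4 / gen3)
```

Real transfers land somewhat below these figures once packet and protocol overheads are counted, but the 2:1 generational ratio holds.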
Third-Generation NVIDIA NVLink®
Increased GPU-to-GPU interconnect bandwidth provides a single scalable memory to accelerate graphics and compute workloads and tackle larger datasets.