Subscriptions

NVIDIA A100 SXM4 - 80GB

The most advanced data centre GPU ever built

[a100-80GB]
The NVIDIA A100 SXM4 80GB is one of the fastest GPUs available on the market.
Availability: In stock

For details on how to purchase the NVIDIA A100 SXM4 - 80GB, please click the button below to send us your details and brief requirements. We can then quote you accordingly.

Details

Welcome to the era of AI

Powered by NVIDIA Ampere, a single A100 Tensor Core GPU offers the performance of nearly 64 CPUs, enabling researchers to tackle challenges that were once unsolvable. The A100 has repeatedly won MLPerf, the first industry-wide AI benchmark, validating itself as the world's most powerful, scalable, and versatile computing platform.

Every industry wants intelligence. Within their ever-growing lakes of data lie insights that can provide the opportunity to revolutionize entire industries: personalized cancer therapy, predicting the next big hurricane, and virtual personal assistants conversing naturally.

These opportunities can become a reality when data scientists are given the tools they need to realize their life's work.

The NVIDIA Ampere A100 is the most advanced data centre GPU yet built, designed to accelerate highly parallelised workloads: artificial intelligence, machine learning, and deep learning. On the graphics side, the Ampere architecture also drives the latest rendering technologies, including DLSS (deep learning super sampling), ray tracing, and ground-truth AI graphics.

Ampere - 3rd Generation Tensor Cores

First introduced in the NVIDIA Volta architecture, NVIDIA Tensor Core technology has brought dramatic speedups to AI, cutting training times from weeks to hours and providing massive acceleration to inference. The NVIDIA Ampere architecture builds on these innovations by adding new precisions, Tensor Float 32 (TF32) and 64-bit floating point (FP64), to accelerate and simplify AI adoption and extend the power of Tensor Cores to HPC.

By pairing CUDA cores and Tensor Cores within a unified architecture, a single server with A100 GPUs can replace hundreds of commodity CPU servers for traditional HPC and deep learning.
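TF32 works by keeping FP32's 8-bit exponent range while reducing the mantissa to 10 bits. A minimal sketch of that precision reduction, in pure Python (illustrative only; it truncates rather than rounding to nearest, as the hardware does):

```python
import struct

def round_to_tf32(x: float) -> float:
    """Reduce an FP32 value to TF32 precision (illustrative sketch).

    TF32 keeps FP32's 8 exponent bits but only 10 mantissa bits, so
    we clear the low 13 bits of the 23-bit FP32 mantissa. Real Tensor
    Core hardware rounds to nearest; truncation is used here for brevity.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= 0xFFFFE000  # zero the 13 low mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(round_to_tf32(1.0))         # exactly representable: 1.0
print(round_to_tf32(3.14159265))  # → 3.140625 (coarser than FP32)
```

The dynamic range is unchanged, which is why TF32 usually works as a drop-in for FP32 training without loss scaling.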

CUDA & TENSOR CORES

Equipped with 6912 CUDA Cores and 432 Tensor Cores, the A100 delivers 19.5 teraFLOPS (TFLOPS) of single-precision (FP32) processing performance.

That’s 24X Tensor FLOPS for deep learning training, and 12X Tensor FLOPS for deep learning inference when compared to NVIDIA Pascal GPUs.

3rd Generation NVLink

Scaling applications across multiple GPUs requires extremely fast movement of data. The third generation of NVIDIA NVLink in A100 doubles the GPU-to-GPU direct bandwidth to 600 gigabytes per second (GB/s), almost 10X higher than PCIe Gen4. When paired with the latest generation of NVIDIA NVSwitch, all GPUs in the server can talk to each other at full NVLink speed for incredibly fast data transfers.
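The "almost 10X" comparison is easy to verify, assuming roughly 64 GB/s of bidirectional bandwidth for a PCIe Gen4 x16 link (an approximate figure, not stated on this page):

```python
nvlink_gb_s = 600          # third-generation NVLink, per A100 GPU
pcie_gen4_x16_gb_s = 64    # approx. bidirectional PCIe Gen4 x16 bandwidth

ratio = nvlink_gb_s / pcie_gen4_x16_gb_s
print(f"NVLink is {ratio:.1f}x PCIe Gen4 x16")  # → 9.4x
```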

MAXIMUM EFFICIENCY MODE

The new maximum efficiency mode allows data centers to achieve up to 40% higher compute capacity per rack within the existing power budget.

In this mode, A100 runs at peak processing efficiency, providing up to 80% of the performance at half the power consumption.
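Taking the page's figures at face value (80% of performance at 50% of power), the efficiency gain per watt works out as:

```python
perf_fraction = 0.80    # fraction of peak performance in efficiency mode
power_fraction = 0.50   # fraction of peak power consumption

perf_per_watt_gain = perf_fraction / power_fraction
print(f"{perf_per_watt_gain:.1f}x performance per watt")  # 1.6x
```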

HBM2e

A100 brings massive amounts of compute to data centers. To keep those compute engines fully utilized, it has class-leading memory bandwidth of 2.0 terabytes per second (TB/s), a 67 percent increase over the previous generation.

In addition, A100 has significantly more on-chip memory, including a 40 megabyte (MB) level 2 cache—7X larger than the previous generation—to maximize compute performance.
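One way to make 2.0 TB/s concrete: the time to stream the entire 80 GB of GPU memory once. A quick calculation under the idealised assumption of sustained peak bandwidth:

```python
memory_gb = 80
bandwidth_gb_s = 2000   # ~2.0 TB/s peak memory bandwidth

sweep_ms = memory_gb / bandwidth_gb_s * 1000
print(f"{sweep_ms:.0f} ms to read all of GPU memory once")  # 40 ms
```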

PROGRAMMABILITY

A100 is architected from the ground up to simplify programmability. Its new independent thread scheduling enables finer-grain synchronization and improves GPU utilization by sharing resources among small jobs.

600+ GPU-ACCELERATED APPLICATIONS

Ampere A100 is the flagship product of the NVIDIA data center platform for deep learning, HPC, and graphics.

The platform accelerates over 600 HPC applications and every major deep learning framework. It's available everywhere, from desktops to servers to cloud services, delivering both dramatic performance gains and cost-savings opportunities.

  • Amber
  • ANSYS Fluent
  • Gaussian
  • Gromacs
  • LS-DYNA
  • NAMD
  • OpenFOAM
  • Simulia Abaqus
  • VASP
  • WRF
Part No. a100-80GB
Manufacturer NVIDIA
End of Life? No
Preceded By V100S
Form Factor SXM4
GPU Architecture NVIDIA Ampere
Maximum Power Consumption 400W
Thermal Solution Passive
NVIDIA CUDA Cores 6912
Compute APIs CUDA, DirectCompute, OpenCL™, OpenACC
ECC Protection Yes
GPU Memory 80GB HBM2e
Memory Interface 5120-bit
Double-Precision Performance 9.7 TFLOPS
Single-Precision Performance 19.5 TFLOPS
Tensor Performance 624 TOPS (INT8 Tensor)
PCI Slot(s) PCIe Gen4
GPU NVIDIA Ampere GA100
