

The Challenge of Scaling to Meet the Demands of Modern AI and Deep Learning



The Fastest Path to Deep Learning.
Availability: In stock

For details on how to purchase the NVIDIA DGX-2, please click the button below to send us your details and brief requirements. We can then quote you accordingly.



NVIDIA DGX-2

Deep neural networks are rapidly growing in size and complexity in response to the most pressing challenges in business and research. The computational capacity needed to support today's modern AI workloads has outpaced traditional data center architectures.

Modern techniques that exploit increasing use of model parallelism are colliding with the limits of inter-GPU bandwidth, as developers build increasingly large accelerated computing clusters, pushing the limits of data center scale.

A new approach is needed: one that delivers almost limitless AI computing scale in order to break through the barriers to achieving faster insights that can transform the world.


Performance to Train the Previously Impossible

Increasingly complex AI demands unprecedented levels of compute. NVIDIA® DGX-2™ is the world's first 2 petaFLOPS system, packing the power of 16 of the world's most advanced GPUs and accelerating the newest deep learning model types that were previously untrainable.

With groundbreaking GPU scale, you can train models 4X bigger on a single node. Matching DGX-2's ResNet-50 training performance with legacy x86 architecture would require the equivalent of 300 servers with dual Intel Xeon Gold CPUs, costing over $2.7 million.
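The headline figures follow directly from per-GPU arithmetic. A quick sanity check, assuming the published per-GPU Tesla V100 numbers (125 TFLOPS of tensor performance, 5,120 CUDA cores, 640 Tensor Cores, 32 GB of HBM2):

```python
# Back-of-envelope check of DGX-2 headline figures from per-GPU
# Tesla V100 specs. All per-GPU values are published V100 numbers,
# not taken from this page.
NUM_GPUS = 16

tensor_tflops_per_gpu = 125   # TFLOPS (mixed-precision tensor ops)
cuda_cores_per_gpu = 5120
tensor_cores_per_gpu = 640
hbm2_gb_per_gpu = 32

print(NUM_GPUS * tensor_tflops_per_gpu / 1000, "petaFLOPS")  # 2.0
print(NUM_GPUS * cuda_cores_per_gpu, "CUDA cores")           # 81920
print(NUM_GPUS * tensor_cores_per_gpu, "Tensor Cores")       # 10240
print(NUM_GPUS * hbm2_gb_per_gpu, "GB GPU memory")           # 512
```

The products line up with the spec table below: 2 petaFLOPS, 81,920 CUDA cores, 10,240 Tensor Cores and 512 GB of total GPU memory.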


NVIDIA NVSwitch—A Revolutionary AI Network Fabric

Leading-edge research demands the freedom to leverage model parallelism and requires never-before-seen levels of inter-GPU bandwidth. NVIDIA created NVSwitch to address this need. Like the evolution from dial-up to ultra-high-speed broadband, NVSwitch delivers a networking fabric for the future, today. With NVIDIA DGX-2, model complexity and size are no longer constrained by the limits of traditional architectures.

Embrace model-parallel training with a networking fabric in DGX-2 that delivers 2.4 TB/s of bisection bandwidth, a 24X increase over prior generations. This new interconnect "superhighway" enables limitless possibilities for model types that can reap the power of distributed training across all 16 GPUs at once.
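The 2.4 TB/s figure can be sketched from per-GPU NVLink arithmetic. A minimal sketch, assuming each Tesla V100 exposes six NVLink 2.0 links at 50 GB/s of bidirectional bandwidth each, all routed through the NVSwitch fabric:

```python
# Sketch of where DGX-2's quoted 2.4 TB/s bisection bandwidth comes
# from. Assumptions (published NVLink 2.0 figures, not from this page):
# six links per V100, 50 GB/s bidirectional per link.
links_per_gpu = 6
gb_per_link = 50      # bidirectional GB/s per NVLink 2.0 link
gpus_per_side = 8     # bisect the 16 GPUs into two halves of 8

per_gpu_bw = links_per_gpu * gb_per_link           # 300 GB/s per GPU
bisection_tb = gpus_per_side * per_gpu_bw / 1000   # 2.4 TB/s
print(bisection_tb, "TB/s bisection bandwidth")
```

With NVSwitch providing full non-blocking connectivity, all eight GPUs on one side of the bisection can drive their full 300 GB/s across to the other side simultaneously, giving the quoted 2.4 TB/s.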


AI Scale on a Whole New Level

Modern enterprises need to rapidly deploy AI power in response to business imperatives, and need to scale out AI without scaling up cost or complexity. We've built DGX-2 and powered it with DGX software that enables accelerated deployment and simplified operations at scale.

DGX-2 delivers a ready-to-go solution that offers the fastest path to scaling up AI, along with virtualization support, to enable you to build your own private enterprise-grade AI cloud.

Now businesses can harness unrestricted AI power in a solution that scales effortlessly, with a fraction of the networking infrastructure needed to bind accelerated computing resources together. With an accelerated deployment model and an architecture purpose-built for ease of scale, your team can spend more time driving insights and less time building infrastructure.


Enterprise Grade AI Infrastructure

If your AI platform is critical to your business, you need one designed with reliability, availability and serviceability (RAS) in mind. DGX-2 is enterprise-grade, built for rigorous round-the-clock AI operations, and is purpose-built for RAS to reduce unplanned downtime, streamline serviceability and maintain operational continuity.

Spend less time tuning and optimizing and more time focused on discovery. NVIDIA's enterprise-grade support saves you from the time-consuming job of troubleshooting hardware and open-source software. With every DGX system, get started fast, train faster, and remain faster with an integrated solution that includes software, tools and NVIDIA expertise.

Manufacturer NVIDIA
End of Life? No
Preceded By DGX-1
Rack Units 2
System Weight 134 lbs
System Dimensions 866 D x 444 W x 131 H (mm)
Packing Dimensions 1,180 D x 730 W x 284 H (mm)
Operating Temperature Range 10–35 °C
NVIDIA CUDA Cores 81920
NVIDIA Tensor Cores 10240
Performance 2 petaFLOPS
Compatible CPU(s) Dual Intel Xeon Platinum 8168, 2.7 GHz, 24 cores each
No. of GPUs 16X NVIDIA® Tesla V100
System Memory 1.5TB
GPU Memory 512 GB total (16X 32 GB HBM2)
Storage Capacity OS: 2X 960 GB NVMe SSDs; Internal Storage: 30 TB (8X 3.84 TB) NVMe SSDs
Supported OS Ubuntu Linux Host OS
Ports Dual 10 GbE