
NVIDIA Develops NVLink Switch

Source: www.anandtech.com

Ryan Smith:

Unsurprisingly, the first system to ship with the NVSwitch will be a new NVIDIA system: the DGX-2. The big sibling to NVIDIA’s existing DGX-1 system, the DGX-2 incorporates 16 Tesla V100 GPUs. Which, as NVIDIA likes to tout, means it offers a total of 2 PFLOPs of compute performance in a single system, albeit via the more use-case-constrained tensor cores.

Notably here, the topology of the DGX-2 means that all 16 GPUs are able to pool their memory into a unified memory space, though with the usual tradeoffs involved if going off-chip. Not unlike the Tesla V100 memory capacity increase then, one of NVIDIA’s goals here is to build a system that can keep in memory workloads that would be too large for an 8-GPU cluster.
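(For what it’s worth, the headline figure is straightforward arithmetic: 16 GPUs at roughly 125 tensor TFLOPs per V100 works out to about 2 PFLOPs.)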

One of the hard parts of training with multiple GPUs is splitting and parallelising the work, and a big reason for that is that moving data in and out of GPU memory is (relatively) slow. This seems like it solves that problem: plenty of memory, and all of the GPUs in the cluster can read from and write to it. In hardware terms, that comes down to fast peer-to-peer access between GPUs; see the sketch below.
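Here’s a minimal CUDA sketch of that underlying mechanism: a kernel on one GPU directly reading a buffer that physically lives on another. To be clear, this is my own illustration, not NVIDIA’s code — real DGX-2 software would sit on NCCL or a framework above it, and the device IDs and sizes here are made up:

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Runs on GPU 0 but reads a buffer that was allocated on GPU 1.
// With peer access enabled, the loads below go over NVLink (or PCIe)
// transparently; no explicit cudaMemcpy of the data is needed.
__global__ void sum_remote(const float* remote, float* out, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; ++i) acc += remote[i];
    *out = acc;
}

int main() {
    const int n = 1 << 20;

    // Check that GPU 0 can map GPU 1's memory at all.
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, 0, 1);
    if (!can_access) {
        printf("GPU 0 cannot peer-access GPU 1 on this machine\n");
        return 1;
    }

    // Allocate a buffer of ones on GPU 1.
    cudaSetDevice(1);
    float* buf_on_gpu1 = nullptr;
    cudaMalloc(&buf_on_gpu1, n * sizeof(float));
    std::vector<float> ones(n, 1.0f);
    cudaMemcpy(buf_on_gpu1, ones.data(), n * sizeof(float),
               cudaMemcpyHostToDevice);

    // Switch to GPU 0 and map GPU 1's memory into its address space.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);  // peer device 1; flags must be 0

    float* out = nullptr;
    cudaMalloc(&out, sizeof(float));

    // GPU 0 reads GPU 1's buffer directly.
    sum_remote<<<1, 1>>>(buf_on_gpu1, out, n);
    cudaDeviceSynchronize();

    float result = 0.0f;
    cudaMemcpy(&result, out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %.0f (expected %d)\n", result, n);

    cudaFree(out);
    cudaSetDevice(1);
    cudaFree(buf_on_gpu1);
    return 0;
}
```

The catch has always been topology: with point-to-point NVLink you only get this at full speed between directly connected GPUs, and it falls back to the much slower PCIe path otherwise. A fully connected NVSwitch fabric is what lets all 16 GPUs do this to each other at once. On the other hand: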

Ultimately the DGX-2 is being pitched at an even higher-end segment of the deep-learning market than the DGX-1 is. Priced at $399K for a single system, if you can afford it (and can justify the cost) you’re probably Facebook, Google, or in the same league thereof.

Ouch. That amount of money would buy a lot of 1080 Tis…