Multi-GPU Computing with PyTorch (Draft)
An overview
1. Introduction

PyTorch provides several options for multi-GPU/multi-CPU computing, in other words distributed computing. While this is unsurprising for a deep learning framework, what is pleasantly surprising is the support for general-purpose, low-level distributed or parallel computing. Those who have used MPI will find this functionality familiar. PyTorch can be used in the following scenarios:
- Single GPU, single node (multiple CPUs on the same node)
- Single GPU, multiple nodes
- Multiple GPUs, single node
- Multiple GPUs, multiple nodes

PyTorch allows ‘Gloo’, ‘MPI’ and ‘NCCL’ as backends for parallelization; a minimal sketch using the Gloo backend follows below.
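To make the MPI-like flavor of this API concrete, here is a minimal sketch of a collective operation with torch.distributed. It spawns a few CPU processes on one node, initializes the Gloo backend, and performs an all_reduce, which plays the role of MPI_Allreduce. The master address/port and the world size of 4 are assumptions for this example, not values from the text.

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank, world_size):
    # Rendezvous settings (assumed for this single-node sketch).
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"

    # Join the default process group over the Gloo backend (CPU-friendly).
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Each rank contributes its own value; all_reduce sums them in place,
    # so every rank ends up with the same result -- analogous to MPI_Allreduce.
    t = torch.tensor([float(rank)])
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: sum over all ranks = {t.item()}")

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 4  # number of processes; an assumption for this sketch
    mp.spawn(worker, args=(world_size,), nprocs=world_size, join=True)
```

The same script structure carries over to the multi-GPU cases: swap the backend to ‘NCCL’, move the tensors to each rank's GPU, and point MASTER_ADDR at the rank-0 node when spanning multiple machines.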