Multi-GPU Computing with Pytorch (Draft)

An overview

1. Introduction Pytorch provides a few options for multi-GPU/multi-CPU computing, or in other words, distributed computing. While this is unsurprising for deep learning, what is pleasantly surprising is the support for general-purpose low-level distributed or parallel computing. Those who have used MPI will find this functionality familiar. Pytorch can be used in the following scenarios: single GPU, single node (multiple CPUs on the same node); single GPU, multiple nodes; multiple GPUs, single node; and multiple GPUs, multiple nodes. Pytorch allows ‘Gloo’, ‘MPI’ and ‘NCCL’ as backends for parallelization. [Read More]
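To give a flavour of the low-level, MPI-like API the post describes, below is a minimal sketch (not taken from the post itself) of an all-reduce with torch.distributed using the ‘Gloo’ backend; the MASTER_ADDR/MASTER_PORT values and the environment-variable rendezvous are assumptions for a single-node run.

import os
import torch
import torch.distributed as dist

# Single-node rendezvous settings; a launcher such as torchrun would
# normally provide these (the values here are illustrative assumptions).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

rank = int(os.environ.get("RANK", "0"))
world_size = int(os.environ.get("WORLD_SIZE", "1"))

# 'gloo' works on CPUs; 'nccl' would be the usual choice for GPUs.
dist.init_process_group(backend="gloo", rank=rank, world_size=world_size)

# MPI-style collective: every process contributes its rank and all
# processes end up with the sum.
tensor = torch.tensor([float(rank)])
dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
print(f"rank {rank}: sum of ranks = {tensor.item()}")

dist.destroy_process_group()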

Easiest way toward Multi-GPU training in Tensorflow 2

Quick tip

Overview Easy parallelization over multiple GPUs can be accomplished in Tensorflow 2 using the ‘MirroredStrategy’ approach, especially if one is using Keras through the Tensorflow integration. This can be used as a replacement for ‘multi_gpu_model’ in Keras. There are a few caveats (bugs) with using this on TF2.0 (see below). An example illustrating its use is shown below, where two of the GPU devices are selected. import tensorflow as tf from tensorflow. [Read More]
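As a rough sketch of the MirroredStrategy idea described above (the exact code is in the full post), something along these lines selects two GPUs and builds a Keras model inside the strategy scope; the device names and the model architecture here are illustrative assumptions.

import tensorflow as tf

# Restrict the strategy to two GPUs; the device names should match what
# tf.config.list_physical_devices('GPU') reports on your machine.
strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])

with strategy.scope():
    # Variables created inside the scope are mirrored across both GPUs.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(...) is then called as usual; each batch is split across the GPUs.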

Tensorflow in Jupyter Notebook for Multi-GPU environments

Options/Best Practices

When running Jupyter notebooks on machines with multiple GPUs, one might want to run individual notebooks on separate GPUs to take advantage of the available resources. Obviously, this is not the only type of parallelism available in TensorFlow, but not knowing how to do this can severely limit your ability to run multiple notebooks simultaneously, since TensorFlow selects physical device 0 by default. Now if you have two notebooks running and one happens to use up all the GPU memory on physical device 0, then your second notebook will refuse to run, complaining that it is out of memory! [Read More]
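One common way to achieve this, sketched below (the GPU index and the memory-growth setting are illustrative assumptions, not necessarily the exact options the post recommends), is to pin each notebook to its own device via CUDA_VISIBLE_DEVICES before TensorFlow is imported.

import os

# Pin this notebook to a single GPU *before* importing TensorFlow, so it
# never sees (or allocates memory on) the other devices. The index "1" is
# an example; pick a different GPU in each notebook.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf

# Optionally let memory grow on demand instead of grabbing the whole card.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)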

Word2Vec in Pytorch - Continuous Bag of Words and Skipgrams

Pytorch implementation

Reader level: Intermediate Overview of Word Embeddings Word embeddings, in short, are numerical representations of text. They are represented as ‘n-dimensional’ vectors, where the number of dimensions ‘n’ is determined by the corpus size and the expressiveness desired. The larger the size of your corpus, the larger you want ‘n’ to be. A larger ‘n’ also allows you to capture more features in the embedding. However, a larger dimension involves a longer and more difficult optimization process, so you want a sufficiently large ‘n’; determining this size is often problem-specific. [Read More]
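As a minimal sketch of the CBOW flavour of Word2Vec mentioned above (not the post's actual implementation), a PyTorch model might look like the following; the vocabulary size and embedding dimension are illustrative assumptions.

import torch
import torch.nn as nn

class CBOW(nn.Module):
    # Average the context-word embeddings and predict the centre word.
    def __init__(self, vocab_size, embedding_dim):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear = nn.Linear(embedding_dim, vocab_size)

    def forward(self, context_ids):
        # context_ids: (batch, context_size) integer word indices
        averaged = self.embeddings(context_ids).mean(dim=1)
        return self.linear(averaged)  # logits over the vocabulary

model = CBOW(vocab_size=5000, embedding_dim=100)  # illustrative sizes only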

CS4984/5984 Big Data Summarization

Class notes

Connecting to ARC machines Cascades The ARC cluster that will be used for this class is ‘Cascades’. Detailed instructions on how to access this machine can be found here. A quick overview of how to log in and submit jobs is given below. To log in: ssh username@cascades1.arc.vt.edu where username is your PID and your password is your VT PID password followed by a comma and the two-factor six-digit code. E.g., the password looks like this: [Read More]