1. Introduction
PyTorch provides a few options for multi-GPU/multi-CPU computing, or in other words, distributed computing. While this is unsurprising for deep learning, what is pleasantly surprising is the support for general-purpose low-level distributed or parallel computing. Those who have used MPI will find this functionality familiar. PyTorch can be used in the following scenarios:
Single GPU, single node (multiple CPUs on the same node)
Single GPU, multiple nodes
Multiple GPUs, single node
Multiple GPUs, multiple nodes
PyTorch supports 'Gloo', 'MPI' and 'NCCL' as backends for parallelization. [Read More]
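As a minimal sketch of this low-level API, the snippet below spawns two CPU processes on a single node and sums a tensor across them with an MPI-style all_reduce. The Gloo backend, world size, and master address/port are illustrative choices, not values from the post.

    import os
    import torch
    import torch.distributed as dist
    import torch.multiprocessing as mp

    def worker(rank, world_size):
        # Each process joins the same group; Gloo is a reasonable
        # default backend for CPU-only communication.
        os.environ["MASTER_ADDR"] = "127.0.0.1"   # illustrative single-node setup
        os.environ["MASTER_PORT"] = "29500"
        dist.init_process_group("gloo", rank=rank, world_size=world_size)

        # MPI-style collective: sum each rank's tensor across all ranks.
        t = torch.ones(3) * (rank + 1)
        dist.all_reduce(t, op=dist.ReduceOp.SUM)
        print(f"rank {rank}: {t.tolist()}")   # both ranks print [3.0, 3.0, 3.0]

        dist.destroy_process_group()

    if __name__ == "__main__":
        world_size = 2
        mp.spawn(worker, args=(world_size,), nprocs=world_size)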
The slides below were used for presentations at the SuperComputing 2018 conference in Dallas.
Overview of PyTorch
Quick introduction to AutoML
The post associated with these slides can be found here. Note that this is still a work in progress and will be updated periodically.
Word2Vec in Pytorch - Continuous Bag of Words and Skipgrams
Reader level: Intermediate
Overview of Word Embeddings
Word embeddings, in short, are numerical representations of text. They are represented as 'n-dimensional' vectors, where the number of dimensions 'n' is determined by the corpus size and the expressiveness desired. The larger your corpus, the larger you want 'n' to be. A larger 'n' also allows you to capture more features in the embedding. However, a larger dimension makes the optimization longer and more difficult, so you want an 'n' that is just sufficiently large; determining this size is often problem-specific. [Read More]
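To make the continuous-bag-of-words idea concrete, here is a minimal PyTorch sketch; the vocabulary size and embedding dimension ('n' above) are arbitrary illustrative values, not ones from the post. Each context word is looked up in an nn.Embedding table, the context vectors are averaged, and a linear layer scores every vocabulary word as the predicted center word.

    import torch
    import torch.nn as nn

    class CBOW(nn.Module):
        def __init__(self, vocab_size, embedding_dim):
            super().__init__()
            # The embedding table holds one n-dimensional vector per word.
            self.embeddings = nn.Embedding(vocab_size, embedding_dim)
            self.linear = nn.Linear(embedding_dim, vocab_size)

        def forward(self, context):
            # context: (batch, window) word indices. Average the context
            # embeddings, then score every vocabulary word as the target.
            avg = self.embeddings(context).mean(dim=1)
            return self.linear(avg)

    model = CBOW(vocab_size=5000, embedding_dim=100)   # illustrative sizes
    context = torch.randint(0, 5000, (8, 4))           # 8 examples, 4 context words
    logits = model(context)                            # (8, 5000); train with CrossEntropyLoss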
CS4984/5984 Big Data Summarization
Connecting to ARC machines
Cascades
The ARC cluster that will be used for this class is 'Cascades'. Detailed instructions on how to access this machine can be found here. A quick overview of how to log in and submit jobs is given below. To log in:
ssh email@example.com
where username is your PID and your password is your VT PID password followed by a comma and the two-factor six-digit code. For example, the password looks like this: [Read More]
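The authoritative job-submission details live in the ARC documentation linked above; purely as an illustration, a PBS-style batch script for a cluster like Cascades might look like the following, where the queue name, allocation, and resource requests are hypothetical placeholders rather than values from the post.

    #!/bin/bash
    #PBS -l nodes=1:ppn=2          # one node, two cores (illustrative request)
    #PBS -l walltime=01:00:00      # one hour wall-clock limit
    #PBS -q normal_q               # hypothetical queue name; check the ARC docs
    #PBS -A yourallocation         # hypothetical allocation name

    cd $PBS_O_WORKDIR              # run from the directory qsub was called in
    module load python             # hypothetical module name
    python my_script.py

Submit the script with qsub job.sh and monitor it with qstat -u username; if the cluster runs a different scheduler, the equivalent Slurm commands are sbatch and squeue.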