When running Jupyter notebooks on a machine with multiple GPUs, you may want to run individual notebooks on separate GPUs to take full advantage of your available resources. This is not the only type of parallelism available in TensorFlow, but not knowing how to control device selection can severely limit your ability to run multiple notebooks simultaneously, because by default TensorFlow places work on physical device 0. If you have two notebooks running and one happens to use up all the GPU memory on physical device 0, your second notebook will refuse to run, complaining that it is out of memory!

Adding the following at the beginning of your code, or in the first cell of your notebook, lets you control device selection.

import tensorflow as tf
from keras import backend as K

# Limit which devices this process can see and use.
sess = tf.Session(config=tf.ConfigProto(
    device_count={'CPU': num_cpus, 'GPU': num_gpus},
    inter_op_parallelism_threads=0,   # 0 lets TensorFlow choose a value
    intra_op_parallelism_threads=0,
    allow_soft_placement=True,
    gpu_options=tf.GPUOptions(
        allow_growth=True,
        visible_device_list="gpu_id_1,gpu_id_2,...",
    ),
))

If your code uses Keras, also register the session with the Keras backend:

K.set_session(sess)

In the above, ‘num_cpus’ is the maximum number of CPUs and ‘num_gpus’ is the maximum number of GPUs that you want your code to utilize. The option ‘visible_device_list’ is where you list the physical GPU ids you want used. Keep in mind that TensorFlow will refer to these devices as ‘gpu:0’ and ‘gpu:1’ even though you selected GPU devices ‘2’ and ‘3’ in the option.
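To make this renumbering concrete, here is a small pure-Python sketch (just the mapping itself, independent of TensorFlow; the function name is illustrative) of how a ‘visible_device_list’ string translates logical device names to physical GPU ids:

```python
# TensorFlow renumbers the visible devices starting from 0,
# in the order they appear in visible_device_list.
def logical_to_physical(visible_device_list):
    physical_ids = [s.strip() for s in visible_device_list.split(",")]
    return {"/gpu:%d" % i: pid for i, pid in enumerate(physical_ids)}

print(logical_to_physical("2,3"))  # {'/gpu:0': '2', '/gpu:1': '3'}
```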

It is best to leave ‘inter_op_parallelism_threads’ and ‘intra_op_parallelism_threads’ at 0, which lets TensorFlow assign an optimal value based on your resources. The option ‘allow_soft_placement’ lets TensorFlow fall back to the CPU when an operation has no GPU implementation or the requested device is unavailable, which can eliminate many placement errors. The GPU option ‘allow_growth’ tells TensorFlow to start with minimal GPU memory utilization and grow the allocation as needed, instead of grabbing all available memory up front.
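An alternative (or complementary) way to hide GPUs from a notebook is the CUDA_VISIBLE_DEVICES environment variable. A minimal sketch, with the caveat that it must be set before TensorFlow enumerates the CUDA devices, i.e. before the import:

```python
import os

# Expose only physical GPUs 2 and 3 to this process. Setting this
# after TensorFlow has already initialized the devices has no effect,
# so it belongs at the very top of the notebook.
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"

# import tensorflow as tf  # import TensorFlow only after setting the variable
```

With this approach TensorFlow again sees the selected devices as ‘gpu:0’ and ‘gpu:1’, just as with ‘visible_device_list’.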

An example configuration:

import tensorflow as tf
from keras import backend as K

# Use up to 2 CPUs and expose physical GPUs 2 and 3 to this notebook.
sess = tf.Session(config=tf.ConfigProto(
    device_count={'CPU': 2, 'GPU': 2},
    inter_op_parallelism_threads=0,
    intra_op_parallelism_threads=0,
    allow_soft_placement=True,
    gpu_options=tf.GPUOptions(
        allow_growth=True,
        visible_device_list="2,3",
    ),
))
K.set_session(sess)
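After creating the session, you can check which devices TensorFlow actually sees. A small sketch, assuming TensorFlow 1.x (the import is guarded so the snippet degrades gracefully when TensorFlow is not installed):

```python
# List the devices visible to TensorFlow. With visible_device_list="2,3",
# expect GPU entries named like '/device:GPU:0' and '/device:GPU:1',
# regardless of the physical ids chosen.
try:
    from tensorflow.python.client import device_lib
    device_names = [d.name for d in device_lib.list_local_devices()]
except ImportError:
    device_names = []  # TensorFlow not installed

for name in device_names:
    print(name)
```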