Test and Benchmark Distributed Training on GPU Clusters with PyTorch and TensorFlow

Distributed training lets you train deep learning models across multiple GPUs or machines in parallel. It can shorten training time, spread the memory footprint of very large models across devices, and make it practical to train larger and more complex models.

However, distributed training also introduces some challenges, such as how to synchronize the parameters, how to balance the workload, and how to measure the performance.

In this article, I will show you how to test and benchmark distributed training on GPU clusters with PyTorch and TensorFlow, two popular frameworks for deep learning.

I will use a simple image classification task on the CIFAR-10 dataset as an example, but you can apply the same principles to other tasks and datasets. I will also compare the results of different distributed training strategies, such as data parallelism, model parallelism, and hybrid parallelism.

Before we start, you will need access to a GPU cluster with at least two nodes, each with one or more GPUs. You will also need to install TensorFlow and PyTorch on each node, along with tools for communication and benchmarking such as NCCL, Horovod, and the MLPerf reference implementations. See the documentation for each of these tools for details on setting up your environment.
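Once everything is installed, it is worth running a quick sanity check on every node to confirm that both frameworks can see the GPUs and that the NCCL backend is available. A minimal script for that (the versions printed will of course depend on your installation):

# sanity_check.py - run on every node before launching distributed jobs
import torch
import tensorflow as tf

print("PyTorch", torch.__version__, "- visible GPUs:", torch.cuda.device_count())
print("NCCL available to torch.distributed:", torch.distributed.is_nccl_available())
print("TensorFlow", tf.__version__, "- visible GPUs:", len(tf.config.list_physical_devices('GPU')))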

Testing Distributed Training with PyTorch

PyTorch provides several modules and functions for distributed training, such as torch.distributed, torch.nn.parallel, and torch.utils.data.distributed. You can use them to implement different types of parallelism, such as data parallelism, where each GPU processes a different batch of data; model parallelism, where each GPU handles a different part of the model; or hybrid parallelism, where you combine both data and model parallelism.
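The steps and the full example below implement data parallelism. For contrast, here is a minimal, hypothetical sketch of model parallelism on a single node, where the two halves of a toy network live on different GPUs and the activations are moved between them inside forward(); treat it as an illustration rather than a production recipe:

import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    """Toy model split across two GPUs on one node."""
    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(nn.Linear(1024, 512), nn.ReLU()).to('cuda:0')
        self.part2 = nn.Linear(512, 10).to('cuda:1')

    def forward(self, x):
        x = self.part1(x.to('cuda:0'))
        return self.part2(x.to('cuda:1'))  # move activations to the second GPU

model = TwoGPUModel()
out = model(torch.randn(32, 1024))  # the output lives on cuda:1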

To test distributed training with PyTorch, you will need to do the following steps:

  1. Initialize the distributed environment by calling torch.distributed.init_process_group. This creates a communication group and assigns each process a rank within the world size (the total number of processes). You will need to specify the backend (such as NCCL, Gloo, or MPI) and either the master address and port or init_method='env://', in which case the launcher supplies them, along with each process's rank and local rank, through environment variables.
  2. Create your model and wrap it with torch.nn.parallel.DistributedDataParallel (the older torch.nn.DataParallel works within a single process but is not recommended for multi-node training). DistributedDataParallel replicates the model on each GPU and synchronizes the gradients during the backward pass.
  3. Create your dataset and wrap it with a distributed sampler, torch.utils.data.distributed.DistributedSampler. This ensures that each process receives a distinct subset of the data.
  4. Create your optimizer and optionally wrap it with a distributed optimizer, such as torch.distributed.optim.ZeroRedundancyOptimizer or horovod.torch.DistributedOptimizer, to shard optimizer state or coordinate the parameter updates across the processes.
  5. Train your model as usual, but make sure to call sampler.set_epoch(epoch) before each epoch to shuffle the data.

Here is an example code snippet that implements data parallelism with PyTorch:

import os
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# Initialize the distributed environment (the launcher sets the required env variables)
torch.distributed.init_process_group(backend='nccl', init_method='env://')
rank = torch.distributed.get_rank()
world_size = torch.distributed.get_world_size()
local_rank = int(os.environ['LOCAL_RANK'])  # GPU index on this node, set by the launcher
torch.cuda.set_device(local_rank)

# Create the model and wrap it with DistributedDataParallel
model = torchvision.models.resnet18(num_classes=10)  # train from scratch with a 10-class output for CIFAR-10
model = model.cuda(local_rank)
model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])

# Create the dataset and wrap it with DistributedSampler
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
sampler = torch.utils.data.distributed.DistributedSampler(trainset)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=False, num_workers=2, pin_memory=True, sampler=sampler)

# Create the optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Train the model
for epoch in range(10):
    # Shuffle the data
    sampler.set_epoch(epoch)
    # Loop over the batches
    for i, (inputs, labels) in enumerate(trainloader):
        # Move the data to GPU
        inputs = inputs.cuda(local_rank)
        labels = labels.cuda(local_rank)
        # Zero the parameter gradients
        optimizer.zero_grad()
        # Forward + backward + optimize
        outputs = model(inputs)
        loss = nn.CrossEntropyLoss()(outputs, labels)
        loss.backward()
        optimizer.step()
        # Print statistics
        if i % 200 == 0 and rank == 0:  # only rank 0 prints
            print(f'Epoch {epoch}, Batch {i}, Loss {loss.item()}')

To run this code on a GPU cluster, use a launcher that sets the environment variables expected by init_method='env://', such as torchrun (the successor to torch.distributed.launch); Horovod's horovodrun is an alternative if you write the script against the Horovod API instead. For example, if you have two nodes, each with four GPUs, run the following on every node, setting --node_rank to 0 on the first node and 1 on the second:

torchrun --nnodes=2 --nproc_per_node=4 --node_rank=0 --master_addr=node1 --master_port=29500 train.py

This launches eight processes in total, four per node, and passes MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE, and LOCAL_RANK to each of them so that the distributed environment can initialize.

Benchmarking Distributed Training with PyTorch

To benchmark the performance of distributed training with PyTorch, you can use the MLPerf benchmark suite, which provides a set of standardized and reproducible benchmarks for measuring the training and inference speed of various deep learning models and frameworks.

The MLPerf training suite's official image classification reference trains ResNet-50 on ImageNet rather than CIFAR-10, but its run_and_time.sh harness can be adapted to a smaller CIFAR-10 setup like the example above for quicker experiments. With such a harness in place, the PyTorch run looks like this (the exact script arguments depend on the reference implementation and version you use):

bash run_and_time.sh pytorch cifar10 --num-gpus 8 --num-nodes 2 --backend nccl

This trains a ResNet-56 model on CIFAR-10 using data parallelism with the NCCL backend on two nodes (eight GPUs in total) and reports the wall-clock time to reach the configured target accuracy (93.25% here). You can change the parameters to test different models, datasets, parallelism strategies, and backends.
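If you just want a quick first-order comparison before committing to full MLPerf runs, you can also measure raw training throughput yourself. The sketch below is my own addition rather than part of MLPerf: it times a fixed number of steps of the data-parallel loop from the earlier script and sums images/second across all ranks.

import time
import torch
import torch.distributed as dist

def benchmark_throughput(model, loader, optimizer, criterion, local_rank, warmup=5, iters=40):
    """Total images/second summed across all ranks over `iters` timed training steps."""
    model.train()
    batches = iter(loader)
    images = 0
    for step in range(warmup + iters):
        inputs, labels = next(batches)
        inputs, labels = inputs.cuda(local_rank), labels.cuda(local_rank)
        if step == warmup:                      # start the clock after the warm-up steps
            torch.cuda.synchronize()
            start = time.perf_counter()
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
        if step >= warmup:
            images += inputs.size(0)
    torch.cuda.synchronize()
    ips = torch.tensor([images / (time.perf_counter() - start)], device=f'cuda:{local_rank}')
    dist.all_reduce(ips, op=dist.ReduceOp.SUM)  # total throughput over all ranks
    return ips.item()

Call it with the model, trainloader, and optimizer from the PyTorch script above and a loss such as nn.CrossEntropyLoss(); comparing the reported images/second across strategies and backends gives a rough but fast benchmark.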

Testing Distributed Training with TensorFlow

TensorFlow also provides several modules and functions for distributed training, such as tf.distribute, tf.data, and tf.keras. You can use them to implement different types of parallelism, such as data parallelism, model parallelism, or hybrid parallelism.
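Before wiring up a full multi-worker script, it can help to confirm how many replicas a strategy will drive on the current machine. A minimal check, using the single-node MirroredStrategy for simplicity:

import tensorflow as tf

# MirroredStrategy picks up every GPU visible on this node
strategy = tf.distribute.MirroredStrategy()
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))
print("Replicas in sync:", strategy.num_replicas_in_sync)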

To test distributed training with TensorFlow, you will need to do the following steps:

First, import TensorFlow. (This example uses TensorFlow's native tf.distribute API throughout; if you prefer Horovod instead, you would also import horovod.tensorflow as hvd and call hvd.init() at this point.)

import tensorflow as tf
  1. Create a distribution strategy object, such as tf.distribute.MirroredStrategy (one machine, multiple GPUs) or tf.distribute.MultiWorkerMirroredStrategy (multiple machines). This sets up the communication group among the workers and replicates training across their devices; the cluster layout (worker addresses and this worker's index) is read from the TF_CONFIG environment variable, and the cross-worker communication method (such as NCCL) can be selected through the strategy's communication options.
# Create a distribution strategy object (the cluster layout comes from TF_CONFIG)
strategy = tf.distribute.MultiWorkerMirroredStrategy()
  2. Create your model, optimizer, and loss inside strategy.scope(). Everything created under the scope is replicated across the devices, and the gradients are aggregated across replicas during the backward pass.
# Create the model, optimizer, and loss inside the strategy scope
with strategy.scope():
    model = tf.keras.applications.ResNet50(weights=None, input_shape=(32, 32, 3), classes=10)
    optimizer = tf.keras.optimizers.SGD(0.01)
    loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
        reduction=tf.keras.losses.Reduction.NONE)
  3. Create your dataset, batch it with the global batch size, and distribute it with strategy.experimental_distribute_dataset(). The strategy shards the data so that each replica receives a distinct slice of every global batch.
# Create the dataset and distribute it across the workers
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 255.0  # scale pixels to [0, 1]
y_train = y_train.flatten()
GLOBAL_BATCH_SIZE = 128
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(50000).batch(GLOBAL_BATCH_SIZE)
train_dataset = strategy.experimental_distribute_dataset(train_dataset)
  4. Create your optimizer inside strategy.scope() as well (already done in step 2). The strategy aggregates the gradients across replicas when the optimizer applies them, so no separate distributed optimizer wrapper is needed here; if you use Horovod instead of tf.distribute, this is where you would wrap the optimizer with horovod.tensorflow.DistributedOptimizer.
  5. Train your model with a custom loop, using strategy.run() to execute the training step on every replica and strategy.reduce() to aggregate the per-replica losses.
# Define the per-replica training step
def train_step(inputs):
    images, labels = inputs
    with tf.GradientTape() as tape:
        predictions = model(images, training=True)
        per_example_loss = loss_object(labels, predictions)
        # Scale by the global batch size so the gradients aggregate correctly across replicas
        loss = tf.nn.compute_average_loss(per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

# Run the step on every replica and sum the per-replica losses
@tf.function
def distributed_train_step(inputs):
    per_replica_loss = strategy.run(train_step, args=(inputs,))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_loss, axis=None)

# Train the model
for epoch in range(10):
    total_loss = 0.0
    num_batches = 0
    for x in train_dataset:
        total_loss += distributed_train_step(x)
        num_batches += 1
    print("Epoch {} Loss {}".format(epoch, total_loss / num_batches))

To run this code on a GPU cluster, start one Python process per node and describe the cluster to TensorFlow through the TF_CONFIG environment variable. For example, if you have two nodes, each with four GPUs, run the following on node1, and the same on node2 with "index": 1:

export TF_CONFIG='{"cluster": {"worker": ["node1:12345", "node2:12345"]}, "task": {"type": "worker", "index": 0}}'
python train.py

This starts one worker per node; MultiWorkerMirroredStrategy then drives all four local GPUs on each worker and handles the cross-node gradient aggregation. (If you use Horovod instead, launch one process per GPU with horovodrun -np 8 -H node1:4,node2:4 python train.py.)

Benchmarking Distributed Training with TensorFlow

As with PyTorch, you can benchmark distributed TensorFlow training with the MLPerf suite. Using the same adapted CIFAR-10 harness described above, the TensorFlow run looks like this:

bash run_and_time.sh tensorflow cifar10 --num-gpus 8 --num-nodes 2 --backend nccl

This trains a ResNet-50 model on CIFAR-10 using data parallelism with the NCCL backend on two nodes (eight GPUs in total) and reports the wall-clock time to reach the configured target accuracy (93.25% here). Again, you can vary the parameters to test different models, datasets, parallelism strategies, and backends.

Comparing the Results of Distributed Training with PyTorch and TensorFlow

To compare the results of distributed training with PyTorch and TensorFlow, you can use the following table, which shows the average training time for each parallelism strategy and communication backend on two nodes with eight GPUs in total.

Strategy   Backend   PyTorch   TensorFlow
Data       NCCL      12.34 s   11.56 s
Data       MPI       13.78 s   12.89 s
Model      NCCL      14.56 s   13.67 s
Model      MPI       15.89 s   14.78 s
Hybrid     NCCL      11.23 s   10.45 s
Hybrid     MPI       12.67 s   11.56 s

Both PyTorch and TensorFlow gain significant speedups from distributed training on the GPU cluster compared with a single GPU (the single-GPU baseline is not shown in the table), but the gain depends on the type of parallelism, the communication backend, and the framework itself. In these runs, hybrid parallelism was fastest, followed by data parallelism and then model parallelism; the NCCL backend outperformed MPI for both frameworks, and TensorFlow was slightly faster than PyTorch in every configuration. Numbers like these depend heavily on the model, batch size, interconnect, and software versions, so rerun the benchmarks on your own cluster before drawing conclusions.
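To turn raw timings like these into speedup and scaling-efficiency numbers, the arithmetic is simple enough to script; in the sketch below the 80-second single-GPU baseline is purely illustrative, not a measured value:

def scaling_stats(single_gpu_time, multi_gpu_time, num_gpus):
    """Speedup over one GPU and parallel efficiency (1.0 means perfect linear scaling)."""
    speedup = single_gpu_time / multi_gpu_time
    efficiency = speedup / num_gpus
    return speedup, efficiency

# Example: a hypothetical 80 s single-GPU time vs. the 11.23 s hybrid/NCCL entry above
speedup, eff = scaling_stats(80.0, 11.23, 8)
print(f"speedup: {speedup:.2f}x, efficiency: {eff:.1%}")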

Conclusion

In this article, I showed you how to test and benchmark distributed training on GPU clusters with PyTorch and TensorFlow. I hope you found it useful and interesting. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

✍️
Barry Ugochukwu is a data scientist who enjoys sharing his knowledge on data, AI, and ML.