Understand torch.backends.cudnn.benchmark in PyTorch – PyTorch Tutorial

November 22, 2022

We may see torch.backends.cudnn.benchmark in some PyTorch scripts. In this tutorial, we will discuss how to use it.

Generally, we can set it at the beginning of a PyTorch script as follows:

import random
import numpy as np
import torch

torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
np.random.seed(seed)
random.seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True


Here torch.backends.cudnn.benchmark = False.

However, we may also see torch.backends.cudnn.benchmark = True in some PyTorch scripts. What is the difference between the two settings?

torch.backends.cudnn.benchmark affects how cuDNN computes convolutions. When it is True, cuDNN benchmarks several convolution algorithms on the first forward pass and caches the fastest one for each input configuration. The practical rule is:

  • If the input size of a convolution does not change during training, we can set torch.backends.cudnn.benchmark = True to speed up training. Otherwise, we should set torch.backends.cudnn.benchmark = False, because cuDNN will re-run the benchmark every time the input size changes, which can slow training down.

Moreover, when we need reproducible results, we should set torch.backends.cudnn.benchmark = False together with torch.backends.cudnn.deterministic = True, since the algorithm chosen by benchmarking may not be deterministic.
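As a minimal sketch of the speed-up case, the example below enables benchmarking and feeds a convolution inputs of a fixed size, which is the situation where the cached fastest algorithm pays off. The layer shape and input size here are arbitrary choices for illustration; benchmarking only takes effect when the model actually runs on a CUDA device.

```python
import torch

# Let cuDNN try several convolution algorithms on the first forward
# pass and cache the fastest one. This only helps when the input size
# stays constant across iterations.
torch.backends.cudnn.benchmark = True

conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1)
x = torch.randn(8, 3, 224, 224)  # same input size every iteration

# Benchmarking is a cuDNN feature, so it only matters on GPU.
if torch.cuda.is_available():
    conv, x = conv.cuda(), x.cuda()

y = conv(x)
print(y.shape)  # torch.Size([8, 16, 224, 224])
```

If the input sizes varied from batch to batch, each new size would trigger another round of benchmarking, which is why benchmark = False is the safer default for variable-size inputs.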

