Understand the Effects of the Random Seed in Deep Learning – Deep Learning Tutorial

July 13, 2022

When training a deep learning model, we often need to set a random seed to make the final result reproducible.

A Beginner Guide to Get Stable Result in TensorFlow – TensorFlow Tutorial

The Simplest Way to Reproduce Model Result in PyTorch – PyTorch Tutorial

In this tutorial, we will discuss the effects of the random seed.

For example, we can set a random seed as follows:

import torch
import torch.nn as nn
import random
import numpy as np
import os

def seed_everything(seed=47):
    # Seed every source of randomness the model may depend on.
    os.environ["PL_GLOBAL_SEED"] = str(seed)  # environment variable read by PyTorch Lightning
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy RNG
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # PyTorch RNG on all GPUs

seed_everything()
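
Note that seeding alone may not make GPU training fully deterministic, because some cuDNN kernels are non-deterministic. PyTorch exposes flags to force deterministic behavior; a minimal sketch (enabling these can slow training down, so use them only when exact reproducibility matters):

torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False     # disable auto-tuning, which selects kernels non-deterministically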

Effect 1: make random values reproducible

For example:

# Python's built-in RNG returns plain floats
for i in range(10):
    print(random.random())

# PyTorch's RNG returns tensors
for i in range(10):
    print(torch.rand([1, 5]))

Running this code, we will see:

0.35184625582788265
0.430186245990098
0.4536708635895742
0.3434697408782532
0.5124443649244089
0.3924154014554718
0.04090872254284261
0.4185326073961467
0.023659862260269615
0.10245584696998078
tensor([[0.0530, 0.0499, 0.4677, 0.8757, 0.5561]])
tensor([[0.7984, 0.9758, 0.2482, 0.1469, 0.4345]])
tensor([[0.6988, 0.8883, 0.2638, 0.2658, 0.1375]])
tensor([[0.4610, 0.7439, 0.0351, 0.1422, 0.4056]])
tensor([[0.5341, 0.5862, 0.1469, 0.2960, 0.2738]])
tensor([[0.7361, 0.9117, 0.2284, 0.0591, 0.2135]])
tensor([[0.4852, 0.7574, 0.4865, 0.0853, 0.8015]])
tensor([[0.7464, 0.3557, 0.4055, 0.0283, 0.4071]])
tensor([[0.3296, 0.3150, 0.5413, 0.3500, 0.2998]])
tensor([[0.4580, 0.3409, 0.3975, 0.3384, 0.3793]])

From the result, we can see:

  • Even after setting a random seed, consecutive calls still return different random values.
  • If we run the code above several times, the same sequence of random values appears on every run (see the sketch after this list).
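
To check both points in one run, we can re-seed and confirm that the same sequence comes back. A minimal sketch, reusing the seed_everything() defined above:

seed_everything(47)
a = [random.random() for _ in range(3)]
seed_everything(47)  # re-seeding resets the RNG state
b = [random.random() for _ in range(3)]
print(a == b)  # True: the same sequence is replayed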

Effect 2: affect the final performance of a deep learning model

If you have not set a random seed, the deep learning model will produce a different final result on each training run. Moreover, the performance may differ by about 1%.

Different random seeds when training CNN models can change the behavior of the models, sometimes by more than 1%. This is due to the randomness in deep learning, such as the random shuffling of datasets, the random initialization of weights, and the random dropout masks applied to tensors.
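
For example, the initial weights of a layer depend on the seed, so training runs launched with different seeds start from different points. A minimal sketch comparing two arbitrary seeds (47 and 100), reusing the imports above:

torch.manual_seed(47)
w1 = nn.Linear(4, 2).weight.detach().clone()  # initial weights under seed 47
torch.manual_seed(100)
w2 = nn.Linear(4, 2).weight.detach().clone()  # initial weights under seed 100
print(torch.equal(w1, w2))  # False: different seeds give different starting points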

Random Seed and Neural Networks Performance: A Beginner Guide
