Understand torch.nn.init.normal_() with Examples – PyTorch Tutorial

November 25, 2022

In PyTorch, we can use torch.nn.init.normal_() to initialize a tensor's values in place. In this tutorial, we will use some examples to show you how to use it.

Syntax

torch.nn.init.normal_() is defined as:

torch.nn.init.normal_(tensor, mean=0.0, std=1.0)

It fills the input Tensor in place with values drawn from the normal distribution \(\mathcal{N}(\text{mean}, \text{std}^2)\).
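
For instance, we can apply it directly to a plain tensor (a minimal sketch; the tensor shape and parameters here are arbitrary):

import torch

# Create an uninitialized 2 x 3 tensor and fill it in place with N(0, 1) samples
t = torch.empty(2, 3)
torch.nn.init.normal_(t, mean=0.0, std=1.0)
print(t)  # the tensor is modified in place and also returned

Note that the trailing underscore in normal_() follows the PyTorch convention for in-place operations.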

How to use torch.nn.init.normal_() to initialize a tensor?

For example:

import torch

in_dim = 3
out_dim = 4

# Create a linear layer; its weight has shape (out_dim, in_dim)
linear_layer = torch.nn.Linear(in_dim, out_dim, bias=True)
print(linear_layer.weight)

# Re-initialize the weight in place with values drawn from N(0, 0.01^2)
torch.nn.init.normal_(
    linear_layer.weight,
    mean=0.0,
    std=0.01)
print(linear_layer.weight)

Here we initialize the linear_layer.weight tensor with values drawn from a normal distribution with mean = 0 and std = 0.01.

Running this code, we may see:

Parameter containing:
tensor([[-0.1234,  0.5183, -0.3900],
        [ 0.0652,  0.3852, -0.1232],
        [-0.0059, -0.0869,  0.0389],
        [ 0.1367, -0.4261, -0.2882]], requires_grad=True)
Parameter containing:
tensor([[-0.0063, -0.0006,  0.0065],
        [-0.0093, -0.0160,  0.0036],
        [-0.0076, -0.0123,  0.0168],
        [-0.0035, -0.0011, -0.0088]], requires_grad=True)
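
As a quick sanity check (a minimal sketch, not part of the original example), we can initialize a much larger tensor and confirm that the sample statistics are close to the requested mean and std:

import torch

big = torch.empty(1000, 1000)
torch.nn.init.normal_(big, mean=0.0, std=0.01)
print(big.mean().item())  # close to 0
print(big.std().item())   # close to 0.01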
