PyTorch's torch.nn.functional.normalize() function normalizes a tensor along a specified dimension. In this tutorial, we will introduce how to use it with some examples.
Syntax
It is defined as:
torch.nn.functional.normalize(input, p=2.0, dim=1, eps=1e-12, out=None)
It performs Lp normalization of the input tensor over dimension dim: each vector v along dim is replaced by v / max(||v||_p, eps), where eps is a small value that prevents division by zero.
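To make the definition concrete, here is a minimal sketch (using a hypothetical 2 x 4 tensor x) that reproduces F.normalize() by hand:

import torch
import torch.nn.functional as F

x = torch.randn(2, 4)
p, dim, eps = 2.0, 1, 1e-12

# Compute the Lp norm along dim; keepdim=True lets the division broadcast.
norm = x.abs().pow(p).sum(dim=dim, keepdim=True).pow(1.0 / p)
manual = x / norm.clamp_min(eps)

# The manual result should match F.normalize() up to floating-point tolerance.
print(torch.allclose(manual, F.normalize(x, p=p, dim=dim)))  # True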

How to use it?
Here are some examples showing how to use it.
When p = 1.0
import torch
import torch.nn.functional as F

i = torch.randn(5, 3)  # a random 5x3 input tensor
p = 1.0
x1 = F.normalize(i, p=p, dim=1)  # normalize each row with the L1 norm
print(i)
print(x1)
We will get:
tensor([[-0.8943, -0.2765, 0.0855],
[ 0.4701, -1.4141, 1.8332],
[-1.1191, 1.3420, -0.0523],
[-0.1921, -1.6129, -0.0264],
[-0.0728, 0.3456, 0.7744]])
tensor([[-0.7118, -0.2201, 0.0681],
[ 0.1265, -0.3804, 0.4932],
[-0.4452, 0.5339, -0.0208],
[-0.1049, -0.8807, -0.0144],
[-0.0610, 0.2898, 0.6492]])
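With p = 1.0, each row is divided by the sum of the absolute values in that row, so every row of x1 has an L1 norm of 1. A quick check, continuing the example above:

# Each row of x1 has L1 norm 1 (up to floating-point error).
print(x1.abs().sum(dim=1))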
When p = 2.0
p = 2.0
x2 = F.normalize(i, p=p, dim=1)  # normalize each row with the L2 norm
print(x2)
We will get:
tensor([[-0.9514, -0.2942, 0.0910],
[ 0.1990, -0.5986, 0.7760],
[-0.6401, 0.7677, -0.0299],
[-0.1183, -0.9929, -0.0162],
[-0.0855, 0.4061, 0.9098]])
The case p = 2.0 is particularly useful: it is known as L2 normalization, and it is what you need if you plan to compute cosine similarity between tensors.
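For example, here is a minimal sketch (with hypothetical tensors a and b) showing that the row-wise dot product of L2-normalized tensors equals their cosine similarity:

import torch
import torch.nn.functional as F

a = torch.randn(5, 3)
b = torch.randn(5, 3)

# Row-wise dot product of the L2-normalized tensors ...
an = F.normalize(a, p=2.0, dim=1)
bn = F.normalize(b, p=2.0, dim=1)
dot = (an * bn).sum(dim=1)

# ... matches PyTorch's built-in cosine similarity.
cos = F.cosine_similarity(a, b, dim=1)
print(torch.allclose(dot, cos, atol=1e-6))  # True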
When p = 3.0
p = 3.0
x3 = F.normalize(i, p=p, dim=1)  # normalize each row with the L3 norm
print(x3)
We will get:
tensor([[-0.9901, -0.3061, 0.0947],
[ 0.2252, -0.6775, 0.8783],
[-0.7160, 0.8586, -0.0335],
[-0.1190, -0.9994, -0.0163],
[-0.0913, 0.4337, 0.9718]])
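As with the other values of p, you can verify that each row of x3 is a unit vector under the corresponding norm. A quick check for p = 3.0, continuing the example above:

# Each row of x3 has an L3 norm of 1 (up to floating-point error).
print(torch.linalg.vector_norm(x3, ord=3.0, dim=1))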