
Understand PyTorch tensor.data with Examples – PyTorch Tutorial

In some PyTorch scripts, we may see tensor.data. In this tutorial, we will use some examples to help you understand what it does.

PyTorch tensor.data

It returns a new tensor that shares the same memory as the current tensor, which means that if we change the value of the returned tensor, the original tensor is also changed.

For example:

import torch
x = torch.tensor([[1.0, 2.0],[2.0, 3.0]])
print(x)
print(type(x))
y = x.data  # y shares the same memory as x
print(y)
print(type(y))

Running this code, we will see:

tensor([[1., 2.],
        [2., 3.]])
<class 'torch.Tensor'>
tensor([[1., 2.],
        [2., 3.]])
<class 'torch.Tensor'>

x is the original tensor and y is the tensor returned by x.data. We can see that their values are the same.

However, if we change the value of y, the original x is also changed.

y.zero_()  # set all elements of y to zero in place
print(x)
print(y)

Here we set all elements of y to zero in place. Running this code, we will see:

tensor([[0., 0.],
        [0., 0.]])
tensor([[0., 0.],
        [0., 0.]])

x is also set to zero, because x and y share the same memory.
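
We can also verify the shared memory directly. Here is a minimal check, continuing the example above: data_ptr() returns the memory address of the first element of a tensor, so equal addresses mean x and y use the same storage.

print(x.data_ptr() == y.data_ptr())  # True: x and y share the same memory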

Gradient of tensor.data

The tensor returned by tensor.data is detached from the computation graph: its requires_grad attribute is False and it has no grad_fn attribute.

For example:

import torch
x = torch.tensor([[1.0, 2.0],[2.0, 3.0]], requires_grad=True)
print(x)
print(type(x))
y = x.data
print(y)
print(type(y))

Running this code, we will see:

tensor([[1., 2.],
        [2., 3.]], requires_grad=True)
<class 'torch.Tensor'>
tensor([[1., 2.],
        [2., 3.]])
<class 'torch.Tensor'>

From the result, we can see that x requires gradient (requires_grad=True), while the tensor y returned by x.data does not.
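
We can check this directly. A minimal sketch, continuing the example above: operations on y are not tracked by autograd, so they produce no grad_fn, while operations on x are.

print(x.requires_grad)  # True
print(y.requires_grad)  # False
z = y * 2               # not tracked by autograd
print(z.grad_fn)        # None
w = x * 2               # tracked by autograd
print(w.grad_fn)        # <MulBackward0 object at ...>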

tensor.data is not safe

It is not a good choice to use tensor.data to get a detached tensor, because if we change the returned tensor (y), the original tensor (x) is also changed, and PyTorch autograd will not know about the change. This can silently produce an incorrect gradient.
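
Here is a minimal sketch of how this goes wrong. The multiplication saves x for the backward pass; changing x through x.data bypasses autograd's in-place change tracking, so backward() silently uses the modified values.

import torch
x = torch.tensor([2.0], requires_grad=True)
y = x * x         # dy/dx = 2x, so the gradient at x = 2 should be 4
x.data.zero_()    # change x in place through .data; autograd does not notice
y.backward()
print(x.grad)     # tensor([0.]) -- silently wrong, should be tensor([4.])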

If you want to detach a tensor from the computation graph safely, use tensor.detach() instead. The detached tensor also shares memory with the original tensor, but autograd tracks in-place changes to it, so an unsafe modification raises an error rather than silently corrupting the gradient.
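
For comparison, here is a sketch of the same situation with tensor.detach(). The detached tensor still shares memory with x, but the in-place change is visible to autograd, so backward() raises an error instead of returning a wrong gradient.

import torch
x = torch.tensor([2.0], requires_grad=True)
y = x * x
x.detach().zero_()  # in-place change is tracked by autograd's version counter
y.backward()        # RuntimeError: one of the variables needed for gradient
                    # computation has been modified by an inplace operation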