In this tutorial, we will use some examples to show you how to understand and use tensor.contiguous() in PyTorch.
tensor.contiguous()
It is defined as:
Tensor.contiguous(memory_format=torch.contiguous_format)
It returns a tensor that is contiguous in memory and contains the same data as the self tensor.
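Before looking at why this matters, note that you can check contiguity with tensor.is_contiguous(). A small sketch (the tensor values here are just illustrative):

```python
import torch

# A freshly created tensor is laid out contiguously in memory.
x = torch.tensor([[1, 2, 2], [2, 1, 3]])
print(x.is_contiguous())  # True

# Calling contiguous() on an already-contiguous tensor returns
# the same tensor object, so no copy is made.
y = x.contiguous()
print(y is x)  # True
```
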
Why do we use tensor.contiguous()?
The tensor.view() function must be called on a contiguous tensor.
For example:
import torch

x = torch.tensor([[1, 2, 2], [2, 1, 3]])
x = x.transpose(0, 1)
print(x)
y = x.view(-1)
print(y)
In this code, we transpose tensor x, then try to change its shape with the tensor.view() function.
Running this code, we will see this error:
y = x.view(-1)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
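The reason is that transpose() does not move any data: it only swaps the tensor's strides, so the result is no longer laid out contiguously in memory. We can verify this:

```python
import torch

x = torch.tensor([[1, 2, 2], [2, 1, 3]])
x = x.transpose(0, 1)

# transpose() only swaps the strides; the underlying memory is
# unchanged, so the transposed tensor is not contiguous.
print(x.is_contiguous())  # False
print(x.stride())         # (1, 3)
```

Because tensor.view() needs a layout it can reinterpret without copying, it refuses to work on this non-contiguous tensor.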
To make tensor.view() work, we first need to get a contiguous tensor.
For example:
import torch

x = torch.tensor([[1, 2, 2], [2, 1, 3]])
x = x.transpose(0, 1)
print(x)
x = x.contiguous()
y = x.view(-1)
print(y)
Running this code, we will see:
tensor([[1, 2],
        [2, 1],
        [2, 3]])
tensor([1, 2, 2, 1, 2, 3])
In this example, we call x.contiguous() to get a contiguous tensor before using x.view(), so x.view() works correctly.
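As the error message hints, tensor.reshape() is a convenient alternative: it behaves like view() on contiguous input, and on non-contiguous input it copies the data for us, roughly like calling contiguous() first. A short sketch:

```python
import torch

x = torch.tensor([[1, 2, 2], [2, 1, 3]]).transpose(0, 1)

# reshape() accepts the non-contiguous tensor directly; it returns a
# view when possible and silently copies the data when it is not.
y = x.reshape(-1)
print(y)  # tensor([1, 2, 2, 1, 2, 3])
```

Use reshape() when you do not care whether a copy happens; use contiguous() + view() when you want the copy (or its absence) to be explicit.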
From the examples above, we know how to use tensor.contiguous() correctly.