Understand torch.nn.AdaptiveAvgPool1d() with Examples in PyTorch – PyTorch Tutorial

By | June 23, 2022

In this tutorial, we will use some examples to show you how to use torch.nn.AdaptiveAvgPool1d() in PyTorch, which is very useful when you are building a CNN network.

torch.nn.AdaptiveAvgPool1d()

It is defined as:

torch.nn.AdaptiveAvgPool1d(output_size)

It applies a 1D adaptive average pooling over an input signal: you specify only the desired output length, and the layer chooses the pooling windows automatically.

Input: (N, C, L_in) or (C, L_in)

Output: (N, C, L_out) or (C, L_out), where L_out=output_size
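To illustrate these shapes, here is a minimal sketch (the sizes 8, 16, and 100 are arbitrary choices for illustration; unbatched (C, L_in) input is accepted on recent PyTorch versions):

```python
import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool1d(4)  # L_out = 4, regardless of L_in

batched = torch.randn(8, 16, 100)   # (N, C, L_in)
print(pool(batched).shape)          # torch.Size([8, 16, 4])

unbatched = torch.randn(16, 100)    # (C, L_in)
print(pool(unbatched).shape)        # torch.Size([16, 4])
```

Note that only the last dimension changes: N and C pass through untouched.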

How to use torch.nn.AdaptiveAvgPool1d()?

It is easy to use. For example:

import torch
import torch.nn as nn

L_out = 3
m = nn.AdaptiveAvgPool1d(L_out)

# build a (N, C, L_in) = (2, 3, 5) input tensor holding the values 0..29
N = 2
C = 3
L_in = 5
data = range(N * C * L_in)
t = torch.tensor(data, dtype=torch.float)
t = torch.reshape(t, [N, C, L_in])
print(t)

output = m(t)
print(output)

Here the input is 2 × 3 × 5, so the output will be 2 × 3 × 3.

There is one question: how is the output computed?

Each output element is the average over a window of the input:

for i in range(m):
    lstart = floor(i * L / m)
    lend = ceil((i + 1) * L / m)
    output[:, :, i] = sum(input[:, :, lstart:lend]) / (lend - lstart)

Here L = L_in and m = L_out. The windows may overlap (as they do below) or vary slightly in size, which is how the layer adapts any L_in to the requested L_out.
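We can check this rule by re-implementing it by hand and comparing against the built-in layer. This is a sketch for illustration; the helper function name is our own, not part of PyTorch:

```python
import math
import torch
import torch.nn as nn

def adaptive_avg_pool1d_manual(x, L_out):
    # Manual version of the rule above: output bin i averages
    # input[..., floor(i*L/m) : ceil((i+1)*L/m)], with L = L_in, m = L_out.
    L_in = x.shape[-1]
    out = torch.empty(*x.shape[:-1], L_out)
    for i in range(L_out):
        lstart = math.floor(i * L_in / L_out)
        lend = math.ceil((i + 1) * L_in / L_out)
        out[..., i] = x[..., lstart:lend].mean(dim=-1)
    return out

t = torch.arange(30, dtype=torch.float).reshape(2, 3, 5)
m = nn.AdaptiveAvgPool1d(3)
print(torch.allclose(m(t), adaptive_avg_pool1d_manual(t, 3)))  # True
```

For L_in = 5 and L_out = 3 the windows are [0, 2), [1, 4), and [3, 5), so the middle window overlaps both of its neighbors.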

Running this code, we find the input is:

tensor([[[ 0.,  1.,  2.,  3.,  4.],
         [ 5.,  6.,  7.,  8.,  9.],
         [10., 11., 12., 13., 14.]],

        [[15., 16., 17., 18., 19.],
         [20., 21., 22., 23., 24.],
         [25., 26., 27., 28., 29.]]])

The final output is:

tensor([[[ 0.5000,  2.0000,  3.5000],
         [ 5.5000,  7.0000,  8.5000],
         [10.5000, 12.0000, 13.5000]],

        [[15.5000, 17.0000, 18.5000],
         [20.5000, 22.0000, 23.5000],
         [25.5000, 27.0000, 28.5000]]])
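As a sanity check, the first row of the output follows directly from the windows [0, 2), [1, 4), and [3, 5) applied to the first row of the input:

```python
import torch
import torch.nn as nn

t = torch.arange(30, dtype=torch.float).reshape(2, 3, 5)
out = nn.AdaptiveAvgPool1d(3)(t)

# row [0, 1, 2, 3, 4]:
#   bin 0 averages indices [0, 2): (0 + 1) / 2         = 0.5
#   bin 1 averages indices [1, 4): (1 + 2 + 3) / 3     = 2.0
#   bin 2 averages indices [3, 5): (3 + 4) / 2         = 3.5
print(out[0, 0])  # tensor([0.5000, 2.0000, 3.5000])
```

The remaining rows follow the same pattern, each shifted by a constant because the input values increase by 1 along the last dimension.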
