Here, a 1D matrix is a tensor of shape [batch_size, channel, width] or [batch_size, channel, height]. We will implement an SE block on it in PyTorch.
Squeeze-and-Excitation (SE) Block
In order to understand what an SE block is, you can read this tutorial:
Channel Attention in Squeeze-and-Excitation (SE) Block Explained – Deep Learning Tutorial
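In brief, an SE block squeezes each channel to a single value by global average pooling, passes the result through a small bottleneck MLP, and uses a sigmoid output to rescale the input channels. Here is a minimal sketch of this computation with random (untrained) weights, only to show the shapes involved; the names z, s, W1 and W2 are our own:

import torch
import torch.nn.functional as F

batch_size, channels, width = 32, 80, 30
x = torch.randn(batch_size, channels, width)

# Squeeze: average each channel over the width dimension -> [batch_size, channels]
z = x.mean(dim=2)

# Excitation: bottleneck MLP with random (untrained) weights, bottleneck = channels // 4
W1 = torch.randn(channels // 4, channels)  # down-projection weight
W2 = torch.randn(channels, channels // 4)  # up-projection weight
s = torch.sigmoid(F.linear(F.relu(F.linear(z, W1)), W2))  # [batch_size, channels]

# Scale: rescale each channel of x by its attention weight
out = x * s.unsqueeze(-1)  # [batch_size, channels, width]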
How to implement an SE block for a 1D matrix in PyTorch?
Here we will build a module to do it.
For example:
import torch
import torch.nn as nn

class SEModule(nn.Module):
    def __init__(self, channels, divide=4):
        super(SEModule, self).__init__()
        bottleneck = channels // divide
        self.se = nn.Sequential(
            # Squeeze: global average pooling -> [batch_size, channels, 1]
            nn.AdaptiveAvgPool1d(1),
            # Excitation: bottleneck down-projection with a 1x1 convolution
            nn.Conv1d(channels, bottleneck, kernel_size=1, padding=0),
            nn.ReLU(inplace=True),
            # Up-projection back to the original channel count
            nn.Conv1d(bottleneck, channels, kernel_size=1, padding=0),
            # Per-channel attention weights in (0, 1)
            nn.Sigmoid(),
        )

    def forward(self, input):
        x = self.se(input)  # attention weights, shape [batch_size, channels, 1]
        return input * x    # rescale each channel of the input
Then, we can use it as follows:
batch_size = 32
W = 30
C = 80
se = SEModule(C)
inputs = torch.randn(batch_size, C, W)
outputs = se(inputs)
print(outputs.shape)
Run this code, and we will see:
torch.Size([32, 80, 30])
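By the way, the excitation part can also be written with nn.Linear layers, which is how the original SENet paper describes it. Below is a minimal sketch of that variant; the class name SELinearModule is our own:

import torch
import torch.nn as nn

class SELinearModule(nn.Module):
    def __init__(self, channels, divide=4):
        super(SELinearModule, self).__init__()
        bottleneck = channels // divide
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, bottleneck),
            nn.ReLU(inplace=True),
            nn.Linear(bottleneck, channels),
            nn.Sigmoid(),
        )

    def forward(self, input):
        b, c, _ = input.shape
        s = self.pool(input).view(b, c)  # squeeze: [batch_size, channels]
        s = self.fc(s).view(b, c, 1)     # excitation: per-channel weights
        return input * s                 # scale

Both variants compute the same thing here, because a Conv1d with kernel_size=1 applied to a length-1 sequence is just a per-channel linear layer.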
Moreover, if you want to implement an SE block on a 2D matrix, you can read this tutorial:
Implement Squeeze-and-Excitation (SE) Block for 2D Matrix in PyTorch – PyTorch Tutorial
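As a quick preview, the 1D module above carries over to a 2D matrix of shape [batch_size, channel, height, width] by swapping each 1D layer for its 2D counterpart. A minimal sketch, assuming the same divide=4 bottleneck; the class name SEModule2D is our own:

import torch
import torch.nn as nn

class SEModule2D(nn.Module):
    def __init__(self, channels, divide=4):
        super(SEModule2D, self).__init__()
        bottleneck = channels // divide
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # squeeze over height and width -> [batch_size, channels, 1, 1]
            nn.Conv2d(channels, bottleneck, kernel_size=1, padding=0),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, kernel_size=1, padding=0),
            nn.Sigmoid(),
        )

    def forward(self, input):
        return input * self.se(input)  # rescale each channel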