Channel Attention in Squeeze-and-Excitation (SE) Block Explained – Deep Learning Tutorial

August 17, 2022

Channel attention is the core of the Squeeze-and-Excitation (SE) block. In this tutorial, we will analyze how to implement it for beginners.

Squeeze-and-Excitation (SE) Block

It was proposed in the paper: Squeeze-and-Excitation Networks

Its structure is as follows:

The structure of Squeeze-and-Excitation (SE) block

Its implementation is below:

The implementation of Squeeze-and-Excitation (SE) block
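Since the reference implementation above is shown as an image, here is a minimal PyTorch sketch of the same structure. The class name SEBlock is our choice; the reduction ratio of 16 is the default used in the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block for inputs of shape (N, C, H, W)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Squeeze: global average pooling collapses H x W to 1 x 1
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        # Excitation: two FC layers with a bottleneck of channels // reduction
        self.excitation = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),  # channel attention weights in (0, 1)
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        s = self.squeeze(x).view(n, c)           # (N, C) channel descriptor
        w = self.excitation(s).view(n, c, 1, 1)  # (N, C, 1, 1) channel attention
        return x * w                             # rescale each channel
```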

We can use it with a residual network as follows:

Residual network and Squeeze-and-Excitation (SE) block
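As a sketch, the SE block is applied to the residual branch before the identity addition, following the SE-ResNet design in the paper. The class name SEResidualBlock and the two-convolution layout are our choices for illustration; it reuses the SEBlock class from the sketch above.

```python
import torch
import torch.nn as nn

class SEResidualBlock(nn.Module):
    """Residual block with an SE block on the residual branch (SE-ResNet style)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.se = SEBlock(channels, reduction)  # SEBlock from the sketch above

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = self.se(out)         # rescale channels before the skip connection
        return self.relu(out + x)  # identity shortcut
```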

Channel Attention in Squeeze-and-Excitation (SE) Block

The channel attention is computed by a sigmoid function in the SE block, and its shape is 1×1×C.
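For reference, the excitation step in the paper computes the attention vector s from the squeezed descriptor z as follows, where δ is ReLU, σ is the sigmoid, and W1, W2 are the two fully connected weight matrices with reduction ratio r:

```latex
s = \sigma\big(W_2\,\delta(W_1 z)\big),
\quad W_1 \in \mathbb{R}^{\frac{C}{r} \times C},
\quad W_2 \in \mathbb{R}^{C \times \frac{C}{r}}
```

Because σ maps to (0, 1), each of the C values in s acts as a gate that rescales one channel.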

Here we will introduce how to calculate channel attention for 1D and 2D inputs using PyTorch; a combined sketch follows the two tutorials linked below.

Implement Squeeze-and-Excitation (SE) Block for 1D Matrix in PyTorch – PyTorch Tutorial

Implement Squeeze-and-Excitation (SE) Block for 2D Matrix in PyTorch – PyTorch Tutorial
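As a combined sketch (not the code from either linked tutorial), the same squeeze-and-excitation computation can be written once for both 1D inputs of shape (N, C, L) and 2D inputs of shape (N, C, H, W). The function name channel_attention and reduction=4 are our choices, and the FC weights here are freshly initialized just to demonstrate the shapes; in real use they would be learned module parameters.

```python
import torch
import torch.nn as nn

def channel_attention(x, reduction=4):
    """Compute channel attention weights for 1D (N, C, L) or 2D (N, C, H, W) input."""
    n, c = x.shape[0], x.shape[1]
    # Squeeze: average over all spatial/temporal dimensions -> (N, C)
    s = x.flatten(2).mean(dim=2)
    # Excitation with freshly initialized weights, for shape demonstration only
    fc = nn.Sequential(
        nn.Linear(c, c // reduction, bias=False),
        nn.ReLU(inplace=True),
        nn.Linear(c // reduction, c, bias=False),
        nn.Sigmoid(),
    )
    w = fc(s)  # (N, C), values in (0, 1)
    # Reshape to broadcast over the input: (N, C, 1) for 1D, (N, C, 1, 1) for 2D
    return w.view(n, c, *([1] * (x.dim() - 2)))

# 1D input: (batch, channels, length)
x1 = torch.randn(2, 8, 16)
print(channel_attention(x1).shape)  # torch.Size([2, 8, 1])

# 2D input: (batch, channels, height, width)
x2 = torch.randn(2, 8, 16, 16)
print(channel_attention(x2).shape)  # torch.Size([2, 8, 1, 1])
```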
