For 2D image data, the tensor shape is [batch, channel, height, width]. In this tutorial, we will introduce how to implement a squeeze-and-excitation (SE) block for this kind of data.
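As a minimal sketch, here is one way to write an SE block following the standard squeeze-then-excite structure; the class name `SEBlock` and the reduction ratio of 16 are our choices, not fixed by the tutorial:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation block for [batch, channel, height, width] input."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Squeeze: global average pooling collapses H and W to 1x1
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        # Excitation: bottleneck MLP producing per-channel weights in (0, 1)
        self.excitation = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)           # [batch, channel]
        w = self.excitation(w).view(b, c, 1, 1)  # [batch, channel, 1, 1]
        return x * w                             # rescale each channel

x = torch.randn(4, 64, 32, 32)
se = SEBlock(64)
print(se(x).shape)  # torch.Size([4, 64, 32, 32])
```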
torch.optim.lr_scheduler.StepLR() allows us to decay the learning rate by a fixed factor at regular intervals while training a model. In this tutorial, we will use some examples to show you how to use it.
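For illustration, here is a small sketch of StepLR in a training loop; the model, learning rate, step_size=10, and gamma=0.5 are example values we picked:

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Multiply the learning rate by gamma=0.5 every 10 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(30):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 10)).sum()  # dummy loss for demonstration
    loss.backward()
    optimizer.step()
    scheduler.step()  # advance the schedule once per epoch
    print(epoch, scheduler.get_last_lr())  # lr drops at epochs 10 and 20
```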
When we are using the torch.nn.Conv2d() function, we may also use torch.nn.AdaptiveAvgPool2d() to pool feature maps to a fixed output size regardless of the input resolution. In this tutorial, we will use some examples to show you how to use it.
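A short sketch of the two together; the layer sizes and the odd 37x53 input are arbitrary values chosen to show that the pooled output size does not depend on the input size:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
# AdaptiveAvgPool2d pools any spatial size down to the requested output size
pool = nn.AdaptiveAvgPool2d((1, 1))

x = torch.randn(4, 3, 37, 53)  # arbitrary height and width
y = pool(conv(x))
print(y.shape)  # torch.Size([4, 16, 1, 1])
```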
When we are reading papers, we may see: "All models are trained using Adam with a learning rate of 0.001 and gradient clipping at 2.0." In this tutorial, we will introduce gradient clipping in PyTorch.
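Here is a minimal sketch matching that paper setup, assuming "clipping at 2.0" means clipping the global gradient norm; the tiny linear model and random data are placeholders:

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

x = torch.randn(8, 10)
y = torch.randn(8, 2)
loss = torch.nn.functional.mse_loss(model(x), y)

optimizer.zero_grad()
loss.backward()
# Rescale all gradients so their global L2 norm is at most 2.0
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=2.0)
optimizer.step()
```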
Focal loss is one method for handling imbalanced datasets in deep learning. In this tutorial, we will introduce how to implement focal loss for multi-label classification in PyTorch.
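A minimal sketch, assuming the standard binary focal loss formula applied independently per label (which is the common multi-label formulation); the function name and the gamma=2.0, alpha=0.25 defaults are ours:

```python
import torch
import torch.nn.functional as F

def focal_loss_multilabel(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss for multi-label classification (targets are 0/1 per class)."""
    # Per-element binary cross entropy, kept unreduced
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    # p_t is the probability the model assigns to the true label of each element
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # Down-weight easy examples by (1 - p_t) ** gamma
    loss = alpha_t * (1 - p_t) ** gamma * bce
    return loss.mean()

logits = torch.randn(4, 5)  # 4 samples, 5 labels
targets = torch.randint(0, 2, (4, 5)).float()
print(focal_loss_multilabel(logits, targets))
```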