last_epoch is an important parameter in PyTorch learning rate schedulers. For example:
torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1, verbose=False)
torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1, verbose=False)
How should we understand and use it? In this tutorial, we will use some examples to show you its effect.
What is last_epoch?
last_epoch defaults to -1 in PyTorch learning rate schedulers. It indicates the index of the last epoch when resuming training. When we create a scheduler with this default, the constructor performs an initial step, so last_epoch becomes 0.
For example:
import torch

cc = torch.nn.Conv2d(10, 10, 3)
myoptimizer = torch.optim.Adam(cc.parameters(), lr=0.1)
myscheduler = torch.optim.lr_scheduler.StepLR(myoptimizer, step_size=1, gamma=0.1)
myscheduler.last_epoch, myscheduler.get_lr()
# (0, [0.1])
In this example, we have only created myscheduler, and we can see that last_epoch = 0.
How to update last_epoch?
When we call myscheduler.step(), last_epoch will be incremented by 1.
For example:
myscheduler.step()
myscheduler.last_epoch, myscheduler.get_lr()
# (1, [0.001])
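Note that get_lr() is mainly intended for internal use: for StepLR it applies the gamma factor again whenever last_epoch lands on a step boundary, which is why it reports 0.001 here even though the learning rate actually used by the optimizer is 0.01. A small check, assuming PyTorch 1.4 or later where get_last_lr() is available:

myoptimizer.param_groups[0]['lr']   # 0.01, the lr the optimizer is actually using
myscheduler.get_last_lr()           # [0.01], the last lr computed by the scheduler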
However, we should notice: we usually call scheduler.step() once per epoch (or once per batch for some schedulers), after optimizer.step(), and each call updates last_epoch.
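A minimal, self-contained sketch of where step() usually goes in an epoch loop (the dummy data, MSE loss and epoch count below are placeholders added for illustration, not part of the original example):

import torch

model = torch.nn.Conv2d(10, 10, 3)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.1)
loss_fn = torch.nn.MSELoss()

inputs = torch.randn(4, 10, 8, 8)    # dummy batch
targets = torch.randn(4, 10, 6, 6)   # dummy targets (a 3x3 conv shrinks 8x8 to 6x6)

for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()                  # called after optimizer.step(); increments last_epoch
    print(epoch, scheduler.last_epoch, scheduler.get_last_lr())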
How to resume training by last_epoch?
We can do as follows:
mynewscheduler = torch.optim.lr_scheduler.StepLR(myoptimizer, step_size=1, gamma=0.1, last_epoch=myscheduler.last_epoch)
mynewscheduler.last_epoch, mynewscheduler.get_lr()
Here we set last_epoch=myscheduler.last_epoch to resume training.
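A minimal, self-contained sketch of this resume pattern (the three placeholder training steps and the saved_last_epoch variable are illustrative additions; in recent PyTorch versions the scheduler constructor also performs one step of its own, so the new scheduler starts one step further along the schedule):

import torch

net = torch.nn.Conv2d(10, 10, 3)
opt = torch.optim.Adam(net.parameters(), lr=0.1)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=1, gamma=0.1)

for _ in range(3):
    opt.step()                        # placeholder for a real training step
    sched.step()

saved_last_epoch = sched.last_epoch   # e.g. stored in a checkpoint

# Rebuild a scheduler on the same optimizer and continue the schedule.
# Passing last_epoch != -1 requires 'initial_lr' in the optimizer's param groups;
# the first scheduler already added it, so this works here.
resumed = torch.optim.lr_scheduler.StepLR(
    opt, step_size=1, gamma=0.1, last_epoch=saved_last_epoch
)
print(resumed.last_epoch, resumed.get_last_lr())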