Learning Rate Range Test
Learning rate may be the most important hyper-parameter in deep learning, and you can use this test to find a good value: train your model for a few epochs while the learning rate increases linearly from a low value to a high one, recording the accuracy/loss at each step. The loss will typically fall, flatten, and then blow up as the rate becomes too large; a good learning rate sits just below the point where the loss starts to rise.
Implementation
This test is enormously valuable whenever you face a new architecture or dataset. Here is one way to implement it in PyTorch using the torch-lr-finder package.
Prerequisite
pip install torch-lr-finder
Python Code
import torch.nn as nn
import torch.optim as optim
from torch_lr_finder import LRFinder

model = ...  # your model
criterion = nn.CrossEntropyLoss()
# start from a very low learning rate; the range test will sweep upward
optimizer = optim.Adam(model.parameters(), lr=1e-7, weight_decay=1e-2)
lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, end_lr=100, num_iter=100)
lr_finder.plot()   # inspect the loss vs. learning rate curve
lr_finder.reset()  # restore the model and optimizer to their initial state
Make sure your "trainloader" returns a tuple (image, target).
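If you want to check that requirement quickly, here is a hypothetical example of a DataLoader yielding (image, target) tuples; the random tensors stand in for a real dataset:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

images = torch.randn(64, 3, 32, 32)    # 64 fake RGB images, 32x32
targets = torch.randint(0, 10, (64,))  # 64 fake class labels in [0, 10)

trainloader = DataLoader(TensorDataset(images, targets),
                         batch_size=16, shuffle=True)

batch = next(iter(trainloader))  # each batch is a (image, target) tuple
```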
Reference: PyTorch Learning Rate Finder (torch-lr-finder)