Category: Pytorch

360. Saving Checkpoint During Training

Implementation: Saving checkpoints for model weights during training can be helpful in cases such as the following: you want to resume training later, you want to avoid losing weight data when the process stops during training due to an error, or you want to restore…
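A minimal sketch of saving and restoring a checkpoint, assuming a generic model and optimizer (the tiny Linear model, the checkpoint.pt file name, and the stored fields are illustrative):

import torch
import torch.nn as nn

# Stand-ins for your real training objects
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
epoch, loss = 5, 0.42  # values you would have at checkpoint time

# Save everything needed to resume training later
torch.save({
    "epoch": epoch,
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "loss": loss,
}, "checkpoint.pt")

# Restore the checkpoint and continue from the next epoch
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
start_epoch = checkpoint["epoch"] + 1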

344. Pytorch Profiler

Pytorch Profiler can help you detect performance bottlenecks when training or deploying a model. Here is one implementation:

import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity
model = models.resnet18()
inputs = torch.randn(1, 3, 512, 512)
with profile(activities=[ProfilerActivity.CPU], record_shapes=True)…
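The excerpt above is cut off; a complete version of the same pattern, following standard torch.profiler usage (the record_function label and the sort key are illustrative choices), might look like this:

import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity

model = models.resnet18()
inputs = torch.randn(1, 3, 512, 512)

# Profile CPU activity and record the input shapes seen by each operator
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    with record_function("model_inference"):  # label this region in the trace
        model(inputs)

# Print operators sorted by total CPU time
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))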

339. TorchScript

Running Pytorch without Python: TorchScript enables users to load Pytorch models in processes where there is no Python dependency. Instead of running the model in the Python runtime, it converts the model so that it can run in an independent “TorchScript”…
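A minimal sketch of converting a model to TorchScript via tracing and saving it so it can be loaded without Python (the resnet18 model and the file name are illustrative; torch.jit.script is the alternative when the model has data-dependent control flow):

import torch
import torchvision.models as models

model = models.resnet18()
model.eval()

# Trace the model with an example input to produce a TorchScript module
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# Serialize; the resulting file can be loaded from C++ (libtorch) with torch::jit::load
traced.save("resnet18_traced.pt")

# Loading back (shown in Python here)
loaded = torch.jit.load("resnet18_traced.pt")
output = loaded(example)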

332. TorchServe

Deploying Your Model: TorchServe allows you to expose a web API for your Pytorch model that can be accessed directly or via your application. Three steps: first, choose a default handler or author a custom model handler. You will define a…
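A minimal sketch of a custom handler, assuming TorchServe's BaseHandler (the request parsing and postprocessing below are purely illustrative; the built-in image and text handlers cover many cases without custom code):

import torch
from ts.torch_handler.base_handler import BaseHandler

class MyHandler(BaseHandler):
    # BaseHandler.initialize() loads the serialized model into self.model

    def preprocess(self, data):
        # Each request is a dict; assume the payload is a JSON list of floats
        rows = [row.get("data") or row.get("body") for row in data]
        return torch.as_tensor(rows, dtype=torch.float32)

    def inference(self, inputs):
        with torch.no_grad():
            return self.model(inputs)

    def postprocess(self, outputs):
        # TorchServe expects one JSON-serializable entry per request
        return outputs.tolist()

The handler is then packaged with the model via torch-model-archiver and served with torchserve --start; the exact flags depend on your model artifacts.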

229. Quantization using Pytorch

Quantization: Quantization is a technique that changes the data type used to compute a neural network for faster inference. After you’ve deployed your model, there is no need to backpropagate (which is sensitive to precision). This means that, if a slight decrease in…
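A minimal sketch of post-training dynamic quantization, which converts the weights of selected layer types to int8 for faster inference (the small Sequential model is illustrative):

import torch
import torch.nn as nn

# Illustrative float32 model
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# Quantize Linear layers to int8 weights; activations are quantized on the fly at runtime
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

inputs = torch.randn(1, 128)
print(quantized(inputs).shape)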

228. Pruning Using Pytorch

Pruning: State-of-the-art deep learning techniques rely on over-parameterized models, which makes deployment hard when the destination has limited resources. Pruning is used to study the differences between over-parameterized and under-parameterized networks and to sparsify your neural networks. In Pytorch, you…
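A minimal sketch using torch.nn.utils.prune to apply L1 unstructured pruning to one layer's weights (the Conv2d layer and the 30% amount are illustrative):

import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Conv2d(3, 16, kernel_size=3)

# Zero out the 30% of weights with the smallest L1 magnitude (applied through a mask)
prune.l1_unstructured(layer, name="weight", amount=0.3)
print([name for name, _ in layer.named_buffers()])  # contains 'weight_mask'

# Make the pruning permanent by removing the reparameterization
prune.remove(layer, "weight")
print(float((layer.weight == 0).float().mean()))  # roughly 0.3 of the weights are zero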

199. Simple SGD vs Cyclic Learning Rate

Simple SGD vs Cyclic Learning Rate: I compared the training speed between the two setups by training a UNet model.

Simple SGD:
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

Cyclic Learning Rate:
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-4, max_lr=0.1)

Cyclic Learning Rate…
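A minimal sketch of how the cyclic schedule is applied during training, assuming CyclicLR is stepped once per batch and base_lr is the lower bound of the cycle (the Linear stand-in and random data replace the UNet and its dataset):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # stand-in for the UNet
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-4, max_lr=0.1)
loss_fn = nn.MSELoss()

for step in range(100):
    x, y = torch.randn(16, 10), torch.randn(16, 1)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()  # CyclicLR is stepped after every batch, not every epoch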

198. Learning Rate Range Test

Learning Rate Range Test: The learning rate may be the most important hyper-parameter in deep learning, and you can use this test to find the right one: run your model and record accuracy/loss for several epochs while letting the learning…
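A minimal sketch of a learning rate range test: sweep the learning rate exponentially from a very small to a large value, record the loss at each step, and pick a rate just below the point where the loss stops improving or diverges (the bounds, model, and random data are illustrative):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()
min_lr, max_lr, num_steps = 1e-7, 1.0, 100
optimizer = torch.optim.SGD(model.parameters(), lr=min_lr)
gamma = (max_lr / min_lr) ** (1 / num_steps)  # multiply lr by gamma each step
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

history = []
for step in range(num_steps):
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    history.append((optimizer.param_groups[0]["lr"], loss.item()))
    scheduler.step()

# Inspect the (lr, loss) pairs; a good learning rate sits where the loss is still falling steeply
for lr, loss in history[::10]:
    print(f"lr={lr:.2e}  loss={loss:.4f}")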