Category Image Segmentation

362. Image Segmentation using K-means Clustering

Clustering — Depending on your data and objective, you may not even need to train a deep-learning model for image segmentation. Here is one way to segment an image using K-means clustering. Implementation: from sklearn.cluster import KMeans; from matplotlib.image import imread; import…
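Since the snippet above is cut off, here is a minimal sketch of the idea under my own assumptions: pixels are treated as color vectors, clustered with scikit-learn's KMeans, and each pixel is replaced by its cluster centroid. The function name and parameter defaults are mine, not from the original post.

```python
from sklearn.cluster import KMeans
import numpy as np

def kmeans_segment(image, n_clusters=3):
    # image: (H, W, 3) array of pixel colors
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(float)  # one row per pixel
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    # replace each pixel with its cluster centroid -> segmented image
    segmented = km.cluster_centers_[km.labels_].reshape(h, w, c)
    labels = km.labels_.reshape(h, w)  # per-pixel segment id
    return segmented, labels
```

With an image loaded via matplotlib.image.imread, calling kmeans_segment(img, n_clusters=3) returns a flattened-color version of the image plus a label map you can use as a coarse segmentation mask.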

335. DAFormer

Objective — DAFormer is an architecture proposed to improve domain adaptation for segmentation models. For the encoder, a hierarchical Transformer is used because it has been shown to be robust to domain shifts. The decoder applies context-aware fusion, which utilizes domain-robust context…

199. Simple SGD vs Cyclic Learning Rate

Simple SGD vs Cyclic Learning Rate — I compared the training speed of two optimizer setups by training a U-Net model. Simple SGD: optimizer = torch.optim.SGD(model.parameters(), lr=0.01). Cyclic learning rate: optimizer = torch.optim.SGD(model.parameters(), lr=0.01); scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-4, max_lr=0.1) — note that base_lr must be the lower bound and max_lr the upper bound. Cyclic Learning Rate…
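To make the schedule concrete, here is a plain-Python sketch of the triangular cycle that CyclicLR's default "triangular" mode follows (this is an illustration of the formula, not PyTorch's implementation; the base_lr, max_lr, and step_size values are just examples):

```python
import math

def triangular_lr(step, base_lr=1e-4, max_lr=0.1, step_size=2000):
    # Triangular cyclic schedule: the LR ramps linearly from base_lr
    # up to max_lr over step_size steps, then back down, repeating.
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)  # position within the cycle
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)
```

At step 0 the rate is base_lr, it peaks at max_lr after step_size steps, and returns to base_lr at the end of each full cycle.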

180. Polynomial Learning Rate

Polynomial Learning Rate — For deep learning models, the learning rate is one of the most important hyperparameters in the optimization process of any deep neural network. Polynomial learning rate decay is a proposed technique to decay the learning rate and optimize this process.…
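The common form of polynomial decay, as I understand it, scales the base learning rate by (1 - iter/max_iter) raised to a power (often 0.9 in segmentation papers). A small sketch, with parameter defaults chosen by me for illustration:

```python
def poly_lr(iteration, max_iter, base_lr=0.01, power=0.9):
    # Polynomial decay: starts at base_lr, decays smoothly to 0 at max_iter.
    return base_lr * (1 - iteration / max_iter) ** power
```

With power=1 this reduces to plain linear decay; powers below 1 keep the learning rate higher for longer before dropping off near the end of training.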

179. Transfer Learning PIDNet

Today I tried transfer learning with PIDNet (since I just learned about PIDNet). Compared to my first attempt, the output is getting slightly better, but it is still not at a level where it is actually useful.

176. CrossEntropyLoss for Segmentation Models

torch.nn.CrossEntropyLoss() — Using torch.nn.CrossEntropyLoss() as a loss function for semantic segmentation models was confusing to me at first, so I'd like to share what I learned. CrossEntropyLoss is for multi-class models and expects at least two arguments: one for the model prediction…
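The shape convention is the part that tripped me up: for segmentation, the prediction is raw logits of shape (N, C, H, W) and the target is integer class indices of shape (N, H, W), with no one-hot encoding and no softmax applied beforehand. A NumPy sketch of the same per-pixel cross-entropy computation (my own re-implementation to illustrate the shapes, not PyTorch's code):

```python
import numpy as np

def pixel_cross_entropy(logits, target):
    # logits: (N, C, H, W) raw scores; target: (N, H, W) int class indices.
    # Stable log-softmax over the class axis (axis=1).
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Pick the log-probability of the correct class at each pixel.
    picked = np.take_along_axis(log_probs, target[:, None], axis=1)[:, 0]
    return -picked.mean()  # mean negative log-likelihood over all pixels
```

The key point this mirrors: the target carries one class index per pixel, one dimension fewer than the prediction, and the softmax/log happen inside the loss.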