Category: Computer Vision

210. Graph Neural Networks

NNs and GNNs
NNs: Expect inputs as a single node.
GNNs: Input/output graphs instead of a singular node; expect inputs as an array of nodes. Consider an “adjacency matrix”, which stores information on the relationships between each node, besides the results from…
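Since the preview is truncated, here is a minimal, hypothetical sketch of one graph-convolution step in PyTorch. The layer name, feature sizes, and the tiny example graph are illustrative assumptions, not from the post: each node aggregates its neighbors’ features through the adjacency matrix, then applies a shared linear map.

import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    # Hypothetical one-step message passing: aggregate neighbor features
    # via the adjacency matrix, then apply a shared linear transform.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x:   (N, in_dim) array of node features
        # adj: (N, N) adjacency matrix (self-loops included)
        agg = adj @ x
        return torch.relu(self.linear(agg))

# Illustrative 4-node chain graph with self-loops
adj = torch.tensor([[1., 1., 0., 0.],
                    [1., 1., 1., 0.],
                    [0., 1., 1., 1.],
                    [0., 0., 1., 1.]])
x = torch.randn(4, 8)                # 4 nodes, 8 features each
out = SimpleGNNLayer(8, 16)(x, adj)  # (4, 16)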

208. Otsu Binarization

Otsu Binarization
Image thresholding is used during the preprocessing phase to binarize images based on pixel intensities. Otsu Binarization is one method to find an appropriate threshold by minimizing the weighted within-class variance (equivalently, maximizing the between-class variance). Here is an implementation example…
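The post’s own example is cut off in this preview; as a stand-in, here is a minimal sketch using OpenCV’s built-in Otsu option of cv2.threshold (the input file name is a placeholder):

import cv2

# Load as grayscale; "input.png" is a placeholder path
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Passing THRESH_OTSU makes OpenCV pick the threshold automatically;
# the 0 passed as the threshold argument is ignored
thresh, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(f"Otsu threshold: {thresh}")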

206. Image Generation Methods

4 Categories
There are mainly four image generation methods.
1. GAN: Train a GENERATOR that generates an image from “z”, and have a DISCRIMINATOR discriminate whether the generated image is real or not. The GENERATOR tries to learn to be…
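To make the GAN idea concrete, here is a hypothetical minimal training step in PyTorch; the toy network shapes and the random “real” batch are illustrative assumptions. The discriminator is pushed to output 1 for real images and 0 for generated ones, while the generator is pushed to make the discriminator output 1 on its samples.

import torch
import torch.nn as nn

# Toy networks; shapes are illustrative (784 = a flattened 28x28 image)
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(16, 784)  # stand-in for a batch of real images
z = torch.randn(16, 64)      # latent vectors "z"

# Discriminator step: label real as 1, generated as 0
fake = G(z)
loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake.detach()), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to make the discriminator call fakes real
loss_g = bce(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()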

205. CVAT for Annotating Data

CVAT
Here is one tool you can use to create annotation data for free: CVAT.
– You can annotate data for many computer-vision-related tasks (semantic segmentation, 3D object detection, etc.)
– You can export with multiple format options
Link: CVAT

202. PatchCore

PatchCore
PatchCore offers competitive inference time while achieving state-of-the-art performance for both anomaly detection and localization. PatchCore is presented as an effective method for:
– Maximizing nominal information available at test time
– Reducing bias towards ImageNet classes by using mid-level network…
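The scoring idea can be sketched in a few lines; everything below (feature dimension, bank size, the 14x14 patch grid) is an illustrative assumption rather than the paper’s exact pipeline. Patch features from normal images are stored in a memory bank, and a test patch is scored by its distance to the nearest stored patch.

import torch

memory_bank = torch.randn(10000, 256)  # stand-in for coreset-subsampled normal patch features
test_patches = torch.randn(196, 256)   # stand-in for one test image's patch features (14x14 grid)

dists = torch.cdist(test_patches, memory_bank)  # (196, 10000) pairwise distances
patch_scores = dists.min(dim=1).values          # distance to nearest normal patch
image_score = patch_scores.max()                # image-level anomaly score
anomaly_map = patch_scores.reshape(14, 14)      # coarse localization map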

201. PaDiM

PaDiM
Several methods have been proposed to combine anomaly detection (giving an anomaly score to an image) and localization (assigning each pixel an anomaly score to produce an anomaly map) in a one-class learning setting (deciding whether an image is normal or not). However, either they require…
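PaDiM’s core step can be sketched as fitting a Gaussian to the embeddings observed at one patch position across normal images, then scoring a test embedding by its Mahalanobis distance; the dimensions and regularization below are illustrative assumptions.

import torch

train_embeds = torch.randn(200, 64)  # stand-in: one patch position, 200 normal images, 64-dim embeddings
mean = train_embeds.mean(dim=0)
cov = torch.cov(train_embeds.T) + 0.01 * torch.eye(64)  # regularized sample covariance
cov_inv = torch.linalg.inv(cov)

test_embed = torch.randn(64)  # stand-in embedding at the same position in a test image
diff = test_embed - mean
score = torch.sqrt(diff @ cov_inv @ diff)  # Mahalanobis distance = anomaly score for that position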

200. SPADE

Anomaly Detection
Nearest-neighbor (kNN) methods utilizing deep pre-trained features exhibit very strong anomaly detection performance when applied to entire images. A limitation of kNN methods is the lack of a segmentation map describing where the anomaly lies inside the image…
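The image-level kNN score itself is simple to sketch; the feature sizes and K below are illustrative assumptions. A test image is scored by its mean distance to the K nearest normal training features.

import torch

train_feats = torch.randn(500, 512)  # stand-in for pre-trained features of normal training images
test_feat = torch.randn(1, 512)      # stand-in for one test image's feature vector

K = 5
dists = torch.cdist(test_feat, train_feats).squeeze(0)  # distances to every normal image
knn_dists, _ = dists.topk(K, largest=False)             # K nearest neighbors
score = knn_dists.mean()                                # image-level anomaly score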

199. Simple SGD vs Cyclic Learning Rate

Simple SGD vs Cyclic Learning Rate
I compared the training speed of a UNet model trained with plain SGD against SGD with a cyclic learning-rate schedule.

Simple SGD:

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

Cyclic Learning Rate:

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# base_lr is the lower bound of the cycle and max_lr the upper bound
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-4, max_lr=0.1)

Cyclic Learning Rate…