Category: Research Paper

221. SimSiam

Abstract: Siamese networks have become a common structure in various recent models for unsupervised visual representation learning. However, previous works rely on one of the following strategies to keep all outputs from "collapsing" to a constant: negative sampling, large batches, or momentum encoders. This paper…
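As a rough illustration of the symmetrized negative-cosine loss SimSiam optimizes (with stop-gradient applied to the target branch), here is a toy numpy sketch; the function names and vector sizes are illustrative assumptions, not code from the post:

```python
import numpy as np

def negative_cosine(p, z):
    """SimSiam loss term: negative cosine similarity between the
    predictor output p and the target embedding z. In the paper, z is
    under stop-gradient; in this numpy toy we simply never
    differentiate, so the detach is implicit."""
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return -float(np.dot(p, z))

def simsiam_loss(p1, p2, z1, z2):
    """Symmetrized loss over two augmented views of the same image."""
    return 0.5 * negative_cosine(p1, z2) + 0.5 * negative_cosine(p2, z1)

rng = np.random.default_rng(0)
p1, p2, z1, z2 = (rng.standard_normal(8) for _ in range(4))
print(simsiam_loss(p1, p2, z1, z2))
```

When predictor output and target align perfectly, the loss per term reaches its minimum of -1, which is why collapse to a constant would otherwise be a trivial solution.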

220. Self-Supervised Learning Meets Active Learning

The Three Stages This paper combines SimSiam (a self-supervised learning method for learning feature representations) with active learning to reduce labeling effort in three stages. Stage 1: Train the encoder with all available data. Stage 2: Fine-tune the SVM/classifier layer with…

215. Improving A3C

Improving the A3C Deep Reinforcement Learning Model In case you don't know what an A3C model is, please check my previous post. By replacing the hidden layer with an LSTM layer, you can improve the performance of the A3C model.
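For reference, the LSTM layer that replaces the plain hidden layer carries state across timesteps via gated updates. A single-step numpy sketch follows; the weight shapes and gate ordering are illustrative assumptions, not taken from the post:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x, h, c, W, U, b):
    """One LSTM cell step.
    x: input (D,), h: previous hidden state (H,), c: previous cell state (H,)
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias.
    Assumed gate order: input, forget, cell candidate, output."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2 * H])    # forget gate
    g = np.tanh(z[2 * H:3 * H])  # candidate cell state
    o = sigmoid(z[3 * H:4 * H])  # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

The recurrent state `h`/`c` is what lets the agent integrate information over time, which a stateless dense hidden layer cannot do.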

214. A3C

A3C: Asynchronous Advantage Actor-Critic A3C is a deep reinforcement learning method that consists of three main elements. Element 1: Asynchronous. Instead of having only one agent trying to reach the desired destination, this paper has multiple agents exploring the…
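The "advantage actor-critic" part of A3C can be sketched for a single transition. This toy snippet is a deliberately simplified assumption (one-step return, no entropy bonus, no asynchrony), just to show how the advantage couples the actor and critic losses:

```python
def a2c_losses(log_prob, value, reward, next_value, gamma=0.99):
    """One-step advantage actor-critic targets for a single transition.
    advantage   = r + gamma * V(s') - V(s)
    actor loss  = -log pi(a|s) * advantage  (advantage treated as a constant)
    critic loss = advantage ** 2
    """
    advantage = reward + gamma * next_value - value
    actor_loss = -log_prob * advantage
    critic_loss = advantage ** 2
    return actor_loss, critic_loss, advantage
```

In A3C proper, many asynchronous workers each compute gradients of these losses on their own rollouts and apply them to shared parameters.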

202. PatchCore

PatchCore PatchCore offers competitive inference times while achieving state-of-the-art performance for both anomaly detection and localization. PatchCore is presented as an effective method for: maximizing the nominal information available at test time; reducing bias toward ImageNet classes by using mid-level network…
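The memory-bank idea behind PatchCore can be sketched as nearest-neighbor search of test patch features against a bank of nominal (defect-free) patch features. This toy numpy version omits the coreset subsampling and feature extraction; shapes and names are illustrative:

```python
import numpy as np

def anomaly_score(patch_feats, memory_bank):
    """Score each test patch by its distance to the nearest nominal
    patch feature in the memory bank.
    patch_feats: (P, D) test-image patch features
    memory_bank: (M, D) nominal patch features from training images
    Returns per-patch scores (for localization) and the image-level
    score, taken here as the max patch score."""
    d = np.linalg.norm(patch_feats[:, None, :] - memory_bank[None, :, :], axis=-1)
    patch_scores = d.min(axis=1)
    return patch_scores, patch_scores.max()
```

Reshaping the per-patch scores back onto the feature-map grid yields the anomaly localization map.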

201. PaDiM

PaDiM Several methods have been proposed to combine anomaly detection (giving each image an anomaly score) and localization (assigning each pixel an anomaly score to output an anomaly map) in a one-class learning setting (deciding whether an image is normal or not). However, they either require…
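PaDiM models each patch position with a multivariate Gaussian fit on normal training images, then scores test patches by Mahalanobis distance. A minimal numpy sketch (the regularization constant added to the covariance is an assumption for numerical stability):

```python
import numpy as np

def fit_patch_gaussian(feats):
    """Fit a Gaussian to one patch position across training images.
    feats: (n_images, D) embeddings for this patch position."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 0.01 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    """Anomaly score of a test patch embedding x under the fitted Gaussian."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))
```

Computing this distance at every patch position gives the anomaly map; the image-level score is typically its maximum.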

200. SPADE

Anomaly Detection Nearest-neighbor (kNN) methods utilizing deep pre-trained features exhibit very strong anomaly detection performance when applied to entire images. A limitation of kNN methods is the lack of a segmentation map describing where the anomaly lies inside the image…
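The image-level kNN scoring referred to here can be sketched as the mean distance from a test image's deep feature to its k nearest neighbors among normal training features. A toy numpy version, with brute-force distances standing in for a real kNN index:

```python
import numpy as np

def knn_image_score(test_feat, train_feats, k=2):
    """Image-level anomaly score: mean distance from the test feature
    to its k nearest neighbors in the gallery of normal features.
    test_feat: (D,), train_feats: (N, D)."""
    d = np.linalg.norm(train_feats - test_feat, axis=1)
    return float(np.sort(d)[:k].mean())
```

A high score means the image looks unlike every normal training image; SPADE's contribution is extending this to pixel-level alignment for a segmentation map.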

197. Cyclical Learning Rate

Cyclical Learning Rate The learning rate is known to be one of the most important hyper-parameters for training deep neural networks. Unlike previous approaches, such as monotonically decreasing the learning rate, the cyclical learning rate practically eliminates the need…
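The triangular schedule from the cyclical-learning-rate paper can be written in a few lines; the `base_lr`, `max_lr`, and `step_size` values below are illustrative, not recommendations from the post:

```python
import math

def triangular_clr(it, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    """Triangular cyclical learning rate: the LR ramps linearly from
    base_lr up to max_lr over step_size iterations, then back down,
    repeating every 2 * step_size iterations."""
    cycle = math.floor(1 + it / (2 * step_size))
    x = abs(it / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)
```

For example, the schedule starts at `base_lr`, peaks at `max_lr` after `step_size` iterations, and returns to `base_lr` at `2 * step_size`, removing the need to hand-tune a fixed decay schedule.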