Category: Research Paper

342. Fine-Tune VGG16 with BatchNorm

Implementation: Here is one way you can fine-tune VGG16 while adding a batch normalization layer using Keras. 1. Import:
from keras.applications.vgg16 import VGG16
from keras.optimizers import SGD
from keras.layers import Input, Dense, Flatten, BatchNormalization, Activation
from keras.models import Sequential
from…
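Since the excerpt is cut off, here is a minimal sketch of how such a fine-tuning setup might look. The 10-class output, the 224×224 input size, and the choice to unfreeze only the last convolutional block are illustrative assumptions, not the post's exact code.

```python
from keras.applications.vgg16 import VGG16
from keras.optimizers import SGD
from keras.layers import Dense, Flatten, BatchNormalization, Activation
from keras.models import Sequential

# Load the convolutional base pre-trained on ImageNet, without the top classifier.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze everything except the last convolutional block (block5) for fine-tuning.
for layer in base.layers[:-4]:
    layer.trainable = False

model = Sequential([
    base,
    Flatten(),
    Dense(256),
    BatchNormalization(),              # normalize activations before the non-linearity
    Activation("relu"),
    Dense(10, activation="softmax"),   # hypothetical number of target classes
])

# A small learning rate with momentum is a common choice when fine-tuning pre-trained weights.
model.compile(optimizer=SGD(learning_rate=1e-3, momentum=0.9),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```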

340. Domain-Adversarial Training

Domain Adaptation: Domain adaptation is about learning to adapt to test data even when its distribution differs from that of the training data. This paper proposes a domain-adversarial learning method that achieves domain adaptation. Architecture: The architecture consists of…
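The central component of domain-adversarial training is a gradient reversal layer placed between the shared feature extractor and the domain classifier. Below is a minimal sketch in TensorFlow/Keras; the layer name `GradientReversal`, the feature sizes, and the loss setup are illustrative assumptions, not the paper's exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def grad_reverse(x, lam=1.0):
    """Identity in the forward pass; gradient multiplied by -lam in the backward pass."""
    @tf.custom_gradient
    def _reverse(x):
        def grad(dy):
            return -lam * dy
        return tf.identity(x), grad
    return _reverse(x)

class GradientReversal(layers.Layer):
    # Illustrative layer: flips the gradients flowing back from the domain classifier.
    def __init__(self, lam=1.0, **kwargs):
        super().__init__(**kwargs)
        self.lam = lam

    def call(self, x):
        return grad_reverse(x, self.lam)

# Hypothetical sizes: 32-dim inputs, 10 label classes, binary domain label.
inputs = layers.Input(shape=(32,))
features = layers.Dense(64, activation="relu")(inputs)                       # feature extractor
label_out = layers.Dense(10, activation="softmax", name="label")(features)   # label predictor
domain_feat = GradientReversal(lam=1.0)(features)
domain_out = layers.Dense(1, activation="sigmoid", name="domain")(domain_feat)  # domain classifier

model = Model(inputs, [label_out, domain_out])
model.compile(optimizer="adam",
              loss={"label": "sparse_categorical_crossentropy",
                    "domain": "binary_crossentropy"})
```

Because the reversed gradient pushes the feature extractor to confuse the domain classifier while still supporting the label predictor, the learned features become domain-invariant.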

336. RandAugment

Data Augmentation: Recent work has shown that data augmentation can significantly improve the generalization of deep learning models. Automated augmentation strategies have recently led to state-of-the-art results in computer vision tasks. However, when it comes to adopting…
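RandAugment collapses the augmentation search space to two hyperparameters: the number of operations N applied per image and a global magnitude M. Here is a minimal sketch using a small, illustrative PIL-based operation pool; the specific operations and the magnitude mapping are assumptions for demonstration, and the paper uses a larger pool.

```python
import random
from PIL import ImageEnhance, ImageOps

# Illustrative operation pool; the paper's pool contains many more transforms.
def _contrast(img, mag):
    return ImageEnhance.Contrast(img).enhance(1.0 + mag)

def _brightness(img, mag):
    return ImageEnhance.Brightness(img).enhance(1.0 + mag)

def _posterize(img, mag):
    bits = max(1, 8 - int(mag * 7))    # fewer bits = stronger effect
    return ImageOps.posterize(img, bits)

def _solarize(img, mag):
    threshold = int(256 - mag * 256)   # lower threshold = stronger effect
    return ImageOps.solarize(img, threshold)

OPS = [_contrast, _brightness, _posterize, _solarize]

def rand_augment(img, n=2, m=9, m_max=30):
    """Apply N ops sampled uniformly with replacement, each scaled by the global magnitude M."""
    mag = m / m_max                    # map the integer magnitude to [0, 1]
    for op in random.choices(OPS, k=n):
        img = op(img, mag)
    return img
```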

335. DAFormer

Objective: DAFormer is an architecture proposed to improve domain adaptation for segmentation models. A hierarchical Transformer is used as the encoder because it has been shown to be robust to domain shifts. The decoder applies context-aware fusion, which utilizes domain-robust context…

333. Domain Shift

Domain shift occurs when the distribution of the training set (source domain) differs from that of the test set (target domain), leading to poor results after deployment. Recent work shows that Transformers are more robust than CNNs to this kind of shift.

226. Training Methods for EBMs

Contrastive Method: Push down on the energy of training samples while pulling up on the energies of suitably placed contrastive samples. The disadvantage is that you always need contrastive samples in order to constrain the low-energy region. Regularized Method: Push…
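As a concrete example of the contrastive method, here is a minimal sketch of a hinge-style loss that pushes the energy of training samples down while pushing the energy of contrastive samples up until they clear a margin. The function names, the margin value, and the commented training step are illustrative assumptions, not a specific paper's code.

```python
import tensorflow as tf

def contrastive_ebm_loss(energy_net, x_data, x_contrastive, margin=1.0):
    """Hinge-style contrastive loss: lower the energies of data samples and
    raise the energies of contrastive samples until they exceed the margin."""
    e_pos = energy_net(x_data)          # energies of training samples
    e_neg = energy_net(x_contrastive)   # energies of contrastive samples
    return tf.reduce_mean(e_pos) + tf.reduce_mean(tf.nn.relu(margin - e_neg))

# Hypothetical usage inside a training step:
# with tf.GradientTape() as tape:
#     loss = contrastive_ebm_loss(energy_net, x_batch, x_corrupted)
# grads = tape.gradient(loss, energy_net.trainable_variables)
# optimizer.apply_gradients(zip(grads, energy_net.trainable_variables))
```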

225. Latent Variable Energy-Based Model

World Model: If you haven’t read my previous blog post about the “world model”, please go check it out. Training the world model is a prototypical example of self-supervised learning: learning the mutual dependencies between its inputs. It is said…