
342. Fine-Tune VGG16 with BatchNorm

Implementation
Here is one way you can fine-tune VGG16 while adding a batch normalization layer using Keras.
1. Import
from keras.applications.vgg16 import VGG16
from keras.optimizers import SGD
from keras.layers import Input, Dense, Flatten, BatchNormalization, Activation
from keras.models import Sequential
from…
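The preview cuts off after the imports, but a minimal end-to-end sketch of the idea might look like the following. The input size, the 10-class head, and the optimizer settings are assumptions for illustration, not the original post's exact configuration:

```python
from keras.applications.vgg16 import VGG16
from keras.optimizers import SGD
from keras.layers import Dense, Flatten, BatchNormalization, Activation
from keras.models import Sequential

# Load the VGG16 convolutional base pre-trained on ImageNet, without the top classifier.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the base for the first fine-tuning phase

model = Sequential([
    base,
    Flatten(),
    Dense(256),
    BatchNormalization(),       # normalize activations before the non-linearity
    Activation("relu"),
    Dense(10, activation="softmax"),  # hypothetical 10-class output head
])

# A low learning rate is typical when fine-tuning pre-trained weights.
model.compile(optimizer=SGD(learning_rate=1e-3, momentum=0.9),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```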

341. Deterministic vs Stochastic Models

Deterministic Models
Produce consistent outcomes for a given input, no matter how many times you recalculate. Deterministic models have the benefit of simplicity, which can make them easier to explain in some cases.
Stochastic Models
Possess some inherent randomness, which leads…
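A toy illustration of the difference (the linear model and the noise scale are invented for the example):

```python
import numpy as np

def deterministic_model(x):
    # The same input always yields the same output.
    return 2.0 * x + 1.0

rng = np.random.default_rng()

def stochastic_model(x):
    # Inherent randomness: repeated calls with the same input differ.
    return 2.0 * x + 1.0 + rng.normal(scale=0.1)

print(deterministic_model(3.0), deterministic_model(3.0))  # identical
print(stochastic_model(3.0), stochastic_model(3.0))        # differ
```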

340. Domain-Adversarial Training

Domain Adaptation
Domain adaptation is about learning to adapt to test data even when its distribution differs from that of the training data. This paper proposes a domain-adversarial learning method that achieves domain adaptation.
Architecture
The architecture consists of…
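The core trick in domain-adversarial training is a gradient reversal layer placed between the feature extractor and a domain classifier, so that minimizing the domain loss pushes the features toward domain invariance. A minimal PyTorch sketch (lambd, the adaptation weight, is an assumed hyperparameter):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambd in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed, scaled gradient flows back to the feature extractor; None for lambd.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage: features = encoder(x)
#        domain_logits = domain_classifier(grad_reverse(features, lambd))
# Training the domain classifier then adversarially updates the encoder.
```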

339. TorchScript

Running PyTorch without Python
TorchScript enables users to load PyTorch models in processes that have no Python dependency. Instead of running in the Python runtime, it converts the model so that it can run in an independent “TorchScript”…
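A minimal sketch of the export step (the model and the file name are made up for illustration):

```python
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = Net().eval()

# Convert to TorchScript by tracing with a sample input
# (torch.jit.script is the alternative for models with data-dependent control flow).
scripted = torch.jit.trace(model, torch.randn(1, 4))

# The saved archive can be loaded without Python, e.g. from C++ via torch::jit::load.
scripted.save("net.pt")
```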

338. Pearson vs. Spearman Correlation

Pearson Correlation
Evaluates the linear relationship between two variables. Ranges from -1 (the value of one variable increases while the other decreases) to 1 (the values of both variables increase together).
Spearman Rank-Order Correlation
Evaluates…
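A quick comparison using SciPy on a monotonic but non-linear relationship (the data is invented for illustration):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# y grows monotonically with x, but not linearly.
x = np.arange(1, 11)
y = x ** 3

r, _ = pearsonr(x, y)     # < 1: the relationship is not linear
rho, _ = spearmanr(x, y)  # = 1: the relationship is perfectly monotonic
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```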

337. Discretizing Data

Why
Here are several reasons why you may need to use discretization:
- It is often easier to understand continuous data when it is divided and stored into meaningful categories or groups.
- It is easier to find correlations with the target variables after…
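A sketch of two common discretization approaches using pandas (the ages, bin edges, and labels are assumptions for illustration):

```python
import pandas as pd

# Hypothetical ages to discretize.
ages = pd.Series([3, 17, 25, 42, 58, 71])

# Binning with explicit, meaningful edges.
groups = pd.cut(ages, bins=[0, 18, 40, 65, 100],
                labels=["child", "young adult", "adult", "senior"])
print(groups)

# Quantile-based binning: each bin receives roughly the same number of rows.
quartiles = pd.qcut(ages, q=4, labels=["Q1", "Q2", "Q3", "Q4"])
print(quartiles)
```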

336. RandAugment

Data Augmentation
Recent work has shown that data augmentation can significantly improve the generalization of deep learning models, and automated augmentation strategies have recently led to state-of-the-art results in computer vision tasks. However, when it comes to adopting…
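RandAugment collapses the automated-augmentation search space to just two hyperparameters: the number of operations applied per image and a single global magnitude. A minimal sketch using torchvision's implementation (the values are assumptions):

```python
from torchvision import transforms

# num_ops = N (transformations per image), magnitude = M (global strength).
augment = transforms.Compose([
    transforms.RandAugment(num_ops=2, magnitude=9),
    transforms.ToTensor(),
])
# Apply `augment` to training images (e.g. as the transform of a torchvision Dataset).
```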

335. DAFormer

Objective
DAFormer is an architecture proposed to improve domain adaptation for segmentation models. For the encoder, a hierarchical Transformer is used because it has been shown to be robust to domain shifts. The decoder applies context-aware fusion, which utilizes domain-robust context…

333. Domain Shift

Domain shift occurs when the distribution of the training set (source domain) differs from that of the test set (target domain), leading to poor results after deployment. Recent work shows that Transformers are more robust to domain shift than CNNs.
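A toy demonstration of the effect with scikit-learn (the Gaussian classes and the +2 feature shift are invented): a classifier fit on the source domain degrades sharply on the shifted target domain.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Source domain: two Gaussian classes centred at -1 and +1.
X_src = np.concatenate([rng.normal(-1, 1, (500, 2)), rng.normal(1, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

# Target domain: same labels, but every feature is shifted by +2.
X_tgt = X_src + 2.0

clf = LogisticRegression().fit(X_src, y)
print("source accuracy:", clf.score(X_src, y))  # high
print("target accuracy:", clf.score(X_tgt, y))  # collapses under the shift
```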