120. AI Learning How To Detect My Dog
This is how an AI learns to classify my dog. I found a blog post visualizing the feature maps of a classification model, so I tried it out! I'm going to use ResNet18 and visualize the feature maps for…
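The excerpt is truncated here, but as a rough sketch of the idea (not necessarily the approach from the linked post), a forward hook on one of ResNet18's layers is a common way to grab intermediate feature maps; the layer choice and the random input are placeholders:

```python
import torch
import torchvision.models as models

# weights="IMAGENET1K_V1" assumes a recent torchvision; older versions use pretrained=True
model = models.resnet18(weights="IMAGENET1K_V1")
model.eval()

feature_maps = {}

def save_output(name):
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()
    return hook

# Capture the activations of the first conv layer; any intermediate layer works the same way
model.conv1.register_forward_hook(save_output("conv1"))

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed photo of the dog
with torch.no_grad():
    model(x)

print(feature_maps["conv1"].shape)  # torch.Size([1, 64, 112, 112])
```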
I've quantitatively evaluated binary segmentation models using IoU as my metric. I chose IoU because it penalizes a single classification error more heavily than other metrics such as the F1 score, lowering the resulting value more.…
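As a quick sketch of that point (not code from the post itself), the same mistakes drop IoU further than they drop F1:

```python
import numpy as np

def iou(pred, target):
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union

def f1(pred, target):
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2 * intersection / (pred.sum() + target.sum())

pred   = np.array([1, 1, 0, 0, 1, 0])
target = np.array([1, 0, 0, 0, 1, 1])
print(iou(pred, target))  # 2 / 4 = 0.50
print(f1(pred, target))   # 4 / 6 ≈ 0.67 — the same errors cost IoU more
```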
I've compared image segmentation performance depending on the loss function: Mean Squared Error vs. Binary Cross Entropy. I was able to achieve a higher average IoU using BCE instead of MSE.
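The excerpt doesn't show the training setup, but a minimal sketch of the swap being compared might look like this (the tensor shapes are assumptions):

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 1, 128, 128)                    # raw model output
masks = torch.randint(0, 2, (4, 1, 128, 128)).float()   # binary ground-truth masks

probs = torch.sigmoid(logits)
mse_loss = nn.MSELoss()(probs, masks)   # regression-style loss on probabilities
bce_loss = nn.BCELoss()(probs, masks)   # classification-style loss on the same probabilities

print(mse_loss.item(), bce_loss.item())
```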
I've experimented with how standardization affects a binary segmentation model's performance by comparing two outputs: the inference result from a model trained with standardization using the ImageNet RGB mean and standard deviation, and the inference result from a model trained without standardization. I found that the…
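The ImageNet statistics mentioned are the usual published ones; a sketch of the "with standardization" preprocessing (the resize size is an assumption) would be:

```python
from torchvision import transforms

# Standard ImageNet channel statistics
imagenet_mean = [0.485, 0.456, 0.406]
imagenet_std = [0.229, 0.224, 0.225]

with_standardization = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),                            # scales pixels to [0, 1]
    transforms.Normalize(imagenet_mean, imagenet_std),
])

without_standardization = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
```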
When I was evaluating the model, both the F1 score and the Jaccard score were, for some reason, decreasing as the model finished more epochs. (Which is quite insane.) I've been checking the dimensions of the variables I was using to calculate…
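The excerpt cuts off before the cause, but since the suspicion is the shapes going into the metric calculation, here is a hedged sketch of one way to compute these scores safely: binarize the predictions, then flatten both tensors to 1-D before scoring.

```python
import torch
from sklearn.metrics import f1_score, jaccard_score

def segmentation_scores(logits, masks, threshold=0.5):
    # Binarize predictions and flatten both tensors to 1-D arrays;
    # mismatched or multi-dimensional shapes are an easy way to get
    # silently wrong F1 / Jaccard values.
    preds = (torch.sigmoid(logits) > threshold).long().flatten().cpu().numpy()
    labels = masks.long().flatten().cpu().numpy()
    assert preds.shape == labels.shape
    return f1_score(labels, preds), jaccard_score(labels, preds)
```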
Depending on the loss function, you might need to change the shape of the tensor coming out of the model. In my case, I'm currently doing binary semantic segmentation using PyTorch, and the output of the model is…
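The sentence is cut off, but the general point can be sketched: in PyTorch, BCE-style losses and CrossEntropyLoss expect different output/target shapes (the dimensions below are placeholders):

```python
import torch
import torch.nn as nn

masks = torch.randint(0, 2, (4, 128, 128))   # (N, H, W) ground-truth masks

# Single-channel output + BCEWithLogitsLoss: prediction and target shapes must match,
# so squeeze the channel dimension (or unsqueeze the mask instead)
logits_1ch = torch.randn(4, 1, 128, 128)
bce = nn.BCEWithLogitsLoss()(logits_1ch.squeeze(1), masks.float())

# Two-channel output + CrossEntropyLoss: (N, C, H, W) logits and (N, H, W) integer targets
logits_2ch = torch.randn(4, 2, 128, 128)
ce = nn.CrossEntropyLoss()(logits_2ch, masks.long())
```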
I've learned that, for semantic segmentation, Binary Cross Entropy, rather than MSE, might be the loss function to test out first. Motivation: when you check the derivative of the MSE (Mean Squared Error) cost function, you'll notice that…
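The derivation is truncated, but the usual argument is that with a sigmoid output, MSE's gradient carries an extra p·(1 − p) factor that vanishes when the prediction saturates, while BCE's gradient stays proportional to the error. A tiny sketch of that behavior:

```python
import torch

# A confidently wrong prediction: the target is 1, but the pre-activation is very negative
z = torch.tensor([-6.0], requires_grad=True)
y = torch.tensor([1.0])

p = torch.sigmoid(z)
((p - y) ** 2).mean().backward()
print("MSE grad:", z.grad.item())   # ≈ -0.005: scaled down by p * (1 - p)

z.grad = None
p = torch.sigmoid(z)
torch.nn.functional.binary_cross_entropy(p, y).backward()
print("BCE grad:", z.grad.item())   # ≈ -0.998: proportional to (p - y), so learning keeps moving
```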
For a TensorFlow model, you can use .summary() to see the structure of your model. In PyTorch there is no built-in equivalent, so you can use torchinfo instead. This was really useful when I wanted to do transfer learning but didn't…
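For reference, a minimal torchinfo usage sketch; the model and input size here are just examples:

```python
import torchvision.models as models
from torchinfo import summary

model = models.resnet18()
# Prints a Keras-style, layer-by-layer table with output shapes and parameter counts
summary(model, input_size=(1, 3, 224, 224))
```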
When you want to load your data for training, the data preparation pipeline would be the following: randomly shuffle your data, turn it into batches, and iterate. Although you could do this manually, when the data size grows this can…
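The line is cut off, but the tool being hinted at is presumably PyTorch's DataLoader; a minimal sketch with toy tensors standing in for real data:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy tensors standing in for real images and masks
images = torch.randn(100, 3, 64, 64)
masks = torch.randint(0, 2, (100, 1, 64, 64)).float()
dataset = TensorDataset(images, masks)

# DataLoader handles the shuffling, batching, and iteration in one place
loader = DataLoader(dataset, batch_size=8, shuffle=True)
for batch_images, batch_masks in loader:
    pass  # forward pass / loss / backward would go here
```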