Kyosuke

94. TorchInfo

For a TensorFlow model, you can call .summary() to see the structure of your model. PyTorch has no built-in equivalent, so you can use torchinfo instead. This was really useful when I wanted to do transfer-learning but didn’t…

93. Dataset ⇒ DataLoader Pipeline in PyTorch

Here is one way to prepare your data using DataLoader in PyTorch. 1. Create a Custom Dataset Class. When using a custom dataset, the following three functions have to be overridden. class Dataset: # Initialize the dataset with the input data and…
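The three overridden methods can be sketched like this (class and variable names are illustrative, not from the post):

```python
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, features, labels):
        # Initialize the dataset with the input data and labels.
        self.features = torch.as_tensor(features, dtype=torch.float32)
        self.labels = torch.as_tensor(labels, dtype=torch.long)

    def __len__(self):
        # DataLoader uses this to know how many samples exist.
        return len(self.features)

    def __getitem__(self, idx):
        # Return one (sample, label) pair by index.
        return self.features[idx], self.labels[idx]

ds = MyDataset([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]], [0, 1, 0])
```

Once these three methods exist, the dataset can be handed directly to a DataLoader for shuffling and batching.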

92. DataLoader in PyTorch

When you want to load your data for training, the data preparation pipeline would be the following: randomly shuffle your data, turn it into batches, and iterate. Although you can do this manually, when the data size becomes large this can…
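The shuffle-batch-iterate pipeline above can be sketched with torch.utils.data.DataLoader (toy data and sizes, chosen only for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy data: 10 samples with 4 features each.
features = torch.arange(40, dtype=torch.float32).reshape(10, 4)
labels = torch.arange(10)
dataset = TensorDataset(features, labels)

# DataLoader reshuffles every epoch and yields batches automatically.
loader = DataLoader(dataset, batch_size=4, shuffle=True)

for batch_x, batch_y in loader:
    pass  # batch_x has shape (4, 4); the last batch may be smaller
```

With `shuffle=True` the sample order differs each epoch, which is usually what you want for training; set it to False for validation data.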

91. The Innovator’s Dilemma by Clayton M. Christensen

These are my takeaways from the book The Innovator’s Dilemma by Clayton M. Christensen. Why was it that firms that could be esteemed as aggressive, innovative, customer-sensitive organizations could ignore or attend belatedly to technological innovations with enormous strategic importance?…

90. You’re Not Listening by KATE MURPHY

These are my takeaways from the book You’re Not Listening by Kate Murphy. It is said that intuition, often called the sixth sense, is nothing more than recognition. The more people you listen to, the more aspects of humanity…

89. Max-Norm Regularization

Another useful regularization technique is called Max-Norm Regularization. Implementation: layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal", kernel_constraint=keras.constraints.max_norm(1.)) By setting the hyper-parameter r, you can set a maximum value for the norm of the weights to prevent over-fitting. References: Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow,…
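What the constraint does can be sketched in plain NumPy: after each weight update, any weight column whose L2 norm exceeds r is rescaled back to norm r. This is a sketch of the idea, not the Keras internals, and the function name is illustrative:

```python
import numpy as np

def apply_max_norm(w, r=1.0, axis=0):
    # L2 norm of each slice along `axis` (for a Dense kernel of
    # shape (inputs, units), axis=0 gives one norm per unit).
    norms = np.sqrt(np.sum(np.square(w), axis=axis, keepdims=True))
    # Keep norms that are already <= r; cap the rest at r.
    desired = np.clip(norms, 0.0, r)
    # Rescale the weights toward the capped norm.
    return w * desired / (norms + 1e-7)

rng = np.random.default_rng(0)
w = rng.normal(size=(5, 3)) * 3.0      # deliberately oversized weights
w_constrained = apply_max_norm(w, r=1.0)
```

In Keras the `kernel_constraint` is applied automatically after every training step, so the weights can never drift beyond the chosen radius r.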

88. Monte Carlo Dropout

Dropout is one of the most popular regularization techniques for deep neural networks. Monte Carlo Dropout may help boost a dropout model even further. Full implementation: ys = np.stack([model(X_test, training=True) for sample in range(100)]) y = ys.mean(axis=0) The predict() method returns…
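The idea can be sketched in plain NumPy with a hypothetical one-layer model: keep dropout active at prediction time, run many stochastic forward passes, and average them (all names below are illustrative, not the post's code):

```python
import numpy as np

rng = np.random.default_rng(42)

# A hypothetical linear "model" standing in for a trained network.
W = rng.normal(size=(4, 2))

def forward(x, p=0.5):
    # Inverted dropout kept active at inference time:
    # randomly zero inputs, rescale the survivors by 1/(1-p).
    mask = rng.random(x.shape) > p
    return (x * mask / (1 - p)) @ W

X_test = rng.normal(size=(8, 4))

# Monte Carlo Dropout: stack 100 stochastic predictions and average.
ys = np.stack([forward(X_test) for _ in range(100)])
y = ys.mean(axis=0)            # MC estimate of the prediction
uncertainty = ys.std(axis=0)   # spread across passes ~ uncertainty
```

The averaged prediction is usually more robust than a single pass, and the standard deviation across passes gives a free uncertainty estimate.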

85. Keras Model Transfer-Learning

Load model: model_A = keras.models.load_model("model_A.h5") Clone architecture: model_A_clone = keras.models.clone_model(model_A) Clone weights: model_A_clone.set_weights(model_A.get_weights()) Delete last layer: model_B = keras.models.Sequential(model_A_clone.layers[:-1]) Add final layer => change to a binary classifier: model_B.add(keras.layers.Dense(1, activation="sigmoid")) You can prevent the copied layers from being affected when training for…
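The steps above can be sketched end to end, with layer freezing so the copied layers are not updated while the new head trains. This is a hedged sketch assuming TensorFlow/Keras is installed; the layer sizes are illustrative, and a freshly built model stands in for the loaded model_A.h5:

```python
from tensorflow import keras

# Stand-in for keras.models.load_model("model_A.h5").
model_A = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])

# Clone the architecture, then copy the trained weights.
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())

# Reuse everything except the last layer, then add a binary head.
model_B = keras.Sequential(model_A_clone.layers[:-1])
model_B.add(keras.layers.Dense(1, activation="sigmoid"))

# Freeze the reused layers so training only updates the new head.
for layer in model_B.layers[:-1]:
    layer.trainable = False
```

Cloning first means later training of model_B cannot silently corrupt model_A, since the two models no longer share weight tensors.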