49. Comparing Model Performance: Google Colab and Jetson TX2

EVALUATING PERFORMANCE
I’ve been able to set up several working routes for deploying models with the DeepStream app. So for my next step, before retraining any models, I compared how the same model performs on different GPUs.

METRICS & NOTES
– Model: YOLOv3
– Hardware: Jetson TX2 vs. Google Colab with a Tesla K80 GPU (12 GB)
– Speed: compared by FPS (see the timing sketch after this list)
– Accuracy: compared by eyeballing the output videos (later on, I learned about Mean Average Precision, so maybe I can try that out for my next comparison)
– I decided to convert the model to a TensorRT engine only on the Jetson TX2, to confirm that TensorRT actually improves performance (a conversion sketch also follows below)
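
Since the two environments report speed differently, here is a minimal sketch of how average FPS can be measured over a video. It uses OpenCV’s DNN module purely as a stand-in for the actual DeepStream/Colab pipelines, and the file names (yolov3.cfg, yolov3.weights, sample.mp4) are placeholders, not my exact setup.

```python
# Minimal FPS-measurement sketch using OpenCV's DNN module as a
# stand-in; the real DeepStream/Colab pipelines report FPS themselves.
import time
import cv2

# Placeholder file names, not my actual configuration
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
cap = cv2.VideoCapture("sample.mp4")

frames, start = 0, time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    # Standard YOLOv3 preprocessing: scale to [0, 1], resize to 416x416
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    net.forward(net.getUnconnectedOutLayersNames())  # run detection
    frames += 1

elapsed = time.time() - start
print(f"Average FPS: {frames / elapsed:.2f}")
```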
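And here is a rough sketch of building a TensorRT engine offline from an ONNX export of the weights. The API names follow the older TensorRT 7.x Python bindings that shipped with JetPack on the TX2 (newer releases renamed some of these calls), and yolov3.onnx is an assumed file name; treat this as illustrative rather than the exact steps I ran.

```python
# Rough sketch: build a serialized TensorRT engine from an ONNX file.
# Assumes TensorRT 7.x-era Python bindings and an ONNX export of the
# YOLOv3 weights ("yolov3.onnx" is a placeholder name).
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="yolov3.onnx", fp16=True):
    builder = trt.Builder(TRT_LOGGER)
    # Explicit-batch networks are required for ONNX parsing
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(str(parser.get_error(0)))
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB of build workspace
    if fp16 and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # the TX2 benefits from FP16
    return builder.build_engine(network, config)

engine = build_engine()
with open("yolov3.engine", "wb") as f:
    f.write(engine.serialize())  # a serialized engine DeepStream can load
```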

RESULTS
Just by watching the output videos, I couldn’t see much of a difference between the two. But looking at the FPS, the Jetson TX2 ran slightly faster, so I was able to confirm that TensorRT did increase the performance.
Now I can say that, as long as I convert the model to a TensorRT engine, I can get better performance even on a Jetson.

THOUGHTS
It was my first time running inference with an object detection model on Google Colab, so I learned a lot from it. It feels great to understand the AI world little by little, and I’m very excited for what lies ahead!
Since I was able to A) deploy several models on DeepStream and B) compare their performance, I now think I’m ready to actually retrain a model and deploy it through the DeepStream pipeline!