DEEPSTREAM
I was finally able to deploy a simple model using DeepStream, so I’d like to share the steps here.
PREREQUISITES
– DeepStream 5.0.1
– Jetpack 4.4
– Tensorflow 1.15.0
IMPLEMENTATION
■ Install requirements for TensorFlow1.x on Jetson
sudo apt-get install nano
sudo apt-get update
sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
sudo apt-get install python3-pip
sudo pip3 install -U pip testresources setuptools==49.6.0
sudo pip3 install -U --no-deps numpy==1.19.4 future==0.18.2 mock==3.0.5 keras_preprocessing==1.1.2 keras_applications==1.0.8 gast==0.4.0 protobuf pybind11 cython pkgconfig
sudo env H5PY_SETUP_REQUIRES=0 pip3 install -U h5py==3.1.0
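If you want, you can quickly confirm the pinned packages before moving on. This check inside python3 is optional and simply reflects the versions installed by the commands above:
# Optional sanity check of the packages pinned in the commands above
import numpy, h5py
print(numpy.__version__)  # expect 1.19.4
print(h5py.__version__)   # expect 3.1.0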
■ Install TensorFlow 1.15.0: Be careful with the version of JetPack
My JetPack is 4.4, so the index URL ends in v44.
Check how to match the versions here:
https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html
sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'
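Once pip finishes, it’s worth checking that Python really picks up TensorFlow 1.15 with GPU support before going further. A quick optional check inside python3:
# Confirm the TF 1.x install; is_gpu_available() can take a few seconds
# the first time while CUDA initializes
import tensorflow as tf
print(tf.__version__)              # expect 1.15.x
print(tf.test.is_gpu_available())  # expect True on the Jetson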
■ Get the Pre-trained model: This time I’m using ssd_mobilenet_v2_coco_2018_03_29
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
tar xfvz ssd_mobilenet_v2_coco_2018_03_29.tar.gz
cd ssd_mobilenet_v2_coco_2018_03_29
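Before converting anything, you can peek inside the frozen graph to confirm it loads and to see the top-level node namespaces (Preprocessor, Postprocessor, MultipleGridAnchorGenerator, and so on) that the config.py from the next step will remap. A small optional sketch using the TF 1.x API:
# Optional: load frozen_inference_graph.pb and list its top-level namespaces
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with open("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

print(len(graph_def.node), "nodes")
print(sorted({node.name.split("/")[0] for node in graph_def.node}))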
■ Get config.py
This file is used to convert frozen_inference_graph.pb (the file you got in the previous step) into a .uff file.
wget -O ./config.py https://forums.developer.nvidia.com/uploads/short-url/xQohdgZxqhw4l7rQxsBQPeI22lm.txt
config.py will look like this:
import graphsurgeon as gs
import tensorflow as tf

# Placeholder that replaces the TF input/preprocessing nodes (NCHW, 1x3x300x300)
Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])

# GridAnchor_TRT generates the SSD prior boxes
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    numLayers=6,
    minSize=0.2,
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])

# NMS_TRT replaces the TF Postprocessor (box decoding + non-maximum suppression)
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,
    ###########################################
    # inputOrder is the order of the loc/conf/priorbox inputs to NMS_TRT.
    # The original config uses [0, 2, 1]; [1, 0, 2] is what matched this
    # ssd_mobilenet_v2 export (see the check after this file).
    #inputOrder=[0, 2, 1],
    inputOrder=[1, 0, 2],
    ###########################################
    confSigmoid=1,
    isNormalized=1,
    scoreConverter="SIGMOID")

concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

# Map whole TF namespaces onto the plugin nodes defined above
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    # "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf
}

def preprocess(dynamic_graph):
    # Drop Assert nodes, bypass Identity nodes, collapse the mapped namespaces
    # into the plugin nodes, then tidy up the graph outputs.
    all_assert_nodes = dynamic_graph.find_nodes_by_op("Assert")
    dynamic_graph.remove(all_assert_nodes, remove_exclusive_dependencies=True)
    all_identity_nodes = dynamic_graph.find_nodes_by_op("Identity")
    dynamic_graph.forward_inputs(all_identity_nodes)
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)
    dynamic_graph.find_nodes_by_op("NMS_TRT")[0].input.remove("Input")
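By the way, if you are not sure which inputOrder your own export needs, you can apply the same namespace collapse and look at the inputs of the resulting NMS node. This is just a quick sketch of mine (check_nms_inputs.py is a made-up name, not part of the official sample); the positions of concat_box_loc, concat_box_conf and concat_priorbox in the printed list are what inputOrder has to describe (extra entries such as Input are removed by config.py itself).
# check_nms_inputs.py -- run it next to frozen_inference_graph.pb and config.py
import graphsurgeon as gs
from config import namespace_plugin_map  # reuse the mapping from config.py above

graph = gs.DynamicGraph("frozen_inference_graph.pb")

# Mirror the first steps of preprocess() in config.py
graph.remove(graph.find_nodes_by_op("Assert"), remove_exclusive_dependencies=True)
graph.forward_inputs(graph.find_nodes_by_op("Identity"))
graph.collapse_namespaces(namespace_plugin_map)

# Print the inputs of the collapsed NMS node; their order determines inputOrder
print(list(graph.find_nodes_by_op("NMS_TRT")[0].input))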
■ Use convert_to_uff.py (which is already installed) with config.py (the one you’ve just made) to run the conversion
python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -o frozen_inference_graph.uff -O NMS -p ./config.py
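If you prefer driving the conversion from Python instead of the command line, the uff package that ships with TensorRT exposes the same conversion. The sketch below mirrors the command above, assuming uff.from_tensorflow_frozen_model accepts the preprocessor and output_filename keyword arguments as in the TensorRT Python API; the command-line tool is what I actually used.
# Hypothetical Python equivalent of the convert_to_uff.py command above
import uff

uff.from_tensorflow_frozen_model(
    frozen_file="frozen_inference_graph.pb",
    output_nodes=["NMS"],                           # same as -O NMS
    preprocessor="./config.py",                     # same as -p ./config.py
    output_filename="frozen_inference_graph.uff")   # same as -o ...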
■ Copy the DeepStream sample sources into your working directory and move into the SSD sample for the next step
cp -r /opt/nvidia/deepstream/deepstream/sources ./
cd sources/objectDetector_SSD/
■ Check your CUDA version and export it
This will be referenced when you build the custom parser in the next step.
nvcc -V
■ My CUDA version was 10.2, so…
export CUDA_VER=10.2
echo $CUDA_VER
■ Build the custom parser and verify that libnvdsinfer_custom_impl_ssd.so has been created
This library defines the post-processing you want, such as parsing the detection boxes from the model output. For now, I’m using the parser that already comes with the sample for simplicity.
make -C nvdsinfer_custom_impl_ssd
ls -l nvdsinfer_custom_impl_ssd/
■ Copy the created .so file to the appropriate directory
cp nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so ../../
■ Change directory, download the label data, and insert ‘Undefined’ as the first row
-> Adding this entry is required for DeepStream
cd ../../
wget https://raw.githubusercontent.com/amikelive/coco-labels/master/coco-labels-paper.txt
sed -i '1s/^/Undefined\n/' coco-labels-paper.txt
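To make sure the label file ended up the way DeepStream expects, i.e. the added Undefined entry followed by the original COCO labels, here is a quick optional check in python3:
# Verify the label file: 'Undefined' first, then the COCO labels
with open("coco-labels-paper.txt") as f:
    labels = [line.strip() for line in f]

print(len(labels))   # the original COCO label count plus the added 'Undefined'
print(labels[:3])    # ['Undefined', 'person', 'bicycle']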
■ Create the required config file for the DeepStream pipeline
The file frozen_inference_graph.uff_b1_gpu0_fp32.engine will be created automatically on the first run, so don’t worry about not having it yet.
When you start the application, it reads this config file first and then config_infer.txt (the settings for inference), which we’ll create in the next step.
touch app_config.txt && sudo nano app_config.txt
app_config.txt will look something like this:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
[tiled-display]
enable=1
rows=1
columns=1
width=640
height=480
gpu-id=0
nvbuf-memory-type=0
[source0]
enable=1
#Type 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0
[sink0]
enable=1
type=3
sync=1
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
overlay-id=1
container=1 #1=mp4,2=mkv
codec=1 #1=h264,2=h265
output-file=./out.mp4
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=640
height=480
enable-padding=0
nvbuf-memory-type=0
# config-file : Mandatory
[primary-gie]
enable=1
gpu-id=0
model-engine-file=frozen_inference_graph.uff_b1_gpu0_fp32.engine
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer.txt
[tracker]
enable=0
[tests]
file-loop=0
■ Create the required config file for inference
touch config_infer.txt && sudo nano config_infer.txt
config_infer.txt will look like this:
[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
model-engine-file=frozen_inference_graph.uff_b1_gpu0_fp32.engine
labelfile-path=coco-labels-paper.txt
uff-file=frozen_inference_graph.uff
infer-dims=3;300;300
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=91
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=libnvdsinfer_custom_impl_ssd.so
[class-attrs-all]
threshold=0.6
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
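A short note on net-scale-factor and offsets in [property]: nvinfer preprocesses each pixel as y = net-scale-factor * (x - offset), so 1/127.5 together with offsets of 127.5 maps 8-bit pixel values from [0, 255] into roughly [-1, 1], which matches the ssd_mobilenet_v2 preprocessing. A tiny check in python3:
# net-scale-factor = 1/127.5 with offsets = 127.5 maps [0, 255] -> [-1, 1],
# since nvinfer computes y = net_scale_factor * (x - offset)
scale = 1.0 / 127.5
print(scale)                                        # 0.00784313725..., the value used above
print(scale * (0 - 127.5), scale * (255 - 127.5))   # -1.0 1.0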
■ Run the App
deepstream-app -c app_config.txt
The first run takes a while because TensorRT builds frozen_inference_graph.uff_b1_gpu0_fp32.engine; after that finishes, a file named out.mp4 (the output-file set in [sink0]) should appear in your current directory!