~~NOTOC~~
====== Hello Object Detection ======
===== Get jetson-inference repository =====
<code bash>
$ cd
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
</code>
===== Create workspace =====
Create a workspace directory in your home folder; it will be mounted into the container in the next step.
<code bash>
$ mkdir ~/jetson_ws
</code>
===== Run the container =====
Change into the jetson-inference repository and start the container. The ''--volume'' flag mounts the host workspace into the container, so files saved under ''/jetson-inference/jetson_ws'' inside the container persist on the host.
<code bash>
$ cd ~/jetson-inference
$ docker/run.sh --volume /home/jetson/jetson_ws:/jetson-inference/jetson_ws
</code>
===== Code =====
<code python>
import jetson.inference
import jetson.utils

# Load the SSD-Mobilenet-v2 detection network;
# detections below 50% confidence are discarded
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# Open the camera at /dev/video0 and an OpenGL window for display
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()    # grab a frame
    detections = net.Detect(img, width, height)  # run detection (overlays boxes on img)
    display.RenderOnce(img, width, height)       # show the frame
    display.SetTitle("Object Detection | {:.0f} FPS".format(net.GetNetworkFPS()))
</code>
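Each element of ''detections'' describes one detected object: its class, confidence, and bounding box. The kind of post-processing you might do on that list can be sketched in plain Python; the ''Detection'' namedtuple below is a hypothetical stand-in for illustration, not the actual jetson.inference class, though the real detection objects expose fields with these names.

<code python>
from collections import namedtuple

# Hypothetical stand-in for a jetson-inference detection record
Detection = namedtuple("Detection", "ClassID Confidence Left Top Right Bottom")

def confident_detections(detections, min_confidence=0.75):
    """Keep only detections at or above min_confidence."""
    return [d for d in detections if d.Confidence >= min_confidence]

# Example data, not real network output
detections = [
    Detection(1, 0.92, 100, 80, 300, 400),
    Detection(3, 0.55, 400, 200, 520, 330),  # low-confidence hit
]

for d in confident_detections(detections):
    print(f"class {d.ClassID} at ({d.Left},{d.Top})-({d.Right},{d.Bottom}) "
          f"confidence {d.Confidence:.2f}")
</code>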
===== Re-build the container or build your own =====
==== Set NVIDIA Container Runtime for Docker ====
In the file ''/etc/docker/daemon.json'', set the ''default-runtime'' to ''nvidia'' by adding the following JSON, then restart the Docker daemon (''sudo systemctl restart docker'') for the change to take effect:
<code json>
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
</code>
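A malformed ''daemon.json'' will keep the Docker daemon from starting, so it is worth checking that the file parses before restarting the service. One way is a short standard-library Python script; the snippet below embeds the JSON from above as a string for illustration, but in practice you would read it from ''/etc/docker/daemon.json''.

<code python>
import json

# daemon.json content from above, inlined for illustration;
# in practice: config = json.load(open("/etc/docker/daemon.json"))
daemon_json = """
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
"""

config = json.loads(daemon_json)  # raises ValueError if the JSON is malformed
print(config["default-runtime"])  # nvidia
</code>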
==== Build the container ====
Run the build script from the root of the repository:
<code bash>
$ cd ~/jetson-inference
$ docker/build.sh
</code>