This is a node for the Zauberzeug Learning Loop which provides a RESTful API for edge devices to retrieve inferences. It is intended to run on NVIDIA Jetson (>= r32.4.4), utilizing tkDNN.
- Active Learning for the Zauberzeug Learning Loop (upload images & detections with bad predictions)
- RESTful interface to retrieve predictions
Runs only on NVIDIA Jetson (Tegra architecture).
Pull and start the detector. The `/data` bind mount makes the model persistent and should contain a `model.rt` file; `NVIDIA_VISIBLE_DEVICES=all` enables hardware acceleration; `ORGANIZATION` and `PROJECT` define the organization and project for which the detector should run:

```bash
docker pull zauberzeug/tkdnn_detection_node:nano-r32.5.0 # make sure you have the latest image
docker run -it --rm --runtime=nvidia -p 80:80 \
  -v $HOME/data:/data \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e ORGANIZATION=zauberzeug \
  -e PROJECT=demo \
  zauberzeug/tkdnn_detection_node:nano-r32.5.0
```
Once the container is up and running, you can get detections through the RESTful API:

```bash
curl --request POST -H 'mac: FF:FF:FF:FF:FF' -F 'file=@test.jpg' localhost/detect
```
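For repeated requests, the curl call above can be wrapped in a small helper. This is only a sketch: `detect` is a hypothetical function name, and the `MAC` and `HOST` defaults are placeholders to be replaced with your own values.

```shell
# Hypothetical helper around the curl call above; MAC address and host
# are placeholders -- override them via the MAC and HOST variables.
detect() {
  local image=$1
  curl --silent --request POST \
       -H "mac: ${MAC:-FF:FF:FF:FF:FF}" \
       -F "file=@$image" \
       "${HOST:-localhost}/detect"
}
```

With the node running, `detect test.jpg` then prints the detections returned by the API.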
On startup the image expects a valid `model.rt` file, a `training.cfg` and a `names.txt` in the `/data` directory. These are automatically provided by converter nodes.
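A quick pre-flight check for these files could look like the following sketch. The `check_data_dir` helper is hypothetical, and the default path assumes the `/data` mount point described above:

```shell
# Verify that the data directory contains everything the node expects.
check_data_dir() {
  local dir=${1:-/data}
  local f
  for f in model.rt training.cfg names.txt; do
    [ -f "$dir/$f" ] || { echo "missing: $dir/$f" >&2; return 1; }
  done
  echo "all required files present in $dir"
}
```

Run `check_data_dir /data` inside the container, or `check_data_dir $HOME/data` on the host, before starting the detector.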
Put a TensorRT model `model.rt` and a `names.txt` with the category names into the `data` folder. You can use the `download_model_for_testing.sh` helper.
Build the container with `./docker.sh build` and run it with `./docker.sh run`. Now you can connect to the container with VS Code and modify the code.