Experience with Deepstack on Blue Iris, and Jetson Nano 2GB

Hello everyone,

I took it upon myself to do a little testing, since there is so much scattered information on this and I could not find anything concrete regarding performance. My specific use case is maximum energy efficiency with acceptable performance, using AI for motion triggers on my cameras.

I am running 10 Annke C800s flashed to Hikvision firmware, providing 8MP main streams with 0.3MP sub streams. $650 on eBay

My Blue Iris system is an i7-6700 @ 3.4 GHz with no frequency throttling, 16 GB RAM, an NVMe SSD, and a 1 TB HDD on an EliteDesk G2 Mini platform (for energy conservation). $250 on eBay

My Jetson Nano is the 2GB model with a 64 GB SD card and a fan (which seemed to help a lot), running in 10W mode. $60 on Amazon

When I ran Deepstack CPU on the G2 with the following settings:

Normal Run:

Mode: High
Trigger to confirm: person,cat,dog
Min confidence: 55%
+ real time images: 5
analyze one each: 1 sec
NOT using the main stream; analysis uses the 0.3MP sub stream
Typical response times were 500-1,500 ms.
CPU usage would occasionally spike to 100% if a scan ran during a motion trigger event, causing choppy video recording.
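Those Blue Iris AI settings boil down to: grab a snapshot roughly once a second, up to 5 times, and confirm the alert as soon as any listed object comes back at or above the confidence floor. A rough sketch of that logic (the detector callable stands in for the actual DeepStack request; the function and parameter names are mine, not Blue Iris internals):

```python
import time

def confirm_trigger(get_predictions, to_confirm, min_conf=0.55,
                    real_time_images=5, interval_sec=1.0):
    """Poll the detector up to `real_time_images` times, once per
    `interval_sec`, confirming on the first matching prediction.
    `get_predictions` returns a list of {"label", "confidence"} dicts."""
    wanted = {label.strip().lower() for label in to_confirm.split(",")}
    for attempt in range(real_time_images):
        for p in get_predictions():
            if p["label"].lower() in wanted and p["confidence"] >= min_conf:
                return p          # confirmed -> keep the alert/recording
        if attempt < real_time_images - 1:
            time.sleep(interval_sec)
    return None                   # unconfirmed -> the alert gets cancelled

# Canned detector responses instead of live DeepStack calls:
person = lambda: [{"label": "person", "confidence": 0.91}]
car = lambda: [{"label": "car", "confidence": 0.99}]
print(confirm_trigger(person, "person,cat,dog", interval_sec=0))
print(confirm_trigger(car, "person,cat,dog", interval_sec=0))
```

The point of the sketch: each confirmation attempt is one detection request, so with "+ real time images: 5" a single trigger can generate up to 5 requests, which is why response time matters so much here.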

Normal+Custom ExDark Run:
Mode: High
Trigger to confirm: person,cat,dog,People,Cat,Dog
Min confidence: 55%
+ real time images: 5
analyze one each: 1 sec
NOT using the main stream; analysis uses the 0.3MP sub stream
Typical response times were 7,000-8,000 ms.
CPU usage would frequently spike to 100% when a scan ran during a motion trigger event, causing choppy video recording.

When I ran Deepstack GPU on the Jetson Nano 2GB with the SAME settings as above:

Normal Run:

docker run --restart always -d --gpus all -e VISION-DETECTION=True -e MODE=High -p 82:5000 deepquestai/deepstack:jetpack-2021.09.1

root@deepstack-server:~# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
faf4fed89cb3 deepquestai/deepstack:jetpack-2021.09.1 "/app/server/server" 49 seconds ago Up 23 seconds 0.0.0.0:82->5000/tcp, :::82->5000/tcp flamboyant_pascal

root@deepstack-server:~# watch -n 5 free -m

Every 5.0s: free -m    deepstack-desktop: Mon Oct 11 15:44:08 2021

              total        used        free      shared  buff/cache   available
Mem:           1979        1195          46           3         738         681
Swap:          5085         119        4966

root@deepstack-server:~# docker container logs flamboyant_pascal
[GIN] 2021/10/11 - 19:45:16 | 200 | 273.12886ms | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:45:19 | 200 | 223.371476ms | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:45:21 | 200 | 237.459566ms | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:45:43 | 200 | 254.340839ms | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:45:45 | 200 | 231.153821ms | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:45:45 | 200 | 247.265336ms | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:45:45 | 200 | 138.673355ms | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:45:46 | 200 | 220.145063ms | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:45:46 | 200 | 149.644872ms | 172.16.10.5 | POST /v1/vision/detection
Typical response times were 150-300 ms! (a significant upgrade over running on the G2's CPU)
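Each of those log lines is one POST to DeepStack's detection endpoint. If you want to hit it outside of Blue Iris, a request looks roughly like this (a sketch using the `requests` library; the IP is a placeholder for your Jetson's address, and port 82 matches the docker mapping above):

```python
import requests

# Placeholder address: substitute your Jetson's IP. Port 82 is what the
# docker run above maps to DeepStack's internal port 5000.
DEEPSTACK_URL = "http://192.168.1.50:82/v1/vision/detection"

def keep_confident(payload, min_conf=0.55):
    """Filter DeepStack's JSON reply to predictions at/above the floor."""
    return [p for p in payload.get("predictions", [])
            if p.get("confidence", 0.0) >= min_conf]

def detect(image_path, min_conf=0.55):
    """POST one snapshot and return the filtered predictions."""
    with open(image_path, "rb") as f:
        resp = requests.post(DEEPSTACK_URL, files={"image": f}, timeout=10)
    resp.raise_for_status()
    return keep_confident(resp.json(), min_conf)
```

Note the 150-300 ms figures above are the server-side times from the GIN log, so the round trip Blue Iris sees will be slightly higher.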

Normal+Custom ExDark Run:

docker run --restart always -d --gpus all -e VISION-DETECTION=True -e MODE=High -v /etc/deepstack-custom:/modelstore/detection -p 82:5000 deepquestai/deepstack:jetpack-2021.09.1

root@deepstack-server:~# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
defb4da3e2ca deepquestai/deepstack:jetpack-2021.09.1 "/app/server/server" 36 seconds ago Up 7 seconds 0.0.0.0:82->5000/tcp, :::82->5000/tcp blissful_gould


root@deepstack-server:~# watch -n 5 free -m

Every 5.0s: free -m    deepstack-desktop: Mon Oct 11 15:49:36 2021

              total        used        free      shared  buff/cache   available
Mem:           1979        1837          37           0         104          48
Swap:          5085        1294        3791
This is where I started to notice system unresponsiveness: it took forever to do anything, including logging in, and swap was getting hit hard.
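If you want to catch this before the box grinds to a halt, a quick check of the kernel's MemAvailable figure before launching the custom-model container is cheap insurance (a sketch; the 600 MB threshold is my own rough guess, not a DeepStack requirement):

```python
# Rough pre-flight check before starting the custom-model container on the
# 2GB Nano. The 600 MB threshold is a guess; tune it for your model.

def mem_available_mb(meminfo_path="/proc/meminfo"):
    """Read MemAvailable (in MB) from the kernel's meminfo file."""
    try:
        with open(meminfo_path) as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1]) // 1024  # kB -> MB
    except OSError:
        pass
    return 0  # unknown/non-Linux: treat as no headroom

if __name__ == "__main__":
    avail = mem_available_mb()
    if avail < 600:
        print(f"only {avail} MB available; a custom model will likely thrash swap")
    else:
        print(f"ok: {avail} MB available")
```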

root@deepstack-server:~# docker container logs blissful_gould
[GIN] 2021/10/11 - 19:49:20 | 500 | 1m0s | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:49:50 | 500 | 1m2s | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:49:50 | 500 | 1m2s | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:49:50 | 500 | 1m3s | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:49:50 | 500 | 1m3s | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:49:52 | 500 | 1m4s | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:49:52 | 500 | 1m5s | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:49:55 | 500 | 1m5s | 172.16.10.5 | POST /v1/vision/detection
[GIN] 2021/10/11 - 19:49:55 | 500 | 1m4s | 172.16.10.5 | POST /v1/vision/detection
Typical response times were 60,000-90,000 ms, and as the log shows, the requests were actually failing with HTTP 500 errors! (a horrible downgrade from running CPU on the G2)

TL;DR - The Jetson Nano 2GB is a great solution for regular Deepstack operation, but if you go with custom models, you will want the 4GB version to avoid memory issues.


I personally got rid of the custom model and stayed with normal operation. My goal was to find the most efficient system setup for my cameras. The G2 pulls about 30W on average, with CPU usage around 3% and spikes to 20% on multiple trigger events. The Jetson Nano runs at 10W, so overall I am running a 40W system that provides plenty of great trigger alerts.

I hope this helps someone. This forum got me to where I am, so this is my contribution!
 

spammenotinoz

Running an old 1060 and response times are ~130 ms, but it's using about 30 watts, so I could save $40 a year if a Jetson Nano only consumes 10W... Interesting, may be worth the hit. Could probably offload the GPU too... :)
 