[tool] [tutorial] Free AI Person Detection for Blue Iris

Village Guy

Pulling my weight
Joined
May 6, 2020
Messages
291
Reaction score
161
Location
UK
@Tinbum / @Village Guy in the Windows version of AI-Tool you can select 1 or all the different detection models from the deepstack tab, When you run the Docker version that tab is gone so how would you tell it to run more than 1 model? Do you just run sudo docker run -e XXXX-DETECTION=True -v localstorage:/datastore -p 80:5000 \ deepquestai/deepstack
multiple times and change where I put XXXX to each model you want it to use?
Thanks.
I'm personally not familiar with running more than one version of deepstack simultaneously . That said I suspect your proposal would return an error complaining that the port is already in use. Each version would need it's own port to be defined. No sure how AITool would handle that scenario.
 

cjowers

Getting the hang of it
Joined
Jan 28, 2020
Messages
107
Reaction score
36
Location
AUS
balucanb said:
@Tinbum / @Village Guy in the Windows version of AI-Tool you can select 1 or all the different detection models from the deepstack tab, When you run the Docker version that tab is gone so how would you tell it to run more than 1 model? Do you just run sudo docker run -e XXXX-DETECTION=True -v localstorage:/datastore -p 80:5000 \ deepquestai/deepstack
multiple times and change where I put XXXX to each model you want it to use?
Thanks.

I'm personally not familiar with running more than one version of deepstack simultaneously . That said I suspect your proposal would return an error complaining that the port is already in use. Each version would need it's own port to be defined. No sure how AITool would handle that scenario.
I think he is speaking of the AI model (object detection / face / scene recognition), in which case you'd probably run the command again on another port, or add all into one line, using the same format (VISION-DETECTION=True, or whatever the model is you want). It should come up on the command line once it starts, listing all API's in use.
Not sure how it works with multiple models, but i could see the objects and faces being used simultaneously for BI usage.
 

balucanb

Getting the hang of it
Joined
Sep 19, 2020
Messages
147
Reaction score
23
Location
TX
I think he is speaking of the AI model (object detection / face / scene recognition), in which case you'd probably run the command again on another port, or add all into one line, using the same format (VISION-DETECTION=True, or whatever the model is you want). It should come up on the command line once it starts, listing all API's in use.
Not sure how it works with multiple models, but i could see the objects and faces being used simultaneously for BI usage.
@Village Guy , @cjowers Thanks for replying. I agree it may not be possible to run more than one version. Village guy I suspect you may be correct or more than likely I am doing something wrong- edit Pretty positive I'm doing something wrong since I have about a 2-3 weeks of exp. with docker now. :) I have been working on a custom detection model using the instructions just put out and when trying to deploy it I am indeed getting errors. I have a working vision-detection model running in/on (?) Docker desktop using port 8383, I tried to run this new one on the same port and it threw a error. The working one was running when I tried. I guess you can't run 2 instances on the same port? I stopped it and tried again, same thing error, I then tried port 80, errored out. There very well could be something else going on since I am pretty clueless concerning Docker. If anyone thinks they can figure it out, I screen shot what was going on- see attachment. Thanks

Update- As I suspected the problem was the user (me) @johnolafenwa looked at my code from the attachment and I missed a keystroke there should be a dash between cpu and 2020, so it should be cpu-2020.12 Still waiting to see if I can run more than one model at the same time....
 

Attachments

Last edited:

maximosm

Young grasshopper
Joined
Jan 8, 2015
Messages
95
Reaction score
6
I have a BI in a pc with 10 cameras and cpu working under 12% Wenn i use The AI the cpu goes 40-80 to analyse the move and after that down to 8-12%. I dont have a gpu .The question is,if i use GPU will this help the AI not to uses CPU .Is there any tip to reduce cpu?
 

Village Guy

Pulling my weight
Joined
May 6, 2020
Messages
291
Reaction score
161
Location
UK
@Village Guy , @cjowers Thanks for replying. I agree it may not be possible to run more than one version. Village guy I suspect you may be correct or more than likely I am doing something wrong- edit Pretty positive I'm doing something wrong since I have about a 2-3 weeks of exp. with docker now. :) I have been working on a custom detection model using the instructions just put out and when trying to deploy it I am indeed getting errors. I have a working vision-detection model running in/on (?) Docker desktop using port 8383, I tried to run this new one on the same port and it threw a error. The working one was running when I tried. I guess you can't run 2 instances on the same port? I stopped it and tried again, same thing error, I then tried port 80, errored out. There very well could be something else going on since I am pretty clueless concerning Docker. If anyone thinks they can figure it out, I screen shot what was going on- see attachment. Thanks

Update- As I suspected the problem was the user (me) @johnolafenwa looked at my code from the attachment and I missed a keystroke there should be a dash between cpu and 2020, so it should be cpu-2020.12 Still waiting to see if I can run more than one model at the same time....
To the best of my knowledge you must use a different port address for each version. Try using 8384 for the second version but you will still have an issue with aitool handling more than one port. Port 80 is probably already being used by some other app.

Why can't you incorporate everything into one module?
 

balucanb

Getting the hang of it
Joined
Sep 19, 2020
Messages
147
Reaction score
23
Location
TX
To the best of my knowledge you must use a different port address for each version. Try using 8384 for the second version but you will still have an issue with aitool handling more than one port. Port 80 is probably already being used by some other app.

Why can't you incorporate everything into one module?
You may very well be able to that...But I wouldn't know! LMAO!! I did some more searching and found where someone else was asking a similar question to mine and the response they were given was ......"you need to add a new volume mapping to map your model directory to the /modelstore/detection directory in docker, you can enable both your custom model and the vision detection in DeepStack".... Now I need to figure out how to try that, don't suppose I can just R-click someplace and create a new folder huh? Pretty sure it involves some arcane string of characters I have no idea about. :eek:
 

Village Guy

Pulling my weight
Joined
May 6, 2020
Messages
291
Reaction score
161
Location
UK
You may very well be able to that...But I wouldn't know! LMAO!! I did some more searching and found where someone else was asking a similar question to mine and the response they were given was ......"you need to add a new volume mapping to map your model directory to the /modelstore/detection directory in docker, you can enable both your custom model and the vision detection in DeepStack".... Now I need to figure out how to try that, don't suppose I can just R-click someplace and create a new folder huh? Pretty sure it involves some arcane string of characters I have no idea about. :eek:
Now mapping sounds like the way forward as you will presumably only need the one port address. Once you get this all working be ready to answer a lot of questions :)
 

balucanb

Getting the hang of it
Joined
Sep 19, 2020
Messages
147
Reaction score
23
Location
TX
Now mapping sounds like the way forward as you will presumably only need the one port address. Once you get this all working be ready to answer a lot of questions :)
That you think I am going to get that far is nice....not sure about the validity...but nice. I am now trying to figure out how exactly to do said mapping...MTF.
 
Last edited:

BossHogg

n3wb
Joined
Dec 18, 2020
Messages
9
Reaction score
0
Location
Ottawa
Hi Guys,

I'm new to this forum and IP cameras in general, but after reading a ton about how to set it up, I want to start with a system that won't overwhelm me with false notifications. The AI tool, Deepstack and Blue Iris seems like the way to go for my situation.

I'm having a problem getting the AI tool to communicate with the Deepstack server. I've tried Deepstack on a Linux VM in docker and on the Windows 10 VM that's also running BI and the AI tool. Neither one seems to work. I think the AI tool can see the images that come from BI, but there doesn't seem to be a proper connection to Deepstack. The history tab is empty.

Below I've pasted the log for the current windows setup. It looks like an error around like 126 in the traceback. I also usually get the following:

DateFuncDetailLevelSourceAIServerCameraImageIdxDepthColorThreadIDFromFileFilename
2020-12-18 11:32:05 AMGetDeepStackRun Deepstack partially running. You many need to manually kill server.exe, python.exe, redis-server.exeErrorAITOOLS.EXENoneDrivewaySDNone42911FalseAITool.[2020-12-18].log
Please let me know if anyone sees something that could lead to a solution.



Thanks.
DateFuncDetailLevelSourceAIServerCameraImageIdxDepthColorThreadIDFromFileFilename
2020-12-18 9:38:25 AMDSHandleRedisProcMSGDebug: DeepStack>> [6020] 18 Dec 09:38:25 # no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'WarnAITOOLS.EXENoneNoneREDIS-SERVER.EXE6814FalseAITool.[2020-12-18].log
2020-12-18 9:38:25 AMDSHandleRedisProcMSGDebug: DeepStack>> [6020] 18 Dec 09:38:25 # Opening port 6379: bind 10048ErrorAITOOLS.EXENoneNoneREDIS-SERVER.EXE6914FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> Traceback (most recent call last):ErrorAITOOLS.EXENoneNonePYTHON.EXE89110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE90110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from tensorflow.python.pywrap_tensorflow_internal import *ErrorAITOOLS.EXENoneNonePYTHON.EXE91110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE92110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> _pywrap_tensorflow_internal = swig_import_helper()ErrorAITOOLS.EXENoneNonePYTHON.EXE93110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helperErrorAITOOLS.EXENoneNonePYTHON.EXE94110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)ErrorAITOOLS.EXENoneNonePYTHON.EXE95110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "imp.py", line 242, in load_moduleErrorAITOOLS.EXENoneNonePYTHON.EXE96110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "imp.py", line 342, in load_dynamicErrorAITOOLS.EXENoneNonePYTHON.EXE97110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> Import DLL load failed: A dynamic link library (DLL) initialization routine failed.ErrorAITOOLS.EXENoneNonePYTHON.EXE98110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> During handling of the above exception, another exception occurred:ErrorAITOOLS.EXENoneNonePYTHON.EXE99110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> Traceback (most recent call last):ErrorAITOOLS.EXENoneNonePYTHON.EXE100110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "../intelligence.py", line 13, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE101110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from sharedintelligence.commons import preprocessErrorAITOOLS.EXENoneNonePYTHON.EXE102110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\sharedintelligence\init.py", line 5, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE103117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from .detection3 import DetectModel3ErrorAITOOLS.EXENoneNonePYTHON.EXE104117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\sharedintelligence\detection3\init.py", line 1, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE105117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from .process import DetectModel3ErrorAITOOLS.EXENoneNonePYTHON.EXE106117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\sharedintelligence\detection3\process.py", line 1, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE107117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from .utils import read_pb_return_tensors,cpu_nmsErrorAITOOLS.EXENoneNonePYTHON.EXE108117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\sharedintelligence\detection3\utils.py", line 1, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE109117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> import tensorflow as tfErrorAITOOLS.EXENoneNonePYTHON.EXE110117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\init.py", line 24, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE111117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-importErrorAITOOLS.EXENoneNonePYTHON.EXE112117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\init.py", line 49, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE113117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from tensorflow.python import pywrap_tensorflowErrorAITOOLS.EXENoneNonePYTHON.EXE114117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE115117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> raise ImportError(msg)ErrorAITOOLS.EXENoneNonePYTHON.EXE116117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> Import Traceback (most recent call last):ErrorAITOOLS.EXENoneNonePYTHON.EXE117117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE118117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from tensorflow.python.pywrap_tensorflow_internal import *ErrorAITOOLS.EXENoneNonePYTHON.EXE119117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE120117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> _pywrap_tensorflow_internal = swig_import_helper()ErrorAITOOLS.EXENoneNonePYTHON.EXE121117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helperErrorAITOOLS.EXENoneNonePYTHON.EXE122117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)ErrorAITOOLS.EXENoneNonePYTHON.EXE123117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "imp.py", line 242, in load_moduleErrorAITOOLS.EXENoneNonePYTHON.EXE124117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "imp.py", line 342, in load_dynamicErrorAITOOLS.EXENoneNonePYTHON.EXE125117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> Import DLL load failed: A dynamic link library (DLL) initialization routine failed.ErrorAITOOLS.EXENoneNonePYTHON.EXE126117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> Failed to load the native TensorFlow runtime.ErrorAITOOLS.EXENoneNonePYTHON.EXE127117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> See Build and install error messages | TensorFlowErrorAITOOLS.EXENoneNonePYTHON.EXE128117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> for some common reasons and solutions. Include the entire stack traceErrorAITOOLS.EXENoneNonePYTHON.EXE129117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> above this error message when asking for help.ErrorAITOOLS.EXENoneNonePYTHON.EXE130117FalseAITool.[2020-12-18].log
2020-12-18 9:38:32 AMGetDeepStackRun Deepstack partially running. You many need to manually kill server.exe, python.exe, redis-server.exeErrorAITOOLS.EXENoneNoneNone13221FalseAITool.[2020-12-18].log
2020-12-18 9:38:32 AMGetDeepStackRun Deepstack partially running. You many need to manually kill server.exe, python.exe, redis-server.exeErrorAITOOLS.EXENoneNoneNone13441FalseAITool.[2020-12-18].log
2020-12-18 9:38:35 AMStart 5 python.exe processes did not fully start in 10110msErrorAITOOLS.EXENoneNoneNone13614FalseAITool.[2020-12-18].log
2020-12-18 9:38:35 AMGetDeepStackRun Deepstack partially running. You many need to manually kill server.exe, python.exe, redis-server.exeErrorAITOOLS.EXENoneNoneNone13711FalseAITool.[2020-12-18].log
2020-12-18 9:46:01 AMDetectObjects Got http status code 'Forbidden' (403) in 86ms: ForbiddenErrorAITOOLS.EXE192.168.10.4:83DrivewaySDDrivewaySD.20201218_094601131.jpg148119FalseAITool.[2020-12-18].log
2020-12-18 9:46:01 AMDetectObjects Empty string returned from HTTP post.ErrorAITOOLS.EXE192.168.10.4:83DrivewaySDDrivewaySD.20201218_094601131.jpg149119FalseAITool.[2020-12-18].log
2020-12-18 9:46:31 AMDetectObjects Got http status code 'Forbidden' (403) in 9ms: ForbiddenErrorAITOOLS.EXE192.168.10.4:83DrivewaySDDrivewaySD.20201218_094601131.jpg166119FalseAITool.[2020-12-18].log
2020-12-18 9:46:31 AMDetectObjects Empty string returned from HTTP post.ErrorAITOOLS.EXE192.168.10.4:83DrivewaySDDrivewaySD.20201218_094601131.jpg167119FalseAITool.[2020-12-18].log
2020-12-18 9:47:01 AMDetectObjects Got http status code 'Forbidden' (403) in 18ms: ForbiddenErrorAITOOLS.EXE192.168.10.4:83DrivewaySDDrivewaySD.20201218_094606131.jpg182119FalseAITool.[2020-12-18].log
2020-12-18 9:47:01 AMDetectObjects Empty string returned from HTTP post.ErrorAITOOLS.EXE192.168.10.4:83DrivewaySDDrivewaySD.20201218_094606131.jpg183119FalseAITool.[2020-12-18].log
2020-12-18 9:47:01 AMImageQueueLoop... AI URL for 'DeepStack' failed '6' times. Disabling: ''ErrorAITOOLS.EXE192.168.10.4:83DrivewaySDNone189019FalseAITool.[2020-12-18].log
 
Last edited:

Village Guy

Pulling my weight
Joined
May 6, 2020
Messages
291
Reaction score
161
Location
UK
@BossHogg
I propose that you remove the windows version of deepstack and install docker with deepstack running in a Linux environment. After installation, test it before integrating with aitool. The docker environment will allow you to run with current versions of deepstack that are not presently available to operating within a windows environment.
 

BossHogg

n3wb
Joined
Dec 18, 2020
Messages
9
Reaction score
0
Location
Ottawa
@BossHogg
I propose that you remove the windows version of deepstack and install docker with deepstack running in a Linux environment. After installation, test it before integrating with aitool. The docker environment will allow you to run with current versions of deepstack that are not presently available to operating within a windows environment.
Hi Village Guy,

I previously had An Ubuntu VM running docker and the :latest version of deepstack. that didn't work either. I'll revert back to that setup and post the log.

Actually, are you proposing i install Docker in Windows instead?

Thanks for the help.
 

Village Guy

Pulling my weight
Joined
May 6, 2020
Messages
291
Reaction score
161
Location
UK
Hi Village Guy,

I previously had An Ubuntu VM running docker and the :latest version of deepstack. that didn't work either. I'll revert back to that setup and post the log.

Actually, are you proposing i install Docker in Windows instead?

Thanks for the help.
Yes:thumb:
 

BossHogg

n3wb
Joined
Dec 18, 2020
Messages
9
Reaction score
0
Location
Ottawa
I tried to install docker for windows and got the following error:

System.InvalidOperationException:
Failed to deploy distro docker-desktop to C:\Users\Graham\AppData\Local\Docker\wsl\distro: exit code: -1
stdout: Please enable the Virtual Machine Platform Windows feature and ensure virtualization is enabled in the BIOS.

I'm running this Win10 VM on Proxmox on a Dell R710. I know the CPUs support virtualization and it's turned on in the BIOS. is there something i need to do to "pass through" the Virtualization settings from the BIOS, through Proxmox, to the Win10 VM?

Perhaps I'll post that to r/homelab.

Thanks.
 

balucanb

Getting the hang of it
Joined
Sep 19, 2020
Messages
147
Reaction score
23
Location
TX
I tried to install docker for windows and got the following error:

System.InvalidOperationException:
Failed to deploy distro docker-desktop to C:\Users\Graham\AppData\Local\Docker\wsl\distro: exit code: -1
stdout: Please enable the Virtual Machine Platform Windows feature and ensure virtualization is enabled in the BIOS.

I'm running this Win10 VM on Proxmox on a Dell R710. I know the CPUs support virtualization and it's turned on in the BIOS. is there something i need to do to "pass through" the Virtualization settings from the BIOS, through Proxmox, to the Win10 VM?

Perhaps I'll post that to r/homelab.

Thanks.
Google WSL requirements- it tells you the prerequisites you need to include checking/enabling the virtualization. I started here- Install Windows Subsystem for Linux (WSL) on Windows 10
 
Joined
Sep 17, 2019
Messages
7
Reaction score
3
Location
Boulder, CO
Just a quick note that I got DeepStack up and running on my NVIDIA Jetson I had just sitting around. AI-Tool is on the BI server but all analysis is running on the little Jetson. I previously had AI-Tool run DeepStack on my BI box but it got a little crowded on the CPU since the current Windows version is not GPU accelerated.

A few thoughts:

  • Running Deepstack on Jetson with a fresh Jetpack microSD card
  • Updated all software (sudo apt update +upgrade)
  • Turned off desktop environment since I'm just going to access DeepStack and I can ssh into the box if I need to fix something (sudo systemctl set-default multi-user.target)
  • Installed latest deepstack in docker, in High mode, and asked it to restart after machine reboot:
sudo docker run --runtime nvidia --restart unless-stopped -e MODE=High -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:jetpack-x3-beta

The DeepStack docker image defaults to using Medium as the MODE. This means that the default Jetson server is limited to processing images no larger than 320 pixels at Medium. I'm running 4K cameras but that resolution is lost and even suboptimal for DeepStack. You should resize images close to the target processing size in BI, and not ask little jetson to do that resizing before running the image recognition.

I ended up starting my docker at HIGH mode since my cameras are already pretty tuned with motion zones in BI. At that setting I'm getting about 300ms processing time per frame:

[GIN] 2020/12/18 - 20:23:15 | 200 | 324.133383ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:15 | 200 | 280.243879ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:17 | 200 | 287.85692ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:18 | 200 | 288.047127ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:19 | 200 | 293.305335ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:20 | 200 | 281.178667ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:21 | 200 | 274.997808ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:22 | 200 | 283.577667ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:32 | 200 | 269.086322ms | 192.168.1.233 | POST /v1/vision/detection

With original size 4K images I was getting close to 900ms and with MEDIUM with scaled images I was getting around 200ms.

Also note that Jetson is running a smaller size object detection model compared to the desktop gpu and cpu so the accuracy will probably be a little worse on Jetson. You can see the DeepStack settings code below for reference:
deepstack/intelligencelayer/shared/shared.py line 61-90:

"desktop_cpu": Settings(
DETECTION_HIGH=640,
DETECTION_MEDIUM=416,
DETECTION_LOW=256,
DETECTION_MODEL="yolov5m.pt",
FACE_HIGH=416,
FACE_MEDIUM=320,
FACE_LOW=256,
FACE_MODEL="face.pt",
),
"desktop_gpu": Settings(
DETECTION_HIGH=640,
DETECTION_MEDIUM=416,
DETECTION_LOW=256,
DETECTION_MODEL="yolov5m.pt",
FACE_HIGH=416,
FACE_MEDIUM=320,
FACE_LOW=256,
FACE_MODEL="face.pt",
),
"jetson": Settings(
DETECTION_HIGH=416,
DETECTION_MEDIUM=320,
DETECTION_LOW=256,
DETECTION_MODEL="yolov5s.pt",
FACE_HIGH=384,
FACE_MEDIUM=256,
FACE_LOW=192,
FACE_MODEL="face_lite.pt",
),

The current code looks like it's using busy waiting on images so the CPU usage is a little high on the Jetson when Idling (~40%) but I'm guessing that will be fixed now that it's open source:

It would be simpler from a hardware setup to have the DeepStack running on same box as BI but I'm a little worried that the GPU h265 4K decoding of my cams are going to be crowded by DeepStack so a separate Jetson that does extra flagging looks like a good setup for now. If Jetson blows up then I just have more false detections.
 

Village Guy

Pulling my weight
Joined
May 6, 2020
Messages
291
Reaction score
161
Location
UK
I tried to install docker for windows and got the following error:

System.InvalidOperationException:
Failed to deploy distro docker-desktop to C:\Users\Graham\AppData\Local\Docker\wsl\distro: exit code: -1
stdout: Please enable the Virtual Machine Platform Windows feature and ensure virtualization is enabled in the BIOS.

I'm running this Win10 VM on Proxmox on a Dell R710. I know the CPUs support virtualization and it's turned on in the BIOS. is there something i need to do to "pass through" the Virtualization settings from the BIOS, through Proxmox, to the Win10 VM?

Perhaps I'll post that to r/homelab.

Thanks.
All bets are off if you are not running within a windows 10 Native environment. There are just too many variables.
 

cjowers

Getting the hang of it
Joined
Jan 28, 2020
Messages
107
Reaction score
36
Location
AUS
I have a BI in a pc with 10 cameras and cpu working under 12% Wenn i use The AI the cpu goes 40-80 to analyse the move and after that down to 8-12%. I dont have a gpu .The question is,if i use GPU will this help the AI not to uses CPU .Is there any tip to reduce cpu?
-try deepstackai on 'low' mode
-try lower frequency of images being anaylized
-try lower resolution images from a substream camera stream
-try less cameras triggering deepstack
-try latest deepstack versions
-try a jetson nano

gpu will definately help, its way more efficient at this type of computation (but you have to run the gpu deppstack version)
 
Joined
Sep 17, 2019
Messages
7
Reaction score
3
Location
Boulder, CO
I have a BI in a pc with 10 cameras and cpu working under 12% Wenn i use The AI the cpu goes 40-80 to analyse the move and after that down to 8-12%. I dont have a gpu .The question is,if i use GPU will this help the AI not to uses CPU .Is there any tip to reduce cpu?
Are you reducing the size of your saved images? Make BI resize them to 640x480 or even 320x240 if you you're running on default Medium MODE.
 

kosh42efg

n3wb
Joined
Aug 14, 2020
Messages
29
Reaction score
13
@Mattias Fornander - I also offloaded my Deepstack to a Jetson when they released a version that finally worked on the platform! Thanks for the tip about MODE=High, I'd missed that.
 
Joined
Sep 17, 2019
Messages
7
Reaction score
3
Location
Boulder, CO
sudo docker run --runtime nvidia --restart unless-stopped -e MODE=High -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:jetpack-x3-beta
Apologies, don't use the Jetson beta image btw, their release image is newer:

Code:
sudo docker run --runtime nvidia --restart unless-stopped -e MODE=High -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:jetpack
 
Top