[tool] [tutorial] Free AI Person Detection for Blue Iris

austwhite

Getting the hang of it
Joined
Jun 7, 2020
Messages
93
Reaction score
92
Location
USA
Should I worry about using Deepstack Windows or just keep using Docker? (I'm running Windows on a Dell PowerEdge server, so plenty of power.)
If yes, can anyone help me with the install?
If Docker works for you, then there's no need to go to Windows.
I am still running Deepstack as a Docker container on an Ubuntu virtual machine with no issues. As long as your network is reasonable, it shouldn't be a problem.
I think Windows Deepstack is good if you want to run everything on the one Windows machine, but if you can run Docker in a VM, that seems the better option in most cases.

I have a spare Gen6 Core i7 with 32GB RAM at the moment, so I am going to run up a complete standalone with BI, Deepstack, and AI Tool all on Windows for comparison, but I don't expect it to be any faster.
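For reference, the Docker side of that comparison is just the stock container; a hedged sketch of a typical launch on the VM (image name, volume, and port as commonly documented, restart policy optional):

```shell
# DeepStack CPU container with object detection enabled; comes back after VM reboots.
docker run -d --restart unless-stopped \
  -e VISION-DETECTION=True \
  -v localstorage:/datastore \
  -p 5000:5000 \
  --name deepstack deepquestai/deepstack
```

AI Tool then points at http://<vm-ip>:5000/v1/vision/detection.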
 

robpur

Getting comfortable
Joined
Jul 31, 2014
Messages
279
Reaction score
1,358
Location
Washington State
Should I worry about using Deepstack Windows or just keep using Docker? (I'm running Windows on a Dell PowerEdge server, so plenty of power.)
If yes, can anyone help me with the install?
Can we get some important posts pinned somewhere or something so people can find the instructional posts more easily?
I run the Windows version of DS on my BI machine and on my personal workstation. I see no reason to add an additional layer of complexity by using Docker. I also run a Jetson Nano, which uses Docker. The first version of Windows DS that I used started with a GUI, and the GUI could not be minimized. However, the VorlonCD version of AI Tool will start and stop DS without the GUI. I'm now running the very recently released version of DS for Windows, and I start it with a PowerShell script. Running the latest DS for Windows is pretty straightforward: just install and run. The earlier version had to be activated.
 

austwhite

Getting the hang of it
Joined
Jun 7, 2020
Messages
93
Reaction score
92
Location
USA
I run the Windows version of DS on my BI machine and on my personal workstation. I see no reason to add an additional layer of complexity by using Docker. I also run a Jetson Nano, which uses Docker. The first version of Windows DS that I used started with a GUI, and the GUI could not be minimized. However, the VorlonCD version of AI Tool will start and stop DS without the GUI. I'm now running the very recently released version of DS for Windows, and I start it with a PowerShell script. Running the latest DS for Windows is pretty straightforward: just install and run. The earlier version had to be activated.
Does the latest Windows version allow auto-start on boot without AI Tool calling it? I have noticed some people (not all, but it includes myself on my setup) have issues with the pre-release versions from VorlonCD and need to revert to the more stable 1.65 or 1.67 release.

Edit: As a note, if one is already running a server environment with VMs, then Deepstack on Docker is not really any more complex than what is already running :)
 

robpur

Getting comfortable
Joined
Jul 31, 2014
Messages
279
Reaction score
1,358
Location
Washington State
Does the latest Windows version allow auto-start on boot without AI Tool calling it? I have noticed some people (not all, but it includes myself on my setup) have issues with the pre-release versions from VorlonCD and need to revert to the more stable 1.65 or 1.67 release.

Edit: As a note, if you are already running a server environment with VMs, then Deepstack on Docker is not really any more complex than what is already running :)
I start DS with a single line in a PowerShell script. I don't have it in my startup folder but you should be able to put it there and have it start on login.

deepstack --VISION-DETECTION True --PORT 5050

Most people don't run Blue Iris in a VM, so the easiest way to run DS on the surveillance computer is to run the Windows version. I spread the AI load across three DS machines: the BI computer, my personal desktop, and a Jetson Nano. In my case, using Windows Docker would add complexity. But you are correct: if you are already running Docker in Windows, then you aren't adding any more complexity. However, one problem you might encounter is if you want to use a GPU; it may be more difficult to get working in Windows Docker than with the native Windows application. I have not checked whether AI Tool will start the new version of Windows DS as it does with the older version.
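On the auto-start question upthread: that same one-liner can be registered as a logon task so nothing has to call it. A hedged sketch (task name is my own choice; adjust the command to your install and port):

```shell
# Create a scheduled task that launches DeepStack at logon, no AI Tool involvement.
# Run once from an elevated prompt.
schtasks /Create /TN "DeepStack" /SC ONLOGON /RL HIGHEST /TR "deepstack --VISION-DETECTION True --PORT 5050"
```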
 

austwhite

Getting the hang of it
Joined
Jun 7, 2020
Messages
93
Reaction score
92
Location
USA
I start DS with a single line in a PowerShell script. I don't have it in my startup folder but you should be able to put it there and have it start on login.

deepstack --VISION-DETECTION True --PORT 5050

Most people don't run Blue Iris in a VM, so the easiest way to run DS on the surveillance computer is to run the Windows version. I spread the AI load across three DS machines: the BI computer, my personal desktop, and a Jetson Nano. In my case, using Windows Docker would add complexity. But you are correct: if you are already running Docker in Windows, then you aren't adding any more complexity. However, one problem you might encounter is if you want to use a GPU; it may be more difficult to get working in Windows Docker than with the native Windows application. I have not checked whether AI Tool will start the new version of Windows DS as it does with the older version.
That is true, and I see the benefit of standalone BI if you have a lot of high-def / 4K cameras, for sure. I will likely be upgrading some cameras soon, and when I do I will run BI bare metal on a dedicated PC, but I will likely keep Deepstack running in the VM. I might even toy with running two Deepstack servers if VorlonCD's version runs better on the new machine :).

Edit: Just off topic for this reply, but if you do run fully in Windows and run BI as a service, you might find a performance increase by setting Windows processor scheduling to favor background services. Obviously this is only suitable if the machine is dedicated to being a server and you run most things as services.
Open sysdm.cpl and under Advanced -> Performance -> Advanced you will find the setting.
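If you'd rather script it, the same toggle is backed by a registry value; a hedged sketch (0x18 is the commonly documented "background services" setting -- verify against your Windows version and back up the key first):

```shell
# PowerShell equivalent of the sysdm.cpl checkbox (run elevated).
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\PriorityControl" -Name "Win32PrioritySeparation" -Value 0x18
```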
 

joshwah

Pulling my weight
Joined
Apr 25, 2019
Messages
298
Reaction score
146
Location
australia
I keep getting errors in AI Tool when using the Telegram cooldown. The error message is "ERROR sending image to telegram". This happens when I have the Telegram cooldown set to, say, 20 seconds, but have Blue Iris dumping JPEGs every 5 seconds while triggered. I want those images to continue to be processed and flags issued to Blue Iris; I just only want Telegram messages every 20 seconds. I don't understand why it throws an error in this situation. The behavior is as expected and it's not supposed to be sending to Telegram, so why does it throw an error?
Has anyone come up with a solution to this? I am experiencing this issue as well... running the latest version from github.
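For what it's worth, the behavior being asked for is just a rate gate: keep processing every image, send at most one Telegram message per window, and treat in-window skips as normal rather than errors. A minimal sketch of that logic (not AI Tool's actual code):

```python
class Cooldown:
    """Allow one send per window_s seconds; in-window calls are skipped, not errors."""

    def __init__(self, window_s):
        self.window_s = window_s
        self.last_sent = float("-inf")

    def ready(self, now):
        # `now` is a monotonic timestamp in seconds
        if now - self.last_sent >= self.window_s:
            self.last_sent = now
            return True
        return False  # still cooling down: skip the Telegram send silently

gate = Cooldown(20)
print(gate.ready(0.0))   # True  -- first detection sends
print(gate.ready(5.0))   # False -- BI dumped another JPEG 5 s later, skipped
print(gate.ready(25.0))  # True  -- window elapsed, send again
```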
 

joshwah

Pulling my weight
Joined
Apr 25, 2019
Messages
298
Reaction score
146
Location
australia
I am struggling to get my head around the whole triggers vs. snapshots vs. break time and how they all link together.

I've got a camera right on my front door (well, I've got 8 cameras), but I'm trying to fix the front door first.

So from my understanding, Blue Iris will trigger an alert, say motion at the front door; I set my end trigger to 8 seconds and create snapshots every 5 seconds.

Blue Iris creates TWO JPG snapshots, which AI Tool then analyses, and then, because it triggers via URL, it tells Blue Iris to take another 2 images?

Edit: is there any reason to include the "TRIGGER" command in the URL? It results in far fewer images being made and analysed?

Unless I am missing something, it does not make sense to have AI Tool trigger Blue Iris, as it results in so many images being created... in a real-life scenario, if there are 10-20 images in the queue, one with a person somewhere toward the end, it will re-trigger Blue Iris again, creating additional CPU load and work, even though the person was last detected 20-30 seconds ago?
 

austwhite

Getting the hang of it
Joined
Jun 7, 2020
Messages
93
Reaction score
92
Location
USA
I am struggling to get my head around the whole triggers vs. snapshots vs. break time and how they all link together.

I've got a camera right on my front door (well, I've got 8 cameras), but I'm trying to fix the front door first.

So from my understanding, Blue Iris will trigger an alert, say motion at the front door; I set my end trigger to 8 seconds and create snapshots every 5 seconds.

Blue Iris creates TWO JPG snapshots, which AI Tool then analyses, and then, because it triggers via URL, it tells Blue Iris to take another 2 images?
Not quite.
The motion Blue Iris detects is what causes the snapshots to be taken, or, in the single-camera setup, it records all motion clips directly.
The AI part sends a trigger command back to Blue Iris, which then flags the footage and sends an alert (single-camera setup); with the cloned-camera method, the trigger URL is what starts the motion recording.
Either way, continued motion within the break timeout is what causes further snapshots to be taken, and that is a function of Blue Iris's motion detection, not the AI.
 

balucanb

Getting the hang of it
Joined
Sep 19, 2020
Messages
147
Reaction score
23
Location
TX
I am struggling to get my head around the whole triggers vs. snapshots vs. break time and how they all link together.

I've got a camera right on my front door (well, I've got 8 cameras), but I'm trying to fix the front door first.

So from my understanding, Blue Iris will trigger an alert, say motion at the front door; I set my end trigger to 8 seconds and create snapshots every 5 seconds.

Blue Iris creates TWO JPG snapshots, which AI Tool then analyses, and then, because it triggers via URL, it tells Blue Iris to take another 2 images?

Edit: is there any reason to include the "TRIGGER" command in the URL? It results in far fewer images being made and analysed?

Unless I am missing something, it does not make sense to have AI Tool trigger Blue Iris, as it results in so many images being created... in a real-life scenario, if there are 10-20 images in the queue, one with a person somewhere toward the end, it will re-trigger Blue Iris again, creating additional CPU load and work, even though the person was last detected 20-30 seconds ago?
@austwhite can type faster than me so I won't re-answer, but in reference to your edit: how many snapshots are taken is determined by how often you tell BI to take them, along with all the other ways you can limit that. AI Tool is not triggering BI; it is simply sending the images to DS (that is my understanding). If your queue is getting filled up, then you may need to upgrade your BI box. I have no idea of your setup, so I am making an assumption here; mine is all on one computer (no Docker), an old Dell OptiPlex, and I have never had queueing issues myself (9 cameras). As far as how to write the trigger URLs, do a search here or check out one of the several threads on GitHub or some of the alternate setups on YouTube; there is no shortage of examples. I struggle with that topic myself simply because I have no flippin' idea how to write them. :) Currently I am using these:

[BlueIrisURL]/admin?trigger&camera=[camera]&user=[Username]&pw=[Password]
[BlueIrisURL]/admin?trigger&camera=[camera]&user=[Username]&pw=[Password]&flagalert=1&memo={Detection}
and in the Cancel block this one:
[BlueIrisURL]/admin?camera=[camera]&user=[Username]&pw=[Password]&flagalert=0

This is exactly how they are entered; the user/PW info is in a different location in AI Tool and gets pulled from there. (That is how it works in the version I use; you may have a newer or older version.)

**Again, I am no expert, so if I jacked anything up someone PLEASE correct me so there is no bad info out there.
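Those substitution URLs are just standard query strings, so they can be sanity-checked with a few lines of code. A hedged sketch (host, camera, and credentials are made-up placeholders standing in for the [BlueIrisURL]/[camera]/[Username]/[Password] substitutions):

```python
from urllib.parse import urlencode, quote

def build_trigger_url(base_url, camera, user, pw, memo=None, flagalert=None):
    """Assemble a Blue Iris admin URL like the ones AI Tool substitutes."""
    params = {"camera": camera, "user": user, "pw": pw}
    if flagalert is not None:
        params["flagalert"] = flagalert
    if memo is not None:
        params["memo"] = memo  # the [Detection] summary text
    # "trigger" is a bare flag (no value) in Blue Iris admin URLs
    return f"{base_url}/admin?trigger&{urlencode(params, quote_via=quote)}"

print(build_trigger_url("http://192.168.1.10:81", "FrontDoor",
                        "aiuser", "secret", memo="person", flagalert=1))
# -> http://192.168.1.10:81/admin?trigger&camera=FrontDoor&user=aiuser&pw=secret&flagalert=1&memo=person
```

One useful side effect: urlencode percent-escapes characters like spaces in the memo text, which a hand-built URL would get wrong.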
 

Nierka

n3wb
Joined
Jan 21, 2016
Messages
14
Reaction score
11
@austwhite can type faster than me so I won't re-answer, but in reference to your edit: how many snapshots are taken is determined by how often you tell BI to take them, along with all the other ways you can limit that. AI Tool is not triggering BI; it is simply sending the images to DS (that is my understanding). If your queue is getting filled up, then you may need to upgrade your BI box. I have no idea of your setup, so I am making an assumption here; mine is all on one computer (no Docker), an old Dell OptiPlex, and I have never had queueing issues myself (9 cameras). As far as how to write the trigger URLs, do a search here or check out one of the several threads on GitHub or some of the alternate setups on YouTube; there is no shortage of examples. I struggle with that topic myself simply because I have no flippin' idea how to write them. :) Currently I am using these:

[BlueIrisURL]/admin?trigger&camera=[camera]&user=[Username]&pw=[Password]
[BlueIrisURL]/admin?trigger&camera=[camera]&user=[Username]&pw=[Password]&flagalert=1&memo={Detection}
and in the Cancel block this one:
[BlueIrisURL]/admin?camera=[camera]&user=[Username]&pw=[Password]&flagalert=0

This is exactly how they are entered; the user/PW info is in a different location in AI Tool and gets pulled from there. (That is how it works in the version I use; you may have a newer or older version.)

**Again, I am no expert, so if I jacked anything up someone PLEASE correct me so there is no bad info out there.
memo={Detection} should be memo=[Detection], I think. I personally use only one trigger URL; what is the reason to use 2?
 

Scoobs72

n3wb
Joined
Jun 14, 2014
Messages
18
Reaction score
13
memo={Detection} should be memo=[Detection], I think. I personally use only one trigger URL; what is the reason to use 2?
I don't use "&trigger" at all in my URLs, as my camera is recording to disk 24x7 anyway. For me, the purpose of the "Trigger" URL is to use "&flagalert=1&memo=[summary]" to update the alert details in Blue Iris.
 

kosh42efg

n3wb
Joined
Aug 14, 2020
Messages
29
Reaction score
13
What is your processing time per pic? And pic size?
I did follow the suggestion to resize the image to max available (1280x1024 from the substream) and this did improve the % recognition significantly (at least 30% on avg). I am running 5 CPU docker instances now at about 350ms/pic.
Recording substream fulltime, mainstream triggered.
Sorry, just catching up after a few days away.
Code:
[GIN] 2020/12/29 - 16:26:52 | 200 |    128.7103ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:26:56 | 200 |    144.0097ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:26:59 | 200 |    139.4709ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:03 | 200 |    142.1914ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:06 | 200 |    127.6803ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:10 | 200 |    127.9357ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:13 | 200 |    125.9889ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:17 | 200 |    124.4378ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:20 | 200 |    129.2105ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:24 | 200 |    133.2759ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:27 | 200 |    145.9042ms |      172.17.0.1 | POST     /v1/vision/detection
That's on HIGH on a Deepstack GPU running in Docker on WSL2. This is the other instance that runs:
Code:
[GIN] 2021/01/01 - 06:27:03 | 200 |    117.6835ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:27:14 | 200 |    135.3589ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:27:17 | 200 |    125.6809ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:27:19 | 200 |    120.2531ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:27:22 | 200 |      119.41ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:36 | 200 |    135.4541ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:39 | 200 |    132.1701ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:41 | 200 |    146.6058ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:44 | 200 |    107.0353ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:59 | 200 |    136.0961ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:31:02 | 200 |     118.547ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:31:04 | 200 |    105.7385ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:31:07 | 200 |    107.7303ms |      172.17.0.1 | POST     /v1/vision/detection
I've just noticed nothing logged for a few days. That's not right...

To the other question: I have BI save the JPEGs at 10% quality and 1280x720 from a cloned HD stream with a motion trigger that never records video.
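As an aside, per-pic timing questions like the one quoted are easy to answer from those GIN logs; a small sketch that averages the latencies (assumes the log format shown above):

```python
import re

# Pull the "NNN.NNNNms" latency field out of each DeepStack (GIN) access-log line.
GIN_LATENCY = re.compile(r"\|\s*([\d.]+)ms\s*\|")

def mean_latency_ms(log_text):
    times = [float(m.group(1)) for m in GIN_LATENCY.finditer(log_text)]
    return sum(times) / len(times) if times else 0.0

sample = """\
[GIN] 2020/12/29 - 16:26:52 | 200 |    128.7103ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:26:56 | 200 |    144.0097ms |      172.17.0.1 | POST     /v1/vision/detection
"""
print(f"{mean_latency_ms(sample):.1f} ms")  # -> 136.4 ms
```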
 

kosh42efg

n3wb
Joined
Aug 14, 2020
Messages
29
Reaction score
13
I might have missed this somewhere, but has there been a Deepstack release for Windows that can either run as a service or auto-start on boot?
I've tried googling and not found anything on this.
I am trying to move away from running it in a virtual machine and running everything natively in Windows.
I run Deepstack in an auto-restarting Docker container in WSL2. I do this using Task Scheduler as per the answer in this thread.

I have nothing against VMs. I use them for running PiHole, Home Assistant, my NAS, my system monitor, etc. But why do it if you don't have to?
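For the curious, the moving part of that setup is small: a Task Scheduler action that runs at logon and starts the Docker engine inside WSL2. A hedged sketch (distro name is an assumption; a container started with --restart unless-stopped comes back on its own once the engine is up):

```shell
# Task Scheduler "run at logon" action: bring up dockerd inside the WSL2 distro.
wsl -d Ubuntu -u root -- service docker start
```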
 

balucanb

Getting the hang of it
Joined
Sep 19, 2020
Messages
147
Reaction score
23
Location
TX
memo={Detection} should be memo=[Detection], I think. I personally use only one trigger URL; what is the reason to use 2?
Thanks. I am not getting any errors with the incorrect {} vs. [], but maybe it is not doing what it should. I will change it and see if it makes any difference. I was only using one as well; then I read about an alternate way of doing this. Anyway, I "think" the second one is so BI will flag all the snapshots, but if it is not a valid detection it gets placed in the cancelled-alerts folder; that URL, the one in the cancel block, and the current settings on my cameras are supposed to all work together, per my understanding of what I read. Again, that may all be BS. I am trying to understand how the triggers work, but I really do not understand, if you write X vs. Y, what it really means/does. I have searched and searched for something to help me understand it all, but personally had no luck. Based on looking at all the snapshots in my capture folder (aiinput) and checking what is being pushed to Telegram, etc., the system is doing what it should; seldom do I catch a snapshot that should have been flagged that is not being pushed to Telegram.
 

balucanb

Getting the hang of it
Joined
Sep 19, 2020
Messages
147
Reaction score
23
Location
TX
I run Deepstack in an auto-restarting Docker container in WSL2. I do this using Task Scheduler as per the answer in this thread.

I have nothing against VMs. I use them for running PiHole, Home Assistant, my NAS, my system monitor, etc. But why do it if you don't have to?
I have read where some people have written a script to get it all running on boot.
 

astroshare

Getting the hang of it
Joined
Dec 18, 2020
Messages
76
Reaction score
41
Location
Florida Panhandle
Wondering if someone can help me. I've set up single cameras recording continuously and sending the triggered snapshots to AI Tool via the Post section in BI.
The triggers work fine; however, AI Tool doesn't seem to flag all valid motion on the timeline. The events show in the clips but with no flags. Is something wrong in my setup? I'm posting screenshots of my camera setup and also the BI screens that don't show the flagged event.
The event in question here is at 1:48pm, detected by DS and AI Tool as a dog (even though it's my cat, but that's fine). If you look at the last flagged event, it was at 12:39pm, but it shows fine in the Alerts area.
Thanks much in advance.

Camera setup:

Screen Shot 2021-01-02 at 1.59.19 PM.pngScreen Shot 2021-01-02 at 1.59.08 PM.png

BI timeline
Screen Shot 2021-01-02 at 1.52.22 PM.png Screen Shot 2021-01-02 at 1.51.15 PM.png Screen Shot 2021-01-02 at 1.51.04 PM.png

AI Tool config:
Screen Shot 2021-01-02 at 2.02.34 PM.png
 

Scoobs72

n3wb
Joined
Jun 14, 2014
Messages
18
Reaction score
13
Wondering if someone can help me. I've set up single cameras recording continuously and sending the triggered snapshots to AI Tool via the Post section in BI.
The triggers work fine; however, AI Tool doesn't seem to flag all valid motion on the timeline. The events show in the clips but with no flags. Is something wrong in my setup? I'm posting screenshots of my camera setup and also the BI screens that don't show the flagged event.
The event in question here is at 1:48pm, detected by DS and AI Tool as a dog (even though it's my cat, but that's fine). If you look at the last flagged event, it was at 12:39pm, but it shows fine in the Alerts area.
Thanks much in advance.

Camera setup:

View attachment 78673View attachment 78674

BI timeline
View attachment 78675 View attachment 78676 View attachment 78677

AI Tool config:
View attachment 78678
Is there a reason you're posting the images using the Post tab rather than sending them using the Alert tab? Maybe that is the cause, as I don't see anything else especially wrong with your config. You might also want to remove the "trigger" statement from the Trigger URL, because you don't want to re-trigger the camera (and if it is already in a triggered state, you might not want to resend a trigger command).
 

105437

BIT Beta Team
Joined
Jun 8, 2015
Messages
2,046
Reaction score
951
Anyone trying the DOODS object AI with the latest version of AI Tool? I'd like to compare it to Deepstack and AWS Rekognition. I have it installed in Docker on my QNAP, but I haven't been able to get it running just yet. Thanks
 