[tool] [tutorial] Free AI Person Detection for Blue Iris

austwhite

Getting the hang of it
Joined
Jun 7, 2020
Messages
86
Reaction score
82
Location
Australia/Melbourne
I still use AI Tools for LPR, but have settled on BI native. It starts DeepStack as a service, frees up a ton of memory, and ignoring stationary objects works much better for me than AI Tool's dynamic masking. To be clear, this wasn't the fault of AI Tools, but of the way Blue Iris sometimes sends a low-quality image followed by a high-quality one (despite constant recording and an ample pre-record buffer).

If you use the Blue Iris native integration it's built in, including face detection.
All my custom use cases are now met by BI native. I still have one cam on AI Tools out of loyalty.
It's not really built in. You still have to run DeepStack separately. (Just being picky with terminology :) ) That said, the BI integration does meet most needs.
If you have specific needs, the AI Tool is still very good. It's nothing to do with loyalty really; it's to do with what meets your needs best.
For me, the BI integration doesn't have the granularity, due to where my cameras are and what I need to detect.
I don't use Telegram or any of that stuff, as my external triggers are handled via MQTT from Blue Iris.
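For anyone wondering what "MQTT triggers from Blue Iris" can look like on the receiving end, here is a minimal Python sketch using paho-mqtt. The broker address and topic are placeholders; BI publishes whatever topic and payload you configure in its alert actions, so adjust to your own setup.

```python
# Minimal sketch: subscribe to Blue Iris alert messages over MQTT.
# BROKER and TOPIC are assumptions - use whatever you configured in BI.
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"      # assumed MQTT broker address
TOPIC = "BlueIris/alerts/#"  # assumed topic pattern set in BI's alert action

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # The payload is whatever you told BI to publish (e.g. &CAM / &MEMO macros).
    print(f"{msg.topic}: {msg.payload.decode(errors='replace')}")

# paho-mqtt 1.x style; 2.x needs mqtt.CallbackAPIVersion.VERSION1 as the first argument.
client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)
client.loop_forever()
```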
BI integration tends to miss events that AI Tools will grab, but as I said that is really due to my situation and placement of cameras. I think the BI Integration will grab 98% of events for most home users.
AI Tools kicks butt with LPR though; the Plate Recognizer integration in BI is really hit and miss.

I never had memory issues with AI Tools. I never found it to use much memory on its own, and I limited my JPEG snap folder to keeping images for up to 3 hours to save disk.

The one thing I really hope the BI DeepStack integration will include in the future is the ability to use more than one DS server. I find that one DS server struggles to keep up with several cameras in a busy environment. The multiple-server and refinement-server options in AI Tools are a godsend for me; otherwise the lag/delay in processing would trigger the cameras at the wrong times.
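For context on what a "DS server" is being asked to do: both AI Tool and BI ultimately POST a JPEG to DeepStack's HTTP detection endpoint and read back the predictions. A minimal Python sketch, assuming DeepStack is listening on 127.0.0.1:5000 (the address and image path are placeholders):

```python
# Sketch of the HTTP call made to a DeepStack server for object detection.
import requests

DEEPSTACK = "http://127.0.0.1:5000"   # assumed DeepStack server address

def detect(image_path, min_confidence=0.4):
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{DEEPSTACK}/v1/vision/detection",
            files={"image": f},
            data={"min_confidence": str(min_confidence)},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json().get("predictions", [])

for p in detect("snapshot.jpg"):
    print(p["label"], round(p["confidence"], 2),
          (p["x_min"], p["y_min"], p["x_max"], p["y_max"]))
```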
 

spammenotinoz

Pulling my weight
Joined
Apr 4, 2019
Messages
208
Reaction score
128
Location
Sydney
I actually said the "integration" was native, not the AI / DeepStack functionality... :)
But I agree with what you have said: it comes down to use cases. For instance, I am comfortable with BI and the direct DeepStack integration because I use constant recording, but as you point out AI Tools will flag more events (BI will deliberately skip two events that are close together). That isn't a problem for me with constant recording, but I would hate to be someone recording on alert only; you'd need a really long trigger timeout to make sure you get all the footage.
BI can now send motion alerts to mobiles only when people are detected while you're away from home, while still flagging other relevant objects. That was my key use case, along with daily summaries and third-party integrations/web calls, but BI can do all of that now.
I also have BI sending "Critical" iOS alerts when a person is detected between 11 pm and 5 am. BI supporting the iOS critical alert function is new to me.
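For anyone gating their own notification scripts instead of using BI's schedules, the 11 pm to 5 am window is easy to reproduce; a small sketch (the times are illustrative):

```python
# Sketch: only forward an alert during a night window that wraps past midnight.
from datetime import datetime, time

def in_night_window(now=None, start=time(23, 0), end=time(5, 0)):
    now = (now or datetime.now()).time()
    # The window wraps past midnight, so it is "after start OR before end".
    return now >= start or now < end

if in_night_window():
    print("forward critical alert")
else:
    print("suppress: outside 23:00-05:00 window")
```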
Another gotcha: you need to select high-quality alert images in BI to get image quality similar to AI Tools.
Funny you mention AI Tools, as the cam I still have on AI Tools is LPR, but using JPEGs created on alert only. Honestly, though, both have similar detection rates. My cams are 4K; I run a script to trim (not downscale) the image so it meets the plate analyzer's sizing, then upload.
Uploading via a script provides more customization around which API to use, i.e. one when away, one when home. But I was actually able to configure the same with BI (i.e. still run the same script on alert and then upload).
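A hedged sketch of that kind of script: crop (not downscale) the 4K snapshot to the region where plates appear and upload it to Plate Recognizer. The crop box, file names and token below are placeholders; the endpoint and "upload" field follow Plate Recognizer's public API.

```python
# Sketch: trim a 4K snapshot to a plate region and upload to Plate Recognizer.
from io import BytesIO

import requests
from PIL import Image

API_URL = "https://api.platerecognizer.com/v1/plate-reader/"
TOKEN = "YOUR_PLATE_RECOGNIZER_TOKEN"   # placeholder API token
CROP_BOX = (1200, 900, 2800, 1800)      # (left, upper, right, lower) - site specific

def read_plate(snapshot_path):
    img = Image.open(snapshot_path).crop(CROP_BOX)  # trim, keep native resolution
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=95)
    buf.seek(0)
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Token {TOKEN}"},
        files={"upload": ("plate.jpg", buf, "image/jpeg")},
        timeout=30,
    )
    resp.raise_for_status()
    return [r["plate"] for r in resp.json().get("results", [])]

print(read_plate("alert_snapshot.jpg"))
```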
Perhaps it was just the number of cams, but I did find the dynamic masking consuming a fair amount of CPU, and BI switching stills between main and sub streams played havoc with dynamic masking and masking in general. Get it working and then, bam, a BI update and it's unreliable again.
The other strange thing is that with AI Tools I had to run 6 separate GPU instances of DeepStack, whereas with BI I can just use 1 GPU instance. I think it's because BI is sending the lower-res alert images and not full 4K JPEGs, or the DeepStack GPU version has simply improved since I first set it up.
What I am doing, though, is finding uses for AI Tool outside of Blue Iris.
 

austwhite

Getting the hang of it
Joined
Jun 7, 2020
Messages
86
Reaction score
82
Location
Australia/Melbourne
@spammenotinoz 100% agreed. Sending high-res images to DeepStack, especially if using the face recognition API, takes a fair chunk of CPU time, even if you use the GPU version of DeepStack.

I don't send the full high-res image to DeepStack using AI Tools. My cameras all have three feeds: low res, medium res and high res. I use the medium-res feed as the "sub stream", and this is the one Blue Iris sends to DeepStack. That said, I did not notice a huge difference in object detection between low, medium and high res during experimentation. Where it makes a big difference is facial recognition, which I only do on 3 of my cameras.
I settled on sending the medium-res feed to AI Tools by using it as the substream. Blue Iris, when it takes the snapshot, uses the sub-stream for snapshots. I don't use the high-res JPEG option, except for facial recognition :).
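If you want to repeat that resolution experiment, here is a small sketch that sends the same frame to DeepStack at several widths and prints what it detects. The server address, widths and file name are assumptions; the endpoint is DeepStack's standard detection API.

```python
# Sketch: compare DeepStack detections for the same frame at different resolutions.
from io import BytesIO

import requests
from PIL import Image

DEEPSTACK = "http://127.0.0.1:5000/v1/vision/detection"  # assumed server address

def detect_at_width(image_path, width):
    img = Image.open(image_path)
    scale = width / img.width
    resized = img.resize((width, int(img.height * scale)))
    buf = BytesIO()
    resized.save(buf, format="JPEG")
    buf.seek(0)
    resp = requests.post(DEEPSTACK, files={"image": buf}, timeout=30)
    resp.raise_for_status()
    return [p["label"] for p in resp.json().get("predictions", [])]

for w in (640, 1280, 2560):   # roughly low / medium / high res
    print(w, detect_at_width("frame.jpg", w))
```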
 

David L

IPCT Contributor
Joined
Aug 2, 2019
Messages
3,000
Reaction score
5,175
Location
USA

You can try the GPU version of DeepStack. Depends how many cameras you have and how much RAM you have.
Which 4thGEN Intel is it? Is it a Core i5 or i7?
If you have at least 16GB RAM in your system, then DeepStack on the same machine is worth a try, though if you have more than about 3 cameras and a couple may trigger at once, then you will find a Core i5 will definitely lag out.
My primary system runs DS in a VM on Docker, but I have a test system with 3 x 5MP cameras and DeepStack installed locally using the BI integration. It has 16GB RAM and a Core i7-7700 CPU; when BI is processing AI it will regularly hit 80% CPU usage with the CPU version of DeepStack, and at idle it sits at 3 to 4% CPU usage. It is BI using the CPU time and not DeepStack, as I monitor which programs are using the CPU. This test system does not have an NVIDIA GPU, so the DeepStack GPU version won't run on it properly. I plan on obtaining an NVIDIA GPU of some kind for it to test the GPU version of DeepStack and see whether there is any difference in object detection accuracy.
Thanks for the reply. Sorry, I should have mentioned it is an i7 with 32 GB of RAM. So I ended up installing the GPU version and I am up and running; right now I have DeepStack on my Hik doorbell cam to test it out. It is working, and I am getting detections...

Appreciate your input...
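For the CPU-vs-GPU comparison discussed above, a simple approach is to time round-trips to the detection endpoint and compare averages between the two builds; a sketch, with the server address and test image assumed:

```python
# Sketch: measure average DeepStack detection round-trip time for a test image.
import time

import requests

DEEPSTACK = "http://127.0.0.1:5000/v1/vision/detection"  # assumed server address

def average_latency(image_path, runs=10):
    with open(image_path, "rb") as f:
        payload = f.read()
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        resp = requests.post(DEEPSTACK, files={"image": payload}, timeout=60)
        resp.raise_for_status()
        total += time.perf_counter() - start
    return total / runs

print(f"average DeepStack round-trip: {average_latency('test.jpg'):.2f}s")
```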
 

spammenotinoz

Pulling my weight
Joined
Apr 4, 2019
Messages
208
Reaction score
128
Location
Sydney
Blue Iris, when it takes the snapshot, uses the sub-stream for snapshots. I don't use the high-res JPEG option, except for facial recognition :).
How did you get BI to create snapshots via the sub-stream? I couldn't find that setting; the only way I could was by setting up clones which then used the sub-stream as the main feed.
 

austwhite

Getting the hang of it
Joined
Jun 7, 2020
Messages
86
Reaction score
82
Location
Australia/Melbourne
How did you get BI to create snapshots via the sub-stream? I couldn't find that setting; the only way I could was by setting up clones which then used the sub-stream as the main feed.
Sorry, bad wording on my part. I do use a clone camera, set to the sub-stream, for the image captures.
 

sdeir

n3wb
Joined
Oct 21, 2020
Messages
1
Reaction score
0
Location
VA
The one thing I really hope the BI DeepStack integration will include in the future is the ability to use more than one DS server. I find that one DS server struggles to keep up with several cameras in a busy environment; the multiple-server and refinement-server options in AI Tools are a godsend for me.
You can use a load balancer for that.
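That "load balancer" can be as simple as client-side round-robin in whatever is doing the posting. Here is a sketch that cycles detection requests across two DeepStack instances and skips one that is down; the server addresses and image path are placeholders.

```python
# Sketch: round-robin detection requests across several DeepStack instances.
import itertools

import requests

SERVERS = itertools.cycle([
    "http://192.168.1.20:5000",   # assumed DeepStack instance 1
    "http://192.168.1.21:5000",   # assumed DeepStack instance 2
])

def detect(image_path, attempts=2):
    last_error = None
    for _ in range(attempts):
        server = next(SERVERS)
        try:
            with open(image_path, "rb") as f:
                resp = requests.post(f"{server}/v1/vision/detection",
                                     files={"image": f}, timeout=15)
            resp.raise_for_status()
            return resp.json().get("predictions", [])
        except requests.RequestException as err:
            last_error = err     # try the next server in the rotation
    raise RuntimeError(f"all DeepStack servers failed: {last_error}")

print(detect("snapshot.jpg"))
```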
 

Futaba

Getting the hang of it
Joined
Nov 13, 2015
Messages
137
Reaction score
30
@austwhite, I am still running AI Tool 1.67 from the first post in this thread. Are you running a newer version from GitHub? Is there a minimum NVIDIA GPU required for DS to work? I have a GTX 670 that I can put into my BI server. Thanks.
 

David L

IPCT Contributor
Joined
Aug 2, 2019
Messages
3,000
Reaction score
5,175
Location
USA
@austwhite, I am still running AI Tool 1.67 from the first post in this thread. Are you running a newer version from GitHub? Is there a minimum NVIDIA GPU required for DS to work? I have a GTX 670 that I can put into my BI server. Thanks.
I am running a GTX 970 in my BI box with the DS GPU version, no problems, but I am only testing one cam right now. My card has 4 GB of memory, which I am hoping will be enough... How much memory does your 670 have? I once had a GTX 760 that had 4 GB even though 2 GB was the norm; I paid more for it since I was using it as a gaming card back then.
 

maximosm

Young grasshopper
Joined
Jan 8, 2015
Messages
93
Reaction score
5
Is it possible to have a face detection trigger only a Telegram message?


Sent from my iPhone using Tapatalk
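Outside of BI/AI Tool, one way to get exactly that behaviour is a small script: run the snapshot through DeepStack's face endpoint and post a Telegram message only when a face comes back. A sketch, assuming the /v1/vision/face endpoint and with the DeepStack address, bot token and chat id as placeholders:

```python
# Sketch: send a Telegram message only when DeepStack finds a face in a snapshot.
import requests

DEEPSTACK = "http://127.0.0.1:5000/v1/vision/face"   # assumed face endpoint
BOT_TOKEN = "123456:ABC-DEF"                         # placeholder Telegram bot token
CHAT_ID = "123456789"                                # placeholder chat id

def notify_if_face(image_path):
    with open(image_path, "rb") as f:
        resp = requests.post(DEEPSTACK, files={"image": f}, timeout=30)
    resp.raise_for_status()
    faces = resp.json().get("predictions", [])
    if faces:
        requests.post(
            f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
            data={"chat_id": CHAT_ID,
                  "text": f"Face detected ({len(faces)} face(s))"},
            timeout=10,
        )
    return len(faces)

notify_if_face("alert.jpg")
```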
 

yghujnkl

n3wb
Joined
Jan 25, 2021
Messages
4
Reaction score
0
Location
Antartica
Has anyone experienced AI Tool removing the URL after the comma under the camera config?
It seems to happen every time I reboot my server.

Fixed: I only had a comma, not a comma followed by a space.
"," vs ", "
 

dohat leku

Getting the hang of it
Joined
May 19, 2018
Messages
223
Reaction score
20
Location
usa
Folks, can somebody explain the cancel section in the DeepStack camera settings? How do I use it and what do the numbers mean? Thanks.
 

HillSonMX

n3wb
Joined
May 8, 2021
Messages
15
Reaction score
0
Location
Mexico
Hi everybody, I'm new here and new to AI Tool, though a little older with BI :thumb:. I have read all 173 pages and couldn't find my issue, so I'm hoping for some help from you, and that this helps anyone else too. I've seen people with very good contributions here like @Village Guy, @GentlePumpkin, etc., and hope I can help in some way as well. Let's start:
- OS: Windows 10, dual Xeon processors, 128 GB RAM (main OS)
  • OS: Windows 10 (Hyper-V)
  • BI 5.4.5.1 (05-11-2021)
  • AI Tool 2.0.760.7721 (2/24/2021)


Note: I installed AI Tool to get Telegram notifications.

** TELEGRAM ERROR TEXT
[screenshot: telegram.PNG]

** AI TOOL LOG
[screenshot: AIToolLog.png]

** AI TOOL SETTING
[screenshot: camaras.PNG]

[screenshots: setting.PNG, deepstack.PNG]

** BI SETTINGS
[screenshots: trigger.PNG, record.PNG, Alerts.PNG]

That's it; if you need any other info, let me know. I hope we can fix this together... thank you for your time! :wave:
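One way to narrow down a Telegram error like this, independent of AI Tool, is to hit the Bot API directly with the same token and chat id (both placeholders below); if getMe or sendMessage fails here, the problem is the bot setup rather than AI Tool.

```python
# Sketch: sanity-check a Telegram bot token and chat id outside of AI Tool.
import requests

BOT_TOKEN = "123456:ABC-DEF"   # placeholder bot token
CHAT_ID = "123456789"          # placeholder chat id

me = requests.get(f"https://api.telegram.org/bot{BOT_TOKEN}/getMe", timeout=10)
print("getMe:", me.status_code, me.json())

msg = requests.post(
    f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
    data={"chat_id": CHAT_ID, "text": "AI Tool Telegram test"},
    timeout=10,
)
print("sendMessage:", msg.status_code, msg.json())
```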
 

Village Guy

Pulling my weight
Joined
May 6, 2020
Messages
220
Reaction score
100
Location
UK

HillSonMX

n3wb
Joined
May 8, 2021
Messages
15
Reaction score
0
Location
Mexico
Hi @Village Guy, good to meet you, and thank you for your time. I tried that, but it's still in the same state; here are more pictures (test result / actions) to help.
I also forgot to mention this second error message (file watcher).

[screenshot: telegramlog.png]

[screenshot: aitoollog2.PNG]

*** I tried the solution from @barnyard's post, with no success: [tool] [tutorial] Free AI Person Detection for Blue Iris

One thing I noticed: is "https" OK here? I use HTTPS via the stunnel app on a different port (I could not find where to change this to test).
[screenshot: variables.png]
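To answer the http-vs-https question quickly, you can probe the same host and port both ways and see which one responds; a sketch with a placeholder host and port (stunnel setups often use self-signed certificates, hence verify=False):

```python
# Sketch: probe a host/port over plain HTTP and over HTTPS to see which answers.
import requests

HOST = "192.168.1.10"   # placeholder host
PORT = 8443             # placeholder stunnel port

for scheme in ("http", "https"):
    url = f"{scheme}://{HOST}:{PORT}/"
    try:
        # verify=False because stunnel commonly fronts a self-signed certificate.
        resp = requests.get(url, timeout=5, verify=False)
        print(url, "->", resp.status_code)
    except requests.RequestException as err:
        print(url, "->", type(err).__name__)
```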




Hoping for new tips, thank you very much! :headbang:
 