Blueiris added direct support for Deepstack AI

ccaru

Young grasshopper
Joined
May 18, 2020
Messages
35
Reaction score
9
Location
Luxembourg
Could somebody shed some light on this. Excuse the layman terms as I am completely new to this: Is it possible to get Deepstack to learn how to better recognize an object by manually confirming or rejecting a "recognition event" that is fed to it by Blue Iris?
Is there any way I can monitor the stills that it is analyzing and confirm the object inside and get it to learn?
 

CrazyAsYou

Getting the hang of it
Joined
Mar 28, 2018
Messages
58
Reaction score
30
Could somebody shed some light on this. Excuse the layman terms as I am completely new to this: Is it possible to get Deepstack to learn how to better recognize an object by manually confirming or rejecting a "recognition event" that is fed to it by Blue Iris?
Is there any way I can monitor the stills that it is analyzing and confirm the object inside and get it to learn?
Monitoring the stills is easy because you can have them saved to the "Alerts" folder by selecting "Burn label mark-up onto alert images" in the AI settings of each camera. You'll find the analyzed images in that folder, marked up with orange squares around each detected object along with the confidence %

As for confirming/rejecting - at present there is no way to do this, but if you're up to it you can train DeepStack on your own models. It's fairly involved - see here
 

ccaru

Young grasshopper
Joined
May 18, 2020
Messages
35
Reaction score
9
Location
Luxembourg
Thanks for the pointer! So, just to understand this a bit more, if I create a custom model, I can call it something which isn't reserved (like "CustomCat") and then enter that into the Blue Iris list together with Person,Cat,Dog etc.. and it should work?
 

CrazyAsYou

Getting the hang of it
Joined
Mar 28, 2018
Messages
58
Reaction score
30
Thanks for the pointer! So, just to understand this a bit more, if I create a custom model, I can call it something which isn't reserved (like "CustomCat") and then enter that into the Blue Iris list together with Person,Cat,Dog etc.. and it should work?
Yes, that's exactly what you can do. Note that you need a good number of images for training and testing to get the best results when creating a new model - I think it's something like 300 for training and 50 for testing. I'd also advise using Google Colab for training unless you have a pretty beefy Nvidia GPU in your computer. The steps are well documented on the DeepStack website, and although the YouTube video on creating a new model for people with "masks" on has no sound, it shows the exact steps along with testing of the new model towards the end.
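For anyone curious what the custom-model side looks like once it's deployed, here's a rough sketch of querying a custom DeepStack model over its REST API and keeping only confident detections. The model name "CustomCat", the host/port, and the confidence threshold are placeholders for whatever you actually set up:

```python
# Minimal sketch: query a custom DeepStack model and filter by confidence.
# "CustomCat" and localhost:80 are assumptions - substitute your own deployment.
DEEPSTACK_URL = "http://localhost:80/v1/vision/custom/CustomCat"

def filter_detections(response_json, min_confidence=0.6):
    """Return (label, confidence) pairs meeting the confidence threshold."""
    return [
        (p["label"], p["confidence"])
        for p in response_json.get("predictions", [])
        if p["confidence"] >= min_confidence
    ]

def detect(image_path, min_confidence=0.6):
    """Send a still to the custom-model endpoint and keep confident hits."""
    import requests  # third-party: pip install requests
    with open(image_path, "rb") as f:
        resp = requests.post(DEEPSTACK_URL, files={"image": f})
    return filter_detections(resp.json(), min_confidence)
```

Blue Iris would then list "CustomCat" alongside Person, Cat, Dog in the objects-to-confirm field, as discussed above.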
 

ccaru

Young grasshopper
Joined
May 18, 2020
Messages
35
Reaction score
9
Location
Luxembourg
Amazing! thank you! Sounds doable, as effectively I have potentially thousands of images taken by the same cameras that I would be using for the same purpose. I'm assuming that since they all have the same background, it might turn out to be a good exercise to use the same images which are very close to the ones I'd like it to recognize.

Sounds like a good project to start working on :)
 
Joined
Oct 1, 2020
Messages
17
Reaction score
2
Location
Texas
I've been using the external application method for nearly a year now with no issues. Is there any good reason to move to the internal integration over continuing to use the external app? My trigger times are already pretty low and I prefer the fine tuning of confidence levels, etc, the app provides.
 

wittaj

Known around here
Joined
Apr 28, 2019
Messages
4,727
Reaction score
6,079
Location
USA
I've been using the external application method for nearly a year now with no issues. Is there any good reason to move to the internal integration over continuing to use the external app? My trigger times are already pretty low and I prefer the fine tuning of confidence levels, etc, the app provides.
Comes down to personal choice. Many have reported that the additional features folks have added to the 3rd-party tools give more flexibility and thus better AI. But for those not wanting to deal with Docker, or looking for a simpler implementation, the DeepStack integration is the more logical choice.
 

Alan_F

n3wb
Joined
May 17, 2019
Messages
15
Reaction score
7
Location
Maryland
I've been using the external application method for nearly a year now with no issues. Is there any good reason to move to the internal integration over continuing to use the external app? My trigger times are already pretty low and I prefer the fine tuning of confidence levels, etc, the app provides.
The external AITOOL application definitely gives more fine-tuning ability, at least for now. At the pace BI is developing new enhancements I wouldn't be surprised to see the gap between them narrow pretty rapidly.

I've tried playing around with the direct integration a little to see if it can replace my AITool setup. So far I'm sticking with AI Tools but I'm going to keep running some integrated AI on a few cameras just to get a feel for it. A few things I have to figure out or overcome before I could switch over:

  • I'm using Telegram as the primary notification method for certain alerts, and it is much faster than my email to SMS gateway that I can use with BI. I'd love to see BI add Telegram as a notification method. I took a quick look at using MQTT or a web request and Node-Red to send to telegram, but the mechanics of attaching the image file raised the complexity level to where I didn't do it.
  • By combining BI with AITool I'm able to have a mask that's different from the motion zone that initiates the analysis. I use this to detect people approaching my front door. Because of the angle of the camera the AI mask includes some areas further from the door which might have a person walking by in the street. I have BI set to only write the JPGs for analysis when motion is detected in a smaller area covering the front walkway, but the mask in AITool is taller and would contain the upper body of a person walking on the walkway. I haven't figured out any way to have the integrated BI/DS trigger based on one motion zone but only alert based on AI detecting the object in another motion zone. Stated another way... If motion in zone A and object in zone B, then alert. If motion in zone B but not in zone A, do nothing.
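On the Telegram point above, attaching the image is less work than it might first appear: the Telegram Bot API's sendPhoto method accepts the file as a plain multipart upload. A rough sketch (the bot token, chat id, and image path are placeholders for your own values):

```python
# Sketch of sending an alert snapshot via the Telegram Bot API.
# Token, chat_id, and paths below are hypothetical placeholders.
API_BASE = "https://api.telegram.org"

def send_photo_request(bot_token, chat_id, caption=""):
    """Build the sendPhoto endpoint URL and form fields; the image itself
    goes in the multipart 'photo' field when the request is made."""
    url = f"{API_BASE}/bot{bot_token}/sendPhoto"
    data = {"chat_id": str(chat_id)}
    if caption:
        data["caption"] = caption
    return url, data

def send_alert(bot_token, chat_id, image_path, caption=""):
    import requests  # third-party: pip install requests
    url, data = send_photo_request(bot_token, chat_id, caption)
    with open(image_path, "rb") as f:
        return requests.post(url, data=data, files={"photo": f})
```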
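The two-zone condition described in the last bullet is easy to pin down in code; here's a minimal sketch (the zone names and the sets of detections are hypothetical, just to make the rule explicit):

```python
def should_alert(motion_zones, object_zones):
    """Alert only when motion fired in zone A (the walkway) AND the AI
    located the object inside zone B (the taller mask).
    Motion or objects in zone B alone do nothing.

    motion_zones: set of zone names where BI detected motion
    object_zones: set of zone names where the AI found the object
    """
    return "A" in motion_zones and "B" in object_zones
```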
Since the CPU load of the integrated AI analysis seems to be pretty low, I am going to use it on a couple of cameras where I already record 24x7 HD but also record motion separately. The cost of losing a motion detection is fairly low on those cameras as I can always pull the 24x7 recording to see if something was missed, assuming it hasn't been deleted. I'm sending the 24x7 recordings to one drive and the motion recordings to another drive which allows me to keep many weeks of motion events while the 24x7 streams go back only a few days, but if something happens and the AI canceled the motion recording I'm likely to know about it before the 24x7 is gone.
 