Is there a thread that tells how to use the AI tab in the Blue Iris Status section?

ingeborgdot

Pulling my weight
Joined
May 7, 2017
Messages
655
Reaction score
153
Location
Scott City, KS
I have never used it but would really like to. I am not sure how the AI tab in BI Status works. Should there be video in there already, or do I need to put the video in there myself? I use BI a lot, but I have never really taken advantage of some of the things it can do. I did a search but didn't find anything; I may not have used the right search terms. Thanks.
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,696
Location
New Jersey
You need to save the detection information on the AI tab of the camera. That will create a .dat file. Then you go to the directory holding the alerts for the camera and drag the appropriate .dat file into the bottom pane of the AI tab.

It's easier just to locate an alert in the alerts pane on the main console screen, hold down the control key, and double click the alert. That will pop up the same analysis of the real-time event.
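If you're curious what that .dat file is capturing, here's a rough sketch of the kind of detection request BI sends to DeepStack behind the scenes. This is illustration only; it assumes DeepStack is answering on localhost:5000 (the port depends on your install) and that alert.jpg is a snapshot you exported yourself.

Code:
# Sketch of a DeepStack detection request (host, port and file name assumed).
import requests

with open("alert.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:5000/v1/vision/detection",
        files={"image": f},
        data={"min_confidence": 0.4},  # same idea as BI's confidence setting
    )

# Each prediction has a label, a confidence and a bounding box, which is the
# same information BI keeps alongside the alert for the AI tab overlay.
for p in resp.json().get("predictions", []):
    print(p["label"], round(p["confidence"], 2),
          (p["x_min"], p["y_min"], p["x_max"], p["y_max"]))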
 

ingeborgdot

Pulling my weight
Joined
May 7, 2017
Messages
655
Reaction score
153
Location
Scott City, KS
OK, I held the control key and double clicked. It brought up the clip as stated, but should there be a label for what it is seeing, like person or car, etc.?
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,696
Location
New Jersey
Yes, but you need to check the box to save the detection information on the AI tab for each camera you have configured to use DeepStack. Without that .dat file, which contains the details of what was detected, all that will appear is the capture JPG used for detection, i.e. the still image of the alert.
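To give a feel for what that detection information amounts to (the values below are made up, and this is not the actual .dat layout, which is Blue Iris's own format), it boils down to records like these:

Code:
# Illustrative only: the shape of the data the AI returns per image.
detections = [
    {"label": "person", "confidence": 0.82,
     "x_min": 412, "y_min": 230, "x_max": 508, "y_max": 455},
    {"label": "car", "confidence": 0.67,
     "x_min": 120, "y_min": 300, "x_max": 410, "y_max": 470},
]
for d in detections:
    print(f'{d["label"]} {d["confidence"]:.0%}')  # person 82%, then car 67%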
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,696
Location
New Jersey
Can't help you with that since I don't use SenseAI; it's still too finicky, IMHO. Maybe @MikeLud1 can help on the SenseAI side.
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,696
Location
New Jersey
I typically use a 40% minimum confidence on everything. The number of images and the timing of those images depend on the camera, the view, and what I'm trying to detect. For example, on a street view shot from a moderate distance to catch street traffic, I use 15 images with 200ms timing. For my driveway approach cameras, where everything approaching the house is moving much more slowly, I use five images and 500ms timing. It takes some time to tinker and find the sweet spots, but it is well worth the effort.
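If it helps to see the arithmetic, here's a rough sketch of what those settings mean in wall-clock terms (the camera names and sample detections are made up):

Code:
# Back-of-envelope: N images spaced T ms apart cover roughly (N - 1) * T ms.
profiles = {
    "street view": {"images": 15, "interval_ms": 200},
    "driveway approach": {"images": 5, "interval_ms": 500},
}
MIN_CONFIDENCE = 0.40  # 40% minimum confidence across the board

for name, p in profiles.items():
    window_s = (p["images"] - 1) * p["interval_ms"] / 1000
    print(f"{name}: {p['images']} images over ~{window_s:.1f}s")
# street view: 15 images over ~2.8s
# driveway approach: 5 images over ~2.0s

# Anything under the minimum confidence is discarded:
detections = [("person", 0.55), ("dog", 0.31)]
print([d for d in detections if d[1] >= MIN_CONFIDENCE])  # [('person', 0.55)]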
 

Tinman

Known around here
Joined
Nov 2, 2015
Messages
1,215
Reaction score
1,492
Location
USA
It really depends on your camera's viewpoint and on when your trigger goes off, etc. This video helps explain it a lot. It was made for DeepStack, but the same applies to CodeProject.AI.

 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,696
Location
New Jersey
That's actually the first thing to do, and it is very much a trial-and-error process. I usually set up a single zone and try to exclude things like a fluttering flag from it to prevent as many false triggers as possible. Then it becomes a balance of object size and contrast to get good detection. I don't use object detection other than to avoid scene lighting changes triggering an alert. I typically use "edge vector" for the detection mode, but not always; I have a few cameras on "simple" instead. It all depends on the view and what I'm trying to detect. It's not a fast thing to do and can take some time, and you need to re-test each time you change a setting to get a feel for what each one does.

My experience is that even the camera settings are very important with AI. Contrast is key for reliable detection, especially when the target is small, i.e. a smaller object or a larger object farther from the camera.
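If you want a quick, rough way to compare contrast between exported snapshots (Pillow assumed installed, and the file name is just an example), the standard deviation of the grayscale pixels is a serviceable proxy:

Code:
# Rough contrast proxy: stddev of grayscale pixel values; higher = punchier.
from PIL import Image, ImageStat

img = Image.open("alert.jpg").convert("L")  # convert to grayscale
stddev = ImageStat.Stat(img).stddev[0]
print(f"grayscale stddev: {stddev:.1f}")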
 