DeepStack Models questions:

Wen

Getting the hang of it
Joined
Aug 24, 2015
Messages
80
Reaction score
25
When would a user run more than one model?

What are the advantages of running "Combined" and "Dark" together? Disadvantages?

Do the extra model files need to be manually updated on occasion?

When a user enters the string "Objects:0", the DeepStack models are ignored. Is that a good thing?

Does it help to enter "Banana" or "Zebra" in the To Cancel position?

When would you elect to use the "Begin Analysis with motion leading image"?

Do most users select "Use Mainstream if available"? Does that feature depend on the speed of the console that's running BI?

Thanks in advance. I've read numerous posts hoping to get answers to the above, but I'm hoping for more clarification...
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,692
Location
New Jersey
I run combined and dark at night. It does increase processing time/CPU utilization/GPU utilization, but it results in more successful detections. Sometimes combined will see what dark can't, and sometimes vice versa.

Using "objects:0" (note that case is very important to DeepStack) is common when using a model like combined. Otherwise, with the default model also running, detection times can get so long that they may cause DeepStack to time out.

I've tried the "banana"/"zebra" method and found it hasn't helped much, and it also increases detection times and utilization. Every situation is different, and you may need to experiment.

I always use "leading edge" since it can significantly reduce detection times.

Most leave "Mainstream" unchecked. DeepStack downsizes the images, or they are downsized prior to being sent to DeepStack, so there is no advantage that I can see to using "Mainstream" resolution.
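
If anyone wants to see the effect for themselves, here is a rough sketch of sending a downsized frame to DeepStack's object detection endpoint. The URL, file name, and 640px target are just placeholders for your own setup, not anything Blue Iris does internally.

Code:
# Rough sketch: downsize a frame before posting it to DeepStack's
# object detection endpoint. URL, file name and target size are placeholders.
import io
import requests
from PIL import Image

DEEPSTACK_URL = "http://127.0.0.1:80/v1/vision/detection"

def detect(path, max_side=640):
    img = Image.open(path)
    img.thumbnail((max_side, max_side))  # keep aspect ratio, shrink longest side
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)
    buf.seek(0)
    r = requests.post(DEEPSTACK_URL,
                      files={"image": buf},
                      data={"min_confidence": 0.4},
                      timeout=30)
    r.raise_for_status()
    return r.json().get("predictions", [])

for p in detect("alert_frame.jpg"):
    print(p["label"], round(p["confidence"], 2))

Sending the full mainstream frame mostly just adds upload and resize time before the same detection runs, which is why I don't see an advantage to it.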

Back to the case sensitivity of DeepStack: directory names, model names, and object names all need to be typed exactly as they appear in the directory structure, in the model file names, and in the object names inside the models.
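
One quick way to double-check the exact case of the object names in a custom model is to run a few sample images through its endpoint and print the labels that come back. A sketch, assuming DeepStack on 127.0.0.1:80 and a model named "dark"; adjust for your install.

Code:
# Sketch: print the exact label strings a custom model returns, so the
# BI "to confirm"/"to cancel" fields can be typed with matching case.
# URL and model name are assumptions.
import sys
import requests

CUSTOM_URL = "http://127.0.0.1:80/v1/vision/custom/dark"

labels = set()
for path in sys.argv[1:]:
    with open(path, "rb") as f:
        resp = requests.post(CUSTOM_URL, files={"image": f}, timeout=30).json()
    labels.update(p["label"] for p in resp.get("predictions", []))

print("Labels seen:", sorted(labels))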
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,885
Reaction score
48,547
Location
USA
I agree with the previous response. My additional thoughts:

When would a user run more than one model? In addition to what Sebastiantombs said, you may run another custom model like the LOGO model or the LPR model that MikeLud1 created. Or maybe you train your own model.

What are the advantages of running "Combined" and "Dark" together? Disadvantages? Agree with the above - it adds CPU time, but one may find what the other doesn't. The Dark model is specifically made for nighttime, but in my field of view, Dark worked better than the default object detection both day and night.

Do the extra model files need to be manually updated on occasion? That is up to the creator of the model - they may update it from time to time.

When a user enters the string "Objects:0", the DeepStack models are ignored. Is that a good thing? The objects:0 means the default model is not used. It is a good thing if you have another model that performs better. I turned off the default model because dark works better in my field of view.

Does it help to enter "Banana" or "Zebra" in the To Cancel position? It helps in certain fields of view, and it helps get a better alert image.

At night, my one camera, which has a straight-on angle of the street to get a side profile of a car, would either find a car but make the alert image the light shine on the street or just part of the vehicle, or it would trigger with nothing found due to headlight bounce off the street.

[attached image: alert capture before adding the cancel object]

Once I added a banana in the cancel field, it will now go through all the images and select the best one, which gives me the whole vehicle in the frame. It makes scrubbing video much quicker as I can skip looking at video of known vehicles.

[attached image: alert capture after adding the cancel object]


When would you elect to use the "Begin Analysis with motion leading image"? In most instances it works better as that is the first image. You usually do not use this if you are doing plates because the leading edge could be before the plate is visible.

Do most users select "Use Mainstream if available"? Does that feature depend on the speed of the console that's running BI? Most do not. It can take longer because DeepStack then downsizes the image anyway. Although this option is available, if you reach out to BI support with issues related to DeepStack, this is the first thing they tell you to uncheck. Along with this, in the global AI settings you can select High, Medium, or Low. High uses the most CPU and takes the longest. I have tested all three, and in my fields of view, Low was just as accurate as High and took less time.
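
If you want numbers rather than a gut feel when comparing settings or models, you can time the same image against the endpoint a few times. A rough sketch below; the URL and test image are placeholders, and note it measures DeepStack itself rather than BI's own High/Medium/Low handling, so use it to compare image sizes or models directly.

Code:
# Rough timing harness: post the same frame N times and print the mean
# response time, so different image sizes or models can be compared.
import time
import requests

URL = "http://127.0.0.1:80/v1/vision/detection"
IMAGE = "test_frame.jpg"
N = 10

elapsed = []
for _ in range(N):
    with open(IMAGE, "rb") as f:
        start = time.perf_counter()
        requests.post(URL, files={"image": f}, timeout=60).raise_for_status()
        elapsed.append(time.perf_counter() - start)

print(f"mean {1000 * sum(elapsed) / N:.0f} ms over {N} runs")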

At the end of the day, it is best to try different functions and models with your field of view and then select the ones that work best for your system.
 

digger11

Getting comfortable
Joined
Mar 26, 2014
Messages
368
Reaction score
376
Do most users select "Use Mainstream if available"?
I recently discovered something else about this setting...

If you have "Burn label mark-up onto alert images" selected, and you have Substream(s) enabled for the camera, the above setting will also control whether the resulting image is from the Mainstream or the Substream. I have Pushover sending me the marked up image on a confirmed alert, and even though it does have an impact on my DeepStack processing times, I've decided to use the Mainstream so I get a full resolution image.
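
(BI handles the Pushover push itself; just for anyone scripting notifications outside of BI, the equivalent call is a plain multipart POST to Pushover's message API. A minimal sketch, with the app token, user key, and image path as placeholders:)

Code:
# Sketch: push an alert image through Pushover's message API.
# Token, user key and image path are placeholders.
import requests

with open("alert.jpg", "rb") as img:
    resp = requests.post(
        "https://api.pushover.net/1/messages.json",
        data={
            "token": "YOUR_APP_TOKEN",
            "user": "YOUR_USER_KEY",
            "message": "Confirmed alert: vehicle detected",
        },
        files={"attachment": ("alert.jpg", img, "image/jpeg")},
        timeout=30,
    )
resp.raise_for_status()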
 
  • Like
Reactions: Wen

VLITKOWSKI

Young grasshopper
Joined
May 9, 2019
Messages
75
Reaction score
7
Location
France
Hello guys,

I'm using objects:0,dark on all my cameras, but here is the DeepStack log:

[GIN] 2022/02/07 - 18:58:08 | 200 | 129.2734ms | 192.168.1.13 | POST "/v1/vision/detection"
[GIN] 2022/02/07 - 18:58:22 | 200 | 128.1605ms | 192.168.1.13 | POST "/v1/vision/custom/dark"

Both objects and dark are used.
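
If you want to verify which models are actually being called, and how long each one takes on average, those GIN lines can be parsed per endpoint. A quick sketch, with the log file path as a placeholder:

Code:
# Sketch: count calls and average response time per DeepStack endpoint
# from the GIN log, e.g. /v1/vision/detection vs /v1/vision/custom/dark.
# Log file path is a placeholder.
import re
from collections import defaultdict

LINE = re.compile(r'\|\s*([\d.]+)(ms|s)\s*\|.*"(/v1/vision/\S+)"')

stats = defaultdict(list)
with open("deepstack.log", encoding="utf-8", errors="ignore") as f:
    for line in f:
        m = LINE.search(line)
        if m:
            value, unit, endpoint = m.groups()
            stats[endpoint].append(float(value) * (1000 if unit == "s" else 1))

for endpoint, ms in stats.items():
    print(f"{endpoint}: {len(ms)} calls, {sum(ms) / len(ms):.1f} ms average")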
 