I currently use Dahua's IVS, but would prefer to configure object recognition in BI instead. I'm curious whether you guys run the built-in Deepstack inference on the substream or the main stream.
I noticed that when I run the analysis off the main stream (I don't use substreams at the moment), the CPU spikes to 100% while the Python process is running.
I have seen posts mentioning almost no CPU usage from Deepstack, so maybe processing full 4MP frames is the reason I'm seeing high usage? Or perhaps you guys use hardware-accelerated inference with a GPU?
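For what it's worth, here's a rough back-of-envelope on pixel counts. The resolutions below are assumptions for illustration (a typical 4MP main stream versus a D1-class substream), not anyone's actual settings:

```python
# Hypothetical resolutions: 4MP main stream vs. D1-class substream.
main_w, main_h = 2560, 1440   # assumed 4MP main stream
sub_w, sub_h = 704, 480       # assumed substream

main_px = main_w * main_h
sub_px = sub_w * sub_h
print(f"main stream: {main_px:,} px")                       # 3,686,400 px
print(f"substream:   {sub_px:,} px")                        # 337,920 px
print(f"ratio: {main_px / sub_px:.1f}x")                    # ~10.9x more pixels
```

So under these assumed resolutions there's roughly an order of magnitude more data to decode and resize per frame on the main stream, which could plausibly account for the CPU spikes even if the model itself scales its input down.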
As an aside, I also noticed that (perhaps due to a delay in processing) the Deepstack labeling does not work as well as Dahua's, which is almost instant. In many cases, by the time a label is applied, the object has already moved out of frame. I also get issues with cars that were already parked triggering alerts (I might be able to tweak that).