In-camera motion detection

fenderman

Staff member
Joined
Mar 9, 2014
Messages
36,907
Reaction score
21,294
I emailed Ken about implementing in-camera motion detection for those using weaker PCs or many cams, to reduce CPU use. This was his response. Hopefully we can get some information together for him on Hikvision, Dahua, and other in-demand cams. He indicated in-camera motion detection already works with Axis cameras.

"I have added camera-specific motion detection as the opportunity presents. Some manufacturers like Axis are forthcoming with the protocol, and some have to be reverse engineered as possible. If you have any detail on how your camera signals through the stream or to a client browser how it's triggered, please relay this information."

Does anyone have this info for any cameras?

Side note: I asked about line cross detection and he said that it will be added. This will be a significant improvement and will reduce false alerts.
 

Zxel

Getting the hang of it
Joined
Nov 19, 2014
Messages
263
Reaction score
54
Location
Memphis, TN
OOOh, awesome! I hope there is enough knowledge around here to make that happen. In-camera motion detection and line cross detection (are you talking in-camera here too, or a BI implementation?) would be highly helpful. I'm sure those with many cameras or weaker computers will appreciate it tremendously. The ability to use a camera's built-in features could be a game changer for BI; the motion detection in some cameras is superior to BI's.

Plus, if BI just looks for a trigger, then all the fancy motion detection "stuff" like line cross detection, face recognition... would be usable by BI and save considerable CPU cycles. It would be huge, IMO. :cool:
 

fenderman

Staff member
Joined
Mar 9, 2014
Messages
36,907
Reaction score
21,294
OOOh, awesome! I hope there is enough knowledge around here to make that happen. In-camera motion detection and line cross detection (are you talking in-camera here too, or a BI implementation?) would be highly helpful. I'm sure those with many cameras or weaker computers will appreciate it tremendously. The ability to use a camera's built-in features could be a game changer for BI; the motion detection in some cameras is superior to BI's.

Plus, if BI just looks for a trigger, then all the fancy motion detection "stuff" like line cross detection, face recognition... would be usable by BI and save considerable CPU cycles. It would be huge, IMO. :cool:
I mean BI doing its own line cross detection.
I personally prefer Blue Iris motion detection because I can vary it by profile and have different settings for alerts vs. recording, but this will be awesome if folks can supply the info needed.
 

Zxel

Getting the hang of it
Joined
Nov 19, 2014
Messages
263
Reaction score
54
Location
Memphis, TN
I personally prefer Blue Iris motion detection because I can vary it by profile and have different settings for alerts vs. recording, but this will be awesome if folks can supply the info needed.
Not sure why I would send an alert only and not record at the same time on motion detection, but you must have a reason for that. Some cameras actually do have sophisticated motion detection/profile-type settings too; this is another thing that excites me about this. You could use all the *new* features (even ones we don't know of yet) that utilize multiple techniques in the camera to just send an alert that BI would understand. Using the built-in processor of the camera is certain to save many BI CPU cycles; I bet the load will be dramatically lower.

So to all the techies out there with access to these cameras: get your network packet analysis software out and do some digging. We're counting on you! :D
 

fenderman

Staff member
Joined
Mar 9, 2014
Messages
36,907
Reaction score
21,294
My setup allows a higher threshold for alerts vs. recording. For recording, I set the threshold very low so nothing is ever missed; for alerts, I set the threshold much higher and have the area heavily masked so I only get alerts on the actual movement I want to know about.
I have yet to see a camera that has sunrise/sunset profile capability (I know there are scripts for Hikvision, but I mean natively) so that I can have separate settings for night vs. day, but yeah, it would be great.
Maybe @nayr @networkcameracritic @Mike can provide some insight....
 

Zxel

Getting the hang of it
Joined
Nov 19, 2014
Messages
263
Reaction score
54
Location
Memphis, TN
I've been perusing the ONVIF specification, and it looks like it uses SOAP and XML to do all the "talking" between the camera and the recording device (in our case BI), so it wouldn't require network packet captures. This is the *stuff* I do, and it looks easy-peasy to me... but I don't have a camera yet to test with (as you know, I just sold my set of cameras and am still in the process of getting new ones), so I can't jump in and help - yet. :blue:

When I do, I'll be glad to help. I think nailing down the ONVIF method would cover a considerable number of the cameras out there, even if the spec isn't always tightly followed.
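
To make that concrete, here's a rough Python sketch of asking an ONVIF camera which events it can report, using a raw SOAP 1.2 request. Treat it purely as an illustration: the host, the /onvif/event_service path, and the missing authentication are all assumptions - most cameras require a WS-UsernameToken header and may publish the event service at a different path (GetCapabilities/GetServices on the device service will tell you where).

# Sketch only: query an ONVIF camera's event topics with a raw SOAP 1.2 call.
# The address/path below are placeholders, and auth is omitted for brevity.
import requests

EVENT_SERVICE = "http://192.168.1.64/onvif/event_service"  # placeholder

ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:tev="http://www.onvif.org/ver10/events/wsdl">
  <s:Body>
    <tev:GetEventProperties/>
  </s:Body>
</s:Envelope>"""

resp = requests.post(
    EVENT_SERVICE,
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
    timeout=5,
)
print(resp.status_code)
# The TopicSet in the reply lists what the camera can signal; motion usually
# shows up as something like tns1:VideoSource/MotionAlarm or
# tns1:RuleEngine/CellMotionDetector/Motion, depending on the vendor.
print(resp.text[:2000])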
 

bp2008

Staff member
Joined
Mar 10, 2014
Messages
12,688
Reaction score
14,057
Location
USA
Unfortunately, just eliminating the Blue Iris motion detection is not going to reduce CPU usage all that much. As far as I can tell, Blue Iris is always decoding all the incoming video streams. This is very CPU-intensive work that must be done before Blue Iris can even start analyzing the video frames for motion. But it is also important for another reason: live views. Blue Iris must decode the video streams before it can display them in its main interface, in its web interface, or in any mobile app.

Now, of course all decent NVR software also does live views, and with lower CPU usage. Often this is done by designing the NVR to never have to decode (or encode) video during its normal operations. This typically means the camera does all the video analytics (motion detection), and the client app (web browser or mobile app or desktop app) does all the video decoding. Then all the NVR has to do is feed all the compressed video streams (just a few megabytes per second total) to the right places at the right times. If optimized well enough, this job can be done for dozens of cameras on a really weak computer, like one with an Atom CPU.
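
To illustrate the pass-through idea (a toy sketch only, not how Blue Iris or any particular NVR actually works): the little relay below copies a camera's compressed bytes to every connected viewer without ever decoding them. A real NVR would speak RTSP/RTP and handle recording; the camera address here is just a placeholder for any TCP byte source.

# Toy pass-through relay: forward compressed bytes from one source to all
# connected viewers, with no decoding or encoding anywhere in the path.
import asyncio

CAMERA_HOST, CAMERA_PORT = "192.168.1.64", 9000  # placeholder byte source
LISTEN_PORT = 8554                               # where viewers connect

viewers = set()  # StreamWriter objects for connected viewers

async def handle_viewer(reader, writer):
    viewers.add(writer)
    try:
        await reader.read()          # block until the viewer disconnects
    finally:
        viewers.discard(writer)
        writer.close()

async def relay_camera():
    reader, _ = await asyncio.open_connection(CAMERA_HOST, CAMERA_PORT)
    while True:
        chunk = await reader.read(65536)     # compressed bytes, never decoded
        if not chunk:
            break
        for w in list(viewers):
            w.write(chunk)   # real code would also await drain() / drop slow clients

async def main():
    server = await asyncio.start_server(handle_viewer, "0.0.0.0", LISTEN_PORT)
    await asyncio.gather(server.serve_forever(), relay_camera())

asyncio.run(main())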

Blue Iris was simply not designed this way.
 

Zxel

Getting the hang of it
Joined
Nov 19, 2014
Messages
263
Reaction score
54
Location
Memphis, TN
Unfortunately, just eliminating the Blue Iris motion detection is not going to reduce CPU usage all that much. As far as I can tell, Blue Iris is always decoding all the incoming video streams. This is very CPU-intensive work that must be done before Blue Iris can even start analyzing the video frames for motion. But it is also important for another reason: live views. Blue Iris must decode the video streams before it can display them in its main interface, in its web interface, or in any mobile app.

Now, of course all decent NVR software also does live views, and with lower CPU usage. Often this is done by designing the NVR to never have to decode (or encode) video during its normal operations. This typically means the camera does all the video analytics (motion detection), and the client app (web browser or mobile app or desktop app) does all the video decoding. Then all the NVR has to do is feed all the compressed video streams (just a few megabytes per second total) to the right places at the right times. If optimized well enough, this job can be done for dozens of cameras on a really weak computer, like one with an Atom CPU.

Blue Iris was simply not designed this way.
I believe the reason they are getting lower CPU usage is that they use the sub-stream of a camera (and often multiplex the sub-streams into one stream for multi-camera views) and only use the primary stream when viewing a single camera. You could replicate this behavior in BI easily enough (except for the multiplexing - only Ken could do that); I've just never done it, so I don't know how much impact it would have.

As far as decoding the streams goes, most modern video cards can decode them themselves - BI doesn't have to do it; however, I'm not sure Ken is leveraging this in BI. Ken has stated many times that D2D bypasses the encoding - so I'm not sure I buy that it is decoding the streams all the time. :cool:

Afterthought:

It would be nice if you could specify a sub-stream for each camera in BI (for those cameras that support it); then BI could use that for display and just record the primary stream. Might be a good feature request for Ken over at BI.
 

fenderman

Staff member
Joined
Mar 9, 2014
Messages
36,907
Reaction score
21,294
It would be nice if you could specify a sub-stream for each camera in BI (for those cameras that support it); then BI could use that for display and just record the primary stream. Might be a good feature request for Ken over at BI.
You can do this by setting up a cloned camera and hiding the recording/high-def cam.
 

bp2008

Staff member
Joined
Mar 10, 2014
Messages
12,688
Reaction score
14,057
Location
USA
Ken has stated many times that D2D bypasses the encoding - so I'm not sure I buy that it is decoding the streams all the time. :cool:
Yes, D2D bypasses video encoding for recordings, and so D2D by itself also does not require decoding. But I am fairly certain the decoding is still done anyway. I have a few reasons for this.

I have a secondary Blue Iris machine that just does constant recording of four h264 sub streams, with no motion detection, and it runs in service mode with no GUI open and no remote viewers. This is the ideal situation in which no video decoding would actually be necessary.

But:

1. When I request the /image/index page in a browser, I get the latest frame from all the cameras in less than 50 milliseconds. I can keep refreshing this index image and keep getting fresh frames with a response time under 50 ms. If Blue Iris had to start up a bunch of h264 decoders to produce this index image, it would take a lot more time.

2. Blue Iris CPU usage only gets a very short, very small spike when producing an index image. This makes sense if Blue Iris was already decoding all the streams. The latest frame from each camera would already be available in a memory buffer and it would be a simple matter of scaling them down, appending the OSD text, and encoding to jpeg. However if it was starting up a bunch of h264 decoders, it would have to start at an i-frame and decode every partial frame up to the present time, for each camera, and that would be far slower and more CPU-heavy than I am seeing.
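
For anyone who wants to reproduce that first check, a quick Python sketch like this is enough. The server address, port, and credentials are placeholders for whatever your own BI web server uses.

# Repeatedly fetch /image/index from the Blue Iris web server and report how
# long each request takes. Host, port, and login below are placeholders.
import time
import requests

BI_URL = "http://192.168.1.5:81/image/index"  # placeholder BI web server address
AUTH = ("viewer", "password")                 # placeholder BI user/password

timings = []
for _ in range(10):
    start = time.perf_counter()
    resp = requests.get(BI_URL, auth=AUTH, timeout=5)
    elapsed_ms = (time.perf_counter() - start) * 1000
    timings.append(elapsed_ms)
    print(f"{resp.status_code}  {len(resp.content)} bytes  {elapsed_ms:.1f} ms")

print(f"average over {len(timings)} requests: {sum(timings) / len(timings):.1f} ms")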
 

Zxel

Getting the hang of it
Joined
Nov 19, 2014
Messages
263
Reaction score
54
Location
Memphis, TN
You can do this by setting up a cloned camera and hiding the recording/high-def cam.
Right, that's why I mentioned BI could do it; however, you have to admit it would complicate the setup. It would be far better if it were just an alternate-stream setting for each camera. That way BI could use the secondary stream for display while recording the primary stream. Done this way, I think it would give BI the same lower CPU usage that other NVR software has - and of course it would be way easier to manage.

It would be a hassle to have a 16-camera system and have to set up 32 cameras in BI. :D
 

bp2008

Staff member
Joined
Mar 10, 2014
Messages
12,688
Reaction score
14,057
Location
USA
Right, that's why I mentioned BI could do it; however, you have to admit it would complicate the setup. It would be far better if it were just an alternate-stream setting for each camera. That way BI could use the secondary stream for display while recording the primary stream. Done this way, I think it would give BI the same lower CPU usage that other NVR software has - and of course it would be way easier to manage.

It would be a hassle to have a 16-camera system and have to set up 32 cameras in BI. :D
I totally agree there. It would make it a lot less fun to full-screen one cam in the live view but it would have the potential to reduce CPU usage a lot while preserving full quality in recordings.

But I bet if you tried to hack that together with hidden cameras in BI right now, you'd end up with higher CPU usage than if you ignored the existence of sub-streams.
 

Zxel

Getting the hang of it
Joined
Nov 19, 2014
Messages
263
Reaction score
54
Location
Memphis, TN
Yes, D2D bypasses video encoding for recordings, and so D2D by itself also does not require decoding. But I am fairly certain the decoding is still done anyway. I have a few reasons for this.

I have a secondary Blue Iris machine that just does constant recording of four h264 sub streams, with no motion detection, and it runs in service mode with no GUI open and no remote viewers. This is the ideal situation in which no video decoding would actually be necessary.

But:

1. When I request the /image/index page in a browser, I get the latest frame from all the cameras in less than 50 milliseconds. I can keep refreshing this index image and keep getting fresh frames with a response time under 50 ms. If Blue Iris had to start up a bunch of h264 decoders to produce this index image, it would take a lot more time.

2. Blue Iris CPU usage only gets a very short, very small spike when producing an index image. This makes sense if Blue Iris was already decoding all the streams. The latest frame from each camera would already be available in a memory buffer and it would be a simple matter of scaling them down, appending the OSD text, and encoding to jpeg. However if it was starting up a bunch of h264 decoders, it would have to start at an i-frame and decode every partial frame up to the present time, for each camera, and that would be far slower and more CPU-heavy than I am seeing.
That is the ideal situation; however, I think you're underestimating the power of present-day computers - 50 ms is like forever for a computer. :)

Because of gaming (like it or not, games are what drove the video market), encoding and decoding video is built into the hardware now (especially for h264), and it has no problem doing it in real time - measured in microseconds or even less. The stuff we are doing now on computers would have choked older OSs and hardware; over the last ten years, computing power and hardware integration (because it's not just about raw power - there are many discrete components, chips, and bits of low-level programming that have improved everything) have progressed at a fast pace. Look out, Star Trek, here we come. :rolleyes:

I don't think that is enough anecdotal evidence to conclude that BI is decoding all the time; I'd like to see some harder evidence than that. :cool:
 

Zxel

Getting the hang of it
Joined
Nov 19, 2014
Messages
263
Reaction score
54
Location
Memphis, TN
I totally agree there. It would make it a lot less fun to full-screen one cam in the live view but it would have the potential to reduce CPU usage a lot while preserving full quality in recordings.

But I bet if you tried to hack that together with hidden cameras in BI right now, you'd end up with higher CPU usage than if you ignored the existence of sub-streams.
BI could switch to the primary stream when you full-sized it - this is exactly what current NVRs do. I tend to agree with your assessment of setting up the dual-camera thing in BI; however, I have not tried it, so I don't know for sure...

Even if it were less, I'd want to switch to the primary stream when I was viewing a camera full screen; that is why I think it has to be a Ken thing. It certainly could be done now, but it would be too much of a hassle for the user - IMO. :cool:
 

bp2008

Staff member
Joined
Mar 10, 2014
Messages
12,688
Reaction score
14,057
Location
USA
BI could switch to the primary stream when you full-sized it - this is exactly what current NVRs do. I tend to agree with your assessment of setting up the dual-camera thing in BI; however, I have not tried it, so I don't know for sure...

Even if it were less, I'd want to switch to the primary stream when I was viewing a camera full screen; that is why I think it has to be a Ken thing. It certainly could be done now, but it would be too much of a hassle for the user - IMO. :cool:
That should be doable without too much of a latency hit, as long as Blue Iris kept all the frames back to the most recent i-frame in memory. That way it would not have to stop and wait for a new i-frame to arrive.
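
In code, that buffering is just a rolling group of pictures. Here's a minimal Python sketch of the idea (the Frame type and names are illustrative stand-ins, not Blue Iris internals):

# Keep every frame since the most recent i-frame so a viewer switching to this
# stream can start decoding immediately instead of waiting for the next i-frame.
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    data: bytes
    is_keyframe: bool  # True for an i-frame

class GopBuffer:
    """Holds the current group of pictures (i-frame plus the frames after it)."""

    def __init__(self) -> None:
        self._frames: List[Frame] = []

    def push(self, frame: Frame) -> None:
        if frame.is_keyframe:
            self._frames = [frame]       # a new GOP starts here; drop the old one
        elif self._frames:
            self._frames.append(frame)   # only buffer once we have an i-frame

    def snapshot(self) -> List[Frame]:
        """Frames a new viewer must decode to reach 'now' without waiting."""
        return list(self._frames)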
 

bp2008

Staff member
Joined
Mar 10, 2014
Messages
12,688
Reaction score
14,057
Location
USA
encoding and decoding video is built into the hardware now (especially for h264), and it has no problem doing it in real time - measured in microseconds or even less
One of these days I need to test whether a powerful graphics card actually lowers BI's CPU usage. It just so happens that my BI box and my gaming box are the same except the BI box uses CPU/motherboard graphics while the gaming box has a GTX 980. I bet the CPU usage would be identical between them.
 

Zxel

Getting the hang of it
Joined
Nov 19, 2014
Messages
263
Reaction score
54
Location
Memphis, TN
I don't think you'll see a big difference, because h264 encoding/decoding is built into even motherboard/CPU integrated graphics - and has been for some time. The Intel HD Graphics 2000 and 3000, for example, had it back in 2011. Any video hardware from the last seven years or so has had built-in h264 *stuff*; you only take a big hit if you're doing software decoding (which nobody does anymore). :)
 

bp2008

Staff member
Joined
Mar 10, 2014
Messages
12,688
Reaction score
14,057
Location
USA
The question I'd like to finally answer for myself is whether a high-end gaming graphics card would be better at decoding h264 than a high-end CPU. Can Blue Iris even take advantage of a high-end GPU? Probably not. And it also depends on the capabilities of the software. Based on all the Windows Media support Blue Iris has traditionally had, I bet Blue Iris uses Microsoft's video processing libraries for h264 too. That probably takes advantage of all the features of a nice CPU, but I doubt a good Nvidia or AMD GPU would be used for anything more than pushing the pixels to the screen.

All this video processing is done with very low-level code, which is way outside my comfort zone. It makes my brain hurt.
 

Zxel

Getting the hang of it
Joined
Nov 19, 2014
Messages
263
Reaction score
54
Location
Memphis, TN
The question I'd like to finally answer for myself is whether a high-end gaming graphics card would be better at decoding h264 than a high-end CPU. Can Blue Iris even take advantage of a high-end GPU? Probably not. And it also depends on the capabilities of the software. Based on all the Windows Media support Blue Iris has traditionally had, I bet Blue Iris uses Microsoft's video processing libraries for h264 too. That probably takes advantage of all the features of a nice CPU, but I doubt a good Nvidia or AMD GPU would be used for anything more than pushing the pixels to the screen.

All this video processing is done with very low-level code, which is way outside my comfort zone. It makes my brain hurt.
If you're talking about CPU software decoding of h264, any modern video card (even the $30 types) will beat the most powerful CPU - it's not even close. A powerful graphics card may even be slower at decoding an h264 stream than a less expensive card optimized for, say, an HTPC, because more design time went into that card's video encoding/decoding. The encoding and decoding done by video cards is handled by discrete components or a dedicated chip, and has nothing to do with the gaming GPU's graphics horsepower or speed. Think of it this way: h264 encoding/decoding is hard-wired into graphics cards today. I can't even think of a video card that doesn't have dedicated h264 hardware (although I'm sure there is one out there - probably special-purposed for a specific application).

I am well versed in this subject - I ran one of the first online gaming sites for years, so I am very familiar with gaming graphics. :D

My guess is that BI just hands off the encoding/decoding to the OS, which has all the low-level code to use the graphics card's built-in h264 *stuff*; this is the standard nowadays. The encoding/decoding of h264 streams is totally mainstream now; the video driver stack is probably more of an issue than the card itself.
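
For anyone curious how little an application has to do to use that hardware path, here's a small Python sketch that just shells out to ffmpeg (assuming ffmpeg is installed and on your PATH; the input clip name is a placeholder). I'm not claiming this is what BI does internally - it just shows that hardware decode is basically one flag handed to the OS/driver stack.

import subprocess

# List the hardware acceleration methods this ffmpeg build knows about
# (on Windows you'd typically see dxva2 and/or d3d11va in the output).
print(subprocess.run(["ffmpeg", "-hwaccels"],
                     capture_output=True, text=True).stdout)

# Decode a sample h264 clip with DXVA2 and discard the frames (-f null -),
# a quick way to see whether hardware decode works and how fast it runs.
# "sample.h264" is a placeholder input file.
subprocess.run(["ffmpeg", "-hwaccel", "dxva2",
                "-i", "sample.h264",
                "-f", "null", "-"])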
 

ServiceXp

Getting the hang of it
Joined
Feb 12, 2015
Messages
211
Reaction score
18
Location
USA
This feature would be really nice to see implemented, even for my own selfish reasons. :loyal:
 