You don't need a really high-horsepower CPU to achieve low CPU utilization. Using sub streams will bring CPU loading down quite a bit. My system runs from a high of 25% to a low of 15% at night with 21 cameras, 20 MP/s and around 200 Mbps on an i7-6700K. Something newer, like an 8500, would probably run around 15% during the day and under 10% at night. A used business-class machine would save you a ton of money compared to building a super box to get basically the same result.
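Back-of-envelope for why sub streams matter so much: decode load tracks total pixels per second, which is roughly what the MP/s number on the console reflects. A minimal sketch, assuming MP/s is just width x height x FPS summed over the streams BI decodes (the camera count, resolutions, and frame rates below are illustrative, not my actual settings):

```python
# Illustrative only: decode load approximated as decoded pixels per second.
def mp_per_sec(width, height, fps):
    """Megapixels per second for one decoded stream."""
    return width * height * fps / 1_000_000

cams = 21

# Decoding D1 (704x480) sub streams at 15 FPS...
subs = cams * mp_per_sec(704, 480, 15)

# ...versus decoding 4K (3840x2160) main streams instead.
mains = cams * mp_per_sec(3840, 2160, 15)

print(f"sub streams:  {subs:7.1f} MP/s")   # ~106 MP/s
print(f"main streams: {mains:7.1f} MP/s")  # ~2613 MP/s, ~25x the work
```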
Currently I have an i7-8700 with 14 cameras, about half 4K and about half 8K, using sub streams set for 1280x720 resolution (which I realize is somewhat high, but I like the image quality; 640x480 looks pretty bad). All streams are set for 15 FPS, going direct to disk. I have the camera viewing frame rate also set to 15, and hardware decoding set to Intel+VPP. I also have a GTX 960 graphics card installed; it's a long story, but my monitor configuration requires it.
Both the Intel and the Nvidia GPUs are running at the same time.
CPU is running at about 70%, which I'd like to see lower.
Just thinking a newer CPU would have more GPU decoding horsepower, but maybe not?
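For what it's worth, here's the decode load my sub stream choice implies, assuming BI is only decoding the subs for the tile view (just arithmetic on my settings, not something BI reports this way):

```python
cams = 14

# Current setting: 1280x720 sub streams at 15 FPS.
print(f"720p subs: {cams * 1280 * 720 * 15 / 1e6:.0f} MP/s")  # ~194 MP/s

# Same cameras if the subs were dropped to D1 (704x480) at 15 FPS.
print(f"D1 subs:   {cams * 704 * 480 * 15 / 1e6:.0f} MP/s")   # ~71 MP/s
```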
You have something set wrong. My older 6700K runs lower than that with higher resolution on the sub streams, and all the sub streams are set to a 2048 bit rate. There are guys running more cameras on even older hardware who don't hit your utilization level. I've got a GTX 970 in my machine and don't even bother with hardware acceleration at all; it makes no significant difference that I can see.
Can you post a screenshot of the Cameras tab in the BI console showing frame and bit rates?
Some of your sub stream bit rates are higher than the main stream, but most of them are about the same as the main stream. You should be sub-10% with that machine.
You have something messed up here.
Folks here run 50 cams on a 4th gen at 30% CPU.
In many instances now, with substreams, using hardware acceleration causes the CPU usage to go up, because the CPU needed to hand the video off to the GPU is more than the savings.
If you are running too high a substream resolution, it defeats the purpose.
A D1 resolution with a 1024 bit rate uses a fraction of the CPU of an 8MP camera, and most people cannot tell the difference, especially in a tile layout. You always get the main stream when you solo a camera.
Here is a thread where I posted about plates, but the same logic applies: plate reader software can read a D1 resolution. For just one camera, this resulted in a savings of 30 MP/s in BI, which is a huge CPU savings when multiplied across cameras.
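To put "a fraction of the CPU" into numbers: at the same frame rate, decode work scales roughly with pixels per frame (that scaling is an assumption, but a reasonable one). A quick sketch:

```python
# Pixels per frame: D1 sub stream vs an 8MP (4K) main stream.
d1 = 704 * 480            # 337,920 pixels
eight_mp = 3840 * 2160    # 8,294,400 pixels
print(f"{eight_mp / d1:.1f}x")  # ~24.5x: the 8MP frame has ~24.5x the pixels of D1
```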
We have folks come here all the time thinking that more MP is better, whether it be for general purposes or for LPR. Those of us who have been around long enough know that sensor size is more important than MP. Those who have been here awhile know that I share a representative sample of...
I agree with wittaj; the bit rates on the sub streams are way too high to be efficient. You're running them the same as or higher than the main stream.
I would also shut off hardware acceleration, globally and in each camera. You're not gaining anything there and are probably adding to the CPU load. I thought you were using the Nvidia card for HA, not Intel. Either way, it isn't worthwhile once you set sub streams up properly.
OK, can I set each camera to Default for HW decoding in the BI Video tab and then set the Cameras tab to No under HW acceleration, or do I need to set each camera individually?
I will mess with the substream bit rate in each camera and report back.
So this is interesting: if I slow the bit rate down on the sub stream within the camera, it doesn't change the numbers within BI.
If I slow the bit rate down on the main stream within the camera, it slows both streams down in BI. Why?
That did the trick. I had to fiddle a bit to get BI to look at the 3rd stream because I'd just upgraded from an old version (had to re-inspect), but the CPU is now at ~10%.
Thanks so much for everybody's help, really appreciate it!