You imagine wrong; a little Raspberry Pi powered by my USB port can handle decoding a dozen 1080p H.264 streams. You don't need a big stinking GPU to decode video efficiently; you just need appropriate hardware. GeoVision says only H.264 is supported on Nvidia; that's a shame, because Intel Quick Sync on the latest generation of GPUs has no problem handling the H.265 codec that is driving 4K video adoption.
As I said, adding an expensive external GPU is counterproductive. We're not rendering large VR worlds, just decompressing some highly compressed files in a reasonable time with hardware designed for the task. Movies and security cameras are not the same thing. A Blu-ray will encode a 1080p movie in excess of 30 Mbps, while your camera will do 6-7 Mbps; that means if you have the hardware to decode a Blu-ray movie, you have the hardware to decode roughly five security cameras, no problem. And of course an Intel GPU can handle many times more than that; we've got users pushing >100 Mbit of video through onboard graphics.
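Rough back-of-envelope in Python, treating aggregate bitrate as the limit. That's a simplification (real decoders are bounded more by resolution and frame rate than by bitrate alone), and the 30 Mbps / 6.5 Mbps figures are just the ballpark numbers from above:

```python
# Rough back-of-envelope: how many camera streams fit in a decode "budget",
# using aggregate bitrate as a stand-in for decoder capacity. This is a
# simplification; real decoders are limited by resolution/fps throughput too.

BLURAY_1080P_MBPS = 30.0   # typical Blu-ray 1080p bitrate from the post above
CAMERA_MBPS = 6.5          # typical 1080p IP camera bitrate (6-7 Mbps)

def cameras_per_budget(budget_mbps: float, camera_mbps: float = CAMERA_MBPS) -> int:
    """Return how many camera streams fit in the given bitrate budget."""
    return int(budget_mbps // camera_mbps)

if __name__ == "__main__":
    # A box that handles one Blu-ray comfortably handles several cameras.
    print(cameras_per_budget(BLURAY_1080P_MBPS))   # -> 4
    # The ">100 Mbit through onboard graphics" figure mentioned above:
    print(cameras_per_budget(100.0))               # -> 15
```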
Like most things, mileage may vary. I have 8 cams, a couple of them 4K and the rest 1080p, and when I view all 8 on a 4K set with most of the cams live viewing close to 30 fps, my CPU sits at 50 to 60%. I can do tweaks here and there to bring that down. Quick Sync definitely helps as well, but when I start to see the Intel Quick Sync GPU spike to 100%, it makes me think it could use more power, almost like it's become a bottleneck. Can I ask, do you typically display your system on a 4K set? I notice a pretty large spike for both CPU and GPU when displaying on a 4K set compared to a 1080p set. I often switch back and forth between the two just to measure the CPU/GPU differences, and usually the gap is quite substantial.
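For anyone wanting to compare the two displays apples to apples, here's the kind of quick sketch I use to sample CPU while switching; it assumes Python with psutil installed, which is nothing to do with XProtect. The Quick Sync load itself isn't visible to psutil, so I read that off Task Manager's GPU "Video Decode" engine graph instead.

```python
# Minimal CPU sampling helper for comparing live-view load on a 4K vs 1080p
# display. Requires psutil (pip install psutil). GPU video-decode utilization
# isn't exposed here; check Task Manager's GPU "Video Decode" engine for that.
import time
import psutil

def sample_cpu(duration_s: int = 60, interval_s: float = 1.0) -> float:
    """Average whole-system CPU utilization over a sampling window, in percent."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        samples.append(psutil.cpu_percent(interval=interval_s))
    return sum(samples) / len(samples)

if __name__ == "__main__":
    print(f"Average CPU over 60 s: {sample_cpu():.1f}%")
```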
In case it helps, I'm running a Skylake i7-6700K (Intel HD Graphics 530) with a GTX 1060.
I'm pretty sure part of the issue is that I haven't found the setting to run the cams at lower resolutions in live view yet. For example, if you are on a 4K set with just four 1080p cams in live view, you want them all displayed at native 1080p, but when you switch to eight cams, the cameras should drop from the higher-resolution streams to smaller ones. GeoVision had a good way to do this, but I haven't found the equivalent setting in XProtect yet. I'm sure it must exist, because it would save on both CPU and GPU.
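The logic I'm hoping to find is basically the following. This is a hypothetical Python sketch of the selection rule, not the XProtect API; the stream labels and resolutions are made up for illustration.

```python
# Hypothetical sketch of live-view resolution scaling: when the on-screen tile
# is smaller than the camera's full resolution, request a lower-resolution
# sub-stream instead. Stream names/resolutions are invented for this example.

from typing import List, Tuple

# (label, width, height) -- assume each camera publishes a main and a sub stream.
STREAMS: List[Tuple[str, int, int]] = [
    ("main 1920x1080", 1920, 1080),
    ("sub 640x360", 640, 360),
]

def pick_stream(tile_w: int, tile_h: int) -> str:
    """Pick the smallest stream that still covers the display tile."""
    for label, w, h in sorted(STREAMS, key=lambda s: s[1] * s[2]):
        if w >= tile_w and h >= tile_h:
            return label
    # Tile is bigger than every stream: fall back to the largest one.
    return max(STREAMS, key=lambda s: s[1] * s[2])[0]

if __name__ == "__main__":
    # 2x2 grid on a 4K set: 1920x1080 tiles -> full-resolution main stream.
    print(pick_stream(3840 // 2, 2160 // 2))   # -> main 1920x1080
    # 3x3 grid on a 1080p set: 640x360 tiles -> the sub stream is enough.
    print(pick_stream(1920 // 3, 1080 // 3))   # -> sub 640x360
```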
It seems Milestone just isn't ready to add dedicated external GPU decoding yet, though it looks like they are headed that way. A few other companies already have it, and Milestone has been a little slow to market. Their recent documentation does say the following, though:
"Hardware accelerated video decoding uses the GPU inside the
Intel CPU
and the GPU on an optionally installed dedicated
graphic adaptor to render each video stream on the display."
https://www.milestonesys.com/files/..._Accelerated_Video_Decoding_Feature_Brief.pdf
In their example they use a GTX 970.
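For what it's worth, one way to sanity-check that the hardware decode paths actually work on a given box is to throw a clip at ffmpeg. This assumes ffmpeg is on the PATH and sample.mp4 is any local H.264 file; it's separate from whatever decoder XProtect selects internally.

```python
# Try decoding a sample clip with various hardware acceleration backends and
# report the result. Note: ffmpeg can silently fall back to software decoding,
# so any warnings on stderr matter as much as the exit code.
import subprocess

SAMPLE = "sample.mp4"   # any local H.264 clip (assumed, not provided here)

def try_hwaccel(name: str) -> None:
    """Decode the sample clip with the given hwaccel and report the outcome."""
    result = subprocess.run(
        ["ffmpeg", "-v", "warning", "-hwaccel", name, "-i", SAMPLE, "-f", "null", "-"],
        capture_output=True,
        text=True,
    )
    print(f"{name}: exit {result.returncode}")
    if result.stderr.strip():
        print(f"  {result.stderr.strip()}")

if __name__ == "__main__":
    # qsv = Intel Quick Sync, cuda = NVIDIA NVDEC, dxva2/d3d11va = generic Windows APIs
    for accel in ("qsv", "cuda", "dxva2", "d3d11va"):
        try_hwaccel(accel)
```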
I think with my particular setup, if I can get smart live-view resolution scaling to work, I should be able to drastically reduce my CPU and GPU usage. On GeoVision I had it working perfectly, but I haven't figured it out in XProtect just yet. To be fair, I haven't spent much time tracking it down; on Geo it was right in the main settings.