This is mostly just an FYI that CPAI has both a Mesh option and the ability to connect to remote systems for handling AI detections. This can be VERY handy if you're running BI on a system that can't run a GPU.
My BI system has the PCIe slot but no additional power (it only has something like a 270W PSU and no PCIe power cable), so short of buying a stupidly expensive 75W GPU for AI, I tried using an old laptop I have with an M2000M GPU. First, I set it up in Mesh - which is as simple as enabling Mesh on both systems and they find each other, somehow (note: Docker CPAI requires additional configuration to connect in Mesh) - but this had abnormally high latency, which I suspect is partly because my primary CPAI is a Docker container and it connects over only 1Gb, which adds a lot of overhead. Next, I just targeted my laptop directly by changing the IP from 127.0.0.1 to the laptop's address and opening the port... and whammy, it worked!
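If anyone wants to sanity-check the remote box before pointing BI at it, here's a rough Python sketch that posts a still image to the detection endpoint and times the round trip. It assumes CPAI's default port (32168) and the standard /v1/vision/detection route; the IP and image path are placeholders, so swap in your own.

```python
# Quick sanity check for a remote CodeProject.AI server: send one image to the
# detection endpoint and print the results plus the round-trip time.
# Assumptions (not from the post): default CPAI port 32168 and the standard
# /v1/vision/detection route; replace the host IP and image path with yours.
import time
import requests

CPAI_HOST = "192.168.1.50"   # hypothetical laptop IP -- replace with yours
CPAI_PORT = 32168            # CPAI's default port; change if you remapped it
IMAGE_PATH = "snapshot.jpg"  # any still frame exported from BI will do

url = f"http://{CPAI_HOST}:{CPAI_PORT}/v1/vision/detection"

with open(IMAGE_PATH, "rb") as f:
    start = time.perf_counter()
    resp = requests.post(url, files={"image": f}, timeout=30)
    elapsed_ms = (time.perf_counter() - start) * 1000

resp.raise_for_status()
result = resp.json()

print(f"Round trip: {elapsed_ms:.0f} ms")
for pred in result.get("predictions", []):
    print(f"  {pred['label']}: {pred['confidence']:.2f}")
```

The round-trip number this prints includes the network hop and image upload, so it's a fair approximation of the latency BI itself will see.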
I went from using local/Docker YOLOv5 6.2 Medium (CPU), averaging 200-700ms and sometimes higher, to YOLOv5 6.2 Large (GPU, CUDA), averaging 70-300ms. Considering this is going over only 1GbE, I consider that pretty darn impressive.
My BI box runs all NVMe for Docker/OS, 32GB of DDR4-2400, and an i5-9500, so it's not 'slow' by any means, but getting Docker/CPAI off it has trimmed CPU usage a lot, which helps overall performance.
Long term, I'd obviously prefer to get an inexpensive 75W GPU for my BI system, but unfortunately those are still pretty spendy, so I'm hoping CPAI either starts to support Intel GPUs or 75W GPU prices come down. I'd kill for an A2000 (I keep seeing YouTube content creators using them and am jealous) because then I could do more than just CPAI (honestly, CPAI doesn't need more than 2GB of VRAM).
Another thing I'd honestly like to see is CPAI itself improving - I wasn't aware until this week, for example, that it downscales the image before inspecting it, which I think is one reason I have tree stumps being detected as 60% likely cows =)
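For anyone curious what that downscaling actually means in practice, here's a back-of-the-napkin Python sketch. It assumes the YOLOv5 modules letterbox frames to roughly 640x640 (the usual YOLOv5 default; I haven't confirmed CPAI's exact behavior), and the image path and object size are made-up placeholders. A stump that covers 100x150 pixels on a 1440p frame shrinks to roughly 25x38 pixels by the time the model looks at it, which goes a long way toward explaining the cows.

```python
# Rough illustration of how much detail survives the resize a YOLOv5-style
# detector does before inference. Assumption (not confirmed for CPAI
# specifically): the frame is scaled down so its longest edge is ~640px,
# which is YOLOv5's usual default input size.
from PIL import Image

MODEL_INPUT = 640  # typical YOLOv5 input edge length (assumption)

frame = Image.open("snapshot.jpg")     # e.g. a 2560x1440 still from BI
scale = MODEL_INPUT / max(frame.size)  # shrink factor applied to the frame
resized = frame.resize((round(frame.width * scale), round(frame.height * scale)))

# A "tree stump" that covers ~100x150 pixels in the original frame...
stump_w, stump_h = 100, 150
print(f"Frame {frame.size} -> model sees roughly {resized.size}")
print(f"A {stump_w}x{stump_h}px object shrinks to about "
      f"{round(stump_w * scale)}x{round(stump_h * scale)}px before inference")
```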
But it's FOSS, so I guess we get what we pay for? I've heard Frigate supports better AI, though I've never run it to say for sure.