Running BI and CPAI VMs

Hans007

So I managed to get CPAI up and running using Ubuntu 24.04 instead of Debian 12.
But the issue I have is that even though CPAI now sees and uses the GPU for Object Detection (YOLOv5 6.2), no triggers are sent from BI.
And when I look at BI, it says the GPU is used, even though none of the feeds are set to use the GPU.
[screenshots attached]

I did a manual "trigger now" from BI and it just cancels the trigger without even sending it to CPAI. I also tried restarting both VMs.
[screenshot attached]
 
I believe most people are likely running both servers on the same Windows machine, either on an actual PC or on a single VM running Windows. Eventually somebody may answer!
 
Any ideas or suggestions?
Try making sure the firewalls on both the Windows machine or VM and the Ubuntu machine running CPAI allow port 32168 in both directions.
Clicking the BI Settings/AI/Open AI Dashboard should open the main CPAI page on the Ubuntu machine.
On the CPAI dashboard, clicking the blue CodeProject.AI Explorer button at the top of the window will allow you to pass an image to various types of detection to tell you if CPAI itself is working.
If necessary, you can copy some images from the Blue Iris machine to test it.
If you get no results, typically we would uninstall and reinstall the CPAI application.
I've done this, but to be honest, running CPAI on the Blue Iris machine is just easier.
Before the CPAI project separated from Code Project, their web page had a great deal of information.
Not so much any more.
But for me, 2.9.5 has been rock solid on Windows. (Until the inference count reaches about one million.)
Btw, I alert on cars also, just to make sure it's working.
Firewall is important!


Sent from my iPlay_50 using Tapatalk
 
A thought...
Do you have Blue Iris camera(s) configured to trigger and record?
Then once you are triggering and recording in Blue Iris, you need to configure AI for the camera.
Once you get Blue Iris triggering consistently, you can check CPAI.
From what you posted of the CPAI home page, you are not receiving anything from Blue Iris.
That would indicate to me either wrong IP address/port or firewall.
Again, get Blue Iris to trigger and record. That way you can test communication to CPAI on Ubuntu.
(I use cars).
You could actually set Blue Iris to trigger every 30 seconds or a minute.
Trying to figure this out just by triggering manually is tough.

Sent from my iPlay_50 using Tapatalk
 
 
So after some more testing, this is what I have:
From Windows 11 (firewall and Defender disabled): I'm able to access the CPAI web GUI via browser, and a check from PowerShell also confirms the server is listening ("Server listening on: ...").
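For reference, here is a quick way to double-check that the CPAI port is reachable from the Blue Iris side without a browser; this is just a generic connectivity test, and the host IP is a placeholder:

```python
# Quick reachability check for the CPAI port, run from the Blue Iris machine.
# The host IP is a placeholder; 32168 is the default CPAI port.
import socket

CPAI_HOST = "192.168.1.50"
PORT = 32168

try:
    with socket.create_connection((CPAI_HOST, PORT), timeout=5):
        print(f"TCP connection to {CPAI_HOST}:{PORT} succeeded")
except OSError as e:
    print(f"Could not reach {CPAI_HOST}:{PORT}: {e}")
```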


From CPAI: after installing the module "Object Detection (YOLOv5 6.2) 1.10.0", CPAI is able to see and use the GPU (CUDA) when testing with the CodeProject.AI Explorer, but still no triggers are sent from BI.

The next step I took was to also install the module "Face Processing 1.12.3" and enable it in BI. BI now sends requests to Face, but still nothing towards Object Detection:
[screenshots attached]


Here are the settings for one of the cams (they are all set up the same):

[screenshots: camera settings]
 
Any suggestions? Or do I really need to go back to a 2-in-1 setup :( meaning both on one system (Win 11)?
 
Have you checked this:
Last night I tried getting version 2.9.5 onto one of my Linux machines, but got hung up because I needed to install .NET 9.0; I only had 8.something. And this morning I got tied up with something else.
Do you have the required cuda pieces installed?
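If you want to sanity-check the CUDA stack from inside the Python environment the module uses, here is a minimal sketch, assuming PyTorch is installed there (the YOLOv5 6.2 module is PyTorch-based):

```python
# Minimal CUDA sanity check; run it with the Python interpreter from the YOLOv5 module's venv.
# Assumes PyTorch is installed in that environment.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA runtime:", torch.version.cuda)
    print("GPU:", torch.cuda.get_device_name(0))
```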
Another thought: do you have enough resources on the Windows VM to run CPAI on that VM? It's actually easier that way. The last time I ran CPAI on Linux was almost a year ago, and I think that was in Docker.
I will try sometime tomorrow.
It seems to me you might be making this way too complicated, running on two VMs, if I understand correctly.
And unless I misunderstand your situation, you should be able to consistently produce alerts in Blue Iris (before trying to get the communication working). Then it will be easier to figure it out.

Sent from my iPlay_50 using Tapatalk
 
Thanks for your suggestions. I installed .NET 9 (full package) and also everything related to NVIDIA and CUDA. The weird part is that I can get the Face module working between BI and CPAI, but not the Object Detection module. Using the Explorer/Vision page I'm able to run tests, so CPAI does work using the GPU, but no requests are sent from BI. I tried looking at the logs, but there are no traces of any attempts in either BI or CPAI.

I will also do some more testing, as I read that a lot of folks are having success with Docker.
 
You ARE getting responses from CPAI in the Face recognition module? That means the IP address and port should be good.
When you set Blue Iris to alert on something common (like cars), you don't get responses.
You say that Blue Iris is not sending requests to the CPAI server.
When you look at the log in CPAI, you DO NOT see client requests from Blue Iris.

Like this:


2025-08-22 05:34:50: Response rec'd from Object Detection (YOLOv5 .NET) command 'custom' (#reqid 8764a2cb-afde-4966-b1e0-955d184de65c) ['Found Car'] took 139ms
2025-08-22 05:34:50: Client request 'custom' in queue 'objectdetection_queue' (#reqid 0a44f603-9b84-4462-8c7e-5b3d245b0a97)
2025-08-22 05:34:50: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 0a44f603-9b84-4462-8c7e-5b3d245b0a97)
2025-08-22 05:34:51: Response rec'd from Object Detection (YOLOv5 .NET) command 'custom' (#reqid 0a44f603-9b84-4462-8c7e-5b3d245b0a97) ['Found Car'] took 143ms
2025-08-22 05:34:51: Client request 'custom' in queue 'objectdetection_queue' (#reqid 7b5acd9b-8a3c-4eda-b7fd-e172e54dc1ed)
2025-08-22 05:34:51: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 7b5acd9b-8a3c-4eda-b7fd-e172e54dc1ed)
2025-08-22 05:34:51: Response rec'd from Object Detection (YOLOv5 .NET) command 'custom' (#reqid 7b5acd9b-8a3c-4eda-b7fd-e172e54dc1ed) ['Found Car'] took 152ms
2025-08-22 05:34:51: Client request 'custom' in queue 'objectdetection_queue' (#reqid 1712ee8f-0a77-4907-906f-5e4bd37c62fa)
2025-08-22 05:34:51: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 1712ee8f-0a77-4907-906f-5e4bd37c62fa)
2025-08-22 05:34:52: Response rec'd from Object Detection (YOLOv5 .NET) command 'custom' (#reqid 1712ee8f-0a77-4907-906f-5e4bd37c62fa) ['Found Car'] took 153ms
2025-08-22 05:34:52: Client request 'custom' in queue 'objectdetection_queue' (#reqid 3dd0ebe3-2bd1-432c-b71b-1918d888a722)
2025-08-22 05:34:52: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 3dd0ebe3-2bd1-432c-b71b-1918d888a722)
2025-08-22 05:34:52: Response rec'd from Object Detection (YOLOv5 .NET) command 'custom' (#reqid 3dd0ebe3-2bd1-432c-b71b-1918d888a722) ['Found Car'] took 166ms

If you DO NOT see client requests in the CPAI log, then you can actually verify what Blue Iris is sending by doing a packet capture. The calls and responses are HTTP calls. (Documented somewhere in the CPAI docs, I don't know where offhand.)
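If you'd rather not set up a full packet capture, another option (a debugging aid of my own, not something built into BI or CPAI) is to temporarily point Blue Iris's AI server address at a tiny HTTP listener and dump whatever it sends:

```python
# Tiny HTTP listener to show what (if anything) Blue Iris sends when it should call CPAI.
# A debugging aid of my own: run it on a spare machine/port, point BI's AI server address at it
# temporarily, trigger a camera, then switch the setting back.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DumpHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(f"POST {self.path}")
        print(self.headers)
        print(f"{length} bytes of body, first 200 shown: {body[:200]!r}")
        # Reply with a minimal JSON body so the client does not hang waiting.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"success": false, "predictions": []}')

HTTPServer(("0.0.0.0", 32168), DumpHandler).serve_forever()
```

If nothing shows up here when a camera triggers, Blue Iris is not sending the request at all, which points back at the camera/AI configuration rather than the network.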

See this:


If Blue Iris IS NOT sending requests for object detection, then the configuration in the camera / Blue Iris is wrong. (Do one camera at a time. Change one thing at a time.)

If that is the case, I would suggest disabling all but one camera (maybe one pointed at the street, detecting cars) and working on getting that one going.

I will send screenshots of the configuration that I have been using for years. (I used to disable custom models and switch back and forth between CPAI and Deepstack just by changing the IP address and port.)

Like some others have said, running CPAI on a Linux machine or VM throws a lot more complication into the mix. Trust me, running CPAI on the same box as Blue Iris is much easier. Version 2.9.5 has been pretty good for me; I've gotten 900,000 inferences before rebooting the machine. (I did have an issue last week where CPAI quit for no apparent reason, however.)
This version has been running on my Windows 11 box for almost eight months.
 
Ken does keep changing the UI and organization of the menu system, but I have not really changed anything substantial for years.

[screenshots: Blue Iris AI configuration]
 
Upon looking at your AI window, I believe the Objects:0 in the Custom Models field may be the issue.
I don't think we've used that for years. I know I haven't.
 
IIRC, this was used when we did not have the Custom Models / Default Objects option check boxes on the main AI page.

See:
Post in thread 'Blue Iris Long Delay with CodeProject Object Detection'
So, on the main AI page, you have Default Objects selected, then in the camera models field, you are telling Blue Iris to not use default objects.
That could explain this behavior.
If you want to use a custom model, on the main AI page, check Custom Models and uncheck Default Objects. Then remove the objects:0 from the model field on the camera.

And take it from me, do one thing at a time.
I have screwed myself up many times by thinking I know what the issue is, and doing more than one thing at once.
 
So here's an update: I spun up a new LXC container (Debian) running Docker, followed the steps that were provided one by one, and created a Docker container with the image cuda12_2-2.9.7, but got the same issue. So I tried another Docker image, cuda12_2 (version 2.9.5), without touching anything in BI, and everything just worked. :) I don't know why, but I went from 100-200 ms (running BI and CPAI 2.9.5 on the same machine, a dedicated PC) down to 20-50 ms (running Proxmox with Win 11 in a VM and CPAI in Docker)... :cool:
 
Good deal.
If I may ask, what version of Blue Iris are you using?
It seems like an older one from your screenshots.
It doesn't matter, as long as it works.
Good luck.

Sent from my iPlay_50 using Tapatalk