CodeProject.AI Version 2.0

Amgclk65

Getting the hang of it
Joined
Jan 14, 2018
Messages
108
Reaction score
41
That makes sense, I’ll wait it out.
Thanks.
 

pyspilf

Getting the hang of it
Joined
May 22, 2017
Messages
60
Reaction score
50
Location
Madrid, Spain
Just a quick update on the topic. I started off on CP AI 2.0.8 with no GPU and the latest version of BI - all working fine. Decided to improve performance, so I obtained and installed a GTX 1060 6GB card and started fiddling with CP AI... naturally, that was when the 403 error started appearing...

After various attempts at CP AI 2.1.10, 2.1.11, 2.2 and this morning 2.2.1 -> CP AI is finally installed, working fine standalone and correctly using GPU CUDA with 25-35ms times on the Explore tab with test images from my camera.

BI however does not like this version:

  • it does not recognize the path to the AI service, despite it recognizing it is up and running
  • it does not accept any path to custom models
  • testing clips with AI results in error 200 being thrown

I suppose either CP AI will need further fixes and/or BI will need some change to cater for this new version...


I suppose I picked the wrong time to upgrade, but CP AI is still in beta, so it's to be expected.
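For anyone else in the same boat, this is roughly how I'd sanity-check the server completely outside of BI. It's only a sketch: it assumes the default port 32168 and a test snapshot saved to disk, so adjust both for your own setup.

Code:
# Minimal standalone test of the CodeProject.AI detection endpoint.
# Assumes the server is listening on its default port 32168 and that
# "test_frame.jpg" is a snapshot exported from one of the cameras.
import requests

with open("test_frame.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:32168/v1/vision/detection",
        files={"image": f},
        data={"min_confidence": 0.4},
        timeout=30,
    )

resp.raise_for_status()
result = resp.json()
print("success:", result.get("success"))
for pred in result.get("predictions", []):
    print(f'{pred["label"]}: {pred["confidence"]:.2f}')

If that returns predictions quickly, the server itself is fine and the problem is on the BI side.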
 

harleyl7

Pulling my weight
Joined
Jun 4, 2021
Messages
260
Reaction score
223
Just a quick update on the topic. I started off on CP AI 2.0.8 with no GPU and the latest version of BI - all working fine. Decided to improve performance, so I obtained and installed a GTX 1060 6GB card and started fiddling with CP AI... naturally, that was when the 403 error started appearing...

After various attempts at CP AI 2.1.10, 2.1.11, 2.2 and this morning 2.2.1 -> CP AI is finally installed, working fine standalone and correctly using GPU CUDA with 25-35ms times on the Explore tab with test images from my camera.

BI however does not like this version:

  • it does not recognize the path to the AI service, despite it recognizing it is up and running
  • it does not accept any path to custom models
  • testing clips with AI results in error 200 being thrown

I suppose either CP AI will need further fixes and/or BI will need some change to cater for this new version...


I suppose I picked the wrong time to upgrade, but CP AI is still in beta, so it's to be expected.
Love the GPU times though. I tried the Google Coral and it's not there yet, but hopefully it can get down to those times, because its power consumption vs. a GPU is a lot better, even though a GPU doesn't use a ton more power. Currently though I have to use a 1080 with 3 fans, so it's a little more power. My 1060 only has 6GB, so I'm on the hunt for a low-power GPU with enough RAM.
 

pyspilf

Getting the hang of it
Joined
May 22, 2017
Messages
60
Reaction score
50
Location
Madrid, Spain
Love the GPU times though. I tried the Google Coral and it's not there yet, but hopefully it can get down to those times, because its power consumption vs. a GPU is a lot better, even though a GPU doesn't use a ton more power. Currently though I have to use a 1080 with 3 fans, so it's a little more power. My 1060 only has 6GB, so I'm on the hunt for a low-power GPU with enough RAM.
Do you find 6GB is not enough memory? With these issues I have been unable to see what kind of CPU/GPU usage I'd get in challenging conditions (think garden when it's windy). I'm currently running 10 cameras via AI, ranging between 3MP and 5MP (most are 4MP, two are 5MP), at 15fps, also using substreams for AI. It's an old i7-4790K with 24GB RAM, but it's only running BI and CP AI. Before getting the GTX 1060 I was running AI on the onboard GPU, and having removed hardware decoding in BI it was okay - not fast for AI, but OK, and CPU usage was rarely above 20%. I think it will be substantially faster now, once these issues are resolved.
 

harleyl7

Pulling my weight
Joined
Jun 4, 2021
Messages
260
Reaction score
223
Do you find 6GB is not enough memory? With these issues I have been unable to see what kind of CPU/GPU usage I'd get in challenging conditions (think garden when it's windy). I'm currently running 10 cameras via AI, ranging between 3MP and 5MP (most are 4MP, two are 5MP), at 15fps, also using substreams for AI. It's an old i7-4790K with 24GB RAM, but it's only running BI and CP AI. Before getting the GTX 1060 I was running AI on the onboard GPU, and having removed hardware decoding in BI it was okay - not fast for AI, but OK, and CPU usage was rarely above 20%. I think it will be substantially faster now, once these issues are resolved.
My bad, I meant to say GTX 1060 3GB. 3GB was enough for most AI detection, but not for ALPR.
 

David L

IPCT Contributor
Joined
Aug 2, 2019
Messages
8,072
Reaction score
21,146
Location
USA
My bad, I meant to say GTX 1060 3GB. 3GB was enough for most AI detection, but not for ALPR.
I agree with you about memory for GPUs. In gaming I have compared video cards; for example, one card had 2GB of memory and the other 4GB, and there was a big difference in what resolution and other settings a game could run at without issues.

Point is, when it comes to processing AI, I would assume the same applies...
 

tantrim

n3wb
Joined
Aug 20, 2023
Messages
29
Reaction score
14
Location
Texas
Love the GPU times though. I tried the Google Coral and it's not there yet, but hopefully it can get down to those times, because its power consumption vs. a GPU is a lot better, even though a GPU doesn't use a ton more power. Currently though I have to use a 1080 with 3 fans, so it's a little more power. My 1060 only has 6GB, so I'm on the hunt for a low-power GPU with enough RAM.
Did you try the dual Edge TPU Coral?
 

pyspilf

Getting the hang of it
Joined
May 22, 2017
Messages
60
Reaction score
50
Location
Madrid, Spain
Just a quick update on the topic. I started off on CP AI 2.0.8 with no GPU and the latest version of BI - all working fine. Decided to improve performance, so I obtained and installed a GTX 1060 6GB card and started fiddling with CP AI... naturally, that was when the 403 error started appearing...

After various attempts at CP AI 2.1.10, 2.1.11, 2.2 and this morning 2.2.1 -> CP AI is finally installed, working fine standalone and correctly using GPU CUDA with 25-35ms times on the Explore tab with test images from my camera.

BI however does not like this version:

  • it does not recognize the path to the AI service, despite it recognizing it is up and running
  • it does not accept any path to custom models
  • testing clips with AI results in error 200 being thrown

I suppose either CP AI will need further fixes and/or BI will need some change to cater for this new version...


I suppose I picked the wrong time to upgrade, but CP AI is still in beta, so it's to be expected.
Just installed the 2.2.2 update. Same issues as 2.2.1 for BI.
 

pyspilf

Getting the hang of it
Joined
May 22, 2017
Messages
60
Reaction score
50
Location
Madrid, Spain
There is an issue with Blue Iris not using standard JSON format when receiving the response from CodeProject.AI. Blue Iris Support was made aware of the issue, and it should be fixed in the next release of Blue Iris.
Mike, yes, I was reading about that on the CP AI forum. But there is still the issue of BI not detecting that CP AI is running, and the config for auto start/stop of the service and the selection of custom models don't work either.

Nonetheless, with CP AI running and the custom model folder selected for the YOLO module I am using, I can see detection is happening anyway in CP AI; it's perhaps just BI not liking the returned JSON.
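(For anyone who wants to see exactly what BI is being handed, this is roughly how I'd peek at the raw JSON a custom model sends back. The port and the "ipcam-general" model name are only examples for this sketch; swap in whatever you actually have loaded.)

Code:
# Dump the raw JSON a CodeProject.AI custom model returns, so you can
# compare it with what BI expects. Port 32168 and the "ipcam-general"
# model name are placeholders for this sketch.
import json
import requests

with open("test_frame.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:32168/v1/vision/custom/ipcam-general",
        files={"image": f},
        timeout=30,
    )

print("HTTP status:", resp.status_code)
print(json.dumps(resp.json(), indent=2))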

I was also meaning to ask, on a different note, how you go about using the .NET version of YOLO with the GPU? When I use Object Detection (YOLOv5 .NET) I get around 250ms per detection, and with Object Detection (YOLOv5 6.2) I get around 25ms (so 10x faster). The .NET one shows DirectML and the other shows CUDA... Do you know how I can check whether the .NET version is using the onboard GPU instead? By checking in Task Manager which GPU is being used?
 

pyspilf

Getting the hang of it
Joined
May 22, 2017
Messages
60
Reaction score
50
Location
Madrid, Spain
Thanks, thought so. I just tested as you suggested: the .NET DirectML version uses GPU 0 (my onboard Intel) and the 6.2 version uses GPU 1 (the GTX 1060).

I guess there is no way of making the .NET one use GPU 1 - I thought I read somewhere that performance was improved when using DirectML vs. CUDA.

Anyway, with YOLOv5 6.2 using CUDA and times around 25-35ms, I think it is okay as is.
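(Side note for anyone checking the same thing without staring at Task Manager: a rough sketch like the one below, polling the NVIDIA card while a detection runs, also works. It assumes the nvidia-ml-py bindings, imported as pynvml, are installed, and it only sees the NVIDIA card, not the onboard Intel.)

Code:
# Poll the GTX 1060 while running detections to confirm it is the card
# doing the work. Requires the nvidia-ml-py package (import name pynvml).
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # only NVIDIA card in this box
name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):  # older bindings return bytes
    name = name.decode()

for _ in range(10):
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"{name}: GPU {util.gpu}% | VRAM {mem.used / 1024**2:.0f} MiB")
    time.sleep(1)

pynvml.nvmlShutdown()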
 

pyspilf

Getting the hang of it
Joined
May 22, 2017
Messages
60
Reaction score
50
Location
Madrid, Spain
Thanks, thought so. I just tested as you suggested: the .NET DirectML version uses GPU 0 (my onboard Intel) and the 6.2 version uses GPU 1 (the GTX 1060).

I guess there is no way of making the .NET one use GPU 1 - I thought I read somewhere that performance was improved when using DirectML vs. CUDA.

Anyway, with YOLOv5 6.2 using CUDA and times around 25-35ms, I think it is okay as is.
PS - what a beast of a card you are running... with 56GB memory and 300+ operations/second :)
 

pyspilf

Getting the hang of it
Joined
May 22, 2017
Messages
60
Reaction score
50
Location
Madrid, Spain
Yes, it's on the list of things to do, but I can't quite yet. I am running BI and other servers in a rack, and have an old VGA monitor connected via KVM for when remote desktop doesn't work. I need to get a VGA adapter so I can use one of the ports on the GTX 1060, since all the other servers are also connected via VGA and I need to keep this old monitor.
 

tantrim

n3wb
Joined
Aug 20, 2023
Messages
29
Reaction score
14
Location
Texas
So what GPU is the dual Coral comparable with?

I'm trying to figure out if I want to just get a second-hand GPU or a dual Coral with a PCIe adapter. The PC I have has an available M.2 Wi-Fi slot, but I don't think it would support the dual Coral directly, so I would need the PCIe adapter.
 