CodeProject.AI Version 2.0

Coltect

Pulling my weight
Joined
Nov 3, 2017
Messages
55
Reaction score
129
Location
Australia
Hi all, can someone explain how I would load custom models when I am using CPAI in Docker on a different machine than the one Blue Iris is running on? Do I need to point both Docker and Blue Iris to the same folder that has the custom models? Does Blue Iris only need to know of the custom models in order to determine the tags that are trained within the model?
gwithers' reply is probably the right way to do this, and his info on using Docker has been very useful to me.
I have Docker Desktop running on Windows (WSL2) on the same box as BI and could not get any custom models going until I found the Blue Iris and CodeProject.AI ALPR thread from MikeLud1, which says to create a directory with blank text files using the same names as the custom models. This seems to work, and the Docker instance must be downloading the right models, as its log file shows it processing ipcam-combined, general, license-plate, etc. Once I had the dummy files in place I could also check the ALPR box for plates in the BI settings.
 

MnM

Young grasshopper
Joined
May 14, 2014
Messages
95
Reaction score
20
gwithers' reply is probably the right way to do this, and his info on using Docker has been very useful to me.
I have Docker Desktop running on Windows (WSL2) on the same box as BI and could not get any custom models going until I found the Blue Iris and CodeProject.AI ALPR thread from MikeLud1, which says to create a directory with blank text files using the same names as the custom models. This seems to work, and the Docker instance must be downloading the right models, as its log file shows it processing ipcam-combined, general, license-plate, etc. Once I had the dummy files in place I could also check the ALPR box for plates in the BI settings.
I have my CodeProject.AI ALPR Docker container configured with local host folders mapped to the Docker folders. (I run CodeProject.AI ALPR on a separate Linux server in Docker.)
I have my custom models in the correct folder and the CodeProject.AI GUI can see them (via the /vision.html URL).
All CodeProject.AI tests and benchmarks work fine for the custom models I use (including ALPR).

With the above, ALPR in BI is always disabled.

I have created a new folder on the BI server, pointed the BI AI config to that folder, created an empty file for ALPR (license-plate.pt) in the folder, saved the configuration, and then restarted the BI service. ALPR for plates was enabled after this. This is the only way I have found to enable ALPR in BI with CodeProject.AI ALPR running on a separate Linux server.
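The placeholder-file trick described above can be sketched in a few lines. This is a hypothetical sketch: the folder path and model names are examples, and the key assumption (consistent with the posts above) is that BI only reads the file *names* to build its model list, while the real .pt weights live on the Docker host:

```python
from pathlib import Path

# Hypothetical folder on the BI machine; point BI Settings > AI at this path
# (on a real system it might be something like C:\BlueIris\custom-models).
models_dir = Path("custom-models")
models_dir.mkdir(parents=True, exist_ok=True)

# Blank placeholder files named after the real models running in Docker.
# BI only needs the names to enable the model (and ALPR) checkboxes.
for name in ["ipcam-combined.pt", "general.pt", "license-plate.pt"]:
    (models_dir / name).touch()

print(sorted(p.name for p in models_dir.glob("*.pt")))
# → ['general.pt', 'ipcam-combined.pt', 'license-plate.pt']
```

After creating the files, save the BI AI configuration and restart the BI service as described above so BI re-reads the folder.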
 

ohlin5

n3wb
Joined
Dec 14, 2016
Messages
25
Reaction score
10
Would someone be so kind as to refresh my memory on what causes the whole "double tagging" issue, where recognized objects have two squares and tags (e.g. "dog") around them? I thought it was caused by having the "default object detection" box in BI Settings > AI checked, but after unchecking that and rebooting I'm still getting the issue.
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
25,028
Reaction score
48,795
Location
USA
Would someone be so kind as to refresh my memory on what causes the whole "double tagging" issue, where recognized objects have two squares and tags (e.g. "dog") around them? I thought it was caused by having the "default object detection" box in BI Settings > AI checked, but after unchecking that and rebooting I'm still getting the issue.
That is two models actively searching for objects.
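To illustrate wittaj's point: when two models (say, the default object detector plus a custom model) both run on the same frame, each returns its own detection of the same object, and BI annotates every detection it gets back. A toy sketch with made-up detection values:

```python
# Hypothetical detections of one dog returned by two concurrently
# enabled models analyzing the same frame.
default_model = [{"label": "dog", "confidence": 0.91, "box": (120, 80, 300, 260)}]
custom_model = [{"label": "dog", "confidence": 0.88, "box": (118, 83, 302, 258)}]

# BI draws a labelled box for every returned detection, so the single
# dog ends up with two overlapping squares and tags.
detections = default_model + custom_model
print(len(detections))  # → 2
```

Disabling one of the two models means only one detection list comes back, so only one box is drawn.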
 

ohlin5

n3wb
Joined
Dec 14, 2016
Messages
25
Reaction score
10
That is two models actively searching for objects.
Ah...so two of the default included custom models, you're saying? If that's the case, what is the easiest way to disable one or more of these models? Just delete the file from the custom models folder lol?
 

Chicken

Getting the hang of it
Joined
Dec 30, 2016
Messages
45
Reaction score
35
If the Coral support gets figured out, is there a plan on which Coral boards to support: USB, PCIe, or M.2?
 

spuls

Getting the hang of it
Joined
May 16, 2020
Messages
90
Reaction score
68
Location
at
Glad to see that they go multi-engine as well.

@Chicken: it won't matter as long as you run it on the same host without any layer in between (like Docker)
 

tedrpi

Getting the hang of it
Joined
Sep 2, 2015
Messages
85
Reaction score
36

For folks having issues with BI not allowing a model size selection, CPAI not displaying a model size, or getting CPAI to stick to a model size selection, I'd suggest an uninstall of CPAI, deleting its directories, a reboot, and a clean re-install.

Doing this fixed my issues for CPAI 2.0.8 and BI 5.7.0.4.
 

blulegend

Young grasshopper
Joined
Dec 3, 2017
Messages
43
Reaction score
11
For folks having issues with BI not allowing a model size selection, CPAI not displaying a model size, or getting CPAI to stick to a model size selection, I'd suggest an uninstall of CPAI, deleting its directories, a reboot, and a clean re-install.

Doing this fixed my issues for CPAI 2.0.8 and BI 5.7.0.4.
Thanks, I did do a clean CPAI install for 2.0.7 because it didn’t work at all with BI initially. I haven’t gone to 2.0.8 yet. Will try today hopefully.
 

Cache450

n3wb
Joined
Jan 25, 2019
Messages
25
Reaction score
15
Location
Idaho
I've been running the CPAI CPU version for some time now, but I recently picked up an Nvidia Quadro P620 to try the GPU version.
I now have YOLOv5 6.2 running on the GPU using ipcam-combined. It seems to be working well... but when I pull up Task Manager, it shows all of the traffic going through the internal graphics rather than the P620 GPU. Did I miss something somewhere?
 

CCTVCam

Known around here
Joined
Sep 25, 2017
Messages
2,675
Reaction score
3,505
If the Coral support gets figured out, is there a plan on which Coral boards to support: USB, PCIe, or M.2?

On that subject, I found out only the USB version supports Windows (at least from what I noticed), so it looks like we're stuck with that. It's a pity, because there's an M.2 version with two TPUs on the one die for very little extra wattage, so double the processing power.

As for the 2 secs, I would imagine there's something amiss in the way the data is being processed, as 4 TOPS (trillion operations per second) should be fast enough to get good times.
 

blulegend

Young grasshopper
Joined
Dec 3, 2017
Messages
43
Reaction score
11
Installed 5.7.0.4 and 2.0.8. Seems to be working, and I can also select the size of the detection model now. But this new combo spikes to 100% CPU quite a bit now. Not good. Had to change from medium to small.
 

truglo

Pulling my weight
Joined
Jun 28, 2017
Messages
275
Reaction score
103
I've been running the CPAI CPU version for some time now, but I recently picked up an Nvidia Quadro P620 to try the GPU version.
I now have YOLOv5 6.2 running on the GPU using ipcam-combined. It seems to be working well... but when I pull up Task Manager, it shows all of the traffic going through the internal graphics rather than the P620 GPU. Did I miss something somewhere?
Mine has a P400, but close enough...

[screenshot attachment: Untitled.png]

Also, are you using the recommended driver and CUDA compute versions found here?


An important quote from that page...
If you have a CUDA enabled Nvidia card please then ensure you:

1. Install the CUDA Drivers (https://www.nvidia.com/download/index.aspx).
2. Install CUDA Toolkit 11.7 (https://developer.nvidia.com/cuda-11-7-0-download-archive).
3. Download and run our cuDNN install script (https://www.codeproject.com/KB/Articles/5322557/install_CUDnn.zip) to install cuDNN.

Nvidia downloads and drivers are challenging! Please ensure you download a driver that is compatible with CUDA 11.7, which generally means CUDA driver version 516.94 or below. Version 522.x or above may not work. You may need to refer to the release notes for each driver to confirm.

Since we are using CUDA 11.7 (which has support for compute capability 3.7 and above), we can only support Nvidia CUDA cards that are equal to or better than a GK210 or Tesla K80 card. Please refer to this table of supported cards (https://en.wikipedia.org/wiki/CUDA#GPUs_supported) to determine if your card has compute capability 3.7 or above.

Newer cards such as the GTX 10xx, 20xx and 30xx series, RTX, MX series are fully supported.
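The compute-capability floor quoted above (3.7 for CUDA 11.7) can be expressed as a small lookup check. The table below is an abridged, illustrative subset of Nvidia's published compute-capability values; confirm your own card against the linked Wikipedia table:

```python
# Compute capability per card (abridged; values from Nvidia's public
# CUDA GPU list).
COMPUTE_CAPABILITY = {
    "Tesla K80": 3.7,    # the minimum card named in the quote above
    "Quadro P400": 6.1,
    "Quadro P620": 6.1,  # the card from the post above
    "GTX 1080": 6.1,
    "RTX 3060": 8.6,
}

CUDA_11_7_FLOOR = 3.7  # minimum compute capability for CPAI's CUDA 11.7 build

def supported_by_cuda_11_7(card: str) -> bool:
    """True if the card meets CUDA 11.7's minimum compute capability."""
    return COMPUTE_CAPABILITY.get(card, 0.0) >= CUDA_11_7_FLOOR

print(supported_by_cuda_11_7("Quadro P620"))  # → True
```

So a Quadro P620 or P400 (compute capability 6.1) is comfortably above the floor; if CPAI still runs on the iGPU, the driver/toolkit version mismatch described in the quote is the more likely culprit.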
 

MikeR33

Getting the hang of it
Joined
Jan 26, 2018
Messages
34
Reaction score
28
OMG, when you wish you'd never messed with it when it was working :( lol

I've tried repairs, uninstalls (including manually deleting every reference I can find), a WMI repair, and still every time I restart the PC I then have to spend 10 minutes stopping and starting CPAI from BI itself and also from the CP dashboard. I'm getting really fed up with it now, so I'm taking a break before throwing it out the window lol

On 2.0.8 and the latest BI.

Anyone else with the same issue?
 

Lta

n3wb
Joined
Jan 15, 2021
Messages
3
Reaction score
1
Location
teland
Need some help with enabling the GPU.
I think I set everything up as it should be; what else could be wrong?


[screenshot attachments]
 