5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
Hmmm. Ok. I'll have to play with that. My biggest issue is the docker image didn't install the app info in the usual directory on my Unraid. So I can't figure out what folder to put the models in so that CodeProject will recognize them. Once I get that part then sharing the folder and accessing it in BI is simple.
You can also create a folder on your BI server and, in this folder, create empty text files with the names of the custom models you are using, for example ipcam-general.pt. Then point BI to this folder.
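That placeholder trick can be sketched like this (the folder path here is just an example, and the .pt names must match the models your AI server actually has — BI only reads the file names, so empty files are enough):

```shell
# Hypothetical placeholder folder for Blue Iris to scan; only the file
# NAMES matter, so empty files are enough for BI to list the models.
mkdir -p /tmp/bi-model-list
touch /tmp/bi-model-list/ipcam-general.pt
touch /tmp/bi-model-list/ipcam-combined.pt
ls /tmp/bi-model-list
```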
 

gwithers

Getting the hang of it
Joined
May 18, 2016
Messages
49
Reaction score
38
You can map a local folder on the host running the AI server Docker container to the AI server custom model folder. Start the container with the parameters below, which bind-mount a local folder from the host to the custom model folder inside the container. Any custom models you put in that local folder on the host will then be seen by the AI server running inside the container.

Code:
docker run --restart unless-stopped -p 32168:32168 --name CodeProject.AI-Server -d -v /usr/share/CodeProject/AI:/usr/share/CodeProject/AI --mount type=bind,source=/some_local_folder,target=/app/AnalysisLayer/ObjectDetectionYolo/custom-models,readonly codeproject/ai-server
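If it helps on Unraid, the same idea can be expressed as a Compose file; this is a sketch, not a tested config — the image tag is an assumption (the official codeproject/ai-server image) and the paths mirror the run command above:

```yaml
services:
  codeproject-ai:
    image: codeproject/ai-server          # assumed image tag
    container_name: CodeProject.AI-Server
    restart: unless-stopped
    ports:
      - "32168:32168"
    volumes:
      - /usr/share/CodeProject/AI:/usr/share/CodeProject/AI
      # read-only bind mount: host folder -> container custom-models folder
      - /some_local_folder:/app/AnalysisLayer/ObjectDetectionYolo/custom-models:ro
```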
 
Last edited:
Joined
Jan 13, 2017
Messages
26
Reaction score
5
You can also create a folder on your BI server and, in this folder, create empty text files with the names of the custom models you are using, for example ipcam-general.pt. Then point BI to this folder.
I tried this. I installed IPcam-dark and IPcam-combined in a folder on the BI server, pointed BI to it, and it didn't work.
 
Joined
Jan 13, 2017
Messages
26
Reaction score
5
You can map a local folder on the host running the AI server Docker container to the AI server custom model folder. Start the container with the parameters below, which bind-mount a local folder from the host to the custom model folder inside the container. Any custom models you put in that local folder on the host will then be seen by the AI server running inside the container.

Code:
docker run --restart unless-stopped -p 32168:32168 --name CodeProject.AI-Server -d -v /usr/share/CodeProject/AI:/usr/share/CodeProject/AI --mount type=bind,source=/some_local_folder/,target=/app/AnalysisLayer/ObjectDetectionYolo/custom-models,readonly codeproject/ai-server
I will have to give that a try. I'm new to the whole Docker thing. Also, I know that Unraid handles dockers a little bit differently than a standard Linux operating system. That gives me some direction to start with, though. Thank you!
 

biggen

Known around here
Joined
May 6, 2018
Messages
2,539
Reaction score
2,765
You can also create a folder on your BI server and, in this folder, create empty text files with the names of the custom models you are using, for example ipcam-general.pt. Then point BI to this folder.
I wonder why BI needs to know the custom model. Doesn't it just ship off the motion event to the CP AI server? I guess I don't understand why it needs to know what model is being used when the CP AI server already has this information. It seems like frivolous information as far as BI is concerned.
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
I wonder why BI needs to know the custom model. Doesn't it just ship off the motion event to the CP AI server? I guess I don't understand why it needs to know what model is being used when the CP AI server already has this information. It seems like frivolous information as far as BI is concerned.
When BI sends the image to CP.AI it also sends the model name to use, so before doing this BI needs to know the available models.
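In other words, the model is selected per request: BI names the model in the endpoint path when it POSTs the snapshot, which is why it has to know the model names up front. A rough sketch of that request (server address and model name here are examples, not from this thread):

```shell
# Build the per-model custom detection endpoint that BI targets:
SERVER="http://localhost:32168"
MODEL="ipcam-general"
URL="$SERVER/v1/vision/custom/$MODEL"
echo "$URL"
# BI then posts the snapshot to it, roughly:
#   curl -s -X POST -F "image=@snapshot.jpg" "$URL"
```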
 

biggen

Known around here
Joined
May 6, 2018
Messages
2,539
Reaction score
2,765
When BI sends the image to CP.AI it also sends the model name to use, so before doing this BI needs to know the available models.
I was thinking the model to use was specified in the CP AI dashboard as well. I must be remembering wrong, as it's been a minute since I set it up.
 
Joined
Jan 13, 2017
Messages
26
Reaction score
5
Ok, so according to the log on startup of the Docker container, the files are contained at /usr/share/CodeProject. But when I navigate to /usr/share, the CodeProject folder is not there.

Edit: Ran a file search on / for codeproject. No results. I'm totally confused. I may just have to give up on running it in Unraid and run it locally.
 
Last edited:

sb603

n3wb
Joined
Jun 14, 2022
Messages
3
Reaction score
1
Location
US
I am able to cycle the portrait filter between CPU and GPU with no issues. Very strange
I was experiencing the exact same issues as you. After uninstalling CodeProject.AI and reinstalling, I now see GPU for Object Detection (YOLO) instead of just Portrait Filter.
 

gwithers

Getting the hang of it
Joined
May 18, 2016
Messages
49
Reaction score
38
Ok so according to the log on the startup of the Docker, the files are contained at /usr/share/CodeProject. When I navigate to that folder /usr/share the CodeProject folder is not there.

edit Ran a file search on / for codeproject. No results. I'm totally confused. May just have to give up on running it in Unraid and run it locally.
That is because the folder location for CodeProject is INSIDE the Docker container, and when you search on the host running the container you are actually searching the host's filesystem. This is a tricky concept to get straight; it is part of how containers work and is not specific to CodeProject.

The way I found to handle this is to map a local folder "into" the container, per the parameters I posted previously. Basically it tells the container to look at a specific host folder when referencing a location inside the container. So if you put custom models in the host folder, they also appear inside the container at that assigned location (i.e. the custom model location for AI Server). That is how I added a non-standard model (delivery.pt) to AI Server running in their Docker image.

Any custom models you put in the mapped local folder on the host will show up in the AI Server custom models (see the delivery model example below). It is no doubt a tricky concept to get straight. If this is still unclear, a Windows install is certainly more "natural" to interact with.

[Screenshot: the delivery custom model listed under AI Server's custom models]
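The host side of that mapping can be sketched like this (using /tmp as a stand-in for the mount source from the run command earlier, and an empty delivery.pt standing in for a real model file):

```shell
# Stand-in for the mapped host folder (the real one is the --mount source):
mkdir -p /tmp/custom-models
# Drop a model file in it; an empty file stands in for a real .pt here.
touch /tmp/custom-models/delivery.pt
ls /tmp/custom-models
# Inside the container, the same file would appear at the mount target:
#   /app/AnalysisLayer/ObjectDetectionYolo/custom-models/delivery.pt
```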
 
Last edited:

Corvus85

Getting the hang of it
Joined
Aug 18, 2021
Messages
495
Reaction score
79
Location
Australia
Did you disable everything else? Maybe that's why mine is crashing.
You mean in the CP.ai server? Yes, just like the screenshot shows.

My problem seems to have resolved itself though. No idea why. It's probably just waiting to rear its ugly head again at an arbitrary date in the near future.
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,448
Reaction score
47,574
Location
USA
Those are the different models being run during the analysis.
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
Anybody have an idea why I'm getting multiple hits on single moving objects when viewing the analysis? Is this normal?
Post a screenshot of your camera's AI settings. What looks to be happening is that the camera is using more than one model, so you are seeing results for each model.
 