CodeProject.AI Version 2.5

@digger11 & @VideoDad I have been extremely busy with my day job, so I have not had much time to work on updating my models or adding YOLOv8 models. My day job is starting to slow down, so in a month or so I should have more time to work on the models.
Thanks for the update @MikeLud1. I don't know if you saw my question on another thread about incorporating my "delivery" custom model into the CPAI install. I was just wondering if you know when that might happen. When it does, how does one get the updated models? Also, if I make updates in the future, do I need to notify @ChrisMaunder?
 
I'm observing odd behaviour:

YOLOv5 3.1 works with my GTX 970
YOLOv5 6.2 stopped working with the GTX 970 after the last two updates
YOLOv8 works with my GTX 970, but as far as I can see it does not have the custom models yet

I haven't figured out why 6.2 has stopped working; it only works with the CPU and does not pick up the GPU at all.
 
I'm observing odd behaviour:

YOLOv5 3.1 works with my GTX 970
YOLOv5 6.2 stopped working with the GTX 970 after the last two updates
YOLOv8 works with my GTX 970, but as far as I can see it does not have the custom models yet

I haven't figured out why 6.2 has stopped working; it only works with the CPU and does not pick up the GPU at all.
You can roll back BI.

 
Could you share your System Info tab from the CodeProject.AI Server dashboard, along with any error logs you see? I'm also curious to see the results if you run the nvidia-smi command and then the nvcc --version command.

I have nvcc locally, but in the container it's "not found"; I don't believe it's part of the Docker image codeproject/ai-server:cuda12_2.
There are no errors; the .NET YOLO module just will not use the GPU. It is clearly using the CPU, as inference times are around 200 ms, whereas with YOLOv5 I get inference times of 25 ms.
I have the nvidia-container-toolkit installed.
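To narrow down whether the problem is the host driver, the Docker GPU passthrough, or the module itself, a quick check like the following can help. This is a generic sketch, not something specific to the ai-server image, and the nvidia/cuda image tag used below is an assumption:

```python
# Quick diagnostics to tell apart "driver missing", "Docker GPU passthrough
# broken", and "module not using the GPU". Generic sketch; the CUDA image
# tag below is an assumption, pick one matching your installed driver.
import shutil
import subprocess

def check(cmd: list) -> str:
    """Run a diagnostic command and report the result instead of raising."""
    if shutil.which(cmd[0]) is None:
        return f"{cmd[0]}: not installed"
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        return f"{cmd[0]}: failed ({result.stderr.strip() or result.stdout.strip()})"
    return result.stdout.strip()

# 1) Is the driver visible on the host?
print(check(["nvidia-smi"]))

# 2) Can Docker pass the GPU into a container? (needs nvidia-container-toolkit)
print(check(["docker", "run", "--rm", "--gpus", "all",
             "nvidia/cuda:12.2.2-base-ubuntu22.04", "nvidia-smi"]))
```

If step 1 works but step 2 fails, the nvidia-container-toolkit wiring is the likely culprit rather than CodeProject.AI itself. Note that nvcc being absent inside the container is expected; the runtime image only needs the driver and CUDA libraries, not the compiler.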

1711860100961.png

1711860187857.png
 
@MikeLud1 Great release. Is this the Llama 2 13B version? And will it run on an Nvidia GPU? If so, what are the CUDA requirements?
 
@MikeLud1 Great release. Is this the Llama 2 13B version? And will it run on an Nvidia GPU? If so, what are the CUDA requirements?
The model used is shown below. It should run on both an Nvidia GPU and the CPU (the CPU will be slow). To run on a GPU you need more than 6.87 GB of VRAM.
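The VRAM requirement depends on quantization, context length, and runtime overheads. As a back-of-envelope sketch (every constant below is an illustrative assumption, not a measured CodeProject.AI value), a Q4_K_M 7B model lands in that ballpark:

```python
# Rough VRAM estimate for a quantized 7B GGUF model. All constants are
# assumptions for illustration: Q4_K_M is roughly 4.85 bits/weight, the KV
# cache is taken as ~512 KiB/token, plus ~1 GB of scratch/CUDA overhead.

def estimate_vram_gb(params_billions: float, bits_per_weight: float,
                     ctx_tokens: int = 4096,
                     kv_bytes_per_token: int = 512 * 1024) -> float:
    """Quantized weights + KV cache + fixed overhead, in GB."""
    weights = params_billions * 1e9 * bits_per_weight / 8
    kv_cache = ctx_tokens * kv_bytes_per_token
    overhead = 1.0e9
    return (weights + kv_cache + overhead) / 1e9

# Mistral 7B (~7.24B params) at Q4_K_M with a 4k context:
print(round(estimate_vram_gb(7.24, 4.85), 2))  # roughly 7.5 GB, same ballpark
```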


Code:
// For loading via llama-cpp.from_pretrained
"CPAI_MODULE_LLAMA_MODEL_REPO":     "@TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
"CPAI_MODULE_LLAMA_MODEL_FILEGLOB": "*.Q4_K_M.gguf",

1712241194311.png
 
To update to CodeProject.AI Version 2.6.2 you do not need to uninstall if you are on V2.5.6; just disable AI in Blue Iris before installing. After all the modules you selected during the install are installed, you can enable AI in Blue Iris. If the custom model list is empty, restart the Blue Iris service and the list should update.
 
To update to CodeProject.AI Version 2.6.2 you do not need to uninstall if you are on V2.5.6; just disable AI in Blue Iris before installing. After all the modules you selected during the install are installed, you can enable AI in Blue Iris. If the custom model list is empty, restart the Blue Iris service and the list should update.

Thank goodness LOL.

Do you think it is still best practice to uninstall?
 
Thank goodness LOL.

Do you think it is still best practice to uninstall?
I just did an install on my main BI system and had no issues installing it without uninstalling first. If you are on the latest version, installing an update without uninstalling should work. This might change if there is a major update.
 
Just upgraded to 2.6.2 and I have selected GPU instead of CPU. For some reason it still says CPU (DirectML)... Screenshot 2024-04-04 145752.png
 
What happened between Blue Iris 5.8.6.x and 5.8.7.x that could cause a massive increase in my CodeProject.AI response time? The only new thing I saw in the AI panel was pre-trigger images, but even with that set at 0 I went from a 32 ms average to 800 ms+. I'm also getting CPU spikes of 80%-100% when the cameras trigger.

I'm having a hard time finding any discussion about this, so I'm guessing it's user error somewhere.
 
What happened between Blue Iris 5.8.6.x and 5.8.7.x that could cause a massive increase in my CodeProject.AI response time? The only new thing I saw in the AI panel was pre-trigger images, but even with that set at 0 I went from a 32 ms average to 800 ms+. I'm also getting CPU spikes of 80%-100% when the cameras trigger.

I'm having a hard time finding any discussion about this, so I'm guessing it's user error somewhere.
I had an issue that was caused by the AI timeout in the BI settings restarting the CP.AI service. I changed the timeout from roughly 30 seconds to 60 seconds and it resolved my issues.
 
Right on. That's what I was kinda thinking....
If using a GPU rather than the CPU, it should be using YOLOv5 6.2 rather than .NET with DirectML, if I remember correctly. Try telling CP.AI to start 6.2 instead and it should change the default to that. Then uncheck GPU in the BI settings, hit OK, go back into settings, re-select GPU, and hit OK again. That should make it start using the GPU and the correct module.
 
If using a GPU rather than the CPU, it should be using YOLOv5 6.2 rather than .NET with DirectML, if I remember correctly. Try telling CP.AI to start 6.2 instead and it should change the default to that. Then uncheck GPU in the BI settings, hit OK, go back into settings, re-select GPU, and hit OK again. That should make it start using the GPU and the correct module.
I don't have an upgraded GPU, and I thought YOLO was for upgraded GPUs while .NET was for non-upgraded GPUs.
 
The .NET module works with all GPUs. Depending on your Nvidia GPU, .NET might be faster than CUDA.


1712412444989.png