5.5.8 - June 13, 2022 - CodeProject's SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
I meant to ask: when using a GPU, am I able to use it on only some of the cameras? Would that help? Or is it best to simply wait for a CodeProject update and full Blue Iris integration?
Try turning off Hardware accelerated decode on all your cameras, then reboot and see if CodeProject.AI works.
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,432
Reaction score
47,556
Location
USA
I meant to ask: when using a GPU, am I able to use it on only some of the cameras? Would that help? Or is it best to simply wait for a CodeProject update and full Blue Iris integration?
Around the time AI was introduced in BI, many here had their systems become unstable with hardware acceleration on (even when not using DeepStack or CodeProject). Others have been fine.

This hits everyone at a different point: some systems went wonky immediately, some only after a specific update, and some still have no problem, but the trend suggests that running hardware acceleration will cause a problem at some point.

In any case, now that substreams have been introduced, the CPU% needed to hand video off to a GPU is more than the CPU% saved by decoding on the GPU. Especially past about 12 cameras, total CPU use actually goes up with a GPU and hardware acceleration.

These days it is best to just use the GPU for AI.
 

clk8

Young grasshopper
Joined
Jul 18, 2022
Messages
30
Reaction score
24
Location
NY
I would like to suggest another possible approach to hardware decode. I was getting unstable FPS in the web UI when using my Nvidia GPU for both CodeProject.AI and video decode, so I switched video decode to Intel + VPP. My CPU now averages around 15% and my GPU less than 10%. I have nine 4MP cameras plus an LPR camera I am working on, and BI says CodeProject.AI processes 50-100k requests per day. It runs stable for days on end.

Intel i5, 6-core
16GB RAM
Nvidia GTX 1050 2GB
 

Dave Lonsdale

Pulling my weight
Joined
Dec 3, 2015
Messages
456
Reaction score
195
Location
Congleton Edge, UK
Try turning off Hardware accelerated decode on all your cameras, then reboot and see if CodeProject.AI works.
Yes Mike, I had all the cameras set to "Nvidia NVDEC" but have now changed them all as per your screenshots and rebooted. Unfortunately, I get the same "nothing found" result with green ticks. Is it time for me to throw in the towel?
wittaj, the GPU spikes are just as big. Is that normal?

[screenshots attached]
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,432
Reaction score
47,556
Location
USA
Yes, the GPU spikes would be when AI is being done.
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
Yes Mike, I had all the cameras set to "Nvidia NVDEC" but have now changed them all as per your screenshots and rebooted. Unfortunately, I get the same "nothing found" result with green ticks. Is it time for me to throw in the towel?
wittaj, the GPU spikes are just as big. Is that normal?

When installing CodeProject.AI Server, did you use the script below? And does it work when you run the test? The CodeProject.AI team should be releasing a new version sometime this week that should help with getting the AI working.

[screenshots: the install script and the test]
 

Dave Lonsdale

Pulling my weight
Joined
Dec 3, 2015
Messages
456
Reaction score
195
Location
Congleton Edge, UK
When installing CodeProject.AI Server, did you use the script below? And does it work when you run the test? The CodeProject.AI team should be releasing a new version sometime this week that should help with getting the AI working.

I don't remember where I got the install link, and I got into a bit of a mess when trying to use the one you circled. But I already had the test file and tried it again: no predictions.

[screenshot: test image with no predictions]
 

JL-F1

Getting the hang of it
Joined
Jun 12, 2020
Messages
115
Reaction score
71
Location
USA
Where are we supposed to put that install script when we run it? What folder? Every time I try, it gets about halfway and then gives a bunch of errors.

Does it just install cuDNN? I know I already installed that correctly manually.
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
Where are we supposed to put that install script when we run it? What folder? Every time I try, it gets about halfway and then gives a bunch of errors.

Does it just install cuDNN? I know I already installed that correctly manually.
You can run the script from any location; the errors might be because it sees that cuDNN is already installed. The script installs the files below and sets the correct paths.

[screenshot: the files the script installs]
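If you want to confirm the script worked, a quick check is to ask PyTorch whether it can see CUDA and cuDNN. This is a minimal sketch, assuming you run it with a Python that has torch installed (for example the one CodeProject.AI bundles under AnalysisLayer\bin\python37\venv):

import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())    # False points to a driver/toolkit problem
print("CUDA build:", torch.version.cuda)               # CUDA version this torch was built against
print("cuDNN available:", torch.backends.cudnn.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))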
 

dirk6665

BIT Beta Team
Joined
Feb 13, 2015
Messages
36
Reaction score
18
Location
Pennsylvania
So, I have some questions that perhaps MikeLud1 or some of the other seasoned hardware evangelists can answer :lol:

Firstly, in my BI logs I see the following over and over, and I don't recall seeing it until recently:

[screenshot of the repeated BI log entry]

Literally hundreds and hundreds of them. I don't know if that has anything to do with my introduction of CodeProject.AI, but I suspect not.

Secondly, in the last month I have purchased an entire new (used) computer (i7-6700) and two Nvidia video cards in an attempt to get CP.AI running as fast as I can afford. The first card was an Nvidia GeForce GTX 970, which seemed to work well for BI and CP.AI, but the new i7-6700 machine did not have the extra power connectors it needs, so I then purchased an Nvidia GeForce GT 730, which has no extra power requirements. When using this card with CUDA activated I get the following errors in the CP.AI console...

2022-08-29 11:51:19 [Exception: Exception]: Traceback (most recent call last):
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\scene.py", line 95, in sceneclassification_callback
    cl, conf = classifier.predict(img)
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\scene.py", line 46, in predict
    logit = self.model.forward(image_tensors)
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torchvision\models\resnet.py", line 249, in forward
    return self._forward_impl(x)
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torchvision\models\resnet.py", line 234, in _forward_impl
    x = self.relu(x)
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\nn\modules\activation.py", line 98, in forward
    return F.relu(input, inplace=self.inplace)
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\nn\functional.py", line 1297, in relu
    result = torch.relu_(input)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

And lastly, I have followed this thread since its inception and believe I have done all the CUDA installation steps correctly and run the installation script (install_CUDnn.bat), but when I run "nvidia-smi" I receive the following output (note that my CUDA version is not 11.7, despite the fact that I could swear I chose that version...). Is the CUDA version being 11.4 instead of 11.7 the reason for the errors above?


[screenshot: nvidia-smi output]

My CodeProject.AI functions fine with CUDA support disabled, as indicated below.

[screenshot: CodeProject.AI status with CUDA disabled]

I'm rather shocked that out of three Nvidia cards (Quadro FX1800, GeForce GTX 970 and GeForce GT 730) only the 970 enabled CUDA on one or two of the detection options... but that was on my previous machine, an AMD FX-6300 (six-core), which seemed to run the CPU high all the time, which is why I opted for the i7-6700.

Anyway, any suggestions would be greatly appreciated. I can certainly run this in CPU mode, but having bought the hardware, I'd rather see my money put to good use. ;)

--Dirk
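A note on that nvidia-smi output: the "CUDA Version" in its header is the highest CUDA version the installed driver supports, not the toolkit you installed, so 11.4 there usually means the Nvidia driver predates CUDA 11.7 rather than that the wrong toolkit was chosen. A quick way to compare the driver side with what PyTorch was built against (a sketch, assuming torch is importable):

import subprocess

import torch

# Driver side: what the installed Nvidia driver reports.
print(subprocess.run(["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv"],
                     capture_output=True, text=True).stdout)

# Library side: the CUDA toolkit this torch build targets.
print("torch CUDA build:", torch.version.cuda)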
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,432
Reaction score
47,556
Location
USA
@dirk6665 Are you running hardware acceleration? If so, turn it off globally and in each camera.

Around the time AI was introduced in BI, many here had their systems become unstable with hardware acceleration on (even when not using DeepStack or CodeProject). Others have been fine. I started to see that error when I was using hardware acceleration.

This hits everyone at a different point: some systems went wonky immediately, some only after a specific update, and some still have no problem, but the trend suggests that running hardware acceleration will cause a problem at some point.

In any case, now that substreams have been introduced, the CPU% needed to hand video off to a GPU is more than the CPU% saved by decoding on the GPU. Especially past about 12 cameras, total CPU use actually goes up with a GPU and hardware acceleration.

These days it is best to just use the GPU for AI.
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
So, I have some questions that perhaps MikeLud1 or some of the other seasoned hardware evangelists can answer :lol: [full post quoted above]
Do you know which GT 730 you have? Use GPU-Z and post a screenshot like the one below.


[example GPU-Z screenshots]
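If you would rather check from Python than install GPU-Z, torch can report the same basics. A sketch, assuming a CUDA-capable torch install; it prints the card name, memory, and compute capability that GPU-Z would show:

import torch

props = torch.cuda.get_device_properties(0)
print("Name:", props.name)
print("Memory:", props.total_memory // 2**20, "MiB")
print("Compute capability:", f"{props.major}.{props.minor}")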
 

dirk6665

BIT Beta Team
Joined
Feb 13, 2015
Messages
36
Reaction score
18
Location
Pennsylvania
Do you know which GT 730 you have? Use GPU-Z and post a screenshot like the below screenshot.
Sure Mike...

[GPU-Z screenshot]

It's only a 2GB card; perhaps this is the issue? I think I read in this forum that you should have at least 4GB. But in CPU mode it does seem to be doing pretty well: some of my response times are as low as 11ms, but some swing to 8000+ms.

--Dirk
 

dirk6665

BIT Beta Team
Joined
Feb 13, 2015
Messages
36
Reaction score
18
Location
Pennsylvania
@dirk6665 Are you running hardware acceleration? If so, turn it off globally and in each camera. [...]

Thank you. I will try this suggestion and disable HW acceleration on all cameras and in the config. I hope this also fixes the black screens I am getting when using the web interface to review footage on some cameras.

--Dirk
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
Sure Mike... It's only a 2GB card; perhaps this is the issue? [...]
This card only just makes the required compute capability of 3.5; anything lower will not work with the required CUDA version. Try installing CUDA 11.7.1, and make sure you have HW acceleration turned off.
If that does not work, they are going to release a new version by the end of this week that should help GPUs with low memory.


[screenshot attached]
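For context, the "no kernel image is available for execution on the device" error earlier in the thread is what PyTorch raises when its binaries contain no kernels compiled for the GPU's architecture, which is why the compute capability cutoff matters. A minimal sketch to see both sides, run with any Python that has torch installed (the GK208-based GT 730 is generally listed at compute capability 3.5; older Fermi-based GT 730 variants are 2.1):

import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"Card reports compute capability {major}.{minor}")
    # Architectures this torch build ships kernels for, e.g. ['sm_37', 'sm_50', ...];
    # if the card's sm_XX is not in this list, you get the "no kernel image" error.
    print("Supported:", torch.cuda.get_arch_list())
else:
    print("CUDA not available")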
 

dirk6665

BIT Beta Team
Joined
Feb 13, 2015
Messages
36
Reaction score
18
Location
Pennsylvania
BI v5.6.0.8, CodeProject.AI v1.5.6-Beta_0002 CPU only, i7-7700 CPU. See attached files.

I have one cloned camera dedicated to the delivery.pt (v1.4) custom model only. It doesn't recognize USPS, and I see that too when I inspect the .dat file.
actran,

Where does one find such a model? I am currently using 'OpenLogo', but it does not recognize the new Prime logo on the side of Amazon's delivery trucks, and it is hella slow ;)

--Dirk
 

VideoDad

Pulling my weight
Joined
Apr 13, 2022
Messages
157
Reaction score
208
Location
USA
actran,

Where does one find such a model? I am currently using 'OpenLogo', but it does not recognize the new Prime logo on the side of Amazon's delivery trucks, and it is hella slow ;)

--Dirk
My custom delivery model (delivery.pt) can be found in the thread entitled "People can mask USPS AI detections".
 

dirk6665

BIT Beta Team
Joined
Feb 13, 2015
Messages
36
Reaction score
18
Location
Pennsylvania
My custom delivery model (delivery.pt) can be found in the thread entitled "People can mask USPS AI detections".
This is exactly what I've been looking for, THANKS! Now I can have BI announce when Amazon, FedEx or any of the others are in the driveway. Too bad BI doesn't incorporate a text-to-speech function so one could use a variable to make the announcement (i.e. "SPEAK: Attention, %tag% is in the driveway"); creating different sound files for each delivery type is rather cumbersome.
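In the meantime, something close can be scripted externally: a Blue Iris alert can run an external program, and if your BI version can pass the detection text as a parameter (the &MEMO macro is often suggested for this; check the macro list for your version), a few lines of Python with the pyttsx3 package (a wrapper around Windows SAPI5) will do the speaking. A minimal sketch; the script name, the macro, and the argument handling are all assumptions to adapt:

# speak_alert.py - run from a Blue Iris alert's "run a program" action,
# e.g. with parameter &MEMO so the detected label arrives as argv[1].
import sys

import pyttsx3  # pip install pyttsx3

label = sys.argv[1] if len(sys.argv) > 1 else "a visitor"
engine = pyttsx3.init()  # default Windows SAPI5 voice
engine.say(f"Attention, {label} is in the driveway")
engine.runAndWait()      # block until speech finishes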

The model file works great though! Thanks again!
-Dirk
 