5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

im327

Young grasshopper
Joined
Jun 14, 2020
Messages
30
Reaction score
18
Location
USA
To make sure CodeProject AI is working, run a test with the link below.


View attachment 138116
MSI GTX 1660 Super 6GB GDDR6 not working here. Gigabyte 1050 Ti 4GB GDDR5 works! Any guess why? Looks like I have to return the 1660. Know of any cards with 6GB GDDR6 that are known to work? (Under $300.) They both use the same driver.
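For anyone who would rather script that check than paste the URL into a browser, here is a minimal sketch in Python. The port and the DeepStack-style /v1/vision/detection route are assumptions based on a default install; adjust them to whatever your Global Settings > AI > Service API URL actually shows.

import requests  # pip install requests

# Assumed defaults - point this at the Service API URL shown in Blue Iris
API_URL = "http://localhost:5000/v1/vision/detection"

# POST a sample snapshot to the detection endpoint and print what comes back
with open("test.jpg", "rb") as f:
    response = requests.post(API_URL, files={"image": f}, timeout=30)

result = response.json()
print("success:", result.get("success"))
for prediction in result.get("predictions", []):
    print(prediction["label"], round(prediction["confidence"], 2))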
 
Last edited:

Village Guy

Pulling my weight
Joined
May 6, 2020
Messages
291
Reaction score
161
Location
UK
I can confirm that the RTX 2060 works. Can't say what the cost is in USD, but in the UK they sell for ~£250.

Are you using the same revision of drivers for both cards?
 

im327

Young grasshopper
Joined
Jun 14, 2020
Messages
30
Reaction score
18
Location
USA
I can confirm that the RTX 2060 works. Can't say what the cost is in USD, but in the UK they sell for ~£250.

Are you using the same revision of drivers for both cards?
Thanks for the reply. Can you tell me the brand?
 

actran

Getting comfortable
Joined
May 8, 2016
Messages
803
Reaction score
722
On a confirmed alert, perhaps via a "Run Program/Web Request/Do Command" action, trigger AI analysis on a different camera at a timestamp offset of <X minutes/seconds>?
Maybe to identify other object types on additional cameras not normally configured for AI.

Perhaps a future feature request?
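Until something like that exists natively, a rough sketch of what a "Run Program" action could call is below. The Blue Iris /image/<short-name> snapshot route and the DeepStack-style custom-model route are assumptions; the camera name, ports and credentials are placeholders for your own setup.

import requests

BI_SNAPSHOT = "http://127.0.0.1:81/image/CAM2"                        # assumed BI web server + camera short name
AI_CUSTOM = "http://127.0.0.1:5000/v1/vision/custom/ipcam-general"    # assumed custom-model route

# Grab a current frame from the second camera...
snapshot = requests.get(BI_SNAPSHOT, auth=("user", "password"), timeout=10)
snapshot.raise_for_status()

# ...and run it through a model that camera isn't normally configured for.
result = requests.post(AI_CUSTOM, files={"image": snapshot.content}, timeout=30).json()

for p in result.get("predictions", []):
    print(p["label"], round(p["confidence"], 2))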
 

CrazyAsYou

Getting comfortable
Joined
Mar 28, 2018
Messages
247
Reaction score
263
Location
England, Near Sheffield
MSI GTX 1660 Super 6GB GDDR6 not working here. Gigabyte 1050 Ti 4GB GDDR5 works! Any guess why? Looks like I have to return the 1660. Know of any cards with 6GB GDDR6 that are known to work? (Under $300.) They both use the same driver.
I can't get it working on a GTX 1650; starting to wonder if anyone has it working with any GTX 16XX model.
 

im327

Young grasshopper
Joined
Jun 14, 2020
Messages
30
Reaction score
18
Location
USA
I can't get it working on a GTX 1650; starting to wonder if anyone has it working with any GTX 16XX model.
I think you're right. Both the 1660 and the 1050 Ti take the same driver. I even did a clean install of Windows 10 twice.
I also checked the hash of the cuDNN files the script downloads against the ones you get yourself when you log in to NVIDIA. They match.
The only variable I could think of was the card. I have another computer running Blue Iris with DeepStack GPU on the 1050 Ti, so I put that card in the new computer.
It worked on the first try.
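For reference, that kind of hash comparison needs nothing more than Python's standard hashlib; the file names below are placeholders for whichever cuDNN archives you are comparing.

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file so a large cuDNN archive never has to fit in memory
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of("cudnn_downloaded_by_script.zip"))     # placeholder file names
print(sha256_of("cudnn_downloaded_from_nvidia.zip"))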
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,206
Reaction score
4,247
Location
Brooklyn, NY
Attached are all the "modulesettings.json" files that will disable the modules and set the Custom Model module to high.
The issues some people are seeing might be related to the amount of GPU memory on older GPU cards. In the above post there is an attachment with modified "modulesettings.json" files that disable all the modules except the Custom Model module. You can try this to see if it helps.
They are aware of this issue and are working on a solution.
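If you want to rule VRAM in or out on your own card, a quick check with the PyTorch build that CodeProject AI installs is sketched below. The venv path is taken from the tracebacks posted in this thread and may differ on your machine.

# Run with the bundled interpreter, e.g.:
# "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\Scripts\python.exe" check_gpu.py
import torch

if not torch.cuda.is_available():
    print("CUDA is not visible to this PyTorch build")
else:
    props = torch.cuda.get_device_properties(0)
    print("GPU:", props.name)
    print("Total VRAM: %.1f GB" % (props.total_memory / 1024**3))
    print("Currently allocated: %.1f MB" % (torch.cuda.memory_allocated(0) / 1024**2))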
 

im327

Young grasshopper
Joined
Jun 14, 2020
Messages
30
Reaction score
18
Location
USA
The issues some people are seeing might be related to the amount of GPU memory on older GPU cards. In the above post there is an attachment with modified "modulesettings.json" files that disable all the modules except the Custom Model module. You can try this to see if it helps.
They are aware of this issue and are working on a solution.
My GTX 1660 Super was not working with your setup of only the Custom Model module and ipcam-general.pt or ipcam-combined.pt. I used all the modulesettings.json files you posted. I also tried it with all modules enabled.
The 1050 Ti worked with either setup.
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,206
Reaction score
4,247
Location
Brooklyn, NY
@MikeLud1
Further to your explanation and recommendation to use low-resolution streams for AI analysis: I'm having difficulty understanding how any software can analyse a snapshot of such low resolution for face identification. By way of example, I have attached a 640 x 480 capture of visitors from my doorbell camera. Needless to say, the face is unrecognised even though a good-quality capture at higher resolution was in the library. I'm not sure that I could recognise the person from the image captured. The only way I could see face recognition working is for BI to crop just the face and send it for analysis at the original resolution, which could actually end up being an image of less than 640 x 480.

Am I missing something fundamental? I would also be interested in your thoughts on the image processing times, which seem excessive to me.
I am looking into how the face recognition handles image processing. Currently I am not using face recognition. From what I can tell from the code (see below), the face model resolution is 416 when set to high.

1661973292922.png
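To put some rough numbers on why low-resolution streams struggle with faces: if the model letterboxes the frame so its long side is 416, a face that is already small in a 640 x 480 snapshot shrinks further before the detector ever sees it. The 60 x 80 pixel face below is an assumed example, not measured from the attachment.

# Back-of-the-envelope only - 416 is the "high" face model resolution noted above
frame_w, frame_h = 640, 480
face_w, face_h = 60, 80                  # assumed face size in the doorbell snapshot

scale = 416 / max(frame_w, frame_h)      # letterboxing scales the long side to 416
print("scale factor: %.3f" % scale)      # 0.650
print("face as seen by the model: %d x %d px"
      % (round(face_w * scale), round(face_h * scale)))   # roughly 39 x 52 px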
 

Village Guy

Pulling my weight
Joined
May 6, 2020
Messages
291
Reaction score
161
Location
UK
I am looking into how the face recognition handles image processing. Currently I am not using face recognition. From what I can tell from the code (see below), the face model resolution is 416 when set to high.

View attachment 138533
The only way I have succeeded in having a face recognised is to enable the "switch to HD on trigger if available" option. It's interesting to note that the unknown faces that get filed are recognisable to the eye when this feature is enabled!
 

toastie

Getting comfortable
Joined
Sep 30, 2018
Messages
254
Reaction score
82
Location
UK
With the aim of helping to discover why my alerts aren't being processed by CodeProject AI, I have attached a snapshot of the terminal output that is displayed when I open the Service API URL via Global Settings > AI > Open in Browser. Is the output normal and to be expected?
 

Attachments

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,206
Reaction score
4,247
Location
Brooklyn, NY
With the aim of helping to discover why my alerts aren't being processed by CodeProject AI, I have attached a snapshot of the terminal output that is displayed when I open the Service API URL via Global Settings > AI > Open in Browser. Is the output normal and to be expected?
Everything looks normal; below is what mine looks like.
1661979542857.png
 

105437

BIT Beta Team
Joined
Jun 8, 2015
Messages
2,030
Reaction score
934
The issues some people are seeing might be related to the amount of GPU memory on older GPU cards. In the above post there is an attachment with modified "modulesettings.json" files that disable all the modules except the Custom Model module. You can try this to see if it helps.
They are aware of this issue and are working on a solution.
@MikeLud1 Have you heard any comments from people running an NVIDIA Quadro P400? It only has 2GB of memory, so I'm thinking the GPU version won't be much better than the CPU version running on this dedicated BI PC.

1662060295578.png
1662060256428.png
 
Last edited:

Village Guy

Pulling my weight
Joined
May 6, 2020
Messages
291
Reaction score
161
Location
UK
@MikeLud1 Have you heard any comments from people running an NVIDIA Quadro P400? It only has 2GB of memory, so I'm thinking the GPU version won't be much better than the CPU version running on this dedicated BI PC.

View attachment 138666
View attachment 138665
I have personally tried a T400 with 2GB of VRAM and it failed to work. Presently it would appear that you need a minimum of 4GB to have success, but it's early days and I guess it may in time work with more moderate requirements.
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,879
Reaction score
48,516
Location
USA
I have personally tried a T400 with 2GB of VRAM and it failed to work. Presently it would appear that you need a minimum of 4GB to have success, but it's early days and I guess it may in time work with more moderate requirements.
If it doesn't work, that is a shame, because many bought the P400 or GT 1030 because it was cheap and works well with DeepStack. If it doesn't work well with the new AI, folks will just stay with what works for them.
 

kklee

Pulling my weight
Joined
May 9, 2020
Messages
187
Reaction score
203
Location
Vancouver, BC
@MikeLud1 Have you heard any comments from people running an NVIDIA Quadro P400? It only has 2GB of memory, so I'm thinking the GPU version won't be much better than the CPU version running on this dedicated BI PC.

View attachment 138666
View attachment 138665
I'm running a P400 and switched from DS a couple of weeks ago. It seems to be working fine and the analysis times seem to be around 25% faster (using IPCam-General and IPCam-Combined). I've also disabled the face and scene modules.
 

Village Guy

Pulling my weight
Joined
May 6, 2020
Messages
291
Reaction score
161
Location
UK
I'm running a P400 and switched from DS a couple of weeks ago. It seems to be working fine and the analysis times seem to be around 25% faster (using IPCam-General and IPCam-Combined). I've also disabled the face and scene modules.
How much memory? This is the result I had with 2GB, running Python.
Scene detection appears to work; objects and faces are not predicted. NVIDIA T400 GPU.

2022-08-26 10:03:28: retrieved detection_queue command
2022-08-26 10:03:28 [Exception: Exception]: Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\detection.py", line 83, in objectdetection_callback
det = detector.predictFromImage(img, threshold)
File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\.\process.py", line 62, in predictFromImage
pred = self.model(img, augment=False)[0]
File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\.\models\yolo.py", line 136, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\.\models\yolo.py", line 159, in _forward_once
x = m(x) # run
File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\.\models\yolo.py", line 68, in forward
y[..., 0:2] = (y[..., 0:2] * 2 + self.grid) * self.stride # xy
RuntimeError: The size of tensor a (32) must match the size of tensor b (28) at non-singleton dimension 2
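For what it's worth, that RuntimeError is raised by the grid/stride arithmetic in yolo.py (the cached detection grid no longer matching the feature map) rather than being a CUDA out-of-memory error. YOLOv5-style models expect inputs letterboxed to a multiple of the network stride (32); the sketch below only illustrates that constraint and is not CodeProject's code.

import math

def padded_size(width, height, stride=32):
    # Pad each dimension up to the next multiple of the network stride,
    # which is what the detection grid in yolo.py is sized against.
    return (math.ceil(width / stride) * stride,
            math.ceil(height / stride) * stride)

print(padded_size(640, 480))   # (640, 480) - already multiples of 32
print(padded_size(450, 253))   # (480, 256) - odd sizes get padded up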
 
Last edited:

JL-F1

Getting the hang of it
Joined
Jun 12, 2020
Messages
118
Reaction score
71
Location
USA
My GTX 1660 Super was not working with your setup of only the Custom Model module and ipcam-general.pt or ipcam-combined.pt. I used all the modulesettings.json files you posted. I also tried it with all modules enabled.
The 1050 Ti worked with either setup.
In the same boat with a 1660 Super: it says the API server is online, but when I run the test shown in post #450, nothing is found.
 

kklee

Pulling my weight
Joined
May 9, 2020
Messages
187
Reaction score
203
Location
Vancouver, BC
How much memory? This is the result I had with 2GB, running Python.
Scene detection appears to work; objects and faces are not predicted. NVIDIA T400 GPU.

2022-08-26 10:03:28: retrieved detection_queue command
2022-08-26 10:03:28 [Exception: Exception]: Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\detection.py", line 83, in objectdetection_callback
det = detector.predictFromImage(img, threshold)
File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\.\process.py", line 62, in predictFromImage
pred = self.model(img, augment=False)[0]
File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\.\models\yolo.py", line 136, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\.\models\yolo.py", line 159, in _forward_once
x = m(x) # run
File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\.\models\yolo.py", line 68, in forward
y[..., 0:2] = (y[..., 0:2] * 2 + self.grid) * self.stride # xy
RuntimeError: The size of tensor a (32) must match the size of tensor b (28) at non-singleton dimension 2
My P400 has 2GB of RAM.
 