5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

dirk6665

BIT Beta Team
Joined
Feb 13, 2015
Messages
36
Reaction score
18
Location
Pennsylvania
I installed the newest Beta (1.6.2) last night and I am super impressed with the numbers I am seeing. Previously I was getting about 2.7 operations per second using just the CPU - now I am up to 7.5+ even with semi-complex images! Nice work CP.AI Team and MikeLud1 (whose help in steering this means A LOT to the BI community).

1664476109989.png

1664476188221.png

...And finally the delivery.pt model (which I am very thankful to VideoDad for creating) ...

1664476390073.png

It's still CPU and not GPU, but if they are going to support Coral devices one day soon I may just wait for that, because those are a lot more price friendly than buying yet ANOTHER Nvidia card! ;)
 

MikeR33

Getting the hang of it
Joined
Jan 26, 2018
Messages
34
Reaction score
28
Took the plunge into CPAI yesterday, as my DeepStack had steadily degraded yet again (despite clearing down all the temp files regularly) to the point of more timeouts than detections, and rather than go through the whole uninstall/reinstall to get it working I moved to CPAI instead. Running the direct Windows install (1.6.2-Beta) with the default analyser (YOLO) currently, and it is working consistently at the moment - not fast, at about 400ms to identify compared to 120-150ms with DeepStack (when it worked). Will leave it in default mode for a while to assess the stability before looking at moving to custom models/performance tweaks. Running the GPU version on a pretty old undervolted/underclocked i7-2600k with an undervolted/underclocked GTX1060 (trying to keep my electric bill a bit lower lol). CPAI certainly seems to use less CPU compared to DS (when running the GPU version of both).
 

wepee

Getting the hang of it
Joined
Jul 16, 2016
Messages
248
Reaction score
57
Hi, I am just testing my newly installed CodeProject.AI server, today.

Just wondering, is it normal to have a CPU spike (from 16% to 48%, sometimes even higher to 68%)
when I manually press the 'Trigger now' button on one of my cameras?

View attachment 141111
Hello guys, does anyone know if this CPU is too old to handle AI detection?
I have an old Intel i7-3770 CPU running at my office, connecting to 4 cameras only.
Should I buy an Nvidia card solely for AI detection?
 

actran

Getting comfortable
Joined
May 8, 2016
Messages
784
Reaction score
697
Hello guys, does anyone know if this CPU is too old to handle AI detection?
I have an old Intel i7-3770 CPU running at my office, connecting to 4 cameras only.
Should I buy an Nvidia card solely for AI detection?
@wepee I think the decision to get an Nvidia card depends on the number of triggered events CodeProjectAI has to analyze in a given time period.
If your CPU gets pegged during that period to the point that it is unresponsive to other functions, then it probably makes sense to look at an Nvidia card.

In parallel, if you haven't done so, you may be able to reduce the AI CPU impact by using @MikeLud1 custom models exclusively and/or unchecking the "Use main stream" option, ...or try the mode option available in the latest BI release, as shown in the screenshot below. Note that low mode may not give you sufficient AI detection accuracy.

Screen Shot 2022-09-30 at 7.54.47 AM.png
 



Dave Lonsdale

Pulling my weight
Joined
Dec 3, 2015
Messages
456
Reaction score
195
Location
Congleton Edge, UK
A query about AI detection:-
Is there a reason why, for a large number of alerts, the selected/asterisk image is "Nothing found" (and true) instead of a car at 94%? It's as though "nothing found" is in the yolov5l list of objects and is "found" with a higher confidence than the car. It's pouring with rain in the example below, but that is not the reason.

Screenshot 2022-09-30 182924.png
Screenshot 2022-09-30 182836.png
 

105437

BIT Beta Team
Joined
Jun 8, 2015
Messages
1,995
Reaction score
881
First day using CPAI. Detection times are averaging around 100 ms. Using ipcam-general and ipcam-animal. Here's my benchmark running a P400 GPU.

1664589219061.png
 

wepee

Getting the hang of it
Joined
Jul 16, 2016
Messages
248
Reaction score
57
@actran
I think the decision to get an Nvidia card depends on the number of triggered events CodeProjectAI has to analyze in a given time period.
If your CPU gets pegged during that period to the point that it is unresponsive to other functions, then it probably makes sense to look at an Nvidia card.
Ok got it. Thanks.
I may get an Nvidia card in the future, just to play around.
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
A query about AI detection:-
Is there a reason why, for a large number of alerts, the selected/asterisk image is "Nothing found" (and true) instead of a car at 94%? It's as though "nothing found" is in the yolov5l list of objects and is "found" with a higher confidence than the car. It's pouring with rain in the example below, but that is not the reason.

View attachment 141275
View attachment 141276
In the camera AI settings you need to have something in the 'To confirm' box, like person,car.
 

wepee

Getting the hang of it
Joined
Jul 16, 2016
Messages
248
Reaction score
57
@actran
In parallel, if you haven't done so, you may be able to reduce the AI CPU impact by using @MikeLud1 custom models exclusively and/or unchecking the "Use main stream" option, ...or try the mode option available in the latest BI release, as shown in the screenshot below. Note that low mode may not give you sufficient AI detection accuracy.
Yup, I followed everything you recommended before I submitted my question.
Do I need to change the port number from 5000 to 32168?
If I want to detect only humans, do I need to remove the objects that are not needed?
e.g. To confirm: Person,
and leave out: Car, Truck, Bus, Bicycle, Boat?

2022-10-01_15-22-53.jpg
2022-10-01_15-24-15.jpg
 

actran

Getting comfortable
Joined
May 8, 2016
Messages
784
Reaction score
697
@wepee If you are using CodeProjectAI 1.6.x, then both ports 5000 and 32168 will work, but it doesn't hurt to update the port field to 32168 now since it will be the only one supported in the future.

BTW, your screenshots indicate you are using the default object detection. Consider using custom models from @MikeLud1 . In my experience, his models take less time to detect objects, so they are more efficient.

In the first screenshot below, make sure your custom model folder is set correctly. If you are on CPAI 1.6.x, the default path is:

C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models

Mike's models are included with the CPAI install. There are a number of them to choose from.

Make sure to uncheck "Default object detection".

AI.png


In the 2nd screenshot, yes, only specify the objects you want in "To confirm".
And in the "Custom models" field, specify a custom model like Mike's ipcam-general.

That way, the AI will only use the specified custom model for detection.

AI trigger.png
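If you want to sanity-check the port change and the custom model outside of Blue Iris, a quick script like the sketch below can help (this assumes the standard CodeProject.AI HTTP endpoint /v1/vision/custom/<model>, a local install, and a saved snapshot file; adjust the host, port, model name and image path for your setup):

Code:
import requests

# Minimal sketch: POST a saved camera snapshot to the CodeProject.AI server
# on the new port and ask a custom model (here ipcam-general) what it sees.
SERVER = "http://localhost:32168"   # assumption: server running on this machine
MODEL = "ipcam-general"             # one of Mike's custom models
IMAGE = "snapshot.jpg"              # assumption: a snapshot saved from a camera

with open(IMAGE, "rb") as f:
    resp = requests.post(
        f"{SERVER}/v1/vision/custom/{MODEL}",
        files={"image": f},
        data={"min_confidence": 0.4},
        timeout=30,
    )

result = resp.json()
print("success:", result.get("success"))
for p in result.get("predictions", []):
    print(f"{p['label']}: {p['confidence']:.2f}")

If that returns the labels you expect on port 32168, then BI pointed at the same port and model should behave the same way.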
 

actran

Getting comfortable
Joined
May 8, 2016
Messages
784
Reaction score
697
@wepee If you want to know what Mike's models can detect, see: GitHub - MikeLud/CodeProject.AI-Custom-IPcam-Models

Mike is also working on license plate detection (DeepStack LPR Custom Model).
The license-plate model is already included with the CodeProjectAI install; however, Mike is still working on the CodeProjectAI version of the license OCR piece and plans to make that available in a month or so.
Obviously really cool stuff, since everything is done locally and nothing is sent to an external third-party company, which is very appealing for anyone who cares about privacy.

Lastly, just in case you did not know, you can use this CodeProjectAI UI to test each model for performance. Use a snapshot from one of your cameras when testing.

performance.png
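If you'd rather compare models from a script than by clicking through that page, a rough timing loop like this sketch works too (same assumptions as above: a default local install, the standard /v1/vision/detection and /v1/vision/custom/<model> endpoints, and a representative snapshot):

Code:
import time
import requests

SERVER = "http://localhost:32168"   # assumption: default local install
IMAGE = "snapshot.jpg"              # assumption: a representative camera snapshot
RUNS = 20

# Compare the default detector against one of Mike's custom models.
endpoints = {
    "default detection": f"{SERVER}/v1/vision/detection",
    "ipcam-general":     f"{SERVER}/v1/vision/custom/ipcam-general",
}

image_bytes = open(IMAGE, "rb").read()

for name, url in endpoints.items():
    times = []
    for _ in range(RUNS):
        start = time.perf_counter()
        r = requests.post(url, files={"image": image_bytes}, timeout=60)
        r.raise_for_status()
        times.append(time.perf_counter() - start)
    print(f"{name}: avg {1000 * sum(times) / len(times):.0f} ms over {RUNS} runs")

The numbers include HTTP overhead, so treat them as relative rather than absolute.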
 

clk8

Young grasshopper
Joined
Jul 18, 2022
Messages
30
Reaction score
24
Location
NY
Also, you should uncheck 'Use main stream if available'; having it checked does not improve accuracy, it only slows down the detection.
@MikeLud1,
Are you sure about this, Mike? I know it has been said over and over that the image gets resized. However, I recently noticed when looking at the AI analysis on one particular camera that the image resolution in the upper right corner was not 840x480 but was instead the native resolution of the main stream. All my other cameras said 840x480 in AI analysis. I checked the 'Use main stream if available' setting and sure enough, that one camera had it checked. Seems to me that would indicate the image is not resized when using the main stream.
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
Mike is also working on license plate detection (DeepStack LPR Custom Model).
The license-plate model is already included with the CodeProjectAI install; however, Mike is still working on the CodeProjectAI version of the license OCR piece and plans to make that available in a month or so.
Obviously really cool stuff, since everything is done locally and nothing is sent to an external third-party company, which is very appealing for anyone who cares about privacy.
I have most of the work done. I am still doing some debugging and working on some improvements, like the ability to use more than one ALPR camera, the ability to OCR more than one plate in an image, and improved OCR.

1664635992587.png
1664636287872.png
1664635612760.png
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
@MikeLud1,
Are you sure about this, Mike? I know it has been said over and over that the image gets resized. However, I recently noticed when looking at the AI analysis on one particular camera that the image resolution in the upper right corner was not 840x480 but was instead the native resolution of the main stream. All my other cameras said 840x480 in AI analysis. I checked the 'Use main stream if available' setting and sure enough, that one camera had it checked. Seems to me that would indicate the image is not resized when using the main stream.
Blue Iris does not do the resizing; it is done by the AI.
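For anyone wondering what that resize looks like, here is a rough sketch of the letterbox-style downscale that YOLOv5-family detectors typically apply before inference (640 px is the usual YOLOv5 input size; the exact preprocessing inside CodeProject.AI may differ):

Code:
from PIL import Image

def letterbox(path, target=640):
    # Scale so the longest side equals `target` px, preserving aspect ratio,
    # then pad onto a square grey canvas - roughly what YOLOv5 does.
    img = Image.open(path).convert("RGB")
    scale = target / max(img.size)
    img = img.resize((round(img.width * scale), round(img.height * scale)))
    canvas = Image.new("RGB", (target, target), (114, 114, 114))
    canvas.paste(img, ((target - img.width) // 2, (target - img.height) // 2))
    return canvas

# A 2560x1920 main-stream frame and a 640x480 sub-stream frame (example
# resolutions) both end up as 640x640 model inputs, which is why the bigger
# source adds little accuracy.
letterbox("mainstream_frame.jpg").save("model_input.jpg")

So the detector sees roughly the same pixels either way; the main stream mostly adds decode and transfer time.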
 

Tusabrat

n3wb
Joined
Sep 26, 2022
Messages
22
Reaction score
10
Location
Spain
Hi all

Is there any roadmap for allowing logic in the custom models? I have seen other people remark that they have the same issue as me, which is: I am using BI to monitor my 4 cats. I don't want to scroll through hours of 'person' alerts to find out who vomited on the carpet, so I have 'To cancel' set to 'person'. However, if a human and a cat happen to be in the same room, I still want an alert/recording, but I'm finding that the recording gets cancelled due to the presence of 'person'.

Anyone found a way around this?
 

jq5

n3wb
Joined
Jul 18, 2022
Messages
3
Reaction score
0
Location
USA
Hi all

Is there any roadmap for allowing logic in the custom models? I have seen other people remark that they have the same issue as me, which is: I am using BI to monitor my 4 cats. I don't want to scroll through hours of 'person' alerts to find out who vomited on the carpet, so I have 'To cancel' set to 'person'. However, if a human and a cat happen to be in the same room, I still want an alert/recording, but I'm finding that the recording gets cancelled due to the presence of 'person'.

Anyone found a way around this?
Can you not just put 'cat' in the 'To confirm' box and leave the 'To cancel' box empty?
 

VideoDad

Pulling my weight
Joined
Apr 13, 2022
Messages
157
Reaction score
208
Location
USA
I don't want to scroll through hours of 'person' alerts to find out who vomited on the carpet
If you include 'person' in the cancel list, then any clip with a person in it gets cancelled. So it is doing exactly what you asked.

If you only want clips where the AI sees cats, only include 'cat' to confirm and don't cancel on anything.

If you still want to record other objects (e.g. 'person' or 'cat'), then I would create the camera with those objects included, then clone the camera and make one that only detects 'cat'. You'd then use that clone to scroll through events with the cats.

I think there is a way to filter the flagged events too; I don't remember how to do it off the top of my head, but you might find it in the help file.
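For what it's worth, the way I understand the confirm/cancel interaction is roughly the logic below (a simplified sketch of the behaviour described above, not actual Blue Iris code), which is why a 'person' on the cancel list suppresses the alert even when a cat is in the same frame:

Code:
def alert_fires(detected_labels, to_confirm, to_cancel):
    # Simplified sketch: any label on the cancel list vetoes the alert;
    # otherwise the alert fires if a confirm label is present
    # (or if the confirm list is empty).
    detected = set(detected_labels)
    if detected & set(to_cancel):
        return False
    if not to_confirm:
        return True
    return bool(detected & set(to_confirm))

# Cat and person in the same frame, 'person' on the cancel list: no alert.
print(alert_fires({"cat", "person"}, to_confirm=["cat"], to_cancel=["person"]))  # False
# Same frame with an empty cancel list and 'cat' to confirm: alert fires.
print(alert_fires({"cat", "person"}, to_confirm=["cat"], to_cancel=[]))          # True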
 

Dave Lonsdale

Pulling my weight
Joined
Dec 3, 2015
Messages
456
Reaction score
195
Location
Congleton Edge, UK
In the camera AI settings you need to have something in the 'To confirm' box, like person,car.
Thanks Mike. I've been trying this - up until now I thought that when using custom detection, 'To confirm' is left blank. The problem, though, is that it looks like I have to include all wanted object types in the 'To confirm' box. Just as an example, if I use "person,car" and the neighbour's dog walks by (which I want to capture), the dog is relegated to a red X and the search continues through successive images (16 in total @ 250ms, to cover headlights preceding a car), ending with the asterisk identifying nothing found. The range of objects that I want confirmed is very large.

I now discover that the ipcam-xxx object files are your creation. Am I more likely to end up getting what I want using these instead of yolov5?
 

Dave Lonsdale

Pulling my weight
Joined
Dec 3, 2015
Messages
456
Reaction score
195
Location
Congleton Edge, UK
Blue Iris does not do the resizing; it is done by the AI.
So does that mean that, without hardware acceleration, the CPU load is not increased when "Use main stream if available" is checked and it's all down to the GPU? But I think the question has been asked previously: if the image is resized, is there any advantage in checking 'Use main stream if available'? And is it resized all the way down to 640x480 prior to being analyzed?
 