DeepStack: How to alert on delivery carriers

AlphaBlueIris

Getting the hang of it
Joined
Jan 17, 2022
Messages
125
Reaction score
27
Location
Boston
I had DeepStack working to alert on person, car, etc. I found a good article on how to capture delivery carriers like Amazon, UPS, and so on. I added delivery.pt to the directory and ran the CMD with deepstack --command; the command seemed to be successful (though it never closed the window or got to the restore part, which I guess is normal). Thinking ahead: I live on a moderate-traffic road where the normal average is a few cars a minute. The Amazon, USPS, and other carriers always stop around the same point in the picture. How do I use DeepStack so it ignores other cars going by and only captures and alerts when these vehicles are stopped? Ideally I would also like an email sent, but that comes later in the process.

P.S. After I added the delivery.pt file and ran the CMD line, it seems the AI is no longer working; I get an error 100 when I look at the AI tab.

Thanks for all your help.
 

Attachments

AlphaBlueIris

Getting the hang of it
Joined
Jan 17, 2022
Messages
125
Reaction score
27
Location
Boston
I spent all day yesterday trying to figure this out; any suggestions would be greatly appreciated. Right now I'm just looking for the right motion settings.

Thanks

Sent from my SM-S906U using Tapatalk
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
25,013
Reaction score
48,781
Location
USA
Screenshots of your settings are a good start.

Did you then add the objects in your motion settings to correspond with the delivery trucks you want?

You do not need to run the command; you simply need to put the .pt file in the directory where BI looks for custom models.

DeepStack is no longer supported by BI, and the code changes for the newest AI, CodeProject, will inevitably break DeepStack functionality eventually.

You would probably be best off moving to the new AI, as most people who were running custom DeepStack models have moved to that platform.
 

AlphaBlueIris

Getting the hang of it
Joined
Jan 17, 2022
Messages
125
Reaction score
27
Location
Boston
Thanks for your help, much appreciated. Yesterday I moved to CodeProject and it seems to work on person, cars, etc. I would like to use the custom delivery model, but I'm not sure it has been ported to CodeProject yet. Should I clone the camera that sees the delivery truck and use motion zone B with a mask around the area where the USPS normally stops? Any other tips you may suggest?

Thanks

Sent from my SM-S906U using Tapatalk
 

actran

Getting comfortable
Joined
May 8, 2016
Messages
806
Reaction score
731
@AlphaBlueIris Congrats on moving to CodeProject.AI. You can get the delivery model in the thread "People can mask USPS AI detections".

Read the thread for details, but essentially: you download delivery.pt, put it in the CodeProject.AI custom model folder, and then configure your camera's AI panel to use it. It's best to have a separate "cloned" camera for detecting delivery trucks.

You can thank @VideoDad for this model.
 

AlphaBlueIris

Getting the hang of it
Joined
Jan 17, 2022
Messages
125
Reaction score
27
Location
Boston
It is actually thanks to you and the other people who guided me to the right solution; that is appreciated. I will follow the link and learn more about it, although I'm still trying to figure out the best camera setup, since I live on a moderate-traffic road where the normal average is a few cars a minute. The Amazon, USPS, and other carriers always stop around the same point in the picture. How do I use CodeProject.AI so it ignores other cars going by and only alerts when these vehicles are stopped?
 

actran

Getting comfortable
Joined
May 8, 2016
Messages
806
Reaction score
731
@AlphaBlueIris If you have constant traffic and you only want delivery truck detection when there is a stationary object in the camera view, that may be difficult to do if both moving and stationary vehicles are in the same camera view.

If the location of the stationary object is different from the moving traffic, then you can perhaps mask (black out) the area(s) in the camera view to block out moving vehicles.

Otherwise, you will probably need an external sensor to trigger the custom delivery.pt detection at the desired time.
 
Last edited:

truglo

Pulling my weight
Joined
Jun 28, 2017
Messages
275
Reaction score
103
I have been running delivery.pt with CPAI for several days now, and it is mostly working well. From my limited experience, I found a couple of items that I feel could be improved.

My one real gripe is likely due to user error, and is definitely not related to delivery.pt specifically. In the iOS BI app, alerts from the cloned delivery cam don't show up; the only inkling I have that delivery detection works from the app is the push alerts. When I log in directly to the BI PC and choose to display alerts from only the selected camera, then select the master camera, the images of the confirmed alerts do show up in the list. I tried enabling hi-res JPEGs and so on, but for the life of me I can't figure out how to get the alerts to show up in the BI app itself. I'm guessing this applies to all cloned-cam alerts, although I only have one clone with delivery set up at the moment. Does anyone have any tips on how to get alerts from cloned cams to show up in the BI app?

Second, rental vehicle companies would be a nice addition. I looked out the window and noticed my neighbors having their washing machine swapped out by a couple of guys driving an Enterprise rental truck. I made sure that's what was actually happening, because it reminded me of the time when I was a kid and my parents' neighbors across the street had their whole house emptied by pros driving a Ryder truck. It happened in broad daylight, and the neighbors didn't connect the dots quickly enough. The thieves got most of the furniture, and of course all the electronic goodies. So anyway, my recent experience with Amazon, USPS, and UPS push notifications, plus seeing the Enterprise truck across the street today, made me think it could be very useful to add these rental truck companies to delivery.pt, or perhaps create a separate file for just the rental companies. If I knew where to start with creating and training AI models, I'd like to help, but every article I have seen on the subject assumes the reader already has a significant background in AI software.
 

AlphaBlueIris

Getting the hang of it
Joined
Jan 17, 2022
Messages
125
Reaction score
27
Location
Boston
I'm jealous you got it to work. Do you live on a main street or a secondary road? As I mentioned, I'm on a road that, although not a main road, can get busy, with a few cars every few minutes. Right now I have one camera that records 24/7, while the clone only records alerts around the perimeter of the house, which does not include the street. I guess what I'm asking is: is it possible to check for a delivery when the USPS stops on my street?
How would I do that?
I just want to make sure I understand: you are actually getting an alert when the USPS stops to deliver your mail, correct, indicating that xx% was a USPS truck? Any help would be greatly appreciated.

Sent from my SM-S906U using Tapatalk
 

truglo

Pulling my weight
Joined
Jun 28, 2017
Messages
275
Reaction score
103
I live on a secondary road, but at times, between vehicles, pedestrians, bikes, etc., my road-facing cameras can see 10+ hits per minute. To keep up when two objects pass seconds apart, I have to make sure the triggers and AI reset very quickly (fewer +images and a lower msec interval for AI; a low make time and a low reset-after-trigger for motion). It works pretty well: objects have to pass less than a split second apart for one to be skipped, which is extremely rare (and probably unavoidable).

I am pretty certain the image interval in the camera's AI tab tells BI "grab photos this many ms apart from the video and send them to AI". So if your object goes in and out of frame quickly, you need a very low interval. For example, +10 images at 100 msec means BI will try to send AI an image at T=0, T=0.1, T=0.2, and so on, for a total of 10 images ending at T=0.9 sec. Obviously, if the object moves out of frame in 0.5 sec, the last 5 images sent are a waste of CPU. A setting of +10 at 50 msec would be much more productive in this example, giving AI 10 images that are all relevant; or just drop the wasted images and use +5 at 100 msec. This is poorly explained in every guide and doc I've seen, and it is absolutely critical to optimizing AI for each camera and situation. Once you understand it clearly, you can use a wide range of values on different cams and have all of them work very efficiently (and get the alert images nicely centered, if you will). Since I realized this, my LPR confirm average has gone from about 50% to 95%; it's rare for AI to cancel an off-screen plate anymore.
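To make the arithmetic above concrete, here is a small Python sketch of that reading of the +images/interval settings. This models the interpretation given in this post (frames grabbed every interval ms, starting at the trigger), not documented BI behavior, and the function names are made up for illustration:

```python
def ai_image_times(extra_images, interval_ms):
    # Timestamps (seconds) at which BI would grab frames for AI, under
    # the "+N images at M msec" reading described above (an assumption,
    # not documented behavior).
    return [i * interval_ms / 1000.0 for i in range(extra_images)]

def useful_images(extra_images, interval_ms, dwell_s):
    # How many of those frames land while the object is still in frame.
    return sum(1 for t in ai_image_times(extra_images, interval_ms)
               if t < dwell_s)

# An object that crosses the frame in 0.5 s:
print(useful_images(10, 100, 0.5))  # 5: half the frames are wasted CPU
print(useful_images(10, 50, 0.5))   # 10: every frame is relevant
```

The same toy math shows why +5 at 100 msec and +10 at 50 msec both cover a half-second pass, while +10 at 100 msec burns CPU on frames after the object has left.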

I'm not sure if it is possible to confirm a stopped USPS truck. My setup doesn't care whether the vehicle is moving or not: I get a push notification any time a garbage truck, USPS, etc. passes by, whether it stops nearby or not. If I were to try what you want, I'd probably start experimenting with BI's motion invert, where it detects when motion stops. I'm not sure exactly how that functions, but I'm guessing it triggers once after motion stops, resets when it sees motion, and triggers again when it stops (i.e., it won't constantly trigger unless there is motion). If that's the case, using that as a trigger to send images to delivery.pt might work the way you want. If you go this route, I'd play with "start with motion leading image" and see what effect that has. Since you're intentionally sending static images to AI, you could probably get away with just one image and save a ton of CPU cycles. Also, I'm not sure how "cancel stationary objects" would interact with this; it could eliminate parked cars but might cause problems with 'occupied' states. I'd try playing with that too if it doesn't work well.
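The guessed invert behavior above (fire once when motion stops, re-arm when motion resumes) can be sketched as a toy state machine. This models the guess only, not BI's actual implementation:

```python
class NonMotionTrigger:
    # Toy model of the guessed "invert" behavior: fire once when motion
    # stops, re-arm when motion resumes. A sketch of the guess, not how
    # BI actually implements motion invert.
    def __init__(self):
        self.armed = False   # arms once we have seen motion
        self.fired = False

    def update(self, motion: bool) -> bool:
        # Feed one motion sample; returns True on the sample we trigger.
        if motion:
            self.armed = True
            self.fired = False
            return False
        if self.armed and not self.fired:
            self.fired = True
            return True      # motion just stopped: trigger once
        return False

t = NonMotionTrigger()
samples = [True, True, False, False, True, False]
print([t.update(m) for m in samples])
# [False, False, True, False, False, True]
```

If BI's invert works anything like this, each stop of the truck would give exactly one trigger, which is when you would want the frame sent to delivery.pt.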
 
Last edited:

AlphaBlueIris

Getting the hang of it
Joined
Jan 17, 2022
Messages
125
Reaction score
27
Location
Boston
Truglo, what you said makes a lot of sense. Interestingly enough, it looks like you know your Blue Iris very well. I will play with it tomorrow.
Would it be asking too much for you to add some pictures of your setup? It would help me a lot.

Thanks

Sent from my SM-S906U using Tapatalk
 

truglo

Pulling my weight
Joined
Jun 28, 2017
Messages
275
Reaction score
103
The first shot shows my motion setup, with a red square around the part I think may do what you are after:
Untitled.jpg

Below is a shot of that clone camera's AI setup.
2.jpg

This is somewhat off-topic from your post but may help with general AI setup. You may notice I have zone G unchecked. That is because I don't need the whole frame to be sent to AI. Let me explain...

When BI sends images to AI, it only sends pixels that are both inside defined camera motion zones and whose zone is checked in the camera AI tab. This behavior is obvious when you review the AI analysis details: any part of the image not covered by a motion zone that is checked in camera AI gets blacked out before BI sends it to AI.

There are situations where you want to ignore motion in some parts of the frame but still include them in the images sent to AI (for example, a 45-degree view of a narrow walkway surrounded by bushes blowing in the wind). In that case I may want just a small strip zone for motion detection; on its own, that would make the images sent to AI almost useless (everything but the legs blacked out). To handle these situations, on top of the narrow strip zone that actually triggers the camera, I also configure zone G to cover the entire frame. I then uncheck zone G for motion triggers but check it on the camera AI page. Since zone G is now defined and checked in AI, BI sends the whole image to AI with no blacked-out areas, and with no ill effect on the desired motion triggering.

This is another area I found poorly documented (hint to the doc maintainers). Or forget the docs: in the AI tab, change "Zones:" to something like "AI mask", and add a small note on the motion page explaining that motion zones also double as AI masks.
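The masking behavior described above can be illustrated with a toy model. This is an assumption inferred from what the AI analysis view shows, with made-up zone data; BI's real pipeline is not public:

```python
# Toy model of the masking behavior described above: only pixels covered
# by a motion zone that is ALSO checked in the camera AI tab survive;
# everything else is blacked out before the frame goes to AI.

def mask_frame(frame, zones, checked):
    # frame: 2D list of pixel values; zones: dict name -> set of (row, col)
    # coordinates; checked: names of zones enabled in the camera AI tab.
    keep = set()
    for name in checked:
        keep |= zones.get(name, set())
    return [[px if (r, c) in keep else 0          # 0 = blacked out
             for c, px in enumerate(row)]
            for r, row in enumerate(frame)]

frame = [[1, 2], [3, 4]]
zones = {"A": {(0, 0)},                            # narrow strip zone
         "G": {(0, 0), (0, 1), (1, 0), (1, 1)}}    # whole-frame zone
print(mask_frame(frame, zones, {"A"}))       # only zone A survives
print(mask_frame(frame, zones, {"A", "G"}))  # full frame reaches AI
```

With only zone A checked, most of the frame is zeroed out (the "useless image" case); adding the whole-frame zone G to the checked set passes the full image through, which is the workaround in a nutshell.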
 
Last edited:

AlphaBlueIris

Getting the hang of it
Joined
Jan 17, 2022
Messages
125
Reaction score
27
Location
Boston
You are the best, thank you so much, I will try it first thing tomorrow morning. I will keep you posted.

Sent from my SM-S906U using Tapatalk
 

AlphaBlueIris

Getting the hang of it
Joined
Jan 17, 2022
Messages
125
Reaction score
27
Location
Boston
Not much delivery traffic today to test it out. As an FYI, this is my configuration. I have to admit, BlueIris is a little overwhelming. Currently I have one camera (Normal; happy to share it if you like) that does the motion and so on. The other one is a clone and records 24/7. I then created another one; I'm including all the screenshots, and those are my settings. I only have one zone, zone A, which is where the delivery truck stops.
Thanks for any feedback you may give. I opted out of the static object option because it seems we both have similar configurations.
 

Attachments

truglo

Pulling my weight
Joined
Jun 28, 2017
Messages
275
Reaction score
103
You are not alone. BI is a double-edged sword for many casual users: it has lots of power and flexibility, which comes with a steeper learning curve. I was already up to speed on other BI knowledge and still had to put in about a full day's worth of reading and experimenting to get where I am now... AI almost halfway figured out, hehe.

Before reading on, I first recommend opening the BI AI toolbox (BI status button, AI tab), then double-clicking an alert that was cancelled by your existing settings. The toolbox window will then display in detail what caused the cancellation. The image will show clearly what your AI looked at and how well each step of the process went. Once you've absorbed that, you can come back to the AI toolbox later and see the effects of some adjustments I'm about to recommend.

Zone A is somewhat narrow, and with everything set up the way you have it, BI is likely to send images to AI with the logos blacked out. This is a perfect case for the "zone G workaround" I mentioned above. In your zone setup, leave zone A as is; change zone G to cover the entire image, then save. Next, go to your camera AI settings, where you have zones G and H disabled, and enable the G checkbox. With these settings BI will always send complete frames to AI, logos and all (no blacked-out pixels).

Now, you probably don't want zone G to fire motion triggers. In the camera's object detection settings, you have "object crosses zones" unchecked. That means movement of more than 100 pixels in the substream in ANY zone will trigger, now including zone G. To fix this, enable the "object crosses zones" checkbox and enter "A" in the list. That way BI will ignore motion in every zone except A.
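As a tiny illustration of what that setting change accomplishes under the reading above (a hypothetical function, not BI code):

```python
def should_trigger(zones_crossed, required="A"):
    # Toy version of the "object crosses zones" check described above:
    # with the box enabled and "A" listed, only motion that touches
    # zone A triggers; motion confined to other zones (like the
    # whole-frame zone G) is ignored. An illustration only, not BI's
    # actual implementation.
    return required in zones_crossed

print(should_trigger({"G"}))        # False: zone G alone won't trigger
print(should_trigger({"A", "G"}))   # True: the object touched zone A
```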

Next, you mentioned you only want to confirm logos when the vehicle is stopped in zone A, right? If so, in your motion settings, try enabling the "OPPOSITE sense to detect non-motion" checkbox. Again, I haven't played with this myself, but I'm guessing it's what you are after. If that tree can cast a moving shadow on the side of a stopped delivery van, you will probably have some added complexity to deal with (in the BI motion triggers and zones setup). This setting may also interact with my next one... I'm not sure, but logic tells me the pixels traveled before motion stops could be relevant. [edit: On a side note, I don't see a street-side mailbox. If you just want to know when you actually get deliveries, you could instead trigger the clone on the walkway. It can still confirm the logo on the van for alerts, assuming the delivery person who triggered it is parked in view. This could give you the same result with less processing for each passing vehicle.]

Lastly, you have the setting "object travels 100 pixels in substream". I believe 0.3 MP is D1 resolution, roughly 720x480 pixels. Eyeballing the scaling, your zone A looks like roughly 400x100 substream pixels. Depending on how well the camera sees the lower part of the truck, 'object center point travels 100 pixels' may be unreliable (oops, they left out 'center point', didn't they?). With certain lighting conditions, colors, reflections, etc., the apparent center point of motion can end up all over the map. I would drop that to 60 pixels to make it more sensitive and ensure you don't miss anything; AI should filter out any false triggers anyway.
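For a feel of why a 100-pixel center-point threshold can be marginal in a zone this size, here is a rough sketch with made-up track points. The center-point interpretation is this post's assumption, not documented BI behavior:

```python
import math

def center_travel(points):
    # Total distance the detected center point moves across frames.
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Hypothetical, jittery center-point track of a van easing to a stop
# inside a roughly 400x100-pixel zone (made-up numbers):
track = [(50, 80), (90, 82), (120, 81), (135, 80)]
travel = center_travel(track)    # about 85 pixels
print(travel >= 100)  # False: a 100-pixel threshold misses this stop
print(travel >= 60)   # True: a 60-pixel threshold catches it
```

The point is only that a vehicle decelerating into a narrow zone may accumulate well under 100 pixels of center-point travel, so a lower threshold plus AI filtering is the safer combination.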

The rest of your settings look like they should work fine for what you want.

Best of luck... if it encourages you at all, mine hasn't missed a single one of the last 20, including very different-looking garbage/recycling trucks, and that's with no video card, just an Intel 4500k with the Intel beta HA and six 2-4 MP cams. CPAI and delivery.pt are awesome!
 
Last edited:

AlphaBlueIris

Getting the hang of it
Joined
Jan 17, 2022
Messages
125
Reaction score
27
Location
Boston
I'm so jealous; hopefully I will have the same results. Tomorrow I will dedicate the entire day to it. By the way, I know you have put a lot of your time into writing this up for me and others; I owe you a beer.

Happy new year.

Sent from my SM-S906U using Tapatalk
 

AlphaBlueIris

Getting the hang of it
Joined
Jan 17, 2022
Messages
125
Reaction score
27
Location
Boston
I made the changes you suggested. Could you please confirm this is what you meant in your post? By the way, yes, you are right: the USPS truck stops at that location in the picture, and someone drops the package next to my door.
 

Attachments

CCTVCam

Known around here
Joined
Sep 25, 2017
Messages
2,674
Reaction score
3,505
It sounds as if there may already be a model for this, but if not, you could try teaching the AI to recognise the logos. Most delivery companies use logo'd vans, so one way in might be to teach the AI to recognise the logos to separate 'delivery' out from 'van'.
 

AlphaBlueIris

Getting the hang of it
Joined
Jan 17, 2022
Messages
125
Reaction score
27
Location
Boston
@truglo With the current configuration above, the camera did not trigger at all. I think I have followed your instructions to the letter. Any other ideas?

Thanks
 