OpenALPR Webhook Processor for IP Cameras

IReallyLikePizza2

Known around here
Joined
May 14, 2019
Messages
1,852
Reaction score
4,443
Location
Houston
Hopefully they get it sorted out with time. If I could have the CPU at 6-8% idle with the GPU doing the rest, I'd be very happy.
 

CamCrazy

Pulling my weight
Joined
Aug 23, 2017
Messages
416
Reaction score
194
Location
UK
Hope so. I emailed them about this, and the response was that Rekor allocates CPU to accommodate the data link to the GPU; I think they assume anyone with more than one ALPR camera will be using a GPU. Sadly for us that's not the case. Anyway, I voiced my opinion and they said it would be passed on, so :idk:
 

mlapaglia

Getting comfortable
Joined
Apr 6, 2016
Messages
849
Reaction score
506
v4.1.0 is released.
  • The processor will now store all images locally after the initial image is pulled from the agent.
  • For best results, run the agent scrape after upgrading. It performs a one-time pull from the agent to get all images, and the system log will show its progress. You can safely stop/restart the service while this is happening without causing issues; you will just need to start the scrape again, and it will pick up where it left off.
  • Scraping should now be much faster on larger databases. Pulling the images is still the slow part, since the agent doesn't respond very quickly.
  • If you do not run the agent scrape and try to open an old plate, the processor will pull the missing plate image from the agent and store it locally so it doesn't have to fetch it again (the sketch below outlines the idea).
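A minimal sketch of that fetch-once-then-cache behavior, not the processor's actual code: the agent address, the /img/<uuid>.jpg endpoint, and the on-disk layout are assumptions for illustration (the real processor keeps images in its own database).

    import os
    import requests

    AGENT_URL = "http://192.168.1.10:8355"  # assumed Rekor/OpenALPR agent address; adjust for your network
    IMAGE_DIR = "./plateimages"             # hypothetical local store used only by this sketch

    def get_plate_image(uuid: str) -> bytes:
        """Return a plate image, pulling it from the agent at most once."""
        path = os.path.join(IMAGE_DIR, f"{uuid}.jpg")
        if os.path.exists(path):
            # Already cached locally, so no round-trip to the (slow) agent.
            with open(path, "rb") as f:
                return f.read()
        # Cache miss: fetch from the agent, store it, and serve it locally from now on.
        resp = requests.get(f"{AGENT_URL}/img/{uuid}.jpg", timeout=10)
        resp.raise_for_status()
        os.makedirs(IMAGE_DIR, exist_ok=True)
        with open(path, "wb") as f:
            f.write(resp.content)
        return resp.content

The scrape is essentially this same idea run across every known plate, which is why it can be stopped and resumed: anything already stored locally is skipped on the next pass.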
 

mlapaglia

Getting comfortable
Joined
Apr 6, 2016
Messages
849
Reaction score
506
v4.2.0 fixes an outstanding issue where the system log page used more and more memory over time and eventually crashed the browser. It now pre-loads 500 lines of logs on page load and keeps only the newest 500 lines on screen, flushing older logs from the UI.
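For anyone curious how that keeps memory flat: it's essentially a fixed-size buffer. Here's the idea sketched in Python rather than the actual UI code, with the 500-line figure taken from the note above.

    from collections import deque

    MAX_LINES = 500                       # the on-screen window described above

    log_buffer = deque(maxlen=MAX_LINES)  # oldest entries fall off automatically

    def append_log(line: str) -> None:
        """Add a new log line; anything beyond MAX_LINES is discarded from the front."""
        log_buffer.append(line)

    # Simulate a long-running log stream.
    for i in range(10_000):
        append_log(f"log line {i}")

    print(len(log_buffer))  # 500 -> memory stays bounded however long the page is open
    print(log_buffer[0])    # "log line 9500" -> only the newest lines are kept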
 

biggen

Known around here
Joined
May 6, 2018
Messages
2,573
Reaction score
2,858
I'll give this a try tonight and report back! Will using "latest" pull 4.2.0?
 

biggen

Known around here
Joined
May 6, 2018
Messages
2,573
Reaction score
2,858
Running the full scrape now. At first I was getting "Adding job for image...", but now I'm getting "Unable to retrieve image from agent..." over and over again.

How does one know it retrieved any images and is storing them? Is there a counter or visual cue somewhere that will show how many images the local db is storing?
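(In the meantime, here's a rough way to peek at the local store yourself, sketched in Python. It assumes processor.db is SQLite, and the table/column names below are guesses for illustration rather than the processor's actual schema, so list the tables first and adjust.)

    import sqlite3

    DB_PATH = "processor.db"  # adjust to wherever your volume keeps the processor database

    conn = sqlite3.connect(DB_PATH)

    # List the tables so you can spot the one holding plates/images.
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    print("tables:", tables)

    # Hypothetical query: count plates that already have image data stored locally.
    # Replace Plates/ImageJpeg with whatever the real table and column are called.
    count = conn.execute(
        "SELECT COUNT(*) FROM Plates WHERE ImageJpeg IS NOT NULL").fetchone()[0]
    print("plates with a locally stored image:", count)

    conn.close()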
 

mlapaglia

Getting comfortable
Joined
Apr 6, 2016
Messages
849
Reaction score
506
If your processor has records for plates that your agent has since deleted, it won't be able to get the images for them. Are the plate images working correctly when you browse them in the list?
 

biggen

Known around here
Joined
May 6, 2018
Messages
2,573
Reaction score
2,858
That is probably it. I assume it starts from the earliest? My earliest plate pictures are not there. I'll let it run and see.
 

mlapaglia

Getting comfortable
Joined
Apr 6, 2016
Messages
849
Reaction score
506
It starts with the most recent plates and works backwards.
 

djmadfx

Getting the hang of it
Joined
Sep 29, 2014
Messages
106
Reaction score
19
I think I underestimated the disk space usage of 28,000 plate images. Are you able to add disk usage to the UI? I guess some sort of stats page.

I have 'openalprwebhookprocessor' on a local volume since it does have a database (I didn't want to risk lock issues running it over NFS or CIFS). I wonder if it might end up being best to have a separate volume mount so the images can just be stored on an NFS volume. My processor.db is currently 36 GB.
 

mlapaglia

Getting comfortable
Joined
Apr 6, 2016
Messages
849
Reaction score
506
I think I underestimated the disk space usage of 28,000 plate images. Are you able to add disk usage to the UI? I guess some sort of stats page.

I have 'openalprwebhookprocessor' on a local volume since it does have a database (I didn't want to risk lock issues running it over NFS or CIFS). I wonder if it might end up being best to have a separate volume mount so the images can just be stored on an NFS volume. My processor.db is currently 36 GB.
How big is your Rekor agent's db? How many plates are you seeing a day?
 

djmadfx

Getting the hang of it
Joined
Sep 29, 2014
Messages
106
Reaction score
19
Rekor's plateimages db is 41 GB. I get 250-300 plates/day (that should be every image since ~July, when I started). I already run backups on the server where Rekor Scout runs, so the database is backed up daily. Maybe add a setting to keep just the last X days of images within processor.db? I don't know what's best in this case.
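For what it's worth, a "keep the last X days" setting would boil down to something like this, sketched in Python against a hypothetical SQLite schema (the table and column names are made up for illustration; the real processor would handle this internally).

    import sqlite3
    from datetime import datetime, timedelta, timezone

    DB_PATH = "processor.db"  # assumed SQLite database used by the processor
    KEEP_DAYS = 30            # the "last X days" retention being proposed

    cutoff = datetime.now(timezone.utc) - timedelta(days=KEEP_DAYS)

    conn = sqlite3.connect(DB_PATH)
    # Hypothetical schema: drop the stored image blobs for plates older than the cutoff
    # while keeping the plate records themselves.
    conn.execute(
        "UPDATE Plates SET ImageJpeg = NULL WHERE ReceivedOnUtc < ?",
        (cutoff.isoformat(),))
    conn.commit()
    # SQLite only returns the freed space to the filesystem after a VACUUM.
    conn.execute("VACUUM")
    conn.close()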
 

biggen

Known around here
Joined
May 6, 2018
Messages
2,573
Reaction score
2,858
Seems to be working fine now. The .db size is growing and the queued image count is going down.
 

biggen

Known around here
Joined
May 6, 2018
Messages
2,573
Reaction score
2,858
Rekor's plateimages db is 41 GB. I get 250-300 plates/day (that should be every image since ~July, when I started). I already run backups on the server where Rekor Scout runs, so the database is backed up daily. Maybe add a setting to keep just the last X days of images within processor.db? I don't know what's best in this case.
This is what I'm thinking I will do too. I don't think I need Rekor to hold 40 GB and then have the webhook processor hold a duplicate 40 GB. I also back up my VM daily, so I have backups I can always revert to if something screws up.
 

biggen

Known around here
Joined
May 6, 2018
Messages
2,573
Reaction score
2,858
Alright, the scrape just finished. I have ~130k plates and the db grew to 35 GB. I'm not sure how many images that is, because I have the Rekor agent set to hold only 32 GB worth of plate images. It took about an hour to finish the scrape. Very, very cool indeed!

Is there a way to show how many actual plate images we are holding locally? Maybe at the bottom of the webpage near the green status light?

Edit: Maybe a bug. On the log page, if you click "Download last 24 hours of plates" at the top, a new window pops up (a "blob" URL) that contains only two brackets: []. Not sure what that means. Why is that link up there, and how is it different from the scrape on the agent page? I assume it's just to quickly grab the last 24 hours' worth of plates and no more.
 

mlapaglia

Getting comfortable
Joined
Apr 6, 2016
Messages
849
Reaction score
506
It's for debugging plates that fail to process; it doesn't do anything with scraping.
 