It's hosted publicly on GitHub under the Server Side Public License, so anyone can fork it and continue development even if the current team stops updating it.
Here is the list of all the current forks, btw: Forks · codeproject/CodeProject.AI-Server
Here are the only errors I found in the log files while it was disconnected:
2023-02-01 09:55:52: Error checking for latest version: The SSL connection could not be established, see inner exception.
2023-02-01 09:55:52: Error checking for available modules: The SSL connection could not be established, see inner exception.
In addition to those, I am getting these as well when I disconnect the internet:
detect_adapter.py: Server connection error. Is the server URL correct?
detect_adapter.py: Pausing on error for 60 secs.
detect_adapter.py: [ClientConnectorError] : Unable to check the command queue objectdetection_queue. Is the server URL correct?
objectdetection_queue: [ClientConnectorError] : Unable to check the command queue objectdetection_queue. Is the server URL correct?
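Those detect_adapter.py lines are expected offline behavior: the module polls the server's command queue, and when the connection fails it pauses (60 secs per the log) before trying again. A generic sketch of that poll-and-pause pattern, using only the Python standard library (the URL, function name, and loop are illustrative assumptions, not CPAI's actual code):

```python
import time
import urllib.request
import urllib.error

SERVER_URL = "http://127.0.0.1:9"  # hypothetical server address (port 9 chosen to fail)
PAUSE_ON_ERROR = 60                # seconds, matching the log above

def poll_queue_once() -> bool:
    """Return True if the server answered, False on connection trouble."""
    try:
        urllib.request.urlopen(SERVER_URL, timeout=5)
        return True
    except (urllib.error.URLError, OSError):
        return False

# Main-loop sketch: keep polling, back off while the server is unreachable.
# while True:
#     if not poll_queue_once():
#         print(f"Server connection error. Pausing on error for {PAUSE_ON_ERROR} secs.")
#         time.sleep(PAUSE_ON_ERROR)
```

So once the internet (or the server) comes back, the module reconnects on its own; the repeated errors while offline are harmless.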
(it was $150 when I bought it, down to $99 now). It runs off the PCIe bus power without needing any other power supply, so it is power/cost efficient, which was a factor in my choice. AI returns (originally Deepstack, now CPAI) are significantly faster with this card than using CPU.
Benchmarking one of my LPR cam images against LicensePlates v2 I get 16.4 operations per second. Detection LicensePlate v2 on that same image takes 75ms round trip, 52ms processing, 51ms inference, and has a 91% DayPlate result (YOLOv5 6.2). I tried to get the numbers using CPU only, but I can't seem to get CPAI to go into CPU mode. Last time I tested, I think it was in the 300ms+ range for the same sort of detection result.
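As a sanity check, those two figures are consistent: throughput and per-image latency are roughly reciprocals, with the gap up to the 75ms round trip being queueing/transport overhead:

```python
# 16.4 operations per second implies roughly this much time per detection:
ops_per_sec = 16.4
latency_ms = 1000 / ops_per_sec
print(f"{latency_ms:.1f} ms per detection")  # 61.0 ms, between 52 ms processing and 75 ms round trip
```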
Yeah, 250msec is about right for CPU... that's what I was getting with an Intel 3570 anyway. That 1030 GPU does look solid... 384 CUDA cores and 30W. That beats my P400 with 256 cores and 30W, which does about 13.0 ops/sec (~70-80msec; recently purchased used on eBay for $75). GPU does help a ton, especially when you have situations where several cams are sending to AI at the same time.
This is the Node-Red dashboard @Alan_F created. It's an awesome way of visualizing and storing all your LPR alerts, and it includes search and sorting functions. The process he created for this requires BI, CodeProject AI, MQTT, Node-Red, an SQL database (I'm using MariaDB), and a local web server.
It may just be easier to try to update everything in this post, as things changed incrementally, so here you go:
Components needed: Blue Iris set up to recognize tags, Node Red, a MySQL database, MQTT server, Internet Information Services or other web server to serve images.
Configure Blue Iris to write alert images to a folder (must use "Hi Res JPEG files" option for "Add to alerts list" on Trigger page)
Configure your LPR cam(s) On Alert action to send MQTT message as follows: { "plate":"&PLATE", "AlertImgPath":"&ALERT_PATH", "Alert_AI":"&MEMO", "Date":"%Y-%m-%d %H:%M:%S","Camera":"&CAM" }
Create a separate folder to store the images where ALPR detected a tag (outside Blue Iris)
The original images are deleted when Blue Iris deletes the clip/alert. Copying them elsewhere allows you to keep them as long as you want
This folder should be local to the Node Red instance
Set up Windows Internet Information Services (IIS) or another web server to serve the files in that folder on an available port (I'm using 8093 in the example)
Set up a MySQL database to store the records (anywhere this computer can connect to on the network). I'm running MariaDB in Docker on my Raspberry Pi.
Use the attached flow (main flow.zip) in Node Red to store the data from each MQTT message and copy the image file to your storage folder
Configure the nodes as needed: folder paths, mysql server config, etc.
Use the attached flow (dashboard flow.zip) to create a UI dashboard to view tags and images
Put the URL to the web server you set up in the dashboard flow node labeled "Set URL Here", keeping the format like the example
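For anyone curious what the flow is doing per message: the work in the steps above boils down to parsing the JSON payload from Blue Iris, copying the alert image to the archive folder, and inserting a row in the database. A minimal Python sketch of the first two steps, standard library only (field names match the MQTT payload above; the folder and file paths are hypothetical stand-ins, the real flow does this in Node Red):

```python
import json
import shutil
from pathlib import Path

ARCHIVE = Path("lpr_archive")  # hypothetical archive folder, outside Blue Iris

def handle_alert(payload: str) -> Path:
    """Parse one Blue Iris MQTT alert payload and archive its image."""
    alert = json.loads(payload)            # {"plate": ..., "AlertImgPath": ...}
    src = Path(alert["AlertImgPath"])      # Blue Iris will eventually delete this
    ARCHIVE.mkdir(exist_ok=True)
    dest = ARCHIVE / src.name
    shutil.copy(src, dest)                 # our copy survives BI's cleanup
    return dest

# Example with a stand-in image file and payload:
Path("alert1.jpg").write_bytes(b"fake jpeg")
msg = json.dumps({"plate": "ABC123", "AlertImgPath": "alert1.jpg",
                  "Alert_AI": "DayPlate:91%",
                  "Date": "2023-02-01 09:55:52", "Camera": "LPR1"})
print(handle_alert(msg))
```

In the real setup an MQTT client (the Node Red MQTT-in node) would call this handler for each message arriving on the topic your On Alert action publishes to.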
This query will create the database:
Code:
CREATE DATABASE `LPR` /*!40100 DEFAULT CHARACTER SET utf8mb4 */ ;
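The tables themselves are created by the attached flow, so I won't duplicate that here. Purely to illustrate the shape of the data being stored, here's a hypothetical equivalent using Python's built-in sqlite3 with columns mirroring the MQTT payload fields (the real setup uses MariaDB/MySQL and the flow's own schema, which may differ):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for the MariaDB "LPR" database
con.execute("""CREATE TABLE alerts (
    plate TEXT, AlertImgPath TEXT, Alert_AI TEXT, Date TEXT, Camera TEXT)""")

# One row per MQTT alert message, using parameter substitution:
row = ("ABC123", "alert1.jpg", "DayPlate:91%", "2023-02-01 09:55:52", "LPR1")
con.execute("INSERT INTO alerts VALUES (?, ?, ?, ?, ?)", row)

print(con.execute("SELECT plate, Camera FROM alerts").fetchall())
# [('ABC123', 'LPR1')]
```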
This isn't a comprehensive step-by-step set of instructions for each component. I have Node Red running directly on my Blue Iris machine because it was already there, and MQTT and the database are running in Docker on a Raspberry Pi. It could all be run on the BI machine if you want.
I'm happy to try to help anyone get this set up once you've hit the limit of what Google can answer for you.
Also, this is still a work in progress. The dashboard changed over the last 24 hours after @Vettester asked about getting the images displayed directly on the page instead of opening in a new tab. The way it works now, when you click on a row in the table, the image is displayed to the right. If you then click the image, it opens full size in a new tab.