|
We've seen a new release of Blue Iris come out and on our end it seems to have reduced a few of the issues we'd seen earlier. Maybe it's a combo of the update and this change.
cheers
Chris Maunder
|
|
|
|
|
I've done some testing on the ALPR module and it seems to generally cope well with various regions, but one thing I noticed is that it seems unable to detect any square plates. I'm going with this being the plate finder rather than the OCR/syntax part of the pipeline.
This does put a bit of a limit on its usefulness in the real world, as it misses most motorcycles, the rear plates of a lot of commercial vehicles, and some other vehicle types.
Nice job on the white-on-black plates and the like, though.
@MikeLud.. I realise this is your own time spent on this, so if you want some assistance, PM me; this would not be my first ALPR rodeo, so to speak.
|
|
|
|
|
Square plates. That raises a very interesting point.
Mike's been working on an updated version that I'm merging with the current code and testing. I'm hoping I'll have it done today. The changes mainly involve deskewing plates, which works by finding the longest line, on the assumption that plates are wider than they are high.
Any chance you could send me a couple of sample images with square plates that you're seeing issues with? I'll debug against them directly. (chris@codeproject.com)
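For what it's worth, the longest-line deskew idea can be sketched roughly like this. This is a minimal illustration, not the actual module code, and the segment format is an assumption (e.g. segments as produced by a Hough transform on the plate crop):

```python
import math

# Minimal sketch of longest-line deskew (illustration only, not the
# module's actual code). Segments are (x1, y1, x2, y2) tuples.
def skew_angle(segments):
    """Angle (degrees) of the longest segment; rotate the crop by -angle."""
    def length(seg):
        x1, y1, x2, y2 = seg
        return math.hypot(x2 - x1, y2 - y1)
    x1, y1, x2, y2 = max(segments, key=length)
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

# A long, slightly tilted plate edge plus a short stray segment.
segments = [(0, 0, 100, 10), (0, 0, 5, 30)]
print(round(skew_angle(segments), 1))  # 5.7
```

Note that the "wider than high" assumption is exactly what a square plate breaks: its longest detected line may well be vertical, which would deskew the crop the wrong way.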
cheers
Chris Maunder
|
|
|
|
|
See my email to you..
Cheers
Paul
|
|
|
|
|
Chris
To identify square plates, the license-plate model might need to be retrained. The post-processing will also need to be changed to identify two lines of text and return them.
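For the two-line part, a rough sketch of the kind of post-processing needed (the box format and row tolerance are assumptions for illustration, not the module's code): group the OCR boxes into rows by vertical position, then read rows top-to-bottom and left-to-right:

```python
# Group OCR boxes into text rows (illustrative sketch). Each box is
# (text, x, y), where (x, y) is the box's top-left corner.
def merge_rows(boxes, row_tol=10):
    """Bucket boxes into rows by y, then join each row left-to-right."""
    rows = {}
    for text, x, y in boxes:
        key = round(y / row_tol)           # approximate row index
        rows.setdefault(key, []).append((x, text))
    lines = []
    for key in sorted(rows):               # rows top-to-bottom
        lines.append("".join(t for _, t in sorted(rows[key])))
    return lines

# Two rows of a square plate: "AB12" above "CD34".
boxes = [("AB", 0, 2), ("12", 30, 3), ("CD", 0, 40), ("34", 30, 41)]
print(merge_rows(boxes))  # ['AB12', 'CD34']
```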
|
|
|
|
|
These are all in the provided test samples.
"
BC-Licence-Plate.jpg

All the squares return nothing.
ABC123.PNG
your-text.jpg
oh-canada.jpg
I can understand why CN_CDNX_GI1.jpg is not working, because it probably only expects one plate at a time.
The rest seem to work as intended.
|
|
|
|
|
The ALPR module is designed to work with images that have a wider FOV, similar to the image below. If you want to read a plate like the image you used, try the OCR module.


|
|
|
|
|
Chris, can you email me the image?
|
|
|
|
|
So far I have had no successful detections through BI, but the module is working and functioning as expected. It's currently set up as CPU, and a test of it takes 109ms with 100% confidence (10.jpg). This image is the closest to the car angle I have where the car is approaching directly toward the camera. Object Detection is using the GPU, and I get the same results with the GPU as well.
In BI, though, the status says it was used, but it returns a bunch of nothing every time. I increased the resolution to use the mainstream, but if it's cut down in resolution like the others, it has no chance of improving.
Has anyone had any success with it outside of the testing page?

I'm about to end testing and disable it. It has been roughly a week so far with a 0% detection rate.
|
|
|
|
|
Post a screenshot of your main and camera AI settings for the camera you are trying to do ALPR on. Also post an image in which you think it should detect the plate. ALPR works best if you use the license-plate model to find the license plate in the image first; BI then sends that image to the ALPR module. Below are my settings.


|
|
|
|
|
I deleted all the other custom models. I only use the ipcam-combined.


I prefer not to release an image with a readable plate, but I'll let you know I screen-captured it from various ranges (6), sub stream and mainstream, and one in the absolute closest position possible. Four of the images are from 4K. I could make out the plate at distance, but even for me, sourced from 4K, it took me a second due to the extreme blur. At the extreme close range it was trivial to read, but neither the OCR nor the plate reader could read it. Is it hopeless? Can the AI read slightly blurred text? Is the confidence set too high? It can't be the resolution, because I cropped a 4K image into a smaller image and had no luck at all.
My camera FOV is 80 degrees, and 4K. When I count the pixels of the car, it has to be something like 270px at the far range and 500px at the close range. At the earliest point it can detect, the image is likely only 150px or smaller.
|
|
|
|
|
Deleting the license-plate model is the reason the ALPR module does not work. The ALPR module uses the license-plate model to first identify the license plate in the image, then crops the license plate so it can be read.
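The two-stage flow, with stub functions standing in for the real models (detect_plates / read_plate are hypothetical names used only to illustrate the control flow, not the module's API):

```python
# Stubs standing in for the real models; only the control flow matters here.
def detect_plates(frame):
    """Stage 1: the license-plate model returns plate bounding boxes."""
    return [(10, 10, 120, 60)]          # hypothetical (x1, y1, x2, y2)

def crop(frame, box):
    """Crop the frame to one detected plate (placeholder)."""
    return (frame, box)

def read_plate(plate_crop):
    """Stage 2: OCR runs on the cropped plate only (placeholder)."""
    return "ABC123"

def alpr(frame):
    return [read_plate(crop(frame, box)) for box in detect_plates(frame)]

print(alpr("frame.jpg"))  # ['ABC123']
```

The point: without stage 1 (the license-plate model), stage 2 never receives a plate crop to read.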
|
|
|
|
|
I have been working on this problem for a day or two now with people on IPCams, and they suggested I post here. I am getting these errors when the server tries to load the ALPR module. I have upgraded, installed, deleted, and re-installed every which way we could come up with, but can't seem to fix it. I even totally deleted the program subdirectories and started from scratch, and still have the problem.
Can anyone here possibly tell me what's going on?
Thanks
14:15:30:Timed out attempting to install Module 'ALPR' ($A task was canceled.)
14:15:31:ALPR_adapter.py: Traceback (most recent call last):
14:15:31:ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR_adapter.py", line 8, in
14:15:31:ALPR_adapter.py: from analysis.codeprojectai import CodeProjectAIRunner
14:15:31:ALPR_adapter.py: File "../../SDK/Python\analysis\codeprojectai.py", line 30, in
14:15:31:ALPR_adapter.py: import aiohttp
14:15:31:ALPR_adapter.py: ModuleNotFoundError: No module named 'aiohttp'
|
|
|
|
|
Help? Anyone have these issues? The ALPR module is not identifying anything in the Explorer. I am using the latest and greatest. I just don't understand why I am getting these errors, and I assume they're the problem.
|
|
|
|
|
rbc1225 wrote: ModuleNotFoundError: No module named 'aiohttp'
This suggests the ALPR installation failed. aiohttp is a Python package the module needs.
rbc1225 wrote: Timed out attempting to install Module
This suggests your internet connection was having issues. Is your connection stable and sufficiently fast?
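If the install keeps timing out, one workaround is to check for, and hand-install, the missing package in the module's own Python environment. A hedged sketch (paths vary by install; point `python_exe` at whichever interpreter the ALPR module actually uses, since CodeProject.AI modules ship with their own environments):

```python
import subprocess
import sys

def can_import(python_exe, module):
    """True if `python_exe -c "import <module>"` succeeds."""
    result = subprocess.run([python_exe, "-c", f"import {module}"],
                            capture_output=True)
    return result.returncode == 0

# Checking the interpreter running this script; substitute the ALPR
# module's own interpreter path on your machine.
print(can_import(sys.executable, "aiohttp"))
# If it prints False, install into that environment by hand:
#   <python_exe> -m pip install aiohttp
```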
cheers
Chris Maunder
|
|
|
|
|
I assume 30Mbps is sufficient. I have tried multiple times, totally removing and installing again. Same problem. I have now seen in a forum on IPcam that another person seems to be having the same issue.
|
|
|
|
|
I am running an Ubuntu 22.04 VM (headless) in Proxmox, with the latest Docker installed, using PCIe passthrough. I followed the Nvidia Docker setup instructions (nvidia-container-toolkit, etc.) and the CP.AI Docker setup. On boot, Docker doesn't recognize the GPU (Nvidia Tesla P4).
I found nvidia-persistenced; when I try that, Docker will start the CP.AI container, but YOLO still runs in CPU mode.
I had lolminer sitting on the host because I was struggling with crashes under GPU load, so I used it as a stress test. (I ultimately determined those crashes were caused by inadequate cooling of the P4 card, which is now fixed.) I've found that if, after rebooting, I start lolminer (running nvidia-persistenced isn't necessary), let it run for a few seconds, kill it, and then start the CP.AI container, everything works as intended. It seems lolminer is doing something in terms of loading the drivers that the CP.AI container isn't.
Seems like perhaps there's something in the CP.AI startup sequence that could be improved (perhaps something specific to PCIe passthrough)? Or maybe I'm just missing a step somewhere?
Thanks for the great work.
|
|
|
|
|
Not sure it is apples to apples, but I had similar problems until I used nvidia-docker. OK with that.
Running very well here: 70-80 msec times with an ancient P620 GPU, PCIe passthrough on ESXi, Debian VM.
Some days the dragon wins. Suck it up.
|
|
|
|
|
Thanks for the idea, but I do have nvidia-docker2 installed.
Not sure. Anyway, I just added an rc.local script to fire up lolminer for 3 seconds on boot, and it all auto-starts fine after that. A little hacky, but it works. And at least for the past day or so it seems stable.
And yes, the performance is pretty mind-blowing. I'm using a Tesla P4 I picked up from eBay for $100 and the actual processing times for 4k images with max model size are ~40ms.
|
|
|
|
|
Any chance you could briefly write up what you did? I'll add it to the FAQ
cheers
Chris Maunder
|
|
|
|
|
Sure, you mean in terms of my workaround, right?
|
|
|
|
|
OK, actually I think something else must have been at work.
I was working on a writeup and wanted to get the exact error message I was seeing, so I removed the script call I had added to rc.local. Everything is still starting up and running fine after restarts (both of the VM and the entire Proxmox node), without lolminer and without persistence mode. So I don't know what I had going on yesterday.
I'd gotten really thrown off because I got things working pretty easily at first, but it was unstable. I'd read some people talking about how GPU passthrough could be flaky in Proxmox, so I was attributing it to that and starting to question my whole setup plan, when I ultimately discovered that the issue was the GPU thermals and not the software at all. After chasing down the wrong rabbit hole for a day or two, I probably wasn't at my best by that point.
In any case, if it's useful: I don't really have much to add to what's already out there on the GPU passthrough setup, but here is a bit of info, mostly pointers to resources, plus some mention of what I did in case someone else runs into the issue and it turns out my workaround was helping somehow.
The GPU passthrough setup is well-documented in several places. Going through a few, I ended up finding this to be my favorite:
GPU Passthrough to VM - 3os[^]
I only deviated from that walkthrough in that I downloaded the driver for the VM directly from Nvidia instead of from the package manager, to make sure I got a version with CUDA 11.7, as I'd read that was important for CP.ai. (I don't know for certain whether a newer version is still problematic, but 11.7 is working great for me.) Otherwise I followed it, and it worked.
Then, once the card showed up in the Ubuntu 22.04 VM (confirmed with nvidia-smi), I proceeded to these instructions to set up nvidia-container-toolkit (they also have the command to install Docker if you don't already have it running):
Installation Guide — NVIDIA Cloud Native Technologies documentation[^]
At that point I followed the CP.ai instructions to set up the GPU Docker container (I just used volumes instead of the bind mounts): Running CodeProject.AI Server in Docker - CodeProject.AI Server v2.0.6[^]
Initially I was having trouble getting the CP.ai container to start, and then it would start but not find the GPU. What seemed to help was briefly running either nvidia-smi or lolminer before starting the container. I tried it several times, and after a reboot the container wouldn't start and/or recognize the GPU until I ran one or the other.
Now everything is working as it should without that workaround, which I can't explain, so I can't replicate the situation to confirm or deny any more about it. But if you get the GPU set up and are having issues getting CP.ai to load, perhaps it's worth a shot.
Incidentally, I also had CP.ai working in Docker inside a Proxmox LXC. As I mentioned, I was struggling with stability due to the thermals, so I tried every way of running it that I could. I can confirm that the LXC also worked fine once I got everything sorted. In that situation, instead of blocking the GPU drivers altogether on the Proxmox OS, you need to block only the Nouveau driver. The setup is then fairly similar; you're just doing the driver setup on the Proxmox base OS instead of in the VM.
|
|
|
|
|
The key error appears to be this:
4:16:44: ALPR_adapter.py: Error: Your machine doesn't support AVX, but the installed PaddlePaddle is avx core, you should reinstall paddlepaddle with no-avx core.
|
|
|
|
|
What is the configuration of the machine you are running CodeProject.AI Server on?
Of particular interest is the CPU.
"Time flies like an arrow. Fruit flies like a banana."
|
|
|
|
|
It's just an N5095 mini PC, so it definitely doesn't have AVX.
I was seeing what I could do with a low-spec machine. I'm not sure how hard it would be for the installer to detect that AVX is unavailable and install the non-AVX build of PaddlePaddle, which would likely solve this.
Granted, it wouldn't exactly be a stellar server, but if it worked it could easily power ALPR and the like off a home CCTV camera on cheap, readily available hardware, which quite a lot of people would likely want to do.
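Detecting AVX at install time should be cheap. A minimal sketch, assuming a Linux host where the CPU flags are exposed in /proc/cpuinfo (on Windows the installer would need a different probe):

```python
# Sketch: decide between AVX and no-avx builds by parsing CPU flags.
def cpu_flags(cpuinfo_text):
    """Parse the flags line from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.split(":")[0].strip() == "flags":
            return set(line.split(":", 1)[1].split())
    return set()

def has_avx(cpuinfo_text):
    return "avx" in cpu_flags(cpuinfo_text)

# On a real Linux box you would read the file:
#   text = open("/proc/cpuinfo").read()
sample = "processor\t: 0\nflags\t\t: fpu vme sse2 ssse3\n"
print(has_avx(sample))  # False, so install the no-avx PaddlePaddle build
```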
|
|
|
|