|
The ALPR module is designed to work with images that have a wider FOV, similar to the image below. If you want to read a plate like the one in the image you used, try the OCR module.


|
|
|
|
|
Chris, can you email the image?
|
|
|
|
|
So far I have had no successful detections through BI, but the module itself is working and functioning as expected. It's currently set up on CPU, and a test run takes 109 ms with 100% confidence (10.jpg). That image is the closest I have to the real angle, with the car approaching directly toward the camera. Object Detection is using the GPU, and I get the same results with the GPU as well.
In BI, though, the status says the module was used, but it returns nothing every time. I increased the resolution to use the mainstream, but if BI is sending it a cut-down resolution like the others, it has no chance of improving.
Has anyone had any success with it outside of the testing page?

I'm about to end testing and disable it. It has been roughly a week so far with 0% detection rate.
|
|
|
|
|
Post a screenshot of your main & camera AI settings for the camera you are trying to do ALPR on. Also post an image that you think it should detect the plate in. ALPR works best if you use the license-plate model to find the license plate in the image first; then BI sends that image to run the ALPR module. Below are my settings.
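If you want to rule BI out entirely, you can exercise the same two-step flow directly against the server's HTTP API. This is only a minimal sketch: the port (32168), the endpoint names, and the response fields are the defaults on my install and may differ on yours.

# Two-step check against CodeProject.AI: find the plate, then read it.
# Port, endpoint names, response fields and the test filename are
# assumptions based on a default install; adjust to match your setup.
import requests

BASE = "http://localhost:32168"

with open("test-car.jpg", "rb") as f:   # placeholder filename
    img = f.read()

# Step 1: locate the plate with the license-plate custom model
det = requests.post(f"{BASE}/v1/vision/custom/license-plate",
                    files={"image": img}).json()
print("plate detections:", det.get("predictions"))

# Step 2: ask the ALPR module to read the plate text from the same image
alpr = requests.post(f"{BASE}/v1/image/alpr",
                     files={"image": img}).json()
print("ALPR result:", alpr.get("predictions"))

If this works from the command line but BI still returns nothing, the problem is on the BI side (resolution, zones, or which image it sends), not the module.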


|
|
|
|
|
I deleted all the other custom models. I only use the ipcam-combined.


I prefer not to release an image with a readable plate, but I'll let you know I screen-captured it from various ranges (6), substream and mainstream, and one in the absolute closest position possible. Four of the images are from 4K. I could make out the plate at distance, but even for me, sourced from 4K, it took me a second due to the extreme blur. At the extreme close range it was trivial to read, but neither the OCR nor the plate reader could read it. Is it hopeless? Can the AI read slightly blurred text? Is the confidence set too high? It can't be the resolution, because I cropped a 4K image down to a smaller image and had no luck at all.
My camera FOV is 80 degrees at 4K. Counting pixels, the car is something like 270 pixels tall at the far range and 500 pixels at the close range. At the earliest point it can detect, the car is likely only 150 pixels or smaller.
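For a rough sense of how many pixels the plate itself gets at a given range, here's a back-of-the-envelope estimate. The 520 mm plate width and the example distances are assumed values I'm plugging in, not measurements:

# Rough plate-width-in-pixels estimate for an 80-degree FOV, 4K camera.
# The plate width (~520 mm) and the distances below are assumptions.
import math

def plate_pixels(distance_m, plate_w_m=0.52, image_w_px=3840, hfov_deg=80):
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return plate_w_m / scene_width_m * image_w_px

for d in (5, 10, 20, 30):
    print(f"{d:>2} m: ~{plate_pixels(d):.0f} px wide")

At 10 m that works out to only about 120 pixels across the whole plate, so blur on top of that doesn't leave the reader much to work with.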
|
|
|
|
|
Deleting the license-plate model is the reason the ALPR module does not work. The ALPR module uses the license-plate model to first locate the license plate in the image, then crops out the plate region that needs to be read.
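Conceptually, the crop step looks something like this. This is an illustrative sketch only, not the module's actual code, and the bounding-box field names are assumptions:

# Illustrative only: cropping a detected plate region before it is read.
# The x_min/y_min/x_max/y_max keys mirror typical CodeProject.AI detection
# responses but are assumptions here.
from PIL import Image

def crop_plate(image_path, detection, pad=8):
    img = Image.open(image_path)
    box = (max(detection["x_min"] - pad, 0),
           max(detection["y_min"] - pad, 0),
           min(detection["x_max"] + pad, img.width),
           min(detection["y_max"] + pad, img.height))
    # this small crop is what the plate reader actually sees
    return img.crop(box)

Without the license-plate model there is nothing to produce that crop, so the reader gets the whole frame and finds nothing.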
|
|
|
|
|
I have been working on this problem for a day or two now with people on IPCams and they suggested I post here. I am getting these errors when the server is trying to load the ALPR module. I have upgraded, installed, deleted, re-installed and every which way we could come up with but can't seem to fix it. I even totally deleted the program sub directories and started from scratch and still a problem.
Can anyone here possibly tell me what's going on?
Thanks
14:15:30:Timed out attempting to install Module 'ALPR' ($A task was canceled.)
14:15:31:ALPR_adapter.py: Traceback (most recent call last):
14:15:31:ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR_adapter.py", line 8, in
14:15:31:ALPR_adapter.py: from analysis.codeprojectai import CodeProjectAIRunner
14:15:31:ALPR_adapter.py: File "../../SDK/Python\analysis\codeprojectai.py", line 30, in
14:15:31:ALPR_adapter.py: import aiohttp
14:15:31:ALPR_adapter.py: ModuleNotFoundError: No module named 'aiohttp'
|
|
|
|
|
Help? Anyone else having these issues? The ALPR module isn't identifying anything in the explorer. I am using the latest and greatest. I just don't understand why I am getting these errors, and I assume they're the problem.
|
|
|
|
|
rbc1225 wrote: ModuleNotFoundError: No module named 'aiohttp'
This suggests the ALPR installation failed. aiohttp is a Python package that the module needs.
rbc1225 wrote: Timed out attempting to install Module
Suggests your internet connection was having issues. Is your connection stable and sufficiently fast?
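If the install keeps timing out, one quick way to confirm whether aiohttp actually made it into the module's Python environment is to try importing it with that environment's interpreter. A small sketch; the venv location varies by install and version:

# Run with the Python interpreter from the ALPR module's virtual
# environment (the exact venv path depends on your install/version).
try:
    import aiohttp
    print("aiohttp", aiohttp.__version__, "is available")
except ModuleNotFoundError:
    print("aiohttp is missing - the module install did not complete")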
cheers
Chris Maunder
|
|
|
|
|
I assume 30 Mbps is sufficient. I have tried totally removing and installing again multiple times; same problem. I have now seen on an IPCam forum that another person seems to be having the same issue.
|
|
|
|
|
 I am having the same issue... but that module is installed...
docker exec -it aicode_codeprojetai_1 bash
root@7fc37909277b:/app/server# pip3 install aiohttp
Requirement already satisfied: aiohttp in /usr/local/lib/python3.8/dist-packages (3.8.4)
Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.8/dist-packages (from aiohttp) (22.2.0)
Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from aiohttp) (1.3.3)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.8/dist-packages (from aiohttp) (4.0.2)
Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.8/dist-packages (from aiohttp) (1.8.2)
Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.8/dist-packages (from aiohttp) (6.0.4)
Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.8/dist-packages (from aiohttp) (1.3.1)
Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /usr/local/lib/python3.8/dist-packages (from aiohttp) (2.1.1)
Requirement already satisfied: idna>=2.0 in /usr/local/lib/python3.8/dist-packages (from yarl<2.0,>=1.0->aiohttp) (3.4)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https:
root@7fc37909277b:/app/server# cd /usr/
|
|
|
|
|
I am running an Ubuntu 22.04 VM (headless) in Proxmox, with the latest docker installed, using PCIE passthrough. I followed the Nvidia docker setup instructions (nvidia-container-toolkit, etc) and the CP.AI docker setup. Upon boot, docker doesn't recognize the GPU (NVidia Tesla P4).
I found nvidia-persistenced - when I try that, docker will start the CP.AI container, but YOLO still runs in CPU mode.
I had lolminer sitting on the host because I was struggling with crashes under GPU load, so I used it as a stress test. (Ultimately determined that those were caused by inadequate cooling of the P4 card, which is now fixed). I've found that if after rebooting, I start lolminer (running nvidia-persistenced isn't necessary), let it run for a few seconds, kill it, and then start the CP.AI docker, everything works as intended. It seems that lolminer is doing something in terms of loading the drivers that the CP.AI docker isn't.
Seems like perhaps there's something in the CP.AI startup sequence that could be improved (perhaps something specific to PCIe passthrough)? Or maybe I'm just missing a step somewhere?
Thanks for the great work.
|
|
|
|
|
Not sure it's apples to apples, but I had a similar issue until I used nvidia-docker; it's been OK with that.
Running very well here: 70-80 ms times with an ancient P620 GPU, PCIe passthrough on ESXi, Debian VM.
Some days the dragon wins. Suck it up.
|
|
|
|
|
Thanks for the idea, but I do have nvidia-docker2 installed.
Not sure. Anyway, I just added an rc.local script to fire up lolminer for 3 seconds on boot and it all auto starts fine after that. A little hacky, but it works. And at least for the past day or so seems stable.
And yes, the performance is pretty mind-blowing. I'm using a Tesla P4 I picked up from eBay for $100 and the actual processing times for 4k images with max model size are ~40ms.
|
|
|
|
|
Any chance you could briefly write up what you did? I'll add it to the FAQ
cheers
Chris Maunder
|
|
|
|
|
Sure, you mean in terms of my workaround, right?
|
|
|
|
|
OK, actually I think something else must have been at work.
I was working on a writeup and wanted to get the exact error message I was seeing, so I removed the script call I added to rc.local. Everything is still starting up and running fine after restarts (both of the VM and the entire Proxmox node), without lolminer and without persistence mode. So I don't know what I had going on yesterday.
I'd gotten really thrown off because I got things working pretty easily but it was unstable. I'd read some people talking about how GPU passthrough could be flaky in Proxmox so I was attributing it to that and starting to question my whole setup plan, when I ultimately discovered that the issue was the GPU thermals and not the software at all. So after chasing down the wrong rabbit hole for a day or two, I probably wasn't at my best by that point.
In any case, if it's useful: I don't really have much to add to what's already out there on the GPU passthrough setup itself, but here is a bit of info, mostly pointers to resources plus what I did, in case someone else runs into the issue and it turns out my workaround was actually helping somehow:
The GPU passthrough setup is well-documented in several places. Going through a few, I ended up finding this to be my favorite:
GPU Passthrough to VM - 3os[^]
I only deviated from that walkthrough in that I downloaded the driver for the VM direct from Nvidia instead of downloading from the package manager, to make sure I got a version with CUDA 11.7 as I'd read that was important for CP.ai. (Don't know 100% if a newer version is still problematic, but 11.7 is working great for me). Otherwise I followed it, and it worked.
Then once the card showed up in the Ubuntu 22.04 VM (confirmed with nvidia-smi), I proceeded to these instructions to setup nvidia-container-toolkit (also has the command to install docker if you don't already have it running):
Installation Guide — NVIDIA Cloud Native Technologies documentation[^]
At that point I followed the CP.ai instructions to set up the GPU docker (I just used volumes instead of the bind mounts): Running CodeProject.AI Server in Docker - CodeProject.AI Server v2.0.6[^]
Initially I was having trouble getting the CP.ai container to start, and when it did start it wouldn't find the GPU. What seemed to help was running either nvidia-smi or lolminer briefly before starting the docker. I tried it several times; after a reboot the container wouldn't start and/or recognize the GPU until I did one or the other.
Now everything is working as it should be without that workaround, which I can't explain, so I can't replicate the situation to confirm or deny any more about it. But if you get the GPU set up and are having issues getting CP.ai to load, perhaps it's worth a shot.
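One quick way to confirm whether the container actually sees the GPU is to check CUDA availability from Python inside it. A small sketch, assuming PyTorch is available in the environment you run it in:

# GPU visibility check from inside the container (assumes PyTorch is
# installed in the environment you run this in).
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))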
Incidentally, I also had CP.ai working in Docker inside of a Proxmox LXC. As I mentioned, I was struggling with stability due to the thermals so I tried every way of running it that I could. I can confirm that the LXC also worked fine once I got everything sorted. In that situation, instead of blocking the GPU drivers altogether on the Proxmox OS, you need to block the Nouveau driver. Then the setup is fairly similar, you're just doing the driver setup on the Proxmox base OS instead of in the VM.
|
|
|
|
|
The key error appears to be this:
"
4:16:44: ALPR_adapter.py: Error: Your machine doesn't support AVX, but the installed PaddlePaddle is avx core, you should reinstall paddlepaddle with no-avx core. "
|
|
|
|
|
What is the configuration of the machine you are running CodeProject.AI Server on?
Of particular interest is the CPU.
"Time flies like an arrow. Fruit flies like a banana."
|
|
|
|
|
It's just an N5095 mini PC, so it definitely doesn't have AVX.
I was seeing what I could do with a low-spec machine. Not sure how hard it would be for the installer to detect that AVX is unavailable and install the non-AVX build of PaddlePaddle, which would likely solve this (a rough sketch of such a check is below).
Granted, it wouldn't exactly be a stellar server, but if it worked it could easily power ALPR or similar off a home CCTV camera on cheap, readily available hardware, which quite a lot of people would likely want to do.
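Detecting that at install time could be as simple as something like this. Just a sketch, and it assumes the py-cpuinfo package (pip install py-cpuinfo) would be acceptable as a dependency:

# Sketch of an install-time AVX check using py-cpuinfo (an assumed
# dependency). If AVX is missing, the installer could pull the no-avx
# PaddlePaddle build instead.
from cpuinfo import get_cpu_info

flags = get_cpu_info().get("flags", [])
print("AVX supported:", "avx" in flags)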
|
|
|
|
|
Interesting.
According to Intel® Celeron® Processor N5095, the processor supports Intel® SSE4.2 which includes AVX. So why PaddlePaddle thinks it doesn't is a mystery.
"Time flies like an arrow. Fruit flies like a banana."
|
|
|
|
|
Yeah, I looked that up, so I ran HWiNFO and it reports no AVX support. Then I discovered that Intel hobbled AVX support in Jasper Lake Celerons in a microcode release.
So really I was bringing this up because I would imagine a lot of people will use cheaper Celeron mini PCs as home AI servers, and this is a bit of an issue for that.

|
|
|
|
|
We've added this to our TODO list
cheers
Chris Maunder
|
|
|
|
|
|
On 2.0.7-Beta, testing localhost:32168.
A few times the face is recognized, but other times it just responds with an error; same issue with 2.0.6-Beta.
In err_trace I'm getting this message:
File \"C:\\Program Files\\CodeProject\\AI\\AnalysisLayer\\bin\\windows\\python37\\venv\\lib\\site-packages\\torch\\nn\\modules\\conv.py\", line 443, in _conv_forward\n self.padding, self.dilation, self.groups)\nRuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR\n"
"An Error occured during processing"
System Info:
Operating System: Windows (Microsoft Windows 10.0.19042)
CPUs: 1 CPU x 4 cores. 8 logical processors (x64)
GPU: (NVidia) CUDA: 11.4 Compute:
System RAM: 16 GiB
Target: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
.NET framework: .NET 7.0.2
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Video adapter info:
NVIDIA GeForce MX450:
Adapter RAM 2 GiB
Driver Version 30.0.14.7298
Video Processor NVIDIA GeForce MX450
Intel(R) Iris(R) Xe Graphics:
Adapter RAM 1,024 MiB
Driver Version 30.0.101.1404
Video Processor Intel(R) Iris(R) Xe Graphics Family
Global Environment variables:
CPAI_APPROOTPATH = C:\Program Files\CodeProject\AI
CPAI_PORT = 32168
|
|
|
|