|
I'm struggling with the latest version of CPAI and BI. I'm not sure what is wrong, but essentially I'm getting AI timeouts. Also, inference times, when it is working, are much higher than in previous versions.
UPDATE: Switching back to 2.0.2, I've been able to run BI stable for the last 4 days.
modified 9-Feb-23 9:16am.
|
|
|
|
|
Running an Ubuntu docker with GPU.
Object Detection (YOLOv5 6.2) with ipcam-general is enabled and using the GPU with no issues.
Object Detection (YOLOv5 .NET) is disabled.
I installed the ALPR module. It seems to have installed fine, but there are 2 issues:
- It is not using the GPU. Even when I manually select GPU, it falls back to CPU, showing these errors:
ALPR_adapter.py: /app/modules/ALPR/bin/linux/python38/venv/lib/python3.8/site-packages/skimage/util/dtype.py:27: DeprecationWarning: `np.bool8` is a deprecated alias for `np.bool_`. (Deprecated NumPy 1.24)
ALPR_adapter.py: np.bool8: (False, True),
- It does not detect any license plates, with the error being:
Object Detection (YOLOv5 6.2): /app/AnalysisLayer/ObjectDetectionYolo/custom-models/license-plate.pt does not exist
Object Detection (YOLOv5 6.2): Unable to create YOLO detector for model license-plate
I looked for license-plate.pt in the GitHub repos (so I could download it and put it in the required folder), but I could not find it.
Any idea on how to fix the 2 issues?
|
|
|
|
|
|
Thanks for that.
I have downloaded it and moved it into the custom folder. I tested it and it detects license plates fine now.
The GPU issue remains:
14:27:02:Module 'License Plate Reader' (ID: ALPR)
14:27:02:Active: True
14:27:02:GPU: Support disabled
14:27:02:Parallelism: 0
14:27:02:Platforms: windows,linux,macos,macos-arm64
14:27:02:FilePath: ALPR_adapter.py
14:27:02:ModulePath: ALPR
14:27:02:Install: PostInstalled
14:27:02:Runtime:
14:27:02:Queue: ALPR_queue
14:27:02:Start pause: 1 sec
14:27:02:Valid: True
14:27:02:Environment Variables
14:27:02:PLATE_CONFIDENCE = 0.4
Edit:
Tried again to select GPU for the ALPR module. It still shows CPU in the Status window, but the logs show:
14:30:58:Module 'License Plate Reader' (ID: ALPR)
14:30:58:Active: True
14:30:58:GPU: Support enabled
14:30:58:Parallelism: 0
14:30:58:Platforms: windows,linux,macos,macos-arm64
14:30:58:FilePath: ALPR_adapter.py
14:30:58:ModulePath: ALPR
14:30:58:Install: PostInstalled
14:30:58:Runtime:
14:30:58:Queue: ALPR_queue
14:30:58:Start pause: 1 sec
14:30:58:Valid: True
14:30:58:Environment Variables
14:30:58:PLATE_CONFIDENCE = 0.4
|
|
|
|
|
20:26:40:License Plate Reader: Retrieved ALPR_queue command
20:26:40:Client request 'custom' in the queue (...fd170f)
20:26:40:Request 'custom' dequeued for processing (...fd170f)
20:26:40:Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command
20:26:41:Object Detection (YOLOv5 6.2): Detecting using license-plate
20:26:41:Response received (...fd170f)
20:26:41:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...fd170f) took 96ms
20:26:41:ALPR_adapter.py: [2023/02/02 20:26:41] ppocr WARNING: Since the angle classifier is not initialized, the angle classifier will not be uesd during the forward process
20:26:41:License Plate Reader: [Exception] : Traceback (most recent call last):
I have tried reinstalling the ALPR module and it is still throwing these errors.
Any tips?
Operating System: Windows (Microsoft Windows 10.0.19044)
CPUs: 1 CPU x 6 cores. 12 logical processors (x64)
GPU: NVIDIA GeForce RTX 3070 (8 GiB) (NVidia)
Driver: 528.02 CUDA: 12.0 Compute: 8.6
System RAM: 32 GiB
Target: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
.NET framework: .NET 7.0.2
System GPU info:
GPU 3D Usage 22%
GPU RAM Usage 7.9 GiB
Video adapter info:
NVIDIA GeForce RTX 3070:
Adapter RAM 4 GiB
Driver Version 31.0.15.2802
Video Processor NVIDIA GeForce RTX 3070
Global Environment variables:
CPAI_APPROOTPATH = C:\Program Files\CodeProject\AI
CPAI_PORT = 32168
|
|
|
|
|
Hey all,
The ipcamtalk community sent me here to ask @chris-maunder. If you can read this, I hope you can help me.
I installed 2.0.7-Beta and now I have extremely high analysis times of ~9000ms. Logs for the cancelled alerts don't really show anything obvious. I'm using Blue Iris version 5.6.9.6 on an i5-11500 CPU, 16GB RAM, no discrete GPU. I've got it set to 'default object detection' and using the 'small' model.
Anytime movement is detected by Blue Iris, the CPU usage is pinned at 100% for a solid ~10 seconds per event. I know this is related to CPAI because whenever I disable it completely, my CPU usage doesn't rise past ~35%.
It's so bad that it cripples my camera feeds and reduces them to slideshows as long as CPAI is engaged.
I've tried both 'Object Detection (YOLOv5 .NET)' and 'Object Detection (YOLOv5 6.2)' (separately), to no avail. I've even tried toggling 'use GPU' on and off for each model, and it makes zero difference to my CPU/GPU usage.
Rebooting the machine doesn't help.
Any ideas?
modified 2-Feb-23 9:19am.
|
|
|
|
|
Do you have ALPR enabled in Blue Iris? Is the ALPR module running?
I think I have regained stability by disabling both. Not entirely certain.
|
|
|
|
|
No I don't have anything else running apart from Object Detection (YOLOv5 6.2).
Any ideas? I know my system doesn't have a discrete GPU, but it's fairly new. Surely it shouldn't be performing this badly?
|
|
|
|
|
I'm just another user with similar issues. I wish I understood what's happening and how to fix it.
My system is basically crippled right now. I'm not upset, as I understand that this is all bleeding edge. I do hope it gets resolved soon by the devs.
|
|
|
|
|
It may be bleeding edge, but it seems that other people on older systems don't see the crippling performance hits I'm seeing, so there must be a fix or something I can do to rectify it. My camera system is essentially unusable now.
|
|
|
|
|
BI 5.6.9.7 and CPAI 2.06/2.07 appear to have some real problems playing together on my BI box. Whatever it is, it's definitely not a viable combination on my PC.
At the moment I've rolled back to BI 5.6.8.4, but even with that, CPAI 2.07 still seemed to have all kinds of issues starting, stopping, etc.
The only viable combination for me is BI 5.6.8.4 and CPAI 1.6.8. I've not tried CPAI 2.02 (no idea where to get it).
|
|
|
|
|
I too have this same issue; mine goes from ~15% up to ~96%.
I'm running an AMD CPU with a Radeon RX 550 and 16 GB of memory, with all cameras set to DirectX VA2.
- Since the CPAI update, some of the obvious triggers from the past often do not fire, and when it does trigger, it appears to trigger late (slow analysis timing). Use Custom Models: No (tried it already). Use GPU: No (tried it). ALPR: No. Default Object Detection: Medium. Tried reducing the model size to Small (yes, I rebooted): no change.
Watching task manager during a detection cycle

Not detecting and settling down

It is really frustrating trying to nail down the right settings. 
|
|
|
|
|
Most likely you don't have Blue Iris set up properly. The most common complaint with Blue Iris years ago used to be CPU usage. There is no way a single camera could spike the CPU that hard UNLESS you don't use substreams on your camera feeds. This was a huge pain for me too, but once I did these 2 things, my CPU usage went from wild spikes of 60-80% to less than 3%.
1) Configure substreams on all camera feeds, and set the recording tab settings to record both feeds. Blue Iris is pretty seamless once the extra feed is enabled.
2) Set the GLOBAL camera settings' hardware-accelerated decode to Intel + VPP. Do this at the GLOBAL level so you can set all the individual cameras to DEFAULT. Hit 'restart camera' on all cameras to have the settings take effect.
By far the most improvement is with #1, but #2 is very significant as well.
If I can run the AI on CPU alone on an i7 4790K, you certainly can on that CPU. On my older machine, it would definitely spike the CPU for a second if I had multiple camera activations, but one alone wouldn't peg the CPU at 100%. I ran it that way for a while while testing an early build of CodeProject.AI before GPU support. It performed at an "acceptable" level, but it could hit 100% and slideshow, like you said, if too many cameras activated at the same time.
The last, optional suggestion is to get a "cheap" Nvidia GPU to run the AI. The performance benefit is very significant; most of the load comes off the CPU at that point. Get one with at least 8GB. If it's a dedicated server type, I would say the minimum is 6GB. You can run with 4GB, but you will need to be careful with your AI settings.
Do these things and your CPU will never hit 100% again.
|
|
|
|
|
Listen, I appreciate the earnest reply, but I'm not a BI newbie. I'm very well aware of all of the optimizations, and I've applied them all.
In fact, the second one you listed is now largely redundant, as the improvement from configuring substreams makes selecting 'Intel+VPP' insignificant.
Without CPAI enabled, my BI machine CPU idles at less than 4% and at absolute max (with concurrent remote streams) peaks around 35%.
So I know it's got to do with CPAI. It's not a 'single camera' pinging the CPU. It's CPAI.
I unfortunately don't have a GPU, however I know people with older systems without a GPU who claim to run CPAI within acceptable performance parameters.
Do you have any other suggestions?
|
|
|
|
|
Pardon the basics, but here are a few of my thoughts. I don't have CPAI installed in my BI system, so my config is far different. For testing, I only use ipcam-general and only detect persons. I have probably made every possible user error.
Before looking at BI, you should probably use the AI Explorer to check CPAI performance. I use custom object detection to see detection times (usually around 60 msec) and compare them to BI's times. In the AI server, I stop everything but Object Detection. You are probably using the .NET version; I use YOLOv5 6.2 since I am running on Ubuntu. Test with several testdata images.
1. When I have seen those kinds of times, the BI log usually says AI not responding. I check the "log to file" option and examine the log file for errors.
2. Check the camera/trigger/AI settings for the number of real-time images and the 'one each' interval. Presently, I am using 4 at 100ms.
3. How many zones are you checking (I limit it to one), and what are your max trigger times?
4. For testing, turn off both detect static images and use mainstream.
5. Check the log in the AI server to make sure there are no errors when a trigger occurs.
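To take BI out of the picture entirely, you can also time the server over its HTTP API from a script instead of the AI Explorer. Below is a minimal sketch in Python; the port 32168 and the DeepStack-compatible /v1/vision/detection endpoint are assumptions based on a default CPAI install, and `summarize`/`timed_detect` are hypothetical helper names, not part of CPAI.

```python
import time

# Assumption: CPAI's default port and DeepStack-style detection endpoint.
CPAI_URL = "http://localhost:32168/v1/vision/detection"

def summarize(response: dict) -> str:
    """Condense a CPAI detection response into one line."""
    labels = [p["label"] for p in response.get("predictions", [])]
    return f"{len(labels)} objects: {', '.join(labels)}" if labels else "no objects"

def timed_detect(image_path: str) -> None:
    """POST an image to CPAI and print the round-trip time plus detections."""
    import requests  # third-party: pip install requests
    with open(image_path, "rb") as f:
        start = time.perf_counter()
        resp = requests.post(CPAI_URL, files={"image": f}, timeout=30).json()
    ms = (time.perf_counter() - start) * 1000
    print(f"{ms:.0f} ms round trip, {summarize(resp)}")
```

Run `timed_detect` several times and discard the first one or two results (model warm-up), then compare the average against what BI reports; a large gap would suggest the slowdown is in the BI-to-CPAI hand-off rather than in the model itself.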
>64
Some days the dragon wins. Suck it up.
|
|
|
|
|
That's the thing. In the AI Explorer, the inference times are far quicker than in BI, usually in the range of 100-200ms. I'm using YOLOv5 6.2 on Windows.
My real-time image settings don't differ much from other users who have less powerful systems, and I have them set to every 250ms, not 100ms, so you'd think that would be even less resource-intensive.
Every camera requires a different number of zones. Some cameras have one; others have up to four. The inference times are above ~7000ms regardless of which camera is triggered.
What would turning off 'detect static images' and selecting mainstream images do for me? What would I be looking for? Isn't selecting mainstream images in BI redundant, since it automatically reduces the size of the images anyway before sending them to CPAI?
I've checked the log in the AI server, and here's a sample:
Quote: 24:33:25:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...62efda) took 3919ms
24:33:25:Object Detection (YOLOv5 6.2): Detecting using ipcam-dark
24:33:25:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...b6c3d4) took 4745ms
24:33:25:Response received (...b6c3d4)
24:33:25:Object Detection (YOLOv5 6.2): Detecting using ipcam-dark
24:33:25:Response received (...c3da4e)
24:33:25:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...c3da4e) took 4282ms
24:33:26:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...b6eec4) took 4757ms
24:33:26:Response received (...b6eec4)
24:33:26:Object Detection (YOLOv5 6.2): Detecting using ipcam-dark
24:33:26:Response received (...029b13)
24:33:26:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...029b13) took 4827ms
24:33:26:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...21d530) took 3702ms
24:33:26:Response received (...21d530)
24:33:26:Response received (...6b6e5d)
24:33:26:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...6b6e5d) took 4887ms
24:33:26:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...b90fb8) took 4619ms
24:33:26:Response received (...b90fb8)
24:33:26:Response received (...06283e)
24:33:26:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...06283e) took 4040ms
24:33:26:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...3325a6) took 3370ms
24:33:26:Response received (...3325a6)
24:33:26:Response received (...1d79f7)
24:33:26:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...1d79f7) took 2848ms
24:33:27:Response received (...48dc9d)
24:33:27:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...48dc9d) took 1903ms
24:33:27:Response received (...a1a979)
24:33:27:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...a1a979) took 1645ms
24:33:27:Response received (...88a96c)
24:33:27:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...88a96c) took 1206m
|
|
|
|
|
Turning off 'detect static images' reduces the activity in the AI logs, or at least it used to. I think BI has to send requests periodically to keep track, which maybe keeps it busier. Using the mainstream seems to increase the amount of rendering.
My times, with ipcam-general, detecting persons only, with GPU (Nvidia), run in the 65-75 msec range most of the time. If I change the settings to mainstream, 250 msec, detect static images, and then restart the BI service, my times go over 100 msec, sometimes into the 120s. I use the 3rd or 4th alerts, as the first one or two will be much higher than average. When I go back to my original settings, it seems like I have to restart the BI service to get all the way back. I tend to always restart the service when I make changes.
If the system is busy (I am running the test on an 8-year-old machine), the times will jump up for that trigger. This is why I look in the log file for averages. I am running the latest BI and the latest Docker CPAI with GPU.
The times you are seeing seem way out in left field. I would run some tests with only one camera, the ipcam-general custom model, person only, for several triggers. Maybe try different cameras one at a time.
Sorry, I am out of ideas. Good luck.
>64
Some days the dragon wins. Suck it up.
|
|
|
|
|
Thanks for your input, but I've already tried running it with one camera only and I still get the same inference times.
I don't send mainstream pictures, and I only detect persons and cars using the ipcam-general model. The default model performs even worse.
How can I get in contact with @chris-maunder? I was told by someone at ipcamtalk that he might be able to help me.
|
|
|
|
|
Not really, but I put a great deal of time and effort into optimizing CodeProject.AI to be both highly accurate and use as few resources as possible.
This is what I use:
1) YOLO (python), HIGH model size, but it is the ONLY module enabled.
2) In BI, I configured ipcam-combined. I know a great many people have luck with ipcam-general, but I have found that the detection quality of general is too POOR and not worth the efficiency trade-off. ipcam-general is by far the fastest model out there, but ipcam-combined isn't that much slower. I found that a model with more things in it has fewer false positives. When the model only has people and vehicles, everything looks like a person or vehicle. With enough variety, it can filter out those false positives a lot better. My dog is often a person, but what are you going to do...
I use GPU, but I managed to keep the VRAM usage at a very stable limit of around 2.3-2.4 GB. Overall RAM consumption is only 2.5GB.
Not sure if any of that helps, but...
|
|
|
|
|
No, that doesn't help at all, since you use a GPU, which invalidates everything you said beforehand.
|
|
|
|
|
Member 15744855
I do have BI set up correctly and have been using it for more than 5 years now. Per recommended BI practice, both main and substreams are set to 15 FPS... Hardware acceleration is active. I don't have a single camera; I have 13, and 7 use CPAI when enabled. NOTE: When CPAI is disabled, my CPU load is around 12 to ~40% max. When CPAI is enabled, the spikes reach 96%. What you wrote are some common settings for BI, but in this setup those don't help with the CPAI issues.
|
|
|
|
|
Exactly my point. Thanks for backing me up. Still waiting for @chris-maunder to notice and reply to my post. I don't know how else to get in contact with him.
|
|
|
|
|
I've been watching, but until I have more data I couldn't add much.
I have two suggestions based on what I've seen and our testing
- Make sure you have the latest Blue Iris. I know Ken @ Blue Iris has been making tweaks
- Manually edit the modulesettings.json file in the AnalysisLayer/ObjectDetectionYolo (or ObjectDetectionNet if you're using that one) folder and set
'parallelism': 1
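For anyone unsure where that setting lives, a hypothetical sketch of the relevant fragment of modulesettings.json is below; the surrounding key names and nesting are assumptions and vary between CPAI versions, so check them against your installed file:

```json
{
  "Modules": {
    "ObjectDetectionYolo": {
      "Parallelism": 1
    }
  }
}
```

You will likely need to restart the CodeProject.AI service afterwards for the module to pick up the change.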
cheers
Chris Maunder
|
|
|
|
|
Hi Chris,
Thanks for replying.
I do have the latest BI.
What will adding 'parallelism': 1 do exactly?
I do wish CPAI had Google Coral support, as I have a spare USB TPU on-hand.
|
|
|
|
|
It looks like we may be using too many threads, and that is slowing down the processing.
I'm working on a fix, but tuning the algorithm that chooses the number of threads will need some testing.
"Time flies like an arrow. Fruit flies like a banana."
|
|
|
|