|
Matthew, thanks for the replies. My WMI repository was indeed corrupt. I ran the following procedure and repaired the WMI repository. Once that was done, v2.0.7 installed and ran successfully. It's interesting that v2.0.2 would still install and run. Did something change beginning with v2.0.3 that requires WMI to be valid?
Procedure to repair a corrupted WMI repository that worked for me:
• Disable and stop the winmgmt service
• Remove or rename C:\Windows\System32\wbem\repository
• Enable and start the winmgmt service
• Open Command Prompt as Administrator
• Run the following commands:
cd C:\Windows\System32\wbem\
for /f %s in ('dir /b *.mof') do mofcomp %s
NOTE: This will take a minute or so to complete.
for /f %s in ('dir /b en-us\*.mfl') do mofcomp en-us\%s
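The steps above can be consolidated into a single batch file. This is a sketch, not a tested script: it assumes a default C:\Windows install and an elevated prompt, and note that the interactive `%s` becomes `%%s` inside a script.

```shell
:: repair-wmi.cmd -- hypothetical consolidation of the steps above.
:: Stop and disable the WMI service so the repository can be renamed.
sc config winmgmt start= disabled
net stop winmgmt /y

:: Rename rather than delete the old repository, in case a rollback is needed.
ren C:\Windows\System32\wbem\repository repository.old

:: Re-enable and restart the service; Windows rebuilds a fresh repository.
sc config winmgmt start= auto
net start winmgmt

:: Recompile the MOF/MFL definitions into the new repository.
cd /d C:\Windows\System32\wbem\
for /f %%s in ('dir /b *.mof') do mofcomp %%s
for /f %%s in ('dir /b en-us\*.mfl') do mofcomp en-us\%%s
```

The same result can usually be had by running the commands one at a time in an elevated Command Prompt, as in the list above.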
Hope this helps others with the same issue!
|
|
|
|
|
I'm going to add this to our FAQ. Thank you!
cheers
Chris Maunder
|
|
|
|
|
You're welcome! I thought it was strange that v2.0.2 and earlier weren't affected by this.
|
|
|
|
|
I think we made a change that pulled in a library that uses WMI, and it didn't catch and handle the exception.
"Time flies like an arrow. Fruit flies like a banana."
|
|
|
|
|
Makes sense, thanks for the follow-up!
|
|
|
|
|
My CodeProject.AI Server keeps crashing. I believe this started with v2.0.7.
Could someone please offer a suggestion to help resolve this error?
12:41:25:Started Object Detection (YOLOv5 6.2) module
12:41:26:detect_adapter.py: Traceback (most recent call last):
12:41:26:detect_adapter.py: File "C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\detect_adapter.py", line 21, in
12:41:26:detect_adapter.py: from options import Options
12:41:26:detect_adapter.py: File "C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\options.py", line 2, in
12:41:26:detect_adapter.py: import torch
12:41:26:detect_adapter.py: File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\__init__.py", line 124, in
12:41:26:detect_adapter.py: raise err
12:41:26:detect_adapter.py: OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\lib\caffe2_detectron_ops_gpu.dll" or one of its dependencies.
12:41:26:Module ObjectDetectionYolo has shutdown
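Not confirmed in this thread, but WinError 1455 generally means Windows ran out of page-file-backed commit space while mapping the large CUDA DLLs that torch loads. One common mitigation is to enlarge the Windows page file. A hedged sketch using wmic from an elevated prompt (the sizes, in MB, are illustrative assumptions, and a reboot is needed afterwards):

```shell
:: Turn off automatic page file management, then set an explicit size.
:: InitialSize/MaximumSize below are illustrative, not recommendations.
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=16384,MaximumSize=32768
```

The same settings are reachable via System Properties > Advanced > Performance > Virtual memory if you prefer the GUI.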
|
|
|
|
|
Could this be related? I cannot enable the GPU, which was working fine in the previous beta.
2023-01-28 13:18:15: detect_adapter.py: OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\lib\caffe2_detectron_ops_gpu.dll" or one of its dependencies.
2023-01-28 18:07:05: Started Object Detection (YOLOv5 6.2) module
2023-01-28 18:07:05: detect_adapter.py: Traceback (most recent call last):
2023-01-28 18:07:05: detect_adapter.py: File "C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\detect_adapter.py", line 16, in <module>
2023-01-28 18:07:05: detect_adapter.py: from analysis.codeprojectai import CodeProjectAIRunner
2023-01-28 18:07:05: detect_adapter.py: File "../../SDK/Python\analysis\codeprojectai.py", line 36, in <module>
2023-01-28 18:07:05: detect_adapter.py: from analysis.requestdata import AIRequestData
2023-01-28 18:07:05: detect_adapter.py: File "../../SDK/Python\analysis\requestdata.py", line 8, in <module>
2023-01-28 18:07:05: detect_adapter.py: from PIL import Image
2023-01-28 18:07:05: detect_adapter.py: ImportError: cannot import name 'Image' from 'PIL' (unknown location)
2023-01-28 18:07:05: Module ObjectDetectionYolo has shutdown
modified 28-Jan-23 18:09pm.
|
|
|
|
|
I suggest a full uninstall and removal of C:\Program Files\CodeProject. Then do a full reinstall from scratch, and that should get you going.
|
|
|
|
|
Thank you for that suggestion. In fact, I had tried that already. What I did not do was delete the folder.
After doing that, I reinstalled and wound up with the same issue.
After 3 or 4 hours of playing about, I believe I have it isolated to the use of the plate reader module. As soon as I enable that feature in Blue Iris, the next detection crashes the AI server.
If I test in the Explorer, GPU plate reading always fails. Switching to CPU, I get successful reads. Unfortunately, whichever processor is selected, the AI server crashes when Blue Iris analyzes an image.
I've kept it disabled in Blue Iris for 15 minutes now, with no crashes and no 10,000 ms analysis times.
|
|
|
|
|
After 4 or 5 clean-ups and reinstalls, things seem to be working again.
I think I see what's been happening, and I've removed those bits. 24 hours and all is well.
modified 30-Jan-23 12:52pm.
|
|
|
|
|
I got the same installation issues while installing ALPR as described for 2.0.6. The installation was done from scratch.
- The installation path for python37 is "...\python37\python37\..." and the folder must be manually moved up a level to ensure that the "venv" folder will be created.
- Additionally: ALPR doesn't work with the GPU; it must be switched to CPU.
- Question: Which custom license plate file can be used as an alternative? It seems that only US plates are detected. Testing European plates fails very often. I'm using a model from GitHub (Deepstack), which worked well with the platerecognizer cloud.
Detailed description here:
2.0.6 corrects download module issues
Installed on following system:
Operating System: Windows (Microsoft Windows 10.0.14393)
CPUs: 1 CPU x 10 cores. 20 logical processors (x64)
GPU: NVIDIA T600 (4 GiB) (NVIDIA)
Driver: 30.0.14.7381
System RAM: 64 GiB
Target: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
.NET framework: .NET 7.0.2
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Video adapter info:
NVIDIA T600:
Adapter RAM 4 GiB
Driver Version 30.0.14.7381
Video Processor NVIDIA T600
Global Environment variables:
CPAI_APPROOTPATH = C:\Program Files\CodeProject\AI
CPAI_PORT = 32168
modified 28-Jan-23 6:32am.
|
|
|
|
|
The AI logs are showing the following:
An item with the same key has already been added. Key: ObjectDetectionYolo
Any ideas?
|
|
|
|
|
Can you send me a snippet from the logs (under /logs) that shows the issue?
It might also be worth checking C:\ProgramData\CodeProject\AI\modulesettings.json. It should look like:
{
  "Modules" : {
    "Module1" : { ... },
    "Module2" : { ... }
  }
}
If you see a repeated module name, that could be one issue (but this is unlikely).
When is this happening? When you're installing / uninstalling?
cheers
Chris Maunder
|
|
|
|
|
Hi Chris,
I just noticed this today. See the logs below.
I checked C:\ProgramData\CodeProject\AI\modulesettings.json and all looks OK.
10:16:48: Started Object Detection (YOLOv5 6.2) module
10:16:48: Error trying to start Object Detection (YOLOv5 6.2) (detect_adapter.py)
10:16:48: An item with the same key has already been added. Key: ObjectDetectionYolo
10:16:48: at System.Collections.Generic.Dictionary`2.TryInsert(TKey key, TValue value, InsertionBehavior behavior)
at System.Collections.Generic.Dictionary`2.Add(TKey key, TValue value)
at CodeProject.AI.API.Server.Frontend.ModuleRunner.StartProcess(ModuleConfig module)
10:16:48: *** Please check the CodeProject.AI installation completed successfully
|
|
|
|
|
When is this happening? When you're installing / uninstalling? Starting up?
cheers
Chris Maunder
|
|
|
|
|
|
What version of CodeProject.AI Server are you running?
The latest code does a TryAdd rather than an Add, so it doesn't have this issue.
"Time flies like an arrow. Fruit flies like a banana."
|
|
|
|
|
Is there a way of getting a fully offline installer, or of gathering everything together to install offline? Thanks
djpc
|
|
|
|
|
Not yet.
An offline installer would require a fairly narrow target due to the Python packages involved.
By this I mean that, depending on your OS, your GPU, and your driver/SDK version (e.g. CUDA version for NVIDIA), there are different Python wheels that would have to be downloaded. Not impossible, but it would expand the number of variables we'd have to manage and limit the number of hardware/OS combinations we could support.
It's totally possible. It just requires work, and we're not at the point where we can dive into something like that yet.
cheers
Chris Maunder
|
|
|
|
|
What would be more cost effective in terms of AI performance: running CodeProject.AI Server on a small Windows box with a CUDA-capable video card, or on a Mac mini M2?
|
|
|
|
|
I just ran some benchmarks on my Mac mini M1 vs my Intel i5 with an NVIDIA 3060 GPU.
         YOLOv5 6.1 (Python)    YOLO DirectML (C#)
M1       8 ops/sec              13 ops/sec
CUDA     37 ops/sec             93 ops/sec
Same code, same image, same model.
Mac mini was around $1,100 CDN, and the CUDA machine about $1,250 CDN
I do not yet have an M2, but they are cheaper and more powerful. I don't, however, think they are cheaper or more powerful to the point where $/ops/sec are matched to the CUDA machine. However, the CUDA machine is a beast I have to leave in the garage, while the mini sits on my desk, quiet, fanless, and out of the way.
cheers
Chris Maunder
|
|
|
|
|
Huh, I see; the difference is obvious.
I thought the Apple CPU would be at least in the same league as CUDA from an AI-performance perspective.
But you are right. I can put a Mac mini anywhere and it's going to sit quietly and just work.
OK, then another question: setting cost aside, would increasing the number of cores and/or the amount of RAM make the mini (or Studio) faster for CodeProject.AI tasks?
|
|
|
|
|
That I don't know. Increasing RAM depends on what else is running; 16 GB will be more than enough for simple inferencing plus other apps at the same time. More cores? For my setup, it probably wouldn't help. When I run the benchmark I hit ~65% CPU using the C# YOLO detector (0% GPU), and using the Python MPS-enabled YOLO I get around 35% CPU / 13% GPU usage.
Clearly we have some bottlenecks that need to be opened up before we can get closer to 100% utilisation. I'm not sure if more cores would help, or if it's a disk/bus speed issue, or if our code is just inefficient.
cheers
Chris Maunder
|
|
|
|
|
I ran one benchmark test, traffic-pexels / ipcam-combined.
I reset HWMonitor, and after the test all core performance looked OK (13900K, 24 cores / 32 threads).
And again with pexels-fox / ipcam-combined = 14.3 ops/s.
Nothing hit 100%.
Max 182.08 W on the CPU; max stock boost frequency was 5.5 GHz on most cores, 5.8 GHz on 2 cores only, and 4.3 GHz on all E-cores (max 72°C).
I have 64 GB DDR4-3600, but I doubt this is a bottleneck.
|
|
|
|
|
I literally just tested something similar: a GTX 1080 vs an i9 13900K.
The 13900K loses by a lot, but the ratio is closer to a 5x difference in favor of the GPU.
|
|
|
|