|
wabash11 wrote: are there going to be updates to this
Updates to the latest version? We're always updating, so yes.
wabash11 wrote: why codeproject/ai-server:gpu and codeproject/ai-server are separate
The GPU image is twice the size of the CPU image, so the CPU version is provided for those who know they won't need the GPU bloat.
(and: you're very welcome!)
cheers
Chris Maunder
|
Hello,
Currently I am using the latest Docker image of CodeProject.AI Server (codeproject/ai-server:gpu v2.5.4). I use a reverse proxy to access the container, so I map fqdn:443 to serverfqdn:32168.
It was working like a charm up until v2.5.0 (I am not sure of the exact version).
The problem
Now, when accessing fqdn:443 from any device, the web server is unable to display the website correctly; it tells me that it can't check for updates or report logs or status.
What I tried (without success):
- Changing the Docker environment variables CPAI_PORT or ASPNETCORE_URLS (or both) did not change the port the container listens on
- Changing appsettings.json or appsettings.docker.json had no effect in the container (IPv6, CPAI port, legacy port, …)
- Changing serversettings.json had no effect in the container (disabling IPv6, changing the CPAI port, disabling the legacy port, …)
Workaround:
The only way I made it work was to edit /app/server/wwwroot/assets/server.js and replace the string `${apiServiceProtocol}//${apiServiceHostname}${apiServicePort}` with "https://fqdn" (without the port number).
This fixes the Service URL with the correct port (443 instead of 32168), which allows the interface to work fully.
Question
Do you have a clean way to fix the Service URL or the CPAI port?
I am also interested in disabling the legacy port and IPv6.
Thank you so much for this project!
PS: Sorry for my English.
|
The Docker image exposes ports 32168 and 5000, so changing the port inside the container won't work because the new port isn't exposed. What about mapping ports on the docker command line? -p &lt;host port&gt;:&lt;container port&gt;
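As a sketch, the same mapping in a hypothetical docker-compose fragment (the service name and image tag here are placeholders; adjust the host port for your setup):

```yaml
# Publish host port 32168 to the container's exposed 32168.
# A reverse proxy can then forward https://fqdn:443 to this port.
services:
  codeprojectai:
    image: codeproject/ai-server:gpu
    ports:
      - "32168:32168"   # <host port>:<container port>
```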
Disabling legacy port and IPv6:
In appsettings.json, under "ServerOptions", there are:
DisableLegacyPort - set to true to disable port 5000
DisableIPv6 - set to true to disable IPv6
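For example, the relevant section might look something like this (a sketch only; the property placement is inferred from the option names above, so check it against your own appsettings.json):

```json
{
  "ServerOptions": {
    "DisableLegacyPort": true,
    "DisableIPv6": true
  }
}
```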
cheers
Chris Maunder
|
Hello Chris,
Thank you for taking the time to answer.
About appsettings, you're right. I didn't know that the exposed ports are predefined and are not automatically updated by the container. Changing DisableLegacyPort in appsettings.json does work and, as you said, Docker continues to expose port 5000. So after disabling the legacy port I have:
root@docker:/mnt/docker/cctv# docker exec -it cctv-codeprojectai curl -I http://localhost:5000
curl: (7) Failed to connect to localhost port 5000 after 0 ms: Connection refused
Before disabling the legacy port I had:
root@docker:~# docker exec -it cctv-codeprojectai curl -I http://localhost:5000
HTTP/1.1 200 OK
Content-Length: 29083
Content-Type: text/html
Date: Sat, 24 Feb 2024 11:12:57 GMT
Server: Kestrel
Accept-Ranges: bytes
ETag: "1da5ec134e5181b"
Last-Modified: Tue, 13 Feb 2024 21:11:27 GMT
Also, changing CPAI_PORT makes the container listen on both 32168 and the configured port, but the container does not automatically expose the new port, so this does not help either.
About mapping ports on the docker command line: this is what I actually do. I map host port 32168 to container port 32168, then my reverse proxy maps https://virtualserverFQDN:443 to http://dockerFQDN:32168. I can access the web GUI of CodeProject.AI Server without any problem.
But the interface does not work correctly: the Service URL reports https://virtualserverFQDN:32168 instead of https://virtualserverFQDN:443 (or https://virtualserverFQDN). The web interface reports the status as Offline and is not able to check whether an update is available. All tabs (Status, Server Logs, System Info, Mesh, Install Modules) are empty. So it seems that the port is not correctly detected.
I checked which version broke my setup:
codeproject/ai-server:cuda12_2-2.5.0 (2.5-RC4): works with the reverse proxy - Service URL correctly indicates https://virtualserverFQDN (without port 32168)
codeproject/ai-server:cuda12_2-2.5.1 (2.5.1): DOES NOT work with the reverse proxy - Service URL incorrectly indicates https://virtualserverFQDN:32168 (with port 32168)
So starting from version 2.5.1, the Service URL is not correctly detected. I compared /app/server/wwwroot/assets/server.js between versions, and it changed starting from v2.5.1 (which corroborates my tests).
To work around the issue I can use http://dockerFQDN:32168 directly (without the reverse proxy, but I prefer not to do that), or I can modify /app/server/wwwroot/assets/server.js and force the Service URL.
EDIT 2024-02-25:
I think the problem comes from this line in server.js:
const apiServicePort = ":" + (window.location.port || 32168);
I tried with port 444 and the port was correctly detected. It seems that for a default port (443 for https), window.location.port returns an empty string, so 32168 is substituted in the line above, and apiServiceUrl is then built with the wrong port:
const apiServiceUrl = `${apiServiceProtocol}//${apiServiceHostname}${apiServicePort}`;
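To make the failure mode concrete, here is a minimal sketch (runnable under Node, with window.location mocked as a plain object) mirroring the two lines above:

```javascript
// Minimal sketch mocking window.location to show why default ports break.
// On https at the default port 443, location.port is the empty string,
// so the `|| 32168` fallback wrongly kicks in.
function buildServiceUrl(location) {
    const apiServiceProtocol = location.protocol;
    const apiServiceHostname = location.hostname;
    const apiServicePort    = ":" + (location.port || 32168);
    return `${apiServiceProtocol}//${apiServiceHostname}${apiServicePort}`;
}

// Non-default port: detected correctly
console.log(buildServiceUrl({ protocol: "https:", hostname: "fqdn", port: "444" }));
// → https://fqdn:444

// Default port: location.port is "", so 32168 is wrongly substituted
console.log(buildServiceUrl({ protocol: "https:", hostname: "fqdn", port: "" }));
// → https://fqdn:32168
```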
To solve my issue it would be great to fix server.js (hoping there are no dependencies) by replacing
const apiServiceUrl = `${apiServiceProtocol}//${apiServiceHostname}${apiServicePort}`;
with
let apiServiceUrl = `${apiServiceProtocol}//${apiServiceHostname}`;
if (window.location.port !== "")
    apiServiceUrl = `${apiServiceProtocol}//${apiServiceHostname}${apiServicePort}`;
modified 25-Feb-24 5:00am.
|
I'm observing 40-50% CPU and 2 GB+ memory usage for the ObjectDetectionYOLOv5Net process during startup. Requests are queued and dequeued without being processed while in this state, which lasts approximately 5 minutes, with 0% load on the Intel GPU until the issue resolves itself. System Info and startup log below:
Server version: 2.5.4
System: Windows
Operating System: Windows (Microsoft Windows 11 version 10.0.22631)
CPUs: 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz (Intel)
1 CPU x 8 cores. 16 logical processors (x64)
GPU (Primary): Intel(R) UHD Graphics 750 (128 MiB) (Intel Corporation)
Driver: 31.0.101.5330
System RAM: 16 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
.NET framework: .NET 7.0.16
Default Python:
Video adapter info:
Intel(R) UHD Graphics 750:
Driver Version 31.0.101.5330
Video Processor Intel(R) UHD Graphics Family
System GPU info:
GPU 3D Usage 15%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
2024-02-22 21:02:17: *** STARTING CODEPROJECT.AI SERVER
2024-02-22 21:02:17: RUNTIMES_PATH = C:\Program Files\CodeProject\AI\runtimes
2024-02-22 21:02:17: PREINSTALLED_MODULES_PATH = C:\Program Files\CodeProject\AI\preinstalled-modules
2024-02-22 21:02:17: MODULES_PATH = C:\Program Files\CodeProject\AI\modules
2024-02-22 21:02:17: PYTHON_PATH = \bin\windows\%PYTHON_NAME%\venv\Scripts\python
2024-02-22 21:02:17: Data Dir = C:\ProgramData\CodeProject\AI
2024-02-22 21:02:17: ** Server version: 2.5.4
2024-02-22 21:02:17: ModuleRunner Start
2024-02-22 21:02:17: Starting Background AI Modules
2024-02-22 21:02:20: Running module using: launcher
2024-02-22 21:02:20:
2024-02-22 21:02:20: Attempting to start ObjectDetectionYOLOv5Net with C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5Net\bin\ObjectDetectionYOLOv5Net.exe
2024-02-22 21:02:20: Starting C:\Program Files...in\ObjectDetectionYOLOv5Net.exe
2024-02-22 21:02:20:
2024-02-22 21:02:20: ** Module 'Object Detection (YOLOv5 .NET)' 1.9.3 (ID: ObjectDetectionYOLOv5Net)
2024-02-22 21:02:20: ** Valid: True
2024-02-22 21:02:20: ** Module Path: <root>\modules\ObjectDetectionYOLOv5Net
2024-02-22 21:02:20: ** AutoStart: True
2024-02-22 21:02:20: ** Queue: objectdetection_queue
2024-02-22 21:02:20: ** Runtime: dotnet
2024-02-22 21:02:20: ** Runtime Loc: Shared
2024-02-22 21:02:20: ** FilePath: bin\ObjectDetectionYOLOv5Net.exe
2024-02-22 21:02:20: ** Pre installed: False
2024-02-22 21:02:20: ** Start pause: 1 sec
2024-02-22 21:02:20: ** Parallelism: 0
2024-02-22 21:02:20: ** LogVerbosity:
2024-02-22 21:02:20: ** Platforms: all
2024-02-22 21:02:20: ** GPU Libraries: installed if available
2024-02-22 21:02:20: ** GPU Enabled: enabled
2024-02-22 21:02:20: ** Accelerator:
2024-02-22 21:02:20: ** Half Precis.: enable
2024-02-22 21:02:20: ** Environment Variables
2024-02-22 21:02:20: ** CUSTOM_MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5Net\custom-models
2024-02-22 21:02:20: ** MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5Net\assets
2024-02-22 21:02:20: ** MODEL_SIZE = medium
2024-02-22 21:02:20:
2024-02-22 21:02:20: Started Object Detection (YOLOv5 .NET) module
2024-02-22 21:02:24: Current Version is 2.5.4
2024-02-22 21:02:24: Server: This is the latest version
2024-02-22 21:03:13: ObjectDetectionYOLOv5Net.exe: Application started. Press Ctrl+C to shut down.
2024-02-22 21:03:13: ObjectDetectionYOLOv5Net.exe: Hosting environment: Production
2024-02-22 21:03:13: ObjectDetectionYOLOv5Net.exe: Content root path: C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5Net
2024-02-22 21:03:14: Object Detection (YOLOv5 .NET): Object Detection (YOLOv5 .NET) module started. in Object Detection (YOLOv5 .NET)
2024-02-22 21:04:17: Client request 'custom' in queue 'objectdetection_queue' (#reqid bbfea436-78f4-4f90-b368-d325eef6f163)
2024-02-22 21:04:17: Request 'custom' dequeued from 'objectdetection_queue' (#reqid bbfea436-78f4-4f90-b368-d325eef6f163)
2024-02-22 21:04:17: Request 'custom' dequeued from 'objectdetection_queue' (#reqid c70c7395-4285-417b-9b44-cb9c0fbdd71a)
2024-02-22 21:04:17: Client request 'custom' in queue 'objectdetection_queue' (#reqid c70c7395-4285-417b-9b44-cb9c0fbdd71a)
2024-02-22 21:04:18: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 040b92de-3ade-404b-9ee3-aea9d7f93967)
2024-02-22 21:04:18: Client request 'custom' in queue 'objectdetection_queue' (#reqid 040b92de-3ade-404b-9ee3-aea9d7f93967)
2024-02-22 21:04:18: Client request 'custom' in queue 'objectdetection_queue' (#reqid 4bf064bb-8e8b-48c6-bd19-1d07e7bdab79)
2024-02-22 21:04:18: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 4bf064bb-8e8b-48c6-bd19-1d07e7bdab79)
2024-02-22 21:04:18: Client request 'custom' in queue 'objectdetection_queue' (#reqid 5ac4f1f8-2a6a-4233-9c4d-914f14e1bdda)
2024-02-22 21:04:18: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 5ac4f1f8-2a6a-4233-9c4d-914f14e1bdda)
2024-02-22 21:04:18: Client request 'custom' in queue 'objectdetection_queue' (#reqid ee7f240b-e952-42ac-8c2e-3c02f1915c8d)
2024-02-22 21:04:18: Request 'custom' dequeued from 'objectdetection_queue' (#reqid ee7f240b-e952-42ac-8c2e-3c02f1915c8d)
2024-02-22 21:04:18: Client request 'custom' in queue 'objectdetection_queue' (#reqid 91d5dea6-2431-4e3d-91be-7544f4a179b0)
2024-02-22 21:04:18: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 91d5dea6-2431-4e3d-91be-7544f4a179b0)
2024-02-22 21:04:19: Client request 'custom' in queue 'objectdetection_queue' (#reqid adb1e3e2-3085-447d-ae4f-85644ffee055)
2024-02-22 21:04:19: Request 'custom' dequeued from 'objectdetection_queue' (#reqid adb1e3e2-3085-447d-ae4f-85644ffee055)
2024-02-22 21:04:20: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 3669e9c7-af75-4131-aea8-45a90ed79c82)
2024-02-22 21:04:20: Client request 'custom' in queue 'objectdetection_queue' (#reqid 3669e9c7-af75-4131-aea8-45a90ed79c82)
2024-02-22 21:04:20: Client request 'custom' in queue 'objectdetection_queue' (#reqid 89348c0c-a206-42c5-b44b-105d78280000)
2024-02-22 21:04:20: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 89348c0c-a206-42c5-b44b-105d78280000)
2024-02-22 21:06:17: Client request 'custom' in queue 'objectdetection_queue' (#reqid 1b957bde-c79d-4200-8604-c0bb61c35552)
2024-02-22 21:06:17: Client request 'custom' in queue 'objectdetection_queue' (#reqid dce6d651-a7ed-4160-920b-17742e61c90f)
2024-02-22 21:06:17: Client request 'custom' in queue 'objectdetection_queue' (#reqid e0383fcb-4ed9-42f9-ad60-aee20e2fe96d)
2024-02-22 21:06:17: Client request 'custom' in queue 'objectdetection_queue' (#reqid 577ff136-b2c8-426b-baf3-dd3280391b3f)
2024-02-22 21:06:17: Client request 'custom' in queue 'objectdetection_queue' (#reqid fd9a5b0e-a400-48fc-8850-eb325461aa02)
2024-02-22 21:06:18: Client request 'custom' in queue 'objectdetection_queue' (#reqid dea6e5d1-d38d-493d-82d4-c74062771b0b)
2024-02-22 21:06:18: Client request 'custom' in queue 'objectdetection_queue' (#reqid d9e8483b-9630-45f8-9a0f-2fcbe5cc745e)
2024-02-22 21:06:19: Client request 'custom' in queue 'objectdetection_queue' (#reqid 0de7412a-7222-42e5-9336-2682de516982)
2024-02-22 21:06:19: Client request 'custom' in queue 'objectdetection_queue' (#reqid fe4be73c-b5b9-432b-8b10-ac06061ff554)
2024-02-22 21:06:19: Client request 'custom' in queue 'objectdetection_queue' (#reqid 344b0636-51cb-4526-a305-92eb75010225)
2024-02-22 21:07:51: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 1b957bde-c79d-4200-8604-c0bb61c35552)
2024-02-22 21:07:51: Request 'custom' dequeued from 'objectdetection_queue' (#reqid dce6d651-a7ed-4160-920b-17742e61c90f)
2024-02-22 21:07:51: Request 'custom' dequeued from 'objectdetection_queue' (#reqid e0383fcb-4ed9-42f9-ad60-aee20e2fe96d)
2024-02-22 21:07:51: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 577ff136-b2c8-426b-baf3-dd3280391b3f)
2024-02-22 21:07:51: Request 'custom' dequeued from 'objectdetection_queue' (#reqid fd9a5b0e-a400-48fc-8850-eb325461aa02)
2024-02-22 21:07:51: Request 'custom' dequeued from 'objectdetection_queue' (#reqid dea6e5d1-d38d-493d-82d4-c74062771b0b)
2024-02-22 21:07:51: Request 'custom' dequeued from 'objectdetection_queue' (#reqid d9e8483b-9630-45f8-9a0f-2fcbe5cc745e)
2024-02-22 21:07:51: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 0de7412a-7222-42e5-9336-2682de516982)
2024-02-22 21:07:51: Request 'custom' dequeued from 'objectdetection_queue' (#reqid fe4be73c-b5b9-432b-8b10-ac06061ff554)
2024-02-22 21:07:51: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 344b0636-51cb-4526-a305-92eb75010225)
2024-02-22 21:08:16: Request 'custom' dequeued from 'objectdetection_queue' (#reqid ffb59ace-ffec-463c-ad18-b40792e1a61a)
2024-02-22 21:08:16: Client request 'custom' in queue 'objectdetection_queue' (#reqid ffb59ace-ffec-463c-ad18-b40792e1a61a)
2024-02-22 21:08:16: Client request 'custom' in queue 'objectdetection_queue' (#reqid 843ae196-40d8-46e1-bde2-13711cbe1eca)
2024-02-22 21:08:16: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 843ae196-40d8-46e1-bde2-13711cbe1eca)
2024-02-22 21:08:16: Response rec'd from Object Detection (YOLOv5 .NET) command 'custom' (#reqid ffb59ace-ffec-463c-ad18-b40792e1a61a) ['No objects found'] took 137ms
2024-02-22 21:08:16: Response rec'd from Object Detection (YOLOv5 .NET) command 'custom' (#reqid 843ae196-40d8-46e1-bde2-13
|
I've not seen or heard of that one.
Any anti-virus running?
cheers
Chris Maunder
|
Bitdefender Total Security with an exclusion for C:\Program Files\CodeProject
|
Hi Chris,
Thanks for the 2.5.4 update - ALPR is now using the GPU (Tesla P4) without any problem! It sped up the ALPR process almost two times (from avg. 75ms down to avg. 40ms). I also noticed that YOLOv5 6.2 is now faster than the earlier version: it was avg. 55ms and is now down to avg. 25ms or so.
I personally care a lot about speed, so this is great!
|
Awesome to hear this ... I got bit in the previous release with NaN errors and have been hesitant to upgrade from RC9. I think you and I are among the few using the P4 too. What CUDA and driver versions are you running?
|
My CUDA version is 12.2 and the driver is 538.15. I got it from the following Google Cloud page:
Drivers for NVIDIA RTX Virtual Workstation (vWS) | Compute Engine Documentation | Google Cloud[^]
It is the GRD driver. Even though the website says the CUDA version is 16.3 for driver 538.15, my computer shows CUDA 12.2. Either way, it works for me.
Separately, this might not relate to your situation, but when I tried to install the files needed for Python 3.9 (requirements.windows.cuda11_6.txt), I got an error saying I needed Pillow version ">10.0.0". I picked the 11_6 variant because my CUDA version was 11.6 at the time. After I fixed that problem and got it working, I tried a new driver (version 538.15) to see if ALPR would still use CUDA, and it did.
|
CUDA is only at version 12.x, so GRD is hallucinating, I think.
cheers
Chris Maunder
|
Hi, I get this error when using YOLOv5 .NET:
17:03:54:ObjectDetectionYOLOv5Net.exe: fail: CodeProject.AI.Modules.ObjectDetection.YOLOv5.ObjectDetector[0]
17:03:54:ObjectDetectionYOLOv5Net.exe: Unable to load the model at C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5Net\assets\yolov5m.onnx
Any suggestions on how to rectify the problem?
|
Is there a C:\Program Files\CodeProject\AI\modules\install.log file you can paste here?
cheers
Chris Maunder
|
Hi Chris,
There is no file called "install.log"; only
C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5Net
is listed under "modules".
|
Hello,
I'm testing Object Detection (Coral) 2.1.4 with a dual M.2 Coral TPU and I'm not sure I understand everything about it.
What is the difference between the 3 models (MobileNetSSD, YOLOv8 (not working for me), EfficientDetLite)?
Why are detections so bad with the small/medium model sizes, and slow (180ms) with the large model?
Why does Object Detection (YOLOv5 6.2) 1.9.1 detect much better for me (Huge model, 40ms)? I guess because Coral is slow?
Does multi-TPU really work?
Very often, when I try to stop it, the python.exe process is not killed and I need to kill it manually.
Why is the TPU sometimes not detected, so the module runs on the CPU?
Do you think the module can be optimized, or will YOLO on a GPU be better anyway?
Thanks.
|
Here's a write-up of MobileNet vs EfficientDet[^]. YOLO is a separate beast and was designed more for speed than accuracy, and my understanding is that it wasn't developed with resource-constrained systems in mind.
Model size: bigger is typically better for accuracy (more precision, more data), but smaller often means faster (less data to chew through).
Coral slower than YOLOv5 6.2: this depends on your OS and hardware. I find Coral on Windows to be sub-optimal, but still faster than YOLOv5 6.2 when the Coral object detector actually uses the Coral hardware. On Windows, the Coral USB stick I have often stops working. It's great on my Raspberry Pi and Linux boxes, though.
Coral multi-TPU works amazingly well if you have multiple Coral TPU units, and I would definitely recommend PCIe-based cards rather than USB.
Why sometimes the TPU is not detected and the module runs on CPU
I assume you're on Windows? And USB Coral or PCIe? Coral is just a little flaky on Windows.
Do you think the module can be optimized or YOLO on GPU will be better anyway
Only you can answer which module works best for your setup by trying the various modules and different model sizes.
cheers
Chris Maunder
|
I am on Windows 11 with a dual M.2 Coral.
I can see when the Coral is used or not, since the CPU usage differs. With real Coral usage on Large I get 200ms, vs 40ms for my RTX 3070 on YOLOv5 6.2 with the Huge model.
And in real usage with Blue Iris, detections are less precise with Coral than with the GPU.
|
I would not expect good performance from the Coral TPU with the large YOLO models. The large models are the size of roughly 8 TPUs’ caches. (I have 8 TPUs in my machine and get about 10 FPS with the large YOLO models.)
|
In what way does YOLOv8 not work for you? Once it works, that’s what I would recommend in the small or medium sizes.
The multi-TPU code won’t generally run a single inference much faster, but it will run many in parallel so you should have double the throughput or better.
|
I get the following using YoloV8 with a PCI Coral when testing an image.
10:14:29:Started Object Detection (Coral) module
10:14:30:objectdetection_coral_adapter.py: Using model yolov8, size small
10:14:30:objectdetection_coral_adapter.py: TPU detected
10:14:30:objectdetection_coral_adapter.py: Using Edge TPU
10:14:43:Object Detection (Coral): [IndexError] : Traceback (most recent call last):
File "/app/modules/ObjectDetectionCoral/objectdetection_coral_adapter.py", line 167, in _do_detection
result = do_detect(opts, img, score_threshold)
File "/app/modules/ObjectDetectionCoral/objectdetection_coral.py", line 222, in do_detect
objs = detect.get_objects(interpreter, score_threshold, scale)
File "/app/modules/ObjectDetectionCoral/bin/linux/python39/venv/lib/python3.9/site-packages/pycoral/adapters/detect.py", line 214, in get_objects
elif common.output_tensor(interpreter, 3).size == 1:
File "/app/modules/ObjectDetectionCoral/bin/linux/python39/venv/lib/python3.9/site-packages/pycoral/adapters/common.py", line 29, in output_tensor
return interpreter.tensor(interpreter.get_output_details()[i]['index'])()
IndexError: list index out of range
|
Try using it with the multi-TPU code enabled.
|
Thanks, that helped. Multi-model definitely gives odd detections that I can't narrow down: it only finds one car and one person over and over again, no matter what size/model I use. As soon as I turn it off, I see differences when changing models/sizes. Is there any info I can provide that would help narrow it down?
|
Hm. That’s probably a question for Chris. He’s more familiar with how the CPAI modules are put together and how to debug them. It may be defaulting to the small model, which should still give reasonable performance.
|
I'm also seeing this error with YOLOv8; either it just hangs, or I get the error below:
13:14:16:objectdetection_coral_adapter.py: E driver/mmio_driver.cc:254] HIB Error. hib_error_status = 0000000000000002, hib_first_error_status = 0000000000000002
13:14:16:objectdetection_coral_adapter.py: E driver/mmio_driver.cc:254] HIB Error. hib_error_status = 0000000000000002, hib_first_error_status = 0000000000000002
I've been unable to get YOLOv8 working with the Coral PCIe in CPAI 2.5.4 on Windows at all, with any model size or multi-TPU setting. I've occasionally had MobileNet SSD or EfficientDet-Lite working, but the accuracy tends to be 60% confidence for a person vs 85% confidence with YOLO + ipcam models, or it won't detect a person at all in night-time images.
So I'm keen to get YOLO working with some custom ipcam models on Coral - but that just doesn't seem possible yet, or ever.
|
Googling that HIB error doesn't give exact results, but lots of potential causes pop up, like hardware issues and kernel-level issues. I haven't seen it on my own system. Maybe the solution is to just run YOLOv5 or EfficientDet?
|