|
I'm done trying this for now.
I've also been trying to get a handle on a memory leak that started on 6/6 when I installed 2.6.5. Blue Iris grows from ~600 MB to ~9 GB over about 15 hours, and eventually shows only "nothing found".
Back to Old Faithful.
|
|
|
|
|
Hi,
Following an upgrade to CPAI 2.6.5 on Ubuntu I have seen the Coral module crash several times, with the UI showing "Lost Contact" with the module and the following log messages:
09:02:11:Response rec'd from Object Detection (Coral) command 'detect' (...ee25ac) [''] took 21ms
09:03:17:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
09:03:18:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
09:04:13:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
09:04:15:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
09:04:17:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
09:04:17:objectdetection_coral_adapter.py: WARNING:root:Pipe thread didn't join!
and again a few hours later:
11:48:52:Response rec'd from Object Detection (Coral) command 'detect' (...f1837e) [''] took 19ms
11:50:22:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
11:51:22:objectdetection_coral_adapter.py: WARNING:root:Pipe thread didn't join!
I have a second instance that is still on 2.6.2 which has not exhibited this behaviour, so I assume this is an issue with the new code.
modified 11-Jun-24 8:34am.
|
|
|
|
|
Thanks very much for your report. Could you please share your System Info tab from your CodeProject.AI Server dashboard?
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
sure..
Server version: 2.6.5
System: Linux
Operating System: Linux (Ubuntu 22.04)
CPUs: Intel(R) Xeon(R) CPU E3-1268L v3 @ 2.30GHz (Intel)
4 CPUs x 1 core. 1 logical processors (x64)
System RAM: 2 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.19
.NET SDK: 7.0.119
Default Python: 3.10.12
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
VMware SVGA II Adapter:
Driver Version
Video Processor
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
and from the other system that's not exhibiting the same behaviour:
Server version: 2.6.2
System: Linux
Operating System: Linux (Ubuntu 22.04)
CPUs: Intel(R) Xeon(R) CPU E3-1268L v3 @ 2.30GHz (Intel)
4 CPUs x 1 core. 1 logical processors (x64)
System RAM: 2 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.19
.NET SDK: 7.0.119
Default Python: 3.10.12
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
VMware SVGA II Adapter:
Driver Version
Video Processor
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
Thanks very much for that. We have an update to the module we are going to deploy. Stay tuned.
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
Sadly, still broken:
09:56:41:ObjectDetectionCoral went quietly
09:56:41:
09:56:41:Module 'Object Detection (Coral)' 2.3.1 (ID: ObjectDetectionCoral)
09:56:41:Valid: True
09:56:41:Module Path: <root>/modules/ObjectDetectionCoral
09:56:41:Module Location: Internal
09:56:41:AutoStart: True
09:56:41:Queue: objectdetection_queue
09:56:41:Runtime: python3.9
09:56:41:Runtime Location: Local
09:56:41:FilePath: objectdetection_coral_adapter.py
09:56:41:Start pause: 1 sec
09:56:41:Parallelism: 16
09:56:41:LogVerbosity:
09:56:41:Platforms: all
09:56:41:GPU Libraries: installed if available
09:56:41:GPU: use if supported
09:56:41:Accelerator:
09:56:41:Half Precision: enable
09:56:41:Environment Variables
09:56:41:CPAI_CORAL_MODEL_NAME = YOLOv8
09:56:41:CPAI_CORAL_MULTI_TPU = False
09:56:41:MODELS_DIR = <root>/modules/ObjectDetectionCoral/assets
09:56:41:MODEL_SIZE = Small
09:56:41:
09:56:41:Started Object Detection (Coral) module
09:56:42:objectdetection_coral_adapter.py: A module that was compiled using NumPy 1.x cannot be run in
09:56:42:objectdetection_coral_adapter.py: NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
09:56:42:objectdetection_coral_adapter.py: versions of NumPy, modules must be compiled with NumPy 2.0.
09:56:42:objectdetection_coral_adapter.py: Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
09:56:42:objectdetection_coral_adapter.py: If you are a user of the module, the easiest solution will be to
09:56:42:objectdetection_coral_adapter.py: downgrade to 'numpy<2' or try to upgrade the affected module.
09:56:42:objectdetection_coral_adapter.py: We expect that some modules will need time to support NumPy 2.
09:56:42:objectdetection_coral_adapter.py: Traceback (most recent call last): File "/usr/lib/python3.9/threading.py", line 937, in _bootstrap
09:56:42:objectdetection_coral_adapter.py: self._bootstrap_inner()
09:56:42:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/threading.py", line 980, in _bootstrap_inner
09:56:42:objectdetection_coral_adapter.py: self.run()
09:56:42:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/threading.py", line 917, in run
09:56:42:objectdetection_coral_adapter.py: self._target(*self._args, **self._kwargs)
09:56:42:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/concurrent/futures/thread.py", line 83, in _worker
09:56:42:objectdetection_coral_adapter.py: work_item.run()
09:56:42:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/concurrent/futures/thread.py", line 58, in run
09:56:42:objectdetection_coral_adapter.py: result = self.fn(*self.args, **self.kwargs)
09:56:42:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_adapter.py", line 45, in initialise
09:56:42:objectdetection_coral_adapter.py: self.enable_GPU = self.system_info.hasCoralTPU
09:56:42:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/bin/linux/python39/venv/lib/python3.9/site-packages/codeproject_ai_sdk/system_info.py", line 344, in hasCoralTPU
09:56:42:objectdetection_coral_adapter.py: from pycoral.utils.edgetpu import list_edge_tpus
09:56:42:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/bin/linux/python39/venv/lib/python3.9/site-packages/pycoral/utils/edgetpu.py", line 24, in
09:56:42:objectdetection_coral_adapter.py: from pycoral.pybind._pywrap_coral import GetRuntimeVersion as get_runtime_version
09:56:42:objectdetection_coral_adapter.py: AttributeError: _ARRAY_API not found
09:56:45:Object Detection (Coral): [SystemError] : Error during main_loop: Traceback (most recent call last):
File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/bin/linux/python39/venv/lib/python3.9/site-packages/codeproject_ai_sdk/module_runner.py", line 576, in main_loop
output = await callbacktask
File "/usr/lib/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_adapter.py", line 90, in process
return self._list_models()
File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_adapter.py", line 161, in _list_models
from objectdetection_coral_singletpu import list_models
File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_singletpu.py", line 65, in
from pycoral.utils.edgetpu import make_interpreter
File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/bin/linux/python39/venv/lib/python3.9/site-packages/pycoral/utils/edgetpu.py", line 24, in
from pycoral.pybind._pywrap_coral import GetRuntimeVersion as get_runtime_version
SystemError: initialization of _pywrap_coral raised unreported exception
09:56:45:objectdetection_coral_adapter.py: Unable to load OpenCV or numpy modules. Only using PIL.
09:56:39:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_singletpu.py", line 65, in
09:56:39:objectdetection_coral_adapter.py: from pycoral.utils.edgetpu import make_interpreter
09:56:39:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/bin/linux/python39/venv/lib/python3.9/site-packages/pycoral/utils/edgetpu.py", line 24, in
09:56:39:objectdetection_coral_adapter.py: from pycoral.pybind._pywrap_coral import GetRuntimeVersion as get_runtime_version
09:56:39:objectdetection_coral_adapter.py: AttributeError: _ARRAY_API not found
09:56:39:objectdetection_coral_adapter.py: TPU detected
09:56:39:objectdetection_coral_adapter.py: An exception occurred initialising the module: initialization of _pywrap_coral raised unreported exception
09:56:39:objectdetection_coral_adapter.py: A module that was compiled using NumPy 1.x cannot be run in
09:56:39:objectdetection_coral_adapter.py: NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
09:56:39:objectdetection_coral_adapter.py: versions of NumPy, modules must be compiled with NumPy 2.0.
09:56:39:objectdetection_coral_adapter.py: Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
09:56:39:objectdetection_coral_adapter.py: If you are a user of the module, the easiest solution will be to
09:56:39:objectdetection_coral_adapter.py: downgrade to 'numpy<2' or try to upgrade the affected module.
09:56:39:objectdetection_coral_adapter.py: We expect that some modules will need time to support NumPy 2.
09:56:39:objectdetection_coral_adapter.py: Traceback (most recent call last): File "/usr/lib/python3.9/threading.py", line 937, in _bootstrap
09:56:39:objectdetection_coral_adapter.py: self._bootstrap_inner()
09:56:39:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/threading.py", line 980, in _bootstrap_inner
09:56:39:objectdetection_coral_adapter.py: self.run()
09:56:39:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/threading.py", line 917, in run
09:56:39:objectdetection_coral_adapter.py: self._target(*self._args, **self._kwargs)
09:56:39:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/concurrent/futures/thread.py", line 83, in _worker
09:56:39:objectdetection_coral_adapter.py: work_item.run()
09:56:39:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/concurrent/futures/thread.py", line 58, in run
09:56:39:objectdetection_coral_adapter.py: result = self.fn(*self.args, **self.kwargs)
09:56:39:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_adapter.py", line 90, in process
09:56:39:objectdetection_coral_adapter.py: return self._list_models()
09:56:39:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_adapter.py", line 161, in _list_models
09:56:39:objectdetection_coral_adapter.py: from objectdetection_coral_singletpu import list_models
09:56:39:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_singletpu.py", line 65, in
09:56:39:objectdetection_coral_adapter.py: from pycoral.utils.edgetpu import make_interpreter
09:56:39:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/bin/linux/python39/venv/lib/python3.9/site-packages/pycoral/utils/edgetpu.py", line 24, in
09:56:39:objectdetection_coral_adapter.py: from pycoral.pybind._pywrap_coral import GetRuntimeVersion as get_runtime_version
09:56:39:objectdetection_coral_adapter.py: AttributeError: _ARRAY_API not found
09:56:39:Object Detection (Coral): [SystemError] : Error during main_loop: Traceback (most recent call last):
File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/bin/linux/python39/venv/lib/python3.9/site-packages/codeproject_ai_sdk/module_runner.py", line 576, in main_loop
output = await callbacktask
File "/usr/lib/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_adapter.py", line 90, in process
return self._list_models()
File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_adapter.py", line 161, in _list_models
from objectdetection_coral_singletpu import list_models
File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_singletpu.py", line 65, in
from pycoral.utils.edgetpu import make_interpreter
File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/bin/linux/python39/venv/lib/python3.9/site-packages/pycoral/utils/edgetpu.py", line 24, in
from pycoral.pybind._pywrap_coral import GetRuntimeVersion as get_runtime_version
SystemError: initialization of _pywrap_coral raised unreported exception
09:56:41:ObjectDetectionCoral went quietly
13:15:25:
13:15:25:Started Object Detection (Coral) module
13:15:27:objectdetection_coral_adapter.py: A module that was compiled using NumPy 1.x cannot be run in
13:15:27:objectdetection_coral_adapter.py: NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
13:15:27:objectdetection_coral_adapter.py: versions of NumPy, modules must be compiled with NumPy 2.0.
13:15:27:objectdetection_coral_adapter.py: Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
13:15:27:objectdetection_coral_adapter.py: If you are a user of the module, the easiest solution will be to
13:15:27:objectdetection_coral_adapter.py: downgrade to 'numpy<2' or try to upgrade the affected module.
13:15:27:objectdetection_coral_adapter.py: We expect that some modules will need time to support NumPy 2.
13:15:27:objectdetection_coral_adapter.py: Traceback (most recent call last): File "/usr/lib/python3.9/threading.py", line 937, in _bootstrap
13:15:27:objectdetection_coral_adapter.py: self._bootstrap_inner()
13:15:27:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/threading.py", line 980, in _bootstrap_inner
13:15:27:objectdetection_coral_adapter.py: self.run()
13:15:27:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/threading.py", line 917, in run
13:15:27:objectdetection_coral_adapter.py: self._target(*self._args, **self._kwargs)
13:15:27:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/concurrent/futures/thread.py", line 83, in _worker
13:15:27:objectdetection_coral_adapter.py: work_item.run()
13:15:27:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/concurrent/futures/thread.py", line 58, in run
13:15:27:objectdetection_coral_adapter.py: result = self.fn(*self.args, **self.kwargs)
13:15:27:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_adapter.py", line 45, in initialise
13:15:27:objectdetection_coral_adapter.py: self.enable_GPU = self.system_info.hasCoralTPU
13:15:27:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/bin/linux/python39/venv/lib/python3.9/site-packages/codeproject_ai_sdk/system_info.py", line 344, in hasCoralTPU
13:15:27:objectdetection_coral_adapter.py: from pycoral.utils.edgetpu import list_edge_tpus
13:15:27:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/bin/linux/python39/venv/lib/python3.9/site-packages/pycoral/utils/edgetpu.py", line 24, in
13:15:27:objectdetection_coral_adapter.py: from pycoral.pybind._pywrap_coral import GetRuntimeVersion as get_runtime_version
13:15:27:objectdetection_coral_adapter.py: AttributeError: _ARRAY_API not found
13:15:30:objectdetection_coral_adapter.py: A module that was compiled using NumPy 1.x cannot be run in
13:15:30:objectdetection_coral_adapter.py: NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
13:15:30:objectdetection_coral_adapter.py: Unable to load OpenCV or numpy modules. Only using PIL.
13:15:30:objectdetection_coral_adapter.py: TPU detected
13:15:30:objectdetection_coral_adapter.py: An exception occurred initialising the module: initialization of _pywrap_coral raised unreported exception
13:15:30:objectdetection_coral_adapter.py: versions of NumPy, modules must be compiled with NumPy 2.0.
13:15:30:objectdetection_coral_adapter.py: Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
13:15:30:objectdetection_coral_adapter.py: If you are a user of the module, the easiest solution will be to
13:15:30:objectdetection_coral_adapter.py: downgrade to 'numpy<2' or try to upgrade the affected module.
13:15:30:objectdetection_coral_adapter.py: We expect that some modules will need time to support NumPy 2.
13:15:30:objectdetection_coral_adapter.py: Traceback (most recent call last): File "/usr/lib/python3.9/threading.py", line 937, in _bootstrap
13:15:30:objectdetection_coral_adapter.py: self._bootstrap_inner()
13:15:30:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/threading.py", line 980, in _bootstrap_inner
13:15:30:objectdetection_coral_adapter.py: self.run()
13:15:30:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/threading.py", line 917, in run
13:15:30:objectdetection_coral_adapter.py: self._target(*self._args, **self._kwargs)
13:15:30:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/concurrent/futures/thread.py", line 83, in _worker
13:15:30:objectdetection_coral_adapter.py: work_item.run()
13:15:30:objectdetection_coral_adapter.py: File "/usr/lib/python3.9/concurrent/futures/thread.py", line 58, in run
13:15:30:objectdetection_coral_adapter.py: result = self.fn(*self.args, **self.kwargs)
13:15:30:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_adapter.py", line 66, in initialise
13:15:30:objectdetection_coral_adapter.py: import objectdetection_coral_singletpu as odcs
13:15:30:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/objectdetection_coral_singletpu.py", line 65, in
13:15:30:objectdetection_coral_adapter.py: from pycoral.utils.edgetpu import make_interpreter
13:15:30:objectdetection_coral_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ObjectDetectionCoral/bin/linux/python39/venv/lib/python3.9/site-packages/pycoral/utils/edgetpu.py", line 24, in
13:15:30:objectdetection_coral_adapter.py: from pycoral.pybind._pywrap_coral import GetRuntimeVersion as get_runtime_version
13:15:30:objectdetection_coral_adapter.py: AttributeError: _ARRAY_API not found
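For what it's worth, the repeated `AttributeError: _ARRAY_API not found` is NumPy's signature symptom for a C extension (here pycoral's `_pywrap_coral`) compiled against the NumPy 1.x ABI being imported under NumPy 2.x. A minimal sketch of the version gate involved (the helper names are illustrative, not part of pycoral):

```python
def numpy_major(version_string: str) -> int:
    """Parse the major version out of an 'X.Y.Z' NumPy version string."""
    return int(version_string.split(".", 1)[0])

def pycoral_abi_compatible(numpy_version: str) -> bool:
    # pycoral's compiled _pywrap_coral extension predates NumPy 2.0, so any
    # 2.x install in the module's venv triggers "_ARRAY_API not found".
    return numpy_major(numpy_version) < 2

print(pycoral_abi_compatible("1.26.4"))  # True: a 1.x install imports cleanly
print(pycoral_abi_compatible("2.0.0"))   # False: the ABI mismatch above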
modified 26-Jun-24 8:17am.
|
|
|
|
|
We've updated the Coral module to 2.3.3, and I've tested it again. Could you please uninstall and reinstall the module, once more with feeling?
Thanks,
Sean Ewington
CodeProject
modified 27-Jun-24 13:52pm.
|
|
|
|
|
Thanks for the update. I have yet to see any errors from Coral 2.3.3 on CPAI 2.6.5...
I'm now running one instance of this and another with Coral 2.2.2 on CPAI 2.6.2; both have run ~120k inferences with only a handful (15-20) of failed inferences.
|
|
|
|
|
Cool project. Thank you.
I've deployed it in an LXC under Proxmox, and another LXC with the full development project. FWIW I also have it running on an RPi 5 with an NVMe drive and a Coral TPU, deployed to Docker with the Dockge interface.
I've successfully run the "Optical Character Recognition" module against images of road signs and of a technical manual in PDF form. It works very well.
For grins I ran the OCR against an image of my terrible cursive handwriting. Of three paragraphs, it got two thirds of the date correct, as I had written it as "month d, yyyy". The remainder of the sample was gibberish.
In my investigations I've come across, among others:
* Transkribus
* Pen2text.com
Pen2text blew me away with the ease of making a simple test on that same sample of my horribly illegible handwriting. It missed two words that frankly looked like a leaky pen.
At any rate, for either of these projects I feel I'd have to hire legal representation to understand the ownership and use of the OCR'ed sources and results.
I'd sure appreciate any tips on open source, self-hosted, trainable OCR software suitable for a collection of perhaps fifty multipage cursive letters written in the same hand six or more decades ago. Once processed, the text would be fed to a model to allow chat over that subject matter.
Bonus points for pointers to open source archival platforms to organize the letters, with an API so that I could correlate the OCRed text to the collection of images. Why reinvent that archiving wheel, so to speak.
Thanks for listening by way of your reading.
Jeff
KF7CRU @jhalbrecht
|
|
|
|
|
Since upgrading to 2.6.5 I get "AI not responding" and no detections. I have to enable the service to start with Blue Iris after every reboot. Also, switching between enabling and disabling the GPU doesn't seem to change anything. I am using an Intel CPU with integrated GPU and used to be able to select "enable GPU". The CodeProject.AI status also does not indicate DirectML, even after several detections. I am using YOLO.NET. Please advise. Everything I mentioned seemed to work fine with 2.6.2.
|
|
|
|
|
Thanks very much for your report. Could you please share your System Info tab from your CodeProject.AI Server dashboard?
One thing, I recommend going to Blue Iris main AI settings and unchecking auto stop/start.
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
Here it is
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: Intel(R) Core(TM) i7-6700T CPU @ 2.80GHz (Intel)
1 CPU x 4 cores. 8 logical processors (x64)
GPU (Primary): Intel(R) HD Graphics 530 (1,024 MiB) (Intel Corporation)
Driver: 31.0.101.2111
System RAM: 16 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.20
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
Intel(R) HD Graphics 530:
Driver Version 31.0.101.2111
Video Processor Intel(R) HD Graphics Family
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
I unchecked start/stop in Blue Iris and restarted the Blue Iris PC. I have to manually start CodeProject.AI in its dashboard, but at least I now see DirectML after I do this. The only problem is that custom models are not displaying in the main AI settings. If I hit the three dots, I get "Refresh AI to display models". It looks like the custom models are not loading for some reason. How would I refresh AI? Please advise.
|
|
|
|
|
Restart the Blue Iris service and the custom model list should update
|
|
|
|
|
Looks like all these steps make things work normally again. I'm assuming this is a temporary measure until further bugs are ironed out?
|
|
|
|
|
Hi,
Since installing the latest version of CodeProject.AI on my Blue Iris server, I get the message "Alert Cancelled AI not responding".
Consequently I do not get any notification on my phone when someone triggers a camera, because no analysis is done.
I reinstalled CodeProject.AI, with no success.
Any suggestion on what to do next?
I am on Windows 10
Thanks,
Michel.
|
|
|
|
|
It works now
I uninstalled CodeProject.AI, then used the software Everything from Voidtools to find every remaining file with CodeProject in its name and deleted them all, then reinstalled and restarted the server, and it works as it should.
Thanks
|
|
|
|
|
This issue exists only with newer CPAI builds and occurs several times a day on different hardware (Intel with a Tesla P4 vs. Ryzen with an RTX 3090):
19:00:49:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5-6.2\detect.py", line 141, in do_detection
det = detector(img, size=640)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 121, in _forward_once
x = m(x) # run
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 74, in forward
xy = (xy * 2 + self.grid[i]) * self.stride[i] # xy
RuntimeError: The size of tensor a (48) must match the size of tensor b (60) at non-singleton dimension 2
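Mechanically, the failing line is YOLOv5's box-decode step, where the predicted `xy` offsets are added to a grid tensor cached from a previous forward pass; the two must agree on the number of grid cells. One plausible trigger (an assumption, not confirmed from these logs) is concurrent requests with differently sized letterboxed inputs sharing one model instance, so a grid cached for one input shape meets predictions laid out for another. A numpy sketch of the invariant being violated (the 48/60 cell counts come from the error text; everything else is illustrative):

```python
import numpy as np

def decode_xy(xy: np.ndarray, grid: np.ndarray, stride: float) -> np.ndarray:
    # Mirrors yolov5/models/yolo.py: xy = (xy * 2 + self.grid[i]) * self.stride[i]
    return (xy * 2 + grid) * stride

xy = np.zeros((1, 3, 48, 2))          # predictions laid out for a 48-cell grid
grid_ok = np.zeros((1, 3, 48, 2))     # grid built for the same input shape
grid_stale = np.zeros((1, 3, 60, 2))  # grid cached for a 60-cell input

decode_xy(xy, grid_ok, 32.0)          # shapes agree: fine
try:
    decode_xy(xy, grid_stale, 32.0)
except ValueError as exc:             # numpy's analogue of torch's RuntimeError
    print("mismatch at dimension 2:", exc)
```

The torch message "at non-singleton dimension 2" is the same complaint: neither 48 nor 60 is 1, so broadcasting cannot reconcile them.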
|
|
|
|
|
Thanks very much for your report. Could you please share your System Info tab from your CodeProject.AI Server dashboard?
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.17763)
CPUs: Intel(R) Xeon(R) CPU E5-2699A v4 @ 2.40GHz (Intel)
1 CPU x 11 cores. 22 logical processors (x64)
GPU (Primary): Tesla P4 (8 GiB) (NVIDIA)
Driver: 538.67, CUDA: 12.2.140 (up to: 12.2), Compute: 6.1, cuDNN: 8.5
System RAM: 64 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.10
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
Microsoft Hyper-V Video:
Driver Version 10.0.17763.2145
Video Processor
NVIDIA Tesla P4:
Driver Version 31.0.15.3867
Video Processor Tesla P4
System GPU info:
GPU 3D Usage 8%
GPU RAM Usage 6.4 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
===================================================================
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: AMD Ryzen 9 7950X 16-Core Processor (AMD)
1 CPU x 16 cores. 32 logical processors (x64)
GPU (Primary): NVIDIA GeForce RTX 3090 (24 GiB) (NVIDIA)
Driver: 555.85, CUDA: 12.5.40 (up to: 12.5), Compute: 8.6, cuDNN: 8.5
System RAM: 64 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 8.0.1
.NET SDK: 8.0.101
Default Python: 3.10.6
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
NVIDIA GeForce RTX 3090:
Driver Version 32.0.15.5585
Video Processor NVIDIA GeForce RTX 3090
System GPU info:
GPU 3D Usage 9%
GPU RAM Usage 2.1 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
Are you able to replicate this issue with any specific image? My guess is there's something about the image itself that's unexpected for the YOLO processor.
Another option is to switch to the .NET YOLO module or the YOLOv8 module and see if that helps.
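If you want to replay a particular saved frame against the server to test this, something like the following works with only the standard library (this assumes the stock `/v1/vision/detection` route and the default port 32168 shown in your System Info; adjust to your setup):

```python
import io
import json
import urllib.request
import uuid

def detection_url(host: str = "localhost", port: int = 32168) -> str:
    # Default object-detection route on CodeProject.AI Server.
    return f"http://{host}:{port}/v1/vision/detection"

def detect(image_path: str, host: str = "localhost", port: int = 32168) -> dict:
    """POST one image as multipart/form-data and return the parsed JSON reply."""
    boundary = uuid.uuid4().hex
    with open(image_path, "rb") as fh:
        image_bytes = fh.read()
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(b'Content-Disposition: form-data; name="image"; filename="frame.jpg"\r\n')
    body.write(b"Content-Type: application/octet-stream\r\n\r\n")
    body.write(image_bytes)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    request = urllib.request.Request(
        detection_url(host, port),
        data=body.getvalue(),
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        return json.load(response)
```

Run it against the exact frame that produced the traceback; if one particular image reliably reproduces the RuntimeError, that narrows things down quickly.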
cheers
Chris Maunder
|
|
|
|
|
Wrong number of channels in the image? (Greyscale?)
|
|
|
|
|
Not sure if this is a similar issue.
I'm running CPAI v2.6.5 in a Docker container (CPU, no CUDA) on Linux Mint 21.2. Blue Iris is running in a Windows 10 VM.
I randomly get these errors in the logs:
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...c92ba6) ['Found person'] took 235ms
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...9d98dc) ['Found person'] took 287ms
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...9d0db5) ['Found person'] took 315ms
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...f491c2) ['Found person'] took 276ms
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...ad6b97)
11:17:03:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "/app/preinstalled-modules/ObjectDetectionYOLOv5-6.2/detect.py", line 141, in do_detection
det = detector(img, size=640)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 121, in _forward_once
x = m(x) # run
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 74, in forward
xy = (xy * 2 + self.grid[i]) * self.stride[i] # xy
RuntimeError: The size of tensor a (60) must match the size of tensor b (48) at non-singleton dimension 2
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...3e06bb) ['Found person'] took 286ms
11:17:03:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...763521) ['Found person'] took 171ms
11:17:03:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...24956a) ['Found person'] took 152ms
11:17:04:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:04:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:04:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...e74617) ['Found person'] took 107ms
11:17:04:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...fc3cf2) ['Found person'] took 112ms
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...2105e0) ['Found person'] took 209ms
11:17:41:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...16dbff) ['Found person'] took 217ms
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...5db6db) ['Found person'] took 301ms
11:17:41:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...84eb3d) ['Found person'] took 321ms
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...295089) ['Found person'] took 225ms
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...bf8b54) ['Found person'] took 239ms
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...6c3d47) ['No objects found'] took 321ms
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...2a2db6) ['No objects found'] took 346ms
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...2d20c3) ['Found person'] took 278ms
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...c15b4f) ['No objects found'] took 327ms
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...916432) ['Found person'] took 331ms
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...6d18a9) ['No objects found'] took 295ms
11:17:42:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "/app/preinstalled-modules/ObjectDetectionYOLOv5-6.2/detect.py", line 141, in do_detection
det = detector(img, size=640)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 121, in _forward_once
x = m(x) # run
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 74, in forward
xy = (xy * 2 + self.grid[i]) * self.stride[i] # xy
RuntimeError: The size of tensor a (48) must match the size of tensor b (60) at non-singleton dimension 2
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...fab41f)
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...ac344d) ['Found person'] took 201ms
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...85f20d) ['No objects found'] took 157ms
11:17:42:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
11:17:42:Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (...57a27c) ['No objects found'] took 95ms
This has been happening for a long time on previous versions. I can't remember when it started, but it was a lot of versions ago. I put it down to Blue Iris sending too many requests too close together and hadn't bothered reporting it until I saw this thread. It doesn't really cause a problem, as the error clears very quickly and detection continues as normal.
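For what it's worth, the tensor-size mismatch in the traceback (48 vs 60 at dimension 2) is consistent with that theory: the YOLOv5 detection head caches a grid sized for the last input, so two requests with different frame sizes racing through the same model can hit a stale grid. A minimal sketch of serializing inference with a lock, using a stand-in object rather than the actual module's detector (all names here are illustrative, not CodeProject.AI code):

```python
import threading

# Hypothetical stand-in for the YOLOv5 detector. The real call,
# detector(img, size=...), rebuilds internal grids per input size,
# which is what makes concurrent access from request threads unsafe.
class FakeDetector:
    def __init__(self):
        self._busy = False
        self.overlap_detected = False

    def __call__(self, img, size=640):
        if self._busy:               # another thread is mid-inference
            self.overlap_detected = True
        self._busy = True
        result = f"detections for {img} at size {size}"
        self._busy = False
        return result

detector = FakeDetector()
inference_lock = threading.Lock()    # one forward pass at a time

def do_detection(img, size=640):
    # With the lock held, the cached grid always matches the
    # current input size, so the size-mismatch race cannot occur.
    with inference_lock:
        return detector(img, size)

threads = [
    threading.Thread(target=do_detection,
                     args=(f"frame{i}", 640 if i % 2 else 480))
    for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(detector.overlap_detected)  # False when calls are serialized
```

This is only an illustration of why serializing requests (or having Blue Iris throttle them) makes the error disappear; the actual fix would belong inside the module's request handling.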
|
|
|
|
|
Hello,
Since Raspberry Pi just announced their new "Raspberry Pi AI Kit" ( https://www.raspberrypi.com/products/ai-kit/[^] ), what would be the possibility of getting a Hailo AI module natively added to CP.AI? I'm currently using a Dual Edge TPU Coral on my Pi 5, but it has its limitations.
|
|
|
|
|
Yeah I saw that - $70 for some serious power is pretty awesome.
The Hailo stack seems straightforward (though their site leaves something to be desired). Without access to the hardware we can't do anything here, but I'm sure it would be a very straightforward exercise for someone to adapt any of the existing object detection modules to use the Hailo models and HailoRT. The segmentation example, for instance, seems super simple.
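As a rough illustration of what "adapting an existing module" could look like, here is a hypothetical sketch of a common backend interface behind which a Hailo implementation could slot in. None of these class or function names come from the actual CodeProject.AI module SDK; they are assumptions for the sake of the example:

```python
from abc import ABC, abstractmethod

# Illustrative adapter interface only -- not the real CodeProject.AI
# module API. Each accelerator backend (CPU, Coral, Hailo) implements
# the same detect() contract, so adding Hailo support would mean
# writing one new backend class rather than a whole new module.
class DetectionBackend(ABC):
    @abstractmethod
    def detect(self, image_bytes: bytes) -> list:
        ...

class CpuBackend(DetectionBackend):
    def detect(self, image_bytes: bytes) -> list:
        # stand-in for a real PyTorch/TFLite inference call
        return [{"label": "person", "confidence": 0.9}]

class HailoBackend(DetectionBackend):
    def detect(self, image_bytes: bytes) -> list:
        # would wrap HailoRT inference here, once hardware is available
        raise NotImplementedError("requires Hailo hardware and HailoRT")

def run_detection(backend: DetectionBackend, image_bytes: bytes) -> list:
    # module code stays backend-agnostic
    return backend.detect(image_bytes)

print(run_detection(CpuBackend(), b"\x00")[0]["label"])  # person
```

The point is only that the surrounding module plumbing (queueing, request parsing, result formatting) is already written; the Hailo-specific part reduces to one inference wrapper.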
cheers
Chris Maunder
|
|
|
|
|