|
It went on until this:
11:03:16:Response timeout. Try increasing the timeout value
11:04:06:ObjectDetectionYOLOv8: Creating Virtual Environment (Local)... Done
11:04:06:ObjectDetectionYOLOv8: Checking for Python 3.8...(Found Python 3.8.18) All good
11:07:22:ObjectDetectionYOLOv8: Upgrading PIP in virtual environment... done
11:08:07:ObjectDetectionYOLOv8: Installing updated setuptools in venv... Done
11:08:07:ObjectDetectionYOLOv8: [models-yolo8-pt.zip]
11:08:07:ObjectDetectionYOLOv8: End-of-central-directory signature not found. Either this file is not
11:08:07:ObjectDetectionYOLOv8: a zipfile, or it constitutes one disk of a multi-part archive. In the
11:08:07:ObjectDetectionYOLOv8: latter case the central directory and zipfile comment will be found on
11:08:07:ObjectDetectionYOLOv8: the last disk(s) of this archive.
11:08:07:ObjectDetectionYOLOv8: unzip: cannot find zipfile directory in one of models-yolo8-pt.zip or
11:08:07:ObjectDetectionYOLOv8: models-yolo8-pt.zip.zip, and cannot find models-yolo8-pt.zip.ZIP, period.
11:08:08:ObjectDetectionYOLOv8: Downloading Standard YOLO models... already exists...Expanding... Done.
11:08:08:ObjectDetectionYOLOv8: Downloading Custom YOLO models... already exists...Expanding... Done.
11:08:08:ObjectDetectionYOLOv8: Moving contents of custom-models-yolo8-pt.zip to custom-models...done.
11:08:08:ObjectDetectionYOLOv8: Installing Python packages for Object Detection (YOLOv8)
11:08:08:ObjectDetectionYOLOv8: Installing GPU-enabled libraries: If available
11:08:09:ObjectDetectionYOLOv8: Searching for python3-pip...All good.
11:08:11:ObjectDetectionYOLOv8: Ensuring PIP compatibility... Done
11:08:11:ObjectDetectionYOLOv8: Python packages will be specified by requirements.linux.cuda11_5.txt
11:16:54:ObjectDetectionYOLOv8: - Installing PyTorch, an open source machine learning framework... (✅ checked) Done
11:19:07:ObjectDetectionYOLOv8: - Installing TorchVision, for working with computer vision models... (✅ checked) Done
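As an aside, the unzip error above ("End-of-central-directory signature not found") almost always means the models-yolo8-pt.zip download was truncated or corrupted, which would fit the response timeout at the top of the log. A minimal sketch of a sanity check for a downloaded zip before re-running the install (standard library only; the filename is just taken from the log above):

```python
import zipfile

def zip_is_valid(path: str) -> bool:
    """Return True if `path` is a readable zip archive whose members
    all pass a CRC check; False for truncated or non-zip files."""
    try:
        with zipfile.ZipFile(path) as zf:
            # testzip() returns the name of the first corrupt member, or None
            return zf.testzip() is None
    except (zipfile.BadZipFile, OSError):
        return False

# e.g. zip_is_valid("models-yolo8-pt.zip")
```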
At this point the CodeProject container crashed and restarted.
Upon entering the app again, I get the following:
11:32:49:System: Docker
11:32:49:Operating System: Linux (Linux 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022)
11:32:49:CPUs: AMD Ryzen Threadripper 3960X 24-Core Processor (AMD)
11:32:49: 1 CPU x 24 cores. 48 logical processors (x64)
11:32:49:GPU (Primary): NVIDIA GeForce RTX 3070 (8 GiB) (NVIDIA)
11:32:49: Driver: 551.23, CUDA: 12.4 (up to: 12.4), Compute: 8.6, cuDNN:
11:32:49:System RAM: 63 GiB
11:32:49:Platform: Linux
11:32:49:BuildConfig: Release
11:32:49:Execution Env: Docker
11:32:49:Runtime Env: Production
11:32:49:.NET framework: .NET 7.0.15
11:32:49:Default Python: 3.10
11:32:49:App DataDir: /etc/codeproject/ai
11:32:49:Video adapter info:
11:32:49:STARTING CODEPROJECT.AI SERVER
11:32:49:RUNTIMES_PATH = /app/runtimes
11:32:49:PREINSTALLED_MODULES_PATH = /app/preinstalled-modules
11:32:49:MODULES_PATH = /app/modules
11:32:49:PYTHON_PATH = /bin/linux/%PYTHON_NAME%/venv/bin/python3
11:32:49:Data Dir = /etc/codeproject/ai
11:32:49:Server version: 2.5.3
11:32:49:Overriding address(es) 'http://+:32168, http://+:5000'. Binding to endpoints defined via IConfiguration and/or UseKestrel() instead.
11:32:52:
11:32:52:Module 'Object Detection (YOLOv5 6.2)' 1.9.1 (ID: ObjectDetectionYOLOv5-6.2)
11:32:52:Valid: True
11:32:52:Module Path: <root>/preinstalled-modules/ObjectDetectionYOLOv5-6.2
11:32:52:AutoStart: True
11:32:52:Queue: objectdetection_queue
11:32:52:Runtime: python3.8
11:32:52:Runtime Loc: Shared
11:32:52:FilePath: detect_adapter.py
11:32:52:Pre installed: True
11:32:52:Start pause: 1 sec
11:32:52:Parallelism: 0
11:32:52:LogVerbosity:
11:32:52:Platforms: all,!raspberrypi,!jetson
11:32:52:GPU Libraries: installed if available
11:32:52:GPU Enabled: enabled
11:32:52:Accelerator:
11:32:52:Half Precis.: enable
11:32:52:Environment Variables
11:32:52:APPDIR = <root>/preinstalled-modules/ObjectDetectionYOLOv5-6.2
11:32:52:CUSTOM_MODELS_DIR = <root>/preinstalled-modules/ObjectDetectionYOLOv5-6.2/custom-models
11:32:52:MODELS_DIR = <root>/preinstalled-modules/ObjectDetectionYOLOv5-6.2/assets
11:32:52:MODEL_SIZE = Medium
11:32:52:USE_CUDA = True
11:32:52:YOLOv5_AUTOINSTALL = false
11:32:52:YOLOv5_VERBOSE = false
11:32:52:
11:32:52:Started Object Detection (YOLOv5 6.2) module
11:32:53:
11:32:53:Module 'Face Processing' 1.10.1 (ID: FaceProcessing)
11:32:53:Valid: True
11:32:53:Module Path: <root>/preinstalled-modules/FaceProcessing
11:32:53:AutoStart: True
11:32:53:Queue: faceprocessing_queue
11:32:53:Runtime: python3.8
11:32:53:Runtime Loc: Shared
11:32:53:FilePath: intelligencelayer/face.py
11:32:53:Pre installed: True
11:32:53:Start pause: 3 sec
11:32:53:Parallelism: 0
11:32:53:LogVerbosity:
11:32:53:Platforms: all,!raspberrypi,!jetson
11:32:53:GPU Libraries: installed if available
11:32:53:GPU Enabled: enabled
11:32:53:Accelerator:
11:32:53:Half Precis.: enable
11:32:53:Environment Variables
11:32:53:APPDIR = <root>/preinstalled-modules/FaceProcessing/intelligencelayer
11:32:53:DATA_DIR = /etc/codeproject/ai
11:32:53:MODE = MEDIUM
11:32:53:MODELS_DIR = <root>/preinstalled-modules/FaceProcessing/assets
11:32:53:PROFILE = desktop_gpu
11:32:53:USE_CUDA = True
11:32:53:YOLOv5_AUTOINSTALL = false
11:32:53:YOLOv5_VERBOSE = false
11:32:53:
11:32:53:Started Face Processing module
11:32:55:Server: This is a new, unreleased version
11:32:56:face.py: GPU in use: NVIDIA GeForce RTX 3070
And when I start the module, it runs on CPU only, not GPU.
Here is my container start command:
docker run --name CodeProjectcuda12_2-2.5.3 -d -p 32168:32168 --gpus all --mount type=bind,source=C:\ProgramData\CodeProject\AI\docker\data,target=/etc/codeproject/ai --mount type=bind,source=C:\ProgramData\CodeProject\AI\docker\modules,target=/app/modules codeproject/ai-server:cuda12_2-2.5.3
And here is my system tab:
Server version: 2.5.3
System: Docker
Operating System: Linux (Linux 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022)
CPUs: AMD Ryzen Threadripper 3960X 24-Core Processor (AMD)
1 CPU x 24 cores. 48 logical processors (x64)
GPU (Primary): NVIDIA GeForce RTX 3070 (8 GiB) (NVIDIA)
Driver: 551.23, CUDA: 12.4 (up to: 12.4), Compute: 8.6, cuDNN:
System RAM: 63 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Docker
Runtime Env: Production
.NET framework: .NET 7.0.15
Default Python: 3.10
Video adapter info:
System GPU info:
GPU 3D Usage 24%
GPU RAM Usage 3.7 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
Hopefully that is everything you need to help me out, hehe.
|
Probably apples and oranges.
I have an old, ancient NVIDIA card, so I have to use nvidia-docker run ... plus the rest of the arguments. I am running a Debian-based Docker image.
>64
It’s weird being the same age as old people. Live every day like it is your last; one day, it will be.
|
I figured it out. Thanks!!
|
What was the problem and solution?
cheers
Chris Maunder
|
I increased the "ModuleInstallTimeout" value to something crazy (8 hours).
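For anyone searching later: treat the exact JSON shape below as an assumption — I'd suggest searching your server's appsettings files for the ModuleInstallTimeout key rather than trusting my nesting. Something along these lines, with 8 hours written as a .NET TimeSpan string:

```json
{
  "ModuleOptions": {
    "ModuleInstallTimeout": "08:00:00"
  }
}
```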
|
Hi,
Where can I download the 2.5.1 .deb installer for Ubuntu? When downloading the one from here, it references the 2.5.1-RC2 package, as you can see in the image.
Thank you.
|
That link downloads the 2.5.1-RC2 version, not 2.5.4 as the zip name suggests.
|
I'm having the same problem. I cleared my old version and installed 2.5.4 fresh, only to find that it reinstalled 2.5.1-RC2.
|
Can you please try downloading again?
cheers
Chris Maunder
|
I've been digging through the discussions trying to see if anyone has tried going from 2.1.9 to 2.5.1 and, if so, whether they were able to do an in-place upgrade. I understand that 2.1.x and later support in-place upgrades, but typically you have to do a couple of hops when the dot-release number is a few versions ahead of where you are. Is that the case in my situation, or should I be able to just run the installer for 2.5.1 and have it (fingers crossed) work?
Thanks in advance
|
It should upgrade in place, but I would do a full install just to be safe, since you will need to update the modules anyway due to schema and SDK updates.
Is there anything specific you're worried about losing? Custom models, logs, settings?
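If it's mainly files you're worried about, a belt-and-braces copy of the server's data directory before upgrading costs nothing. A minimal sketch — the /etc/codeproject/ai path is taken from the logs earlier in this thread, so adjust it for your install:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_dir(src: str, dest_root: str) -> Path:
    """Copy `src` into a timestamped folder under `dest_root`; return the copy's path."""
    src_path = Path(src)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(dest_root) / f"{src_path.name}-{stamp}"
    shutil.copytree(src_path, dest)  # raises if src doesn't exist, so failures are loud
    return dest

# e.g. backup_dir("/etc/codeproject/ai", "/tmp/cpai-backup")
```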
cheers
Chris Maunder
|
Just the typical potential upgrade issues, like files not getting updated or being orphaned, etc. I've also had problems in the past with other products not uninstalling cleanly when moving to a newer version. I'll try the in-place upgrade and, if I have to, I'll remove everything and start with a fresh install.
Thanks,
|
CodeProject.AI runs on localhost:32168. Did you map the folders as shown in the article?
cheers
Chris Maunder
|
I mapped it to another port, 32169, because with 32168 I receive an error that the port is busy or in use by another process.
modified 16-Feb-24 10:21am.
|
Fair enough.
However, did you map folders?
cheers
Chris Maunder
|
source=C:\ProgramData\CodeProject\AI\docker\data and C:\ProgramData\CodeProject\AI\docker\modules don't exist after the last update.
|
I get this error when I map to 32168:
Failed to run image. (HTTP code 500) server error - Ports are not available: exposing port TCP 0.0.0.0:32168 -> 0.0.0.0 : listen tcp 0.0.0.0:32168: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
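That "Only one usage of each socket address" error means something on the host is already listening on 32168 (for example, a native Windows install of CodeProject.AI running alongside Docker). A quick way to check whether a port is actually free before starting the container, using only the Python standard library — just a sketch, the port number is from this thread:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the port; if binding fails, another process holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# e.g. port_is_free(32168)
```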
|
Hi,
When checking my license plate I get this message:
18:36:59:License Plate Reader: [ValueError] : Traceback (most recent call last):
File "/app/modules/ALPR/ALPR_adapter.py", line 57, in process
result = await detect_platenumber(self, self.opts, image)
File "/app/modules/ALPR/ALPR.py", line 132, in detect_platenumber
bounding_box_result = ocr.ocr(numpy_plate, rec=False, cls=False)
File "/app/modules/ALPR/bin/linux/python38/venv/lib/python3.8/site-packages/paddleocr/paddleocr.py", line 674, in ocr
if not dt_boxes:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
18:36:59:Response rec'd from License Plate Reader command 'alpr' (...ee7c57)
No license plate can be checked because of this.
System info:
Server version: 2.5.3
System: Docker
Operating System: Linux (Linux 5.10.55+ #69057 SMP Fri Jan 12 17:02:57 CST 2024)
CPUs: 12th Gen Intel(R) Core(TM) i7-12650H (Intel)
1 CPU x 7 cores. 7 logical processors (x64)
System RAM: 12 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Docker
Runtime Env: Production
.NET framework: .NET 7.0.15
Default Python: 3.10
Video adapter info:
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
There will be an update to the module coming shortly to fix this.
|
Can you please uninstall then reinstall the module? A fix was put out yesterday
cheers
Chris Maunder
|
Confirmed, looks like it works again.
|
Now that I have 2.5.1 humming along on my NVIDIA GPU for YOLOv5 6.2 Object Detection, I am working on ALPR, and I find that LPR 2.9.0 will only run on my CPU, with CanUseGPU showing as false. Any idea why that would be? Thanks
|