|
Actually I'm still having some issues.
I haven't been able to narrow down exactly when it happens, but it seems that after either end reboots, the mesh is broken: it starts looking for the hostname instead of the IP address, which for whatever reason doesn't make it through the Docker network interface. To recover, you have to disable the mesh on the satellite, restart the Docker container, then re-enable the mesh on the satellite. Nothing relevant comes up in the logs to give any insight into why this is happening.
|
|
|
|
|
I am attempting to run image codeproject/ai-server:cuda12_2 (current) under Docker on Fedora 39. The server has abundant resources, with 256 GB of RAM, and as far as I know Docker is not imposing memory limits. When I start the container, CodeProject.AI starts normally and without errors. However, it crashes after 5 or 6 minutes with "out of memory" / "codeproject exited with code 139." The system log shows "systemd-coredump[1460173]: Process 1451539 (CodeProject.AI.) of user 0 dumped core.#012#012Stack trace of thread 882:#012#0 0x00007fbf944bc898 n/a (/usr/lib/x86_64-linux-gnu/libc.so.6 + 0x28898)#012#1 0x00007fafd2a00640 n/a (n/a + 0x0)#012ELF object binary architecture: AMD x86-64."
The container crashes whether or not it has been accessed, and whether or not it has claimed GPU resources. While it is running, it readily accepts images and performs comparisons, using about 1 GB of GPU memory and around 3 GB of RAM. However, it still crashes.
I have searched and can't find anyone else with this problem, suggesting that it is something in my environment, but I can't figure out what it could be. I would appreciate any ideas.
|
|
|
|
|
Thanks very much for your report. Could you please share your System Info tab from your CodeProject.AI Server dashboard?
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
Server version: 2.6.5
System: Docker (ai-server)
Operating System: Linux (Ubuntu 22.04)
CPUs: AMD EPYC 7262 8-Core Processor (AMD)
2 CPUs x 8 cores. 16 logical processors (x64)
GPU (Primary): NVIDIA GeForce RTX 4060 (8 GiB) (NVIDIA)
Driver: 550.78, CUDA: 12.4 (up to: 12.4), Compute: 8.9, cuDNN: 8.9.6
System RAM: 252 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Docker
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.19
.NET SDK: Not found
Default Python: 3.10.12
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
System GPU info:
GPU 3D Usage 2%
GPU RAM Usage 1.9 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
This one's beyond my pay grade. If you've configured Docker to have this much RAM it should be lavishing thanks on you, not core dumping. That's just inconsiderate.
I did see mention of the host system core dumping when a Docker container hits its assigned RAM max, but that doesn't seem to be the case here.
I wonder if it's not an out-of-memory issue, but rather a memory access / memory corruption issue?
cheers
Chris Maunder
|
|
|
|
|
Thanks for thinking about this. I also think it's a memory access issue, but why isn't everybody using this Docker container getting it? Docker provides such a consistent environment that it's really hard to figure out why it's only my Docker container that doesn't work. The amount of system resources is probably the biggest variable not controlled by the container, but as you point out, there is no shortage. I have watched the memory consumption using "docker stats" once a second, and the memory consumption does not gradually increase over the 5-10 minute lifetime of the container as you might expect it to with a memory leak.
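For anyone who wants to reproduce that monitoring, a minimal sketch, assuming the container is named ai-server as in the System Info above:
# show memory/CPU for the container once a second; --no-stream makes
# "docker stats" print a single snapshot, which watch then repeats
watch -n 1 'docker stats --no-stream ai-server'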
|
|
|
|
|
Well, it turns out it's probably a memory issue and not a memory access issue. Acting partly on a whim and partly on a "docker out of memory" thread unrelated to ai-server, I limited the open file handles in the docker-compose file as follows:
ulimits:
  nofile:
    soft: 65536
    hard: 65536
and that appears to have resolved or at least mitigated the issue. To be clear, the number of files was unlimited prior to my change. The ai-server has been up more than 4 hours, which is 3 hours 50 minutes longer than it has ever run before. It is happily matching faces using only 3.1 GB of RAM. I have not yet tried to prove that the number of file handles increases until it consumes all of the memory, but I'm wondering if ai-server spends its free time grabbing file handles as fast as it can when they are unlimited.
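For anyone wanting to verify the growth directly, a rough sketch for watching the server's open file descriptors, assuming the container is named ai-server and the server is PID 1 inside it:
# print the fd count for PID 1 inside the container every 10 seconds;
# a steadily climbing number would confirm the leak
while true; do
    docker exec ai-server sh -c 'ls /proc/1/fd | wc -l'
    sleep 10
done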
It's still very curious that nobody else has reported this. Maybe it has to do with Fedora, but it seems to me that Docker running under Fedora should look the same as Docker running under any other distribution from inside the container.
I have some time to do further troubleshooting in the next few days.
|
|
|
|
|
I've got some answers.
CodeProject.AI Server does, in fact, continuously open new file handles, at a rate of about 120/minute on my system, up to the limit if one exists. If there is no limit, it keeps going until it consumes all system memory. The reason Fedora is different (I think) is that Fedora decided not to impose limits on Docker itself, due to the overhead of enforcing those limits, and suggests that limits be established on individual containers using cgroups instead. This "out of memory" error would inevitably occur on any distribution that does not enforce file limits on Docker by default. That may only be Fedora and Red Hat at this time.
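For anyone on such a distribution, the same limit can be applied per container without compose; a sketch using the image tag and port from earlier in this thread:
# per-container equivalent of the compose ulimits block above
docker run -d -p 32168:32168 --ulimit nofile=65536:65536 codeproject/ai-server:cuda12_2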
I reduced the open-file limit to 1024 on ai-server and observed it for a while. It climbs to the limit, then drops back down to about 440 open files and starts over. It doesn't crash. The file handle type that keeps increasing is a FIFO.
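One way to see the descriptor types, again assuming PID 1 is the server inside the container:
# each fd is a symlink; leaked FIFOs show up as targets like "pipe:[123456]"
docker exec ai-server sh -c 'ls -l /proc/1/fd | grep -c pipe'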
This is definitely a bug that needs to be addressed.
|
|
|
|
|
We had an issue that eventually led to many file handles / watchers being created at startup. There's a check for this at startup and a warning is issued, but as for it creating a bucketload more each second, that's bizarre. It would be handy to know which process is adding the handles: a module or the server itself.
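One rough way to check that from inside the container, assuming the stock /proc layout and the ai-server container name used earlier in this thread:
# rank processes inside the container by open fd count; the top entry
# is the likely leaker
docker exec ai-server sh -c \
  'for p in /proc/[0-9]*; do echo "$(ls "$p/fd" 2>/dev/null | wc -l) $p"; done | sort -rn | head'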
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
A file handle is left open every time one of these child processes exits:
futex(0x55d234f6aba4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=1671, si_uid=0, si_status=0, si_utime=0, si_stime=0} ---
write(62, "\21", 1) = 1
rt_sigreturn({mask=[]}) = 202
futex(0x55d234f6aba4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
futex(0x55d234f6aba4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=1675, si_uid=0, si_status=0, si_utime=0, si_stime=0} ---
write(62, "\21", 1) = 1
rt_sigreturn({mask=[]}) = 202
futex(0x55d234f6aba4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=1677, si_uid=0, si_status=0, si_utime=0, si_stime=0} ---
write(62, "\21", 1) = 1
rt_sigreturn({mask=[]}) = 202
futex(0x55d234f6aba4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=1680, si_uid=0, si_status=0, si_utime=0, si_stime=0} ---
write(62, "\21", 1) = 1
rt_sigreturn({mask=[]}) = 202
futex(0x55d234f6aba4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=1682, si_uid=0, si_status=0, si_utime=0, si_stime=0} ---
write(62, "\21", 1) = 1
rt_sigreturn({mask=[]}) = 202
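For reference, a trace like this can be captured from the Docker host (which can see container PIDs) with something along these lines; the process-name match is a guess, so adjust it to your setup:
# attach to the running server and follow child processes (-f), which
# is what makes the SIGCHLD / exit activity visible
strace -f -p "$(pgrep -f CodeProject.AI | head -n 1)"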
|
|
|
|
|
I'm running the 12_2 CUDA Docker version...
This is what I see: [screenshot not included in this transcript]
I also tried the CUDA 11 Docker version as well.
Is it just me or did this change?
|
|
|
|
|
|
Hi all. I am a long-term Windows and BlueIris user but a novice with Linux etc.
In an effort to use the mesh capabilities of CodeProject.AI with BlueIris, I have managed to get Mendel running on a Google Coral Dev Board and now want to install CodeProject.AI on the dev board. I am struggling, so would really appreciate assistance, please.
I couldn't find any specific guidance for this board, so I am following the general installation guide.
sudo apt install dotnet-sdk-7.0 appears to be failing with this output:
mendel@coy-apple:~$ sudo apt install dotnet-sdk-7.0
Reading package lists... Done
Building dependency tree... Done
E: Unable to locate package dotnet-sdk-7.0
E: Couldn't find any package by glob 'dotnet-sdk-7.0'
E: Couldn't find any package by regex 'dotnet-sdk-7.0'
mendel@coy-apple:~$
What am I doing wrong, please?
|
|
|
|
|
Mendel is essentially Debian, so you could try using the Ubuntu .deb installer.
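If the package route keeps failing, Microsoft's install script is another way to get the .NET SDK onto a Debian-based ARM board; a sketch, assuming the .NET 7 requirement from the general guide:
# fetch and run Microsoft's official .NET install script
# (installs the SDK for the given channel by default)
wget https://dot.net/v1/dotnet-install.sh -O dotnet-install.sh
chmod +x dotnet-install.sh
./dotnet-install.sh --channel 7.0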
cheers
Chris Maunder
|
|
|
|
|
I'm using CodeProject.AI 2.6.5, and when installing LlamaChat the following error occurs:
Installing simple Python bindings for the llama.cpp library...(❌ failed check) done
Soon after:
23:37:26:LlamaChat: Traceback (most recent call last):
23:37:26:LlamaChat: File "C:\Program Files\CodeProject\AI\modules\LlamaChat\llama_chat_adapter.py", line 16, in
23:37:26:LlamaChat: from llama_chat import LlamaChat
23:37:26:LlamaChat: File "C:\Program Files\CodeProject\AI\modules\LlamaChat\llama_chat.py", line 7, in
23:37:26:LlamaChat: from llama_cpp import ChatCompletionRequestSystemMessage, \
23:37:26:LlamaChat: ModuleNotFoundError: No module named 'llama_cpp'
What is happening?
|
|
|
|
|
Thanks very much for your message. It could be that the module did not install correctly. Could you please try re-installing it?
If the same thing happens, could you please go to C:\Program Files\CodeProject\AI\modules\LlamaChat and share your install.log (as well as the System Info tab from your CodeProject.AI Server dashboard)?
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
Hello, I have already reinstalled the entire CodeProject.AI setup on two PCs, but neither worked. Here is the log:
=============================
2024-05-31 00:20:36: Installing CodeProject.AI Analysis Module
2024-05-31 00:20:36: ======================================================================
2024-05-31 00:20:36: CodeProject.AI Installer
2024-05-31 00:20:36: ======================================================================
2024-05-31 00:20:36: 95.3Gb of 976Gb available on
2024-05-31 00:20:36: General CodeProject.AI setup
2024-05-31 00:20:36: Creating Directories...done
2024-05-31 00:20:36: GPU support
2024-05-31 00:20:36: CUDA Present...Yes (CUDA 12.2, No cuDNN found)
2024-05-31 00:20:37: ROCm Present...No
2024-05-31 00:20:37: Checking for .NET 7.0...Checking SDKs...Upgrading: .NET is 0
2024-05-31 00:20:37: Current version is 0. Installing newer version.
2024-05-31 00:20:37: 'winget' não é reconhecido como um comando interno
2024-05-31 00:20:37: ou externo, um programa operável ou um arquivo em lotes.
2024-05-31 00:20:39: Reading LlamaChat settings.......done
2024-05-31 00:20:39: Installing module LlamaChat 1.4.4
2024-05-31 00:20:39: Installing Python 3.9
2024-05-31 00:20:39: Python 3.9 is already installed
2024-05-31 00:20:46: Creating Virtual Environment (Local)...done
2024-05-31 00:20:46: Confirming we have Python 3.9 in our virtual environment...present
2024-05-31 00:20:46: Downloading mistral-7b-instruct-v0.2.Q4_K_M.gguf
2024-05-31 00:32:41: Moving mistral-7b-instruct-v0.2.Q4_K_M.gguf into the models folder.
2024-05-31 00:32:41: Installing Python packages for LlamaChat
2024-05-31 00:32:41: Installing GPU-enabled libraries: If available
2024-05-31 00:32:42: Ensuring Python package manager (pip) is installed...done
2024-05-31 00:32:52: Ensuring Python package manager (pip) is up to date...done
2024-05-31 00:32:52: Python packages specified by requirements.cuda12_2.txt
2024-05-31 00:32:59: - Installing the huggingface hub...(✅ checked) done
2024-05-31 00:33:01: - Installing disckcache for Disk and file backed persistent cache...(✅ checked) done
2024-05-31 00:33:09: - Installing NumPy, a package for scientific computing...(✅ checked) done
2024-05-31 00:33:25: - Installing simple Python bindings for the llama.cpp library...(❌ failed check) done
2024-05-31 00:33:25: Installing Python packages for the CodeProject.AI Server SDK
2024-05-31 00:33:26: Ensuring Python package manager (pip) is installed...done
2024-05-31 00:33:28: Ensuring Python package manager (pip) is up to date...done
2024-05-31 00:33:28: Python packages specified by requirements.txt
2024-05-31 00:33:32: - Installing Pillow, a Python Image Library...(✅ checked) done
2024-05-31 00:33:32: - Installing Charset normalizer...Already installed
2024-05-31 00:33:36: - Installing aiohttp, the Async IO HTTP library...(✅ checked) done
2024-05-31 00:33:39: - Installing aiofiles, the Async IO Files library...(✅ checked) done
2024-05-31 00:33:41: - Installing py-cpuinfo to allow us to query CPU info...(✅ checked) done
2024-05-31 00:33:42: - Installing Requests, the HTTP library...Already installed
2024-05-31 00:33:42: Scanning modulesettings for downloadable models...No models specified
2024-05-31 00:33:42: Traceback (most recent call last):
2024-05-31 00:33:42: File "C:\Program Files\CodeProject\AI\modules\LlamaChat\llama_chat_adapter.py", line 16, in <module>
2024-05-31 00:33:42: from llama_chat import LlamaChat
2024-05-31 00:33:42: File "C:\Program Files\CodeProject\AI\modules\LlamaChat\llama_chat.py", line 7, in <module>
2024-05-31 00:33:42: from llama_cpp import ChatCompletionRequestSystemMessage, \
2024-05-31 00:33:42: ModuleNotFoundError: No module named 'llama_cpp'
2024-05-31 00:33:43: Self test: Self-test passed
2024-05-31 00:33:43: Module setup time 00:13:05.67
2024-05-31 00:33:43: Setup complete
2024-05-31 00:33:43: Total setup time 00:13:06.86
Installer exited with code 0
===============================
|
|
|
|
|
Can you please paste the info from the System Info tab here? Otherwise we're just guessing what system you have.
The translation is "'winget' is not recognized as an internal or external command, an operable program or batch file", which means you're missing some bits. I'm guessing there may be other issues the installer is having because the language on your machine is not English.
cheers
Chris Maunder
|
|
|
|
|
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: AMD Ryzen 9 5900X 12-Core Processor (AMD)
1 CPU x 12 cores. 24 logical processors (x64)
GPU (Primary): NVIDIA GeForce RTX 3060 (12 GiB) (NVIDIA)
Driver: 536.25, CUDA: 12.2.91 (up to: 12.2), Compute: 8.6, cuDNN:
System RAM: 64 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.10
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
NVIDIA GeForce RTX 3060:
Driver Version 31.0.15.3625
Video Processor NVIDIA GeForce RTX 3060
System GPU info:
GPU 3D Usage 44%
GPU RAM Usage 10,6 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
Thanks very much for that. We're narrowing in on a possible cause. To confirm, could you please change your logging level in the Server Logs tab to Information, re-install the module, and then share those module install logs with us?
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
Hello,
I changed the log level to 'trace'. Here is the information:
17:36:50:LlamaChat doesn't appear in the Process list, so can't stop it.
17:36:51:Call to run Uninstall on module LlamaChat has completed.
17:37:10:Preparing to install module 'LlamaChat'
17:37:10:Downloading module 'LlamaChat'
17:37:11:Installing module 'LlamaChat'
17:37:11:Installer script at 'C:\Program Files\CodeProject\AI\setup.bat'
17:37:12:LlamaChat: Installing CodeProject.AI Analysis Module
17:37:12:LlamaChat: ======================================================================
17:37:12:LlamaChat: CodeProject.AI Installer
17:37:12:LlamaChat: ======================================================================
17:37:12:LlamaChat: 154.1Gb of 476Gb available on
17:37:12:LlamaChat: General CodeProject.AI setup
17:37:12:LlamaChat: Creating Directories...done
17:37:12:LlamaChat: GPU support
17:37:13:LlamaChat: CUDA Present...Yes (CUDA 12.5, No cuDNN found)
17:37:13:LlamaChat: ROCm Present...No
17:37:15:LlamaChat: Reading LlamaChat settings.......done
17:37:15:LlamaChat: Installing module LlamaChat 1.4.4
17:37:15:LlamaChat: Installing Python 3.9
17:37:15:LlamaChat: Python 3.9 is already installed
17:37:26:LlamaChat: Creating Virtual Environment (Local)...done
17:37:26:LlamaChat: Confirming we have Python 3.9 in our virtual environment...present
17:37:26:LlamaChat: Downloading mistral-7b-instruct-v0.2.Q4_K_M.gguf
17:40:00:LlamaChat: Moving mistral-7b-instruct-v0.2.Q4_K_M.gguf into the models folder.
17:40:00:LlamaChat: Installing Python packages for LlamaChat
17:40:00:LlamaChat: Installing GPU-enabled libraries: If available
17:40:02:LlamaChat: Ensuring Python package manager (pip) is installed...done
17:40:13:LlamaChat: Ensuring Python package manager (pip) is up to date...done
17:40:13:LlamaChat: Python packages specified by requirements.cuda12.txt
17:40:21:LlamaChat: - Installing the huggingface hub...(✅ checked) done
17:40:23:LlamaChat: - Installing disckcache for Disk and file backed persistent cache...(✅ checked) done
17:40:32:LlamaChat: - Installing NumPy, a package for scientific computing...(✅ checked) done
17:40:51:LlamaChat: - Installing simple Python bindings for the llama.cpp library...(❌ failed check) done
17:40:51:LlamaChat: Installing Python packages for the CodeProject.AI Server SDK
17:40:53:LlamaChat: Ensuring Python package manager (pip) is installed...done
17:40:55:LlamaChat: Ensuring Python package manager (pip) is up to date...done
17:40:55:LlamaChat: Python packages specified by requirements.txt
17:40:58:LlamaChat: - Installing Pillow, a Python Image Library...(✅ checked) done
17:40:59:LlamaChat: - Installing Charset normalizer...Already installed
17:41:04:LlamaChat: - Installing aiohttp, the Async IO HTTP library...(✅ checked) done
17:41:06:LlamaChat: - Installing aiofiles, the Async IO Files library...(✅ checked) done
17:41:09:LlamaChat: - Installing py-cpuinfo to allow us to query CPU info...(✅ checked) done
17:41:10:LlamaChat: - Installing Requests, the HTTP library...Already installed
17:41:10:LlamaChat: Scanning modulesettings for downloadable models...No models specified
17:41:11:LlamaChat: Traceback (most recent call last):
17:41:11:LlamaChat: File "C:\Program Files\CodeProject\AI\modules\LlamaChat\llama_chat_adapter.py", line 16, in
17:41:11:LlamaChat: from llama_chat import LlamaChat
17:41:11:LlamaChat: File "C:\Program Files\CodeProject\AI\modules\LlamaChat\llama_chat.py", line 7, in
17:41:11:LlamaChat: from llama_cpp import ChatCompletionRequestSystemMessage, \
17:41:11:LlamaChat: ModuleNotFoundError: No module named 'llama_cpp'
17:41:11:LlamaChat: Self test: Self-test passed
17:41:11:LlamaChat: Module setup time 00:03:58.04
17:41:11:LlamaChat: Setup complete
17:41:11:LlamaChat: Total setup time 00:03:59.33
17:41:11:Module LlamaChat installed successfully.
17:41:11:Module LlamaChat not configured to AutoStart.
17:41:11:Installer exited with code 0
|
|
|
|
|
Hello,
Any feedback on this problem?
|
|
|
|
|
I'm currently running 2.6.2 and it is working fine. 2.6.2 was easier to install than previous versions and reflects amazing work by the team!
Reading the release notes, they only say: "2.6.5 Various installer fixes".
Given that upgrades may or may not be fast, or even successful, I would not choose to upgrade solely for installer fixes...
But in the UI I see: "An update to version 2.6.5 is available Download
Support for external modules and module updates."
OK, that's a different matter... do I need to upgrade to get the updated modules? I do not see any modules available for update in the Modules control panel, and I thought that was the point of modules?
Is an upgrade recommended if I already have 2.6.2 installed and functioning?
Do I need to upgrade in order to update modules, or should module updates be available in 2.6.2?
A little more clarity would be helpful.
|
|
|
|
|
You do indeed need to upgrade to get the updated modules.
Generally, the further along we get, the more stable CodeProject.AI Server becomes. Also, if you don't upgrade, you'll reach a point where we're patching modules, updating modules, and actively working on the latest modules in the belief that the majority of users are on them, and if you ever do have a problem with your current modules or setup, you'll be that far removed from the latest version.
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
Server version: 2.6.5
System: Linux
Operating System: Linux (Ubuntu 22.04)
CPUs: Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz (Intel)
1 CPU x 4 cores. 4 logical processors (x64)
GPU (Primary): HD Graphics 530 (rev 06) (Intel Corporation)
System RAM: 8 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.19
.NET SDK: Not found
Default Python: 3.10.12
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
HD Graphics 530 (rev 06):
Driver Version
Video Processor
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
Module 'Object Detection (YOLOv5 6.2)' 1.9.1 (ID: ObjectDetectionYOLOv5-6.2)
Valid: True
Module Path: <root>/modules/ObjectDetectionYOLOv5-6.2
Module Location: Internal
AutoStart: True
Queue: objectdetection_queue
Runtime: python3.8
Runtime Location: Shared
FilePath: detect_adapter.py
Start pause: 1 sec
Parallelism: 0
LogVerbosity:
Platforms: all,!raspberrypi,!jetson
GPU Libraries: installed if available
GPU: use if supported
Accelerator:
Half Precision: enable
Environment Variables
APPDIR = <root>/modules/ObjectDetectionYOLOv5-6.2
CUSTOM_MODELS_DIR = <root>/modules/ObjectDetectionYOLOv5-6.2/custom-models
MODELS_DIR = <root>/modules/ObjectDetectionYOLOv5-6.2/assets
MODEL_SIZE = Medium
USE_CUDA = True
YOLOv5_AUTOINSTALL = false
YOLOv5_VERBOSE = false
Status Data: {
"inferenceDevice": "CPU",
"inferenceLibrary": "",
"canUseGPU": "false",
"successfulInferences": 1673,
"failedInferences": 1,
"numInferences": 1674,
"averageInferenceMs": 799.8786610878661
}
Started: 29 May 2024 6:55:47 AM Central Standard Time
LastSeen: 29 May 2024 9:03:58 AM Central Standard Time
Status: Started
Requests: 1674 (includes status calls)
Installation Log
2024-05-29 06:31:54: Setting verbosity to quiet
2024-05-29 06:31:54: Installing CodeProject.AI Analysis Module
2024-05-29 06:31:54: ======================================================================
2024-05-29 06:31:54: CodeProject.AI Installer
2024-05-29 06:31:54: ======================================================================
2024-05-29 06:31:54: 505.05 GiB of 843.02 GiB available on linux
2024-05-29 06:31:54: Installing xz-utils...
2024-05-29 06:31:56: General CodeProject.AI setup
2024-05-29 06:31:56: Setting permissions on runtimes folder...done
2024-05-29 06:31:56: Setting permissions on downloads folder...done
2024-05-29 06:31:56: Setting permissions on modules download folder...done
2024-05-29 06:31:56: Setting permissions on models download folder...done
2024-05-29 06:31:56: Setting permissions on persisted data folder...done
2024-05-29 06:31:56: GPU support
2024-05-29 06:31:56: CUDA (NVIDIA) Present: No
2024-05-29 06:31:56: ROCm (AMD) Present: No
2024-05-29 06:31:56: MPS (Apple) Present: No
2024-05-29 06:31:57: Reading module settings.......done
2024-05-29 06:31:57: Processing module ObjectDetectionYOLOv5-6.2 1.9.1
2024-05-29 06:31:57: Installing Python 3.8
2024-05-29 06:31:57: Python 3.8 is already installed
2024-05-29 06:32:02: W: https:
2024-05-29 06:32:09: Ensuring PIP in base python install... done
2024-05-29 06:32:10: Upgrading PIP in base python install... done
2024-05-29 06:32:10: Virtual Environment already present
2024-05-29 06:32:10: Checking for Python 3.8...(Found Python 3.8.19) All good
2024-05-29 06:32:12: Upgrading PIP in virtual environment... done
2024-05-29 06:32:14: Installing updated setuptools in venv... done
2024-05-29 06:32:47: Downloading Standard YOLO models...Expanding... done.
2024-05-29 06:32:47: Moving contents of models-yolo5-pt.zip to assets...done.
2024-05-29 06:33:30: Downloading Custom YOLO models...Expanding... done.
2024-05-29 06:33:30: Moving contents of custom-models-yolo5-pt.zip to custom-models...done.
2024-05-29 06:33:30: Installing Python packages for Object Detection (YOLOv5 6.2)
2024-05-29 06:33:30: Installing GPU-enabled libraries: If available
2024-05-29 06:33:31: Searching for python3-pip...All good.
2024-05-29 06:33:34: Ensuring PIP compatibility... done
2024-05-29 06:33:34: Python packages will be specified by requirements.linux.txt
2024-05-29 06:33:36: - Installing Pandas, a data analysis / data manipulation tool...Already installed
2024-05-29 06:33:37: - Installing CoreMLTools, for working with .mlmodel format models...Already installed
2024-05-29 06:33:38: - Installing OpenCV, the Open source Computer Vision library...Already installed
2024-05-29 06:33:40: - Installing Pillow, a Python Image Library...Already installed
2024-05-29 06:33:41: - Installing SciPy, a library for mathematics, science, and engineering...Already installed
2024-05-29 06:33:42: - Installing PyYAML, a library for reading configuration files...Already installed
2024-05-29 06:33:44: - Installing Torch, for Tensor computation and Deep neural networks...Already installed
2024-05-29 06:33:45: - Installing TorchVision, for Computer Vision based AI...Already installed
2024-05-29 06:38:39: - Installing Ultralytics YoloV5 package for object detection in images... (✅ checked) done
2024-05-29 06:38:41: - Installing Seaborn, a data visualization library based on matplotlib...Already installed
2024-05-29 06:38:41: Installing Python packages for the CodeProject.AI Server SDK
2024-05-29 06:38:42: Searching for python3-pip...All good.
2024-05-29 06:38:47: Ensuring PIP compatibility... done
2024-05-29 06:38:47: Python packages will be specified by requirements.txt
2024-05-29 06:38:49: - Installing Pillow, a Python Image Library...Already installed
2024-05-29 06:38:51: - Installing Charset normalizer...Already installed
2024-05-29 06:38:53: - Installing aiohttp, the Async IO HTTP library...Already installed
2024-05-29 06:38:55: - Installing aiofiles, the Async IO Files library...Already installed
2024-05-29 06:38:57: - Installing py-cpuinfo to allow us to query CPU info...Already installed
2024-05-29 06:38:59: - Installing Requests, the HTTP library...Already installed
2024-05-29 06:38:59: Scanning modulesettings for downloadable models...No models specified
2024-05-29 06:39:08: Fusing layers...
2024-05-29 06:39:09: YOLOv5.1m summary: 391 layers, 21805053 parameters, 0 gradients
2024-05-29 06:39:09: Adding AutoShape...
2024-05-29 06:39:12: Self test: Self-test passed
2024-05-29 06:39:12: Module setup time 00:07:16
2024-05-29 06:39:13: Setup complete
2024-05-29 06:39:13: Total setup time 00:07:19
Installer exited with code 0
1: Would it be possible to change the "DisableLegacyPort" parameter in the appsettings.json file from false to true? It would save some time troubleshooting when CPAI collides with another program running on the machine.
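In the meantime, flipping it by hand is a one-liner; a sketch, with the install path guessed from the versioned directory mentioned in point 2 below:
# set DisableLegacyPort to true in place (back up appsettings.json first)
sudo sed -i 's/"DisableLegacyPort": false/"DisableLegacyPort": true/' \
  /bin/codeproject.ai-server-2.6.5/appsettings.json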
2: I might be doing this wrong, but...
I am having to do the following to get the service to run on Ubuntu 22.04 (equivalent commands are sketched after this list):
a: copy /bin/codeproject.ai-server-2.6.5/codeproject.ai-server.service to /lib/systemd/system
b: run sudo systemctl enable codeproject.ai-server
c: reboot the machine.
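As shell commands, that sequence is roughly the following; a daemon-reload plus "enable --now" should avoid the full reboot:
sudo cp /bin/codeproject.ai-server-2.6.5/codeproject.ai-server.service /lib/systemd/system/
sudo systemctl daemon-reload
# enable at boot and start immediately, instead of rebooting
sudo systemctl enable --now codeproject.ai-server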
Am I doing something wrong, or should this be done by the install program?
The installer seems to indicate that the service will start when the machine reboots, but in my experience that does not happen.
TIA
|
|
|
|
|