|
That link works, Chris.
Do I need to uninstall the previous version prior to installing this update?
|
Thanks Chris, that link works. Do you uninstall the previous version, or does it install over the top?
|
Over the top is fine.
Out of interest, did you retry the initial link and if so, did it work again? If not, could you please send me the link you used? Some bad code somewhere...
cheers
Chris Maunder
|
Hi Chris,
No I didn't, I just went straight to the link you provided. I'll keep this in mind on the next release if it doesn't work. Thanks for the help.
|
Which URL was that?
cheers
Chris Maunder
|
Should be good now
cheers
Chris Maunder
|
It was the link that came up after opening the dashboard > Check for update. Then when it showed the update, there was a "Download" link to click.
|
I'm trying to update CodeprojectAI:gpu on my Unraid server, but I'm getting the following error:

Can't start the container. Does the new version need additional parameters?
|
That's because the newest gpu image is based on arm64; I wonder if they're going to update the image properly. I rolled back to the previous one.
|
Same issue here, the build is for the wrong CPU architecture.
|
Fixing this now. We've been fighting Docker for days, and I should have realized the Docker images were for the wrong architecture precisely because they actually built.
Our build scripts assumed the target architecture (this has since been corrected) since we used to build on Intel machines. I'm doing most of my work on an arm64 machine (cheap Mac mini!), hence the mix-up. It turns out you can't actually build amd64 images on an M1-powered machine (see qemu: uncaught target signal 11 (Segmentation fault) - core dumped · Issue #6204 · docker/for-mac · GitHub), so for the foreseeable future I'll have to be ambidextrous with my build machines.
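For anyone following along, here is a minimal sketch of pinning the target platform explicitly at build time, assuming docker buildx is available (the image tag below is just an example, not necessarily our actual build command):
# Build explicitly for amd64 rather than the host's native architecture;
# add --push (or --load) depending on where you want the result to end up.
docker buildx build --platform linux/amd64 -t codeproject/ai-server:gpu .
# On an Apple Silicon host this cross-build runs under qemu emulation,
# which is exactly where the segfault in the linked issue occurs.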
Just building CPU/GPU now. Should be done within half an hour.
cheers
Chris Maunder
|
Yeah, I'm using a MacBook M1 and it is easy to build an image for amd64. You can just add something like
FROM --platform=linux/amd64 ubuntu:22.04
to the Dockerfile and it'll be built for amd64.
|
Yep, you'd think that's how it would work. Evidently we are hitting the qemu issue, which is causing a seg fault on the M1s. It simply won't build for me on my Mac.
cheers
Chris Maunder
|
Can confirm that the update is working.
|
I am wondering if the Quadro K4200 is supported, or should I use a GT 1030, or just stick to the CPU? I only use object detection and "ipcam-general.pt".
OS: Windows 11
CPU: i7-6700K
GPU: Nvidia Quadro K4200 (CUDA compute capability 3.0)
GPU Driver: 474.14
CUDA Version: 10.2
CUDnn Version: 8.6 (zlib installed and on path)
|
If you are only using one model, use the GT 1030 with the Object Detection (YOLOv5 6.2) module.
If you want to try the Quadro K4200, you need to use the Object Detection (YOLOv5 3.1) module with CUDA 10.2.
|
The NVIDIA Quadro K4200 is a professional graphics card that was released in 2013 and is considered a mid-range workstation GPU. It has 4GB of memory, 1344 CUDA cores, and a memory bandwidth of 173 GB/s.
Object detection, particularly with deep learning models, can be computationally intensive and can benefit from a dedicated GPU. However, the Quadro K4200 is an older GPU and may not be well suited to running the latest deep learning models, particularly those that are large and complex.
The GT 1030 is an entry-level gaming GPU and is a lot newer than the Quadro K4200, but it has fewer CUDA cores and less memory bandwidth, so it may not be powerful enough to run some deep learning models either.
In general, it is best to use the most recent and powerful GPU that is compatible with your system and budget. If you are running a deep learning model on a relatively old GPU like the K4200, the model may simply run more slowly.
If you are using the "ipcam-general.pt" model, which is a fairly small custom YOLOv5 model, just for object detection, the Quadro K4200 should be able to handle that task.
But if you are planning on using more complex models, it may be beneficial to upgrade to a more powerful GPU, such as an NVIDIA GeForce RTX 3070 or 3080, or to use a cloud-based GPU service.
Also, make sure that your GPU driver is compatible with the CUDA version installed on your system.
|
I'm currently pushing the 2.0.6 images to Docker Hub.
docker pull codeproject/ai-server
There are 4 main flavours:
codeproject/ai-server - No GPU, just x64 CPU enabled
codeproject/ai-server:gpu - CUDA GPU enabled
codeproject/ai-server:arm64 - Arm64 chips such as the Raspberry Pi or Apple Macs with M1/M2 chips
codeproject/ai-server:rpi64 - A slimmer image for Arm64 R-Pis
These also exist with a "-2.0.6" extension in case you need a specific version at a later date.
There is also a codeproject/ai-server:latest tag for the latest CPU-only version.
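As a rough usage sketch (the port mapping and GPU flag below are assumptions based on a typical setup, so adjust them to your own install):
# Pull a specific version and run it with GPU access
# (requires the NVIDIA Container Toolkit on the host).
docker pull codeproject/ai-server:gpu-2.0.6
docker run -d --gpus all -p 32168:32168 codeproject/ai-server:gpu-2.0.6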
cheers
Chris Maunder
|
Will there be an amd64 gpu image?
|
What hardware (host / GPU) are you looking to target?
cheers
Chris Maunder
|
I have a dual Xeon server with a Tesla P40, and the issue is that Docker Hub doesn't have any amd64 image.
|
All fixed.
cheers
Chris Maunder
|
Thanks for your hard work! Everything works perfectly now in my k8s cluster.
|