|
Advice wanted on object detection for LEO person detection, LEO vehicle detection, and speed detection with a Blue Iris setup. RTX 4070 (compute capability 8.9), six 4K cams, street-facing cameras.
|
|
|
|
|
I'm working on my thesis project and want to make an intelligent search assistant that understands context and, of course, processes and responds in natural language. The data I want to train this model on comes from the Virtual Observatory and a Python library that can be used to retrieve data from it.
I thought of OpenAI's GPT-3 API, but the knowledge it has access to is outdated. Then I thought about IBM's Watson Discovery, but I feel that using their solution would limit the response type or the training process too much.
Which ML model(s) would work better in my case? Or what software/solution would be useful?
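For concreteness, here is a rough sketch of the kind of retrieval-augmented pipeline I'm picturing, where the assistant fetches Virtual Observatory data at query time and passes it to a language model as context instead of retraining on it (the TAP service URL, the ADQL query, and the generate_answer helper are purely illustrative assumptions):
<pre lang="Python">
# Illustrative retrieval-augmented sketch: fetch Virtual Observatory data at
# query time and hand it to a language model as context. The TAP service URL,
# the ADQL query, and generate_answer() are hypothetical placeholders.
from pyvo.dal import TAPService

def retrieve_context(user_question: str) -> str:
    # Query a TAP service for rows that might be relevant to the question.
    tap = TAPService("https://example.org/tap")                 # hypothetical endpoint
    results = tap.search("SELECT TOP 20 * FROM ivoa.obscore")   # placeholder ADQL
    return "\n".join(results.to_table().pformat(max_lines=20))

def generate_answer(prompt: str) -> str:
    # Placeholder: swap in whichever LLM API ends up being chosen.
    raise NotImplementedError("plug in an LLM call here")

def answer(user_question: str) -> str:
    context = retrieve_context(user_question)
    prompt = (
        "Answer the question using only the Virtual Observatory data below.\n\n"
        f"Data:\n{context}\n\nQuestion: {user_question}\nAnswer:"
    )
    return generate_answer(prompt)
</pre>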
|
|
|
|
|
I'm reaching out to share a perplexing issue I've encountered with the integration of CPAI with my BI setup, hoping to find out whether anyone else has experienced something similar or could offer any insights. The problem first manifested around 01:45 am on 14/02/2024, and despite troubleshooting efforts, it recurred this morning, indicating a persistent underlying issue.
Initially, the system logs from 14th February showed an error related to CUDA, specifically mentioning "an illegal memory access was encountered". This issue caused a loop of errors until a system reboot was performed at 9:06 am.
Here is the exact log entry for reference:
<pre lang="Terminal">2024-02-14 01:39:57: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command 'custom' in Object Detection (YOLOv5 6.2)
2024-02-14 01:39:57: Object Detection (YOLOv5 6.2): Detecting using ipcam-combined in Object Detection (YOLOv5 6.2)
2024-02-14 01:39:57: Response received (#reqid 85bde494-89d3-429d-a21b-c10b9430c5a8 for command custom)
2024-02-14 01:39:57: Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5-6.2\detect.py", line 141, in do_detection
det = detector(img, size=640)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 669, in forward
with dt[0]:
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 158, in __enter__
self.start = self.time()
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 167, in time
torch.cuda.synchronize()
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\cuda\__init__.py", line 566, in synchronize
return torch._C._cuda_synchronize()
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
in Object Detection (YOLOv5 6.2)</pre>
I upgraded yesterday from CPAI v2.5.1 to v2.5.4, hoping the update would resolve the issue that occurred on the 14th. However, this morning the same type of CUDA error reappeared, this time indicating "an illegal instruction was encountered". The error persisted until a reboot just after 9 am. Below is the log excerpt from today's occurrence:
2024-02-18 04:52:40: Object Detection (YOLOv5 6.2): Detecting using ipcam-combined in Object Detection (YOLOv5 6.2)
2024-02-18 04:52:40: Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (#reqid ad29496b-5caf-4d8b-b02f-7fcc6c7ab605) ['No objects found'] took 22ms
2024-02-18 04:52:40: Client request 'custom' in queue 'objectdetection_queue' (#reqid 7e1b52dd-b880-4c35-b7c6-0f076127faab)
2024-02-18 04:52:40: Request 'custom' dequeued from 'objectdetection_queue' (#reqid 7e1b52dd-b880-4c35-b7c6-0f076127faab)
2024-02-18 04:52:40: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command 'custom' in Object Detection (YOLOv5 6.2)
2024-02-18 04:52:40: Object Detection (YOLOv5 6.2): Detecting using ipcam-combined in Object Detection (YOLOv5 6.2)
2024-02-18 04:52:40: Response rec'd from Object Detection (YOLOv5 6.2) command 'custom' (#reqid 7e1b52dd-b880-4c35-b7c6-0f076127faab)
2024-02-18 04:52:40: Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 715, in forward
max_det=self.max_det) # NMS
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 920, in non_max_suppression
x = torch.cat((box, conf, j.float(), mask), 1)[conf.view(-1) > conf_thres]
RuntimeError: CUDA error: an illegal instruction was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5-6.2\detect.py", line 141, in do_detection
det = detector(img, size=640)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 717, in forward
scale_boxes(shape1, y[i][:, :4], shape0[i])
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 162, in __exit__
self.dt = self.time() - self.start # delta-time
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 167, in time
torch.cuda.synchronize()
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\cuda\__init__.py", line 566, in synchronize
return torch._C._cuda_synchronize()
RuntimeError: CUDA error: an illegal instruction was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
in Object Detection (YOLOv5 6.2)
The recurring nature of this error is particularly concerning, as it undermines the reliability of the object detection capabilities I rely on. It's disconcerting to see the system fail in this manner, especially considering the otherwise commendable performance improvements in the AI aspects of the software.
Has anyone else encountered similar issues, particularly with CUDA errors causing system instability? Any advice on troubleshooting or resolving this would be immensely appreciated. I've also posted this on the Blue Iris forum to cast a wider net for potential solutions.
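For reference, here is the minimal standalone test I'm planning to run next: the same detection call outside of CPAI, with CUDA_LAUNCH_BLOCKING=1 set so the failing kernel is reported synchronously (this assumes the 'yolov5' pip package and a CUDA build of PyTorch; the model file and test image names are placeholders):
<pre lang="Python">
# Standalone reproduction sketch, run outside of CPAI/Blue Iris.
import os

# Must be set before CUDA is initialised so kernel launches are synchronous
# and the stack trace points at the kernel that actually failed.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import yolov5

model = yolov5.load("ipcam-combined.pt")   # placeholder path to the custom model
model.to("cuda")

# Hammer the detector with repeated inferences to see whether the illegal
# memory access / illegal instruction reappears outside of the server.
for i in range(1000):
    results = model("test_frame.jpg", size=640)
    print(i, len(results.pred[0]), "objects")
</pre>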
Thank you in advance for your time and assistance.
|
|
|
|
|
|
I've posted in the CodeProject.AI forum, thank you. How do I delete this post? I cannot see any option allowing me to delete it.
|
|
|
|
|
That's OK, you can leave this post here so other people can see where to go for help.
|
|
|
|
|
How does the CodeProject.AI Discussions forum relate to the "ordinary" Artificial Intelligence forum in the General Programming group of CP? Which topics or kinds of questions should go in each of them?
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
CodeProject.AI is a specific project run by the CodeProject staff, so any question related to this project and its usage should be posted in its own discussion forum, mainly because that is where the project team goes to look for questions. The general AI forum is for general, i.e. non-CodeProject.AI, questions. If you look at the picture at the top of the CodeProject home page, it has a link to the correct forum.
|
|
|
|
|
|
Hello everyone,
I'm new to this forum and I'm currently in the midst of a decision-making process regarding the optimal GPU usage for running CPAI on Blue Iris, particularly for a single-camera setup at my home. The primary purpose is to detect human presence while effectively filtering out false triggers due to weather conditions.
At present, I own an RTX 3080, which I'm considering repurposing for this task, especially since I'm contemplating an upgrade to one of the RTX 40-series Super cards in the near future. However, I'm deliberating whether the 3080 might be overkill for my specific requirements.
After substantial research and discussions on the Blue Iris forum, the consensus appears to be in favour of the GTX 1650. It's been recommended as a more than adequate solution for CPAI, offering sufficient processing speed while maintaining lower power consumption.
My current setup relies solely on CPU processing, resulting in a latency of about 120-200ms for image processing. In contrast, I came across a post on the Blue Iris forum indicating that the GTX 1650 could potentially reduce this latency to around 30+ms. This substantial improvement naturally piques my interest.
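To put some numbers to this on my own hardware, I've been timing the CodeProject.AI detection endpoint directly with a small script like the one below (it assumes the server's default port 32168 and the standard /v1/vision/detection route, with any saved camera frame as the test image):
<pre lang="Python">
# Rough latency benchmark against a running CodeProject.AI server.
# Assumes the default port (32168) and the standard object detection route;
# "snapshot.jpg" is any representative camera frame saved to disk.
import time
import requests

URL = "http://localhost:32168/v1/vision/detection"

with open("snapshot.jpg", "rb") as f:
    image_bytes = f.read()

timings = []
for _ in range(50):
    start = time.perf_counter()
    r = requests.post(URL, files={"image": image_bytes})
    r.raise_for_status()
    timings.append((time.perf_counter() - start) * 1000)   # milliseconds

timings.sort()
print(f"median: {timings[len(timings) // 2]:.0f} ms, "
      f"min: {timings[0]:.0f} ms, max: {timings[-1]:.0f} ms")
</pre>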
However, I can't help but wonder about the potential benefits of deploying my RTX 3080 for this purpose. Would there be a significant advantage in terms of processing speed or efficiency? I've noticed that a fellow member, MikeLud, is conducting tests with an RTX 4090, which adds another layer of curiosity regarding the performance spectrum of these GPUs.
While I'm currently leaning towards the GTX 1650, primarily due to its power efficiency and seemingly adequate capabilities for my needs, I'm eager to hear your thoughts and experiences. Has anyone here used an RTX 3080/3080 Ti for a similar setup? If so, what were your observations regarding latency, power consumption, and overall performance?
Your insights and any additional information you can provide would be greatly appreciated. I’m looking to make the most informed decision to ensure efficient and effective surveillance at my home.
Thank you in advance for your valuable input!
Best regards
|
|
|
|
|
Hello, I'm trying to add Super Resolution to my CodeProject.AI_ServerGPU Docker container running on UNRAID. I went into the container and clicked Install Modules and clicked Install beside Super Resolution. I get the below log within the Server logs tab in the container. I also tried letting the container run as Privileged with no luck.
Is there a different procedure I need to follow to install additional modules within this container when using Docker, rather than using the Install Modules button?
My environment details:
OS: UNRAID 6.12.6
Docker container: CodeProject.AI_ServerGPU version 2.2.4-Beta
GPU: NVIDIA GeForce GTX 1050 Ti on the latest driver, v545.29.06
CUDA Version 12.3
17:34:12:Preparing to install module 'SuperResolution'
17:34:12:Downloading module 'SuperResolution'
17:34:13:Installing module 'SuperResolution'
17:34:13:SuperResolution: Hi Docker! We will disable shared python installs for downloaded modules
17:34:13:SuperResolution: No schemas installed
17:34:13:SuperResolution: (No schemas means: we can't detect if you're in light or dark mode)
17:34:13:SuperResolution: sh: 1: lsmod: not found
17:34:13:SuperResolution: Installing CodeProject.AI Analysis Module
17:34:13:SuperResolution: ======================================================================
17:34:13:SuperResolution: CodeProject.AI Installer
17:34:13:SuperResolution: ======================================================================
17:34:13:SuperResolution: 66.02 GiB available
17:34:13:SuperResolution: Installing curl...
17:34:13:SuperResolution: WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
17:34:14:SuperResolution: E: Failed to fetch http:
17:34:14:SuperResolution: E: Failed to fetch http:
17:34:14:SuperResolution: E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
17:34:14:Module SuperResolution installed successfully.
17:34:14:
17:34:14:Module 'Super Resolution' 1.6 (ID: SuperResolution)
17:34:14:Installer exited with code 10
17:34:14:Module Path: /app/modules/SuperResolution
17:34:14:AutoStart: True
17:34:14:Queue: superresolution_queue
17:34:14:Platforms: windows,linux,linux-arm64,macos,macos-arm64
17:34:14:GPU: Support disabled
17:34:14:Parallelism: 1
17:34:14:Accelerator:
17:34:14:Half Precis.: enable
17:34:14:Runtime: python38
17:34:14:Runtime Loc: Local
17:34:14:FilePath: superres_adapter.py
17:34:14:Pre installed: False
17:34:14:Start pause: 0 sec
17:34:14:LogVerbosity:
17:34:14:Valid: True
17:34:14:Environment Variables
17:34:14:PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION = python
17:34:14:
17:34:14:Error trying to start Super Resolution (superres_adapter.py)
17:34:14:Module SuperResolution started successfully.
17:34:14:An error occurred trying to start process '/app/modules/SuperResolution/bin/linux/python38/venv/bin/python3' with working directory '/app/modules/SuperResolution'. No such file or directory
17:34:14: at System.Diagnostics.Process.ForkAndExecProcess(ProcessStartInfo startInfo, String resolvedFilename, String[] argv, String[] envp, String cwd, Boolean setCredentials, UInt32 userId, UInt32 groupId, UInt32[] groups, Int32& stdinFd, Int32& stdoutFd, Int32& stderrFd, Boolean usesTerminal, Boolean throwOnNoExec)
at System.Diagnostics.Process.StartCore(ProcessStartInfo startInfo)
at CodeProject.AI.Server.Modules.ModuleProcessServices.StartProcess(ModuleConfig module)
17:34:14:Please check the CodeProject.AI installation completed successfully
17:34:15:Call to Install on module SuperResolution has completed.
|
|
|
|
|
|
|
Verify Super Resolution module compatibility with CodeProject.AI_ServerGPU version 2.2.4-Beta, your GPU, and CUDA version.
Inspect Docker container logs for error messages related to the module installation failure.
Try manually installing the module via the container's shell using specific scripts or commands.
Ensure your Docker container is up to date to avoid issues resolved in newer versions.
If problems persist, seek support from the CodeProject.AI community or support forums with detailed error logs.
|
|
|
|
|
Hi,
Everything works fine, but not continuously like I want.
I want Blue Iris to trigger almost continuously with face recognition.
I use it like a door lock, so a lot of people walk in through the door.
I want it so that when it detects a registered face, it confirms and sends HTTP commands.
Everything works except the continuity part: it only triggers every few minutes, with a break in between.
I also played with the break time, trigger time, and send-pics-per-ms settings, but I couldn't get it to work.
Can it be done?
Thanks,
Yossi
|
|
|
|
|
|
I had object detection .net working with the integrated gpu previously, so i switched to a 1650 video card. I removed .net and went with the 6.2 one.
I also ran the cuda installer and the batch file installer per the codeproject ai download page.
I've restarted the pc and i cant get it to go to GPU mode
any thoughts out there?
I had moved this thread to the other forum, but for some reason i cant find it, but here was the solution that worked for me:
Uninstall code project
Uninstall cuda 11.7
Uninstall gfx card drivers
Reboot
Install 516.94 gfx drivers
Install cuda 11.7.0
Install Code Project 2.0.8
Install cuDNN bat file (https://www.codeproject.com/ai/docs/faq/gpu.html)
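After those steps, a quick sanity check that the module's PyTorch build can actually see the GPU is to run something like this with the Python interpreter inside the module's venv (the venv path in the comment is assumed from a default install; adjust as needed):
<pre lang="Python">
# Quick check that the PyTorch build used by the module can see the GPU.
# Run it with the venv's interpreter, e.g. (assumed default install path):
#   "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Scripts\python.exe" check_gpu.py
import torch

print("torch version:      ", torch.__version__)
print("CUDA available:     ", torch.cuda.is_available())
print("CUDA build version: ", torch.version.cuda)
if torch.cuda.is_available():
    print("Device:             ", torch.cuda.get_device_name(0))
</pre>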
|
|
|
|
|
One suggestion, but first a question: does anything else on your box use the GPU in this expected manner? If this is a testing-of-the-waters type of scenario, wouldn't it be better to get tried-and-true stuff running first?
I have no peers here on CP and see myself as somewhat of a pariah when I post, but I'm sure that somewhere in the expanse of time between using the CPU and using the GPU in real-time computer use (1995-present), sticking to allowing the CPU to enslave the GPU is in the questioner's best interests.
|
|
|
|
|
well, i can use the 1650 as the primary gpu without issue. In this situation i'm using the built in one for the lcd.
Both drivers show up in device manager; i had no issues with the two cuda installers either
|
|
|
|
|
Perhaps ask your original question here then:
CodeProject.AI Discussions[^]
I've observed that reposting on CP can result in boiled tempers, so since you're moving to a better venue, perhaps adding some keywords to better explain AND keeping your grammar right (like capitalizing your first-person pronouns) will cause someone there to come to the rescue.
|
|
|
|
|
ah, i dont know how i ended up in the wrong forum again, thanks.
And who uses caps for pronouns in 2023, especially from a phone :P
|
|
|
|
|
theskyisthelimit99 wrote: who
... one what sees for miles and miles and miles ...
|
|
|
|
|
Yes ... Without doing anything, my Windows C# UWP app utilizes the GPU. Only Edge and VS are also utilizing it.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
|
I want to build a face recognition door. Can someone help me with the flow chart and the necessary things that will enable me to carry out the project?
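The rough flow I have in mind so far is: camera snapshot -> face recognition -> if a registered face is found with enough confidence, send an HTTP unlock command. A minimal sketch of that flow is below (it assumes a CodeProject.AI server with the Face Processing module on its default port, using the DeepStack-compatible /v1/vision/face/recognize route; the door-controller URL and snapshot file are placeholders):
<pre lang="Python">
# Minimal face-recognition door sketch. Assumes a CodeProject.AI server with
# the Face Processing module on the default port; the door-controller URL and
# the snapshot file are placeholders.
import requests

CPAI_URL = "http://localhost:32168/v1/vision/face/recognize"
UNLOCK_URL = "http://door-controller.local/unlock"   # hypothetical door relay
MIN_CONFIDENCE = 0.7

def check_frame(image_path: str) -> None:
    with open(image_path, "rb") as f:
        resp = requests.post(CPAI_URL, files={"image": f}).json()
    for face in resp.get("predictions", []):
        userid = face.get("userid", "unknown")
        confidence = face.get("confidence", 0.0)
        if userid != "unknown" and confidence >= MIN_CONFIDENCE:
            print(f"Recognised {userid} ({confidence:.2f}), unlocking door")
            requests.get(UNLOCK_URL)                 # send the HTTP command to the lock
            return
    print("No registered face recognised")

check_frame("door_snapshot.jpg")                     # placeholder camera snapshot
</pre>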
|
|
|
|
|