|
If you are a Blue Iris user working with custom models, you may notice that the option in Blue Iris to set the custom model location is greyed out. This is because Blue Iris does not currently make changes to CodeProject.AI Server's settings. Those settings can be changed by manually starting CodeProject.AI Server with command line parameters (not a great solution), by editing the module settings files (a little messy), or by setting system-wide environment variables (way easier). For version 1.6 we added an API that allows any app to change our settings programmatically, and we take care of stopping/restarting things and persisting the changes.
So: because Blue Iris doesn't currently change CodeProject.AI Server's settings, it doesn't provide a way to change the custom model folder location from within Blue Iris.
Blue Iris will still use the contents of this folder to determine the calls it makes. If you don't specify a model to use in the Custom Models textbox, then Blue Iris will use all models in the custom models folder that it knows about.

Here we've specified a particular model to use. The Blue Iris help file explains more about how this works, including inclusive and exclusive filters on the models it finds.
CodeProject.AI Server doesn't know about Blue Iris' folder, so it can't tell what models it may be expected to use, nor can it tell Blue Iris what models CodeProject.AI Server has available. Our API allows Blue Iris to get a list of the AI models installed with CodeProject.AI Server, and also to set the folder where these models reside. But Blue Iris doesn't, yet, use that API.
So we do a hack.
At install time we sniff the registry to find where Blue Iris thinks the custom models should be. We then make empty copies of the models that we have and copy them into that folder. If the folder doesn't exist (e.g. you were using C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\assets, which no longer exists) then we create that folder, and then copy over the empty files.
When Blue Iris looks in that folder to decide what custom calls it can make, it sees the models, notes their names, and uses those names in the calls. CodeProject.AI Server has those models, so when the calls come through we can process them.
Blue Iris doesn't use the models. It uses the list of model names.
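The placeholder trick above can be sketched roughly like this. It's an illustrative outline only (the installer does this natively, and the function name and paths here are my own invention):

```python
from pathlib import Path

def mirror_model_names(server_models_dir: str, blue_iris_models_dir: str) -> None:
    """Create zero-byte stand-ins in Blue Iris' custom model folder so that
    Blue Iris sees the same model names CodeProject.AI Server actually hosts."""
    src = Path(server_models_dir)
    dst = Path(blue_iris_models_dir)
    # Recreate the folder if the upgrade removed it
    dst.mkdir(parents=True, exist_ok=True)
    for model in src.glob("*.pt"):
        # Blue Iris only reads the file names, so the contents can be empty
        (dst / model.name).touch()
```

Since Blue Iris only uses the names to build its detection calls, the empty files are enough for it to offer those models, and the real weights stay in one place on the server side.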
If you have your own models in the Blue Iris folder
You will need to copy them to the CodeProject.AI Server's custom model folder (by default this is C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models).
If you've modified the registry and have your own custom models
If you were using a folder in C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\ (which no longer existed after the upgrade, but was recreated by our hack) you'll need to re-copy your custom model into that folder.
The simplest solutions are:
- Modify the registry (Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Perspective Software\Blue Iris\Options\AI, key 'deepstack_custompath') so Blue Iris looks in
C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models for custom models, and copy your models into there.
or
- Modify
C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\modulesettings.json file and set CUSTOM_MODELS_DIR to be whatever Blue Iris thinks the custom model folder is.
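If you'd rather script that second option, here's a minimal sketch. The exact nesting of CUSTOM_MODELS_DIR inside modulesettings.json can vary between server versions, so this helper searches for the key recursively rather than assuming a fixed structure; treat the whole thing as illustrative:

```python
import json

def set_custom_models_dir(settings_path: str, new_dir: str) -> None:
    """Point every CUSTOM_MODELS_DIR entry in a modulesettings.json file
    at the folder Blue Iris is using for custom models."""
    with open(settings_path, encoding="utf-8") as f:
        settings = json.load(f)

    def walk(node):
        # Recurse through the JSON tree, updating the key wherever it appears
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "CUSTOM_MODELS_DIR":
                    node[key] = new_dir
                else:
                    walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(settings)
    with open(settings_path, "w", encoding="utf-8") as f:
        json.dump(settings, f, indent=2)
```

Note that the file lives under Program Files, so you'd need to run this elevated, and restart the module afterwards for the change to take effect.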
cheers
Chris Maunder
|
|
|
|
|
If you've come across an issue in building, installing, running or configuring CodeProject.AI Server we're here to help. We just ask that you provide enough info for us to dig in quickly.
Please include:
Environment:
- What version of CodeProject.AI Server are you using?
- What Operating system (include Windows version, or if docker just 'Docker')
- Are you using a GPU? If so:
- What brand / model of GPU
- What driver version
- If the card is Nvidia, what version of CUDA is installed?
Some tips:
- In the root directory of CodeProject.AI Server is a logs/ directory. Take a look in there to see if you spot any logs that might be worth including in your post (remove personal info though!)
- Have you changed any settings? If so, let us know.
- We can usually only help with questions around CodeProject.AI Server. Questions about third party apps are usually outside our scope, so please keep the focus on CodeProject.AI Server.
cheers
Chris Maunder
modified 19-Dec-22 18:14pm.
|
|
|
|
|
Initial memory usage after starting is approximately 11 GB including the BI memory usage. After 12 hours the usage is 26 to 30 GB. If I restart the CodeProject.AI it's back at 11 GB. It's a 64 GB machine.
It seems there is a memory leak consuming a lot of RAM. Has anyone else noticed this?
The analysis jobs get slower over time. I use 15 seconds pre-recording time and "LOW" in BI with the mainstream only (the substream can't be used because of its poor resolution). This worked fine with 1.6.8. With 2.0.7 the object detection delivers much better results, but it gets so slow that recording sometimes starts after the object (person, dog, cat) has nearly left the property again. After restarting the service it's fast enough for a couple of hours until it slows down again.

Restarting the CodeProject.AI service, BI not restarted:

|
|
|
|
|
I had this happen to me; I was assuming it was a problem with Blue Iris. When I downgraded Blue Iris to 5.6.8.4 it was stable. I am using all AMD and Windows 10, so I am just using the GPU with .NET. I asked Blue Iris support about this and they were not very helpful, blaming CPAI, but my Blue Iris keeps crashing on the latest version, so I downgraded.
|
|
|
|
|
It's often hard to tell what's causing an issue in two very complex systems. Each system has to make certain assumptions about the other, and those assumptions can become incorrect when a system changes.
We've made some changes to an upcoming 2.0.8 release to try and mitigate memory use, and my understanding is Blue Iris has also made some changes on their end that seem to have helped.
Try updating to the latest Blue Iris. Ken (@ Blue Iris) is very, very focused on ensuring his system is well-behaved and provides the best experience, and if there are problems he pulls out all the stops to try and fix them.
cheers
Chris Maunder
|
|
|
|
|
Hi everyone,
I am a noob in this field and would appreciate your help
I recently installed CodeProject.AI to be used with Blue Iris (I had been using DeepStack previously). After the installation finished, I tried to start it from the Blue Iris AI tab unsuccessfully, as I get the error message "Could not start (258); check path".
I tried manual start by launching "CodeProject.AI.Server.exe" but I get the following error message:
Unhandled exception. System.IO.InvalidDataException: Failed to load configuration from file 'C:\ProgramData\CodeProject\AI\modulesettings.json'.
---> System.FormatException: Could not parse the JSON file.
---> System.Text.Json.JsonReaderException: The input does not contain any JSON tokens. Expected the input to start with a valid JSON token, when isFinalBlock is true. LineNumber: 0 | BytePositionInLine: 0.
at System.Text.Json.ThrowHelper.ThrowJsonReaderException(Utf8JsonReader& json, ExceptionResource resource, Byte nextByte, ReadOnlySpan`1 bytes)
at System.Text.Json.Utf8JsonReader.Read()
at System.Text.Json.JsonDocument.Parse(ReadOnlySpan`1 utf8JsonSpan, JsonReaderOptions readerOptions, MetadataDb& database, StackRowStack& stack)
at System.Text.Json.JsonDocument.Parse(ReadOnlyMemory`1 utf8Json, JsonReaderOptions readerOptions, Byte[] extraRentedArrayPoolBytes, PooledByteBufferWriter extraPooledByteBufferWriter)
at System.Text.Json.JsonDocument.Parse(ReadOnlyMemory`1 json, JsonDocumentOptions options)
at System.Text.Json.JsonDocument.Parse(String json, JsonDocumentOptions options)
at Microsoft.Extensions.Configuration.Json.JsonConfigurationFileParser.ParseStream(Stream input)
at Microsoft.Extensions.Configuration.Json.JsonConfigurationProvider.Load(Stream stream)
--- End of inner exception stack trace ---
at Microsoft.Extensions.Configuration.Json.JsonConfigurationProvider.Load(Stream stream)
at Microsoft.Extensions.Configuration.FileConfigurationProvider.Load(Boolean reload)
--- End of inner exception stack trace ---
at Microsoft.Extensions.Configuration.FileConfigurationProvider.Load(Boolean reload)
at Microsoft.Extensions.Configuration.ConfigurationRoot..ctor(IList`1 providers)
at Microsoft.Extensions.Configuration.ConfigurationBuilder.Build()
at Microsoft.Extensions.Hosting.HostBuilder.InitializeAppConfiguration()
at Microsoft.Extensions.Hosting.HostBuilder.Build()
at CodeProject.AI.API.Server.Frontend.Program.Main(String[] args)
at CodeProject.AI.API.Server.Frontend.Program.<Main>(String[] args)
|
|
|
|
|
Can you please send me (chris@codeproject.com) a copy of your C:\ProgramData\CodeProject\AI\modulesettings.json file, then delete (or move, or rename) that file and restart CodeProject.AI Server.
cheers
Chris Maunder
|
|
|
|
|
We need, for instance, to count all subjects/objects detected by our custom models.
The main idea is that when Blue Iris triggers an alert, we would like to get the prediction results (in JSON format) and handle them before sending them on to MQTT.
Is it possible to do that at the Blue Iris level, or do we have to change the CodeProject.AI Server code?
Thanks in advance.
Kind regards,
Cesar
|
|
|
|
|
Blue Iris calls CodeProject.AI Server for its AI inferencing, and I'm not sure how you would tap into BI to get that info. If you can take the image that BI stores after a detection, send that to CodeProject.AI, get the results, and then send them on to an MQTT broker, that would help, but someone in the BI crowd might have a better solution.
cheers
Chris Maunder
|
|
|
|
|
Chris, to make this work for him you just need to add an object count to the JSON responses; then he can use the &JSON macro as the payload to send what he needs to the MQTT broker.
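For what it's worth, once the prediction JSON is in hand the counting itself is trivial. A sketch, assuming the DeepStack-style response shape the detection endpoints return (a `predictions` array whose items carry a `label` field; the function name is mine):

```python
import json
from collections import Counter

def count_detections(response_json: str) -> Counter:
    """Tally the labels in a detection response's 'predictions' array."""
    predictions = json.loads(response_json).get("predictions") or []
    return Counter(p.get("label", "unknown") for p in predictions)

# A hand-made example payload in the assumed shape:
sample = '{"success": true, "predictions": [{"label": "person"}, {"label": "dog"}, {"label": "person"}]}'
# count_detections(sample)["person"] gives 2, ["dog"] gives 1
```

Publishing the resulting counts as the MQTT payload would then be a few lines with any MQTT client library.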


|
|
|
|
|
Lightweight, easy to use, and it's free!
|
|
|
|
|
When running CodeProject.AI in Docker, I get the following error in the log every few minutes, and I cannot install any plugins:
Error checking for available modules: The request was canceled due to the configured HttpClient.Timeout of 30 seconds elapsing.
|
|
|
|
|
Checking available add-ins requires an internet connection. Can you confirm your connection is OK?
cheers
Chris Maunder
|
|
|
|
|
Hi all,
I'm struggling with performance issues with the latest version of CodeProject.AI (2.0.7); it seems to be timing out.
Does the model size setting work in BI? The system seems more stable if I use "Medium" rather than "High" model size.
Just wondering what else I can try to make sure my T400 can work with the latest release of CodeProject.
Thanks.
|
|
|
|
|
EDIT: 2GB is too low for any of my suggestions below. If it worked on medium, leave it on medium! I have mine on HIGH, and it consumes 2.4 GB. I'm sure MEDIUM is below your limit.
Sorry I should have looked at that sooner.
As for ensuring BI is set to HIGH or MEDIUM, that can be set in the BI AI screen in global settings. I don't use it this way though. I run it as a Windows service and control the model size through the CPAI dashboard.
--
Try manually setting this value in your modulesettings.json located at:
C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\modulesettings.json
"PostStartPauseSecs": 5,
Recommended fix method:
1) Apply the setting change as administrator
2) Manually stop the service (wait a couple of seconds)
3) Manually start the service
4) If this is your issue, consider unchecking "Auto start/stop with BlueIris". Just keep in mind that this might have already been fixed on either side by now.

Wait about 10 seconds or so for it to come to life, then post your results. This fix worked for me, and your report that it works fine on MEDIUM but not on HIGH leads me to believe you have the same issue I had.
I hope this setting ships as standard soon, or becomes configurable in the UI.
modified 23hrs ago.
|
|
|
|
|
An easier option is to simply open the CodeProject.AI dashboard, go down the status list and look for the ObjectDetection module, then click on the "..." and change the model size. The module (not the whole server) will be restarted with the new (persisted) setting and you should be good to go.
cheers
Chris Maunder
|
|
|
|
|
I'm using CPAI 1.6.8 with Blue Iris 5.6.8.4, and am seeing the following error in the CPAI web interface window:
1:04:22 PM: Object Detection (YOLO): (General Exception) : The size of tensor a (32) must match the size of tensor b (52) at non-singleton dimension 3
This is occurring with CPAI 1.6.8. (I can't upgrade to ver 2.0.7 because it doesn't play nicely on my PC with the most recent version of Blue Iris.)
In terms of hardware and software, CPAI is running on an i7-7700 @ 3.6GHZ, 16GB of RAM, and a GTX 1080 Ti Founders Edition graphics card.
It is detecting things but I thought I'd mention this error in case it means something to the dev team or whoever. The full CPAI startup system log (with the error at the bottom) is below in case that helps.
The only change since this log is that I've turned off all of the modules except Object Detection (YOLO). I am using a single custom model, ipcam-combined.pt from Mike Lud (all other models have been moved out of the custom-models directory).
If there's any other info I can provide, please ask and I'll post it.
API server is online
1:03:50 PM: Operating System: Microsoft Windows 10.0.19044
1:03:50 PM: Architecture: X64
1:03:50 PM: Environment:
1:03:50 PM: Platform: windows
1:03:50 PM: In Docker: False
1:03:50 PM: App DataDir: C:\ProgramData\CodeProject\AI
1:03:57 PM: Video adapter info:
1:03:57 PM: Name - Intel(R) HD Graphics 630
1:03:57 PM: Device ID - VideoController1
1:03:57 PM: Adapter RAM - 1,024 MB
1:03:57 PM: Adapter DAC Type - Internal
1:03:57 PM: Driver Version - 31.0.101.2111
1:03:57 PM: Video Processor - Intel(R) HD Graphics Family
1:03:57 PM: Video Architecture - VGA
1:03:57 PM: Video Memory Type - Unknown
1:03:57 PM: GPU 3D Usage - 1%
1:03:57 PM: GPU RAM Usage - 68.63 MB
1:03:57 PM: Name - NVIDIA GeForce GTX 1080 Ti
1:03:57 PM: Device ID - VideoController2
1:03:57 PM: Adapter RAM - 4 GB
1:03:57 PM: Adapter DAC Type - Integrated RAMDAC
1:03:57 PM: Driver Version - 31.0.15.1601
1:03:57 PM: Video Processor - NVIDIA GeForce GTX 1080 Ti
1:03:57 PM: Video Architecture - VGA
1:03:57 PM: Video Memory Type - Unknown
1:03:57 PM: GPU 3D Usage - 4%
1:03:57 PM: GPU RAM Usage - 68.57 MB
1:03:57 PM: BackendProcessRunner Start
1:04:00 PM: Attempting to start Scene Classification
1:04:00 PM:
1:04:00 PM: Module 'Scene Classification' (ID: SceneClassification)
1:04:00 PM: Active: True
1:04:00 PM: GPU: Support enabled
1:04:00 PM: Parallelism: 1
1:04:00 PM: Platforms: windows,linux,macos,macos-arm,docker
1:04:00 PM: Runtime: python37
1:04:00 PM: Queue: scene_queue
1:04:00 PM: Start pause: 1 sec
1:04:00 PM: Valid: True
1:04:00 PM: Environment Variables
1:04:00 PM: APPDIR = %MODULES_PATH%\Vision\intelligencelayer
1:04:00 PM: CPAI_MODULE_SUPPORT_GPU = True
1:04:00 PM: DATA_DIR = %DATA_DIR%
1:04:00 PM: MODE = MEDIUM
1:04:00 PM: MODELS_DIR = %MODULES_PATH%\Vision\assets
1:04:00 PM: PROFILE = desktop_gpu
1:04:00 PM: TEMP_PATH = %MODULES_PATH%\Vision\tempstore
1:04:00 PM: USE_CUDA = True
1:04:00 PM: VISION-SCENE = True
1:04:00 PM: YOLOv5_VERBOSE = false
1:04:00 PM:
1:04:00 PM: Started Scene Classification backend
1:04:01 PM: Latest version available is 2.0.7-Beta
1:04:01 PM: Attempting to start Face Processing
1:04:01 PM: Attempting to start Background Remover
1:04:01 PM:
1:04:01 PM: Module 'Background Remover' (ID: BackgroundRemover)
1:04:01 PM: Active: True
1:04:01 PM: GPU: Support disabled
1:04:01 PM: Parallelism: 1
1:04:01 PM: Platforms: windows,linux,docker,macos,macos-arm
1:04:01 PM: Runtime: python39
1:04:01 PM: Queue: removebackground_queue
1:04:01 PM: Start pause: 0 sec
1:04:01 PM: Valid: True
1:04:01 PM: Environment Variables
1:04:01 PM: U2NET_HOME = %MODULES_PATH%/BackgroundRemover/models
1:04:01 PM:
1:04:01 PM: Started Background Remover backend
1:04:01 PM: Attempting to start Object Detection (YOLO)
1:04:01 PM:
1:04:01 PM: Module 'Object Detection (YOLO)' (ID: ObjectDetectionYolo)
1:04:01 PM: Active: True
1:04:01 PM: GPU: Support enabled
1:04:01 PM: Parallelism: 0
1:04:01 PM: Platforms: all
1:04:01 PM: Runtime: python37
1:04:01 PM: Queue: detection_queue
1:04:01 PM: Start pause: 1 sec
1:04:01 PM: Valid: True
1:04:01 PM: Environment Variables
1:04:01 PM: APPDIR = %MODULES_PATH%\ObjectDetectionYolo
1:04:01 PM: CPAI_CUDA_DEVICE_NUM = 0
1:04:01 PM: CPAI_HALF_PRECISION = Enable
1:04:01 PM: CPAI_MODULE_SUPPORT_GPU = True
1:04:01 PM: CUSTOM_MODELS_DIR = %MODULES_PATH%\ObjectDetectionYolo\custom-models
1:04:01 PM: MODELS_DIR = %MODULES_PATH%\ObjectDetectionYolo\assets
1:04:01 PM: MODEL_SIZE = Medium
1:04:01 PM: USE_CUDA = True
1:04:01 PM: YOLOv5_VERBOSE = false
1:04:01 PM:
1:04:01 PM: Started Object Detection (YOLO) backend
1:04:02 PM: Attempting to start Object Detection (.NET)
1:04:02 PM: Attempting to start Portrait Filter
1:04:02 PM: Latest version available is 2.0.7-Beta
1:04:02 PM: *** A new version 2.0.7-Beta is available **
1:04:09 PM: Background Remover: Background Remover started.
1:04:11 PM: scene.py: Vision AI services setup: Retrieving environment variables...
1:04:11 PM: scene.py: GPU in use: NVIDIA GeForce GTX 1080 Ti
1:04:11 PM: Scene Classification: Scene Classification started.
1:04:16 PM: detect_adapter.py: APPDIR: C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo
1:04:16 PM: detect_adapter.py: CPAI_PORT: 32168
1:04:16 PM: detect_adapter.py: MODEL_SIZE: medium
1:04:16 PM: detect_adapter.py: MODELS_DIR: C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\assets
1:04:16 PM: detect_adapter.py: support_GPU: True
1:04:16 PM: detect_adapter.py: use_CUDA: True
1:04:16 PM: Object Detection (YOLO): Running init for Object Detection (YOLO)
1:04:16 PM: Object Detection (YOLO): Object Detection (YOLO) started.
1:04:22 PM: Object Detection (YOLO): Detecting using ipcam-combined
1:04:22 PM: Object Detection (YOLO): Detecting using ipcam-combined
1:04:22 PM: Object Detection (YOLO): Detecting using ipcam-combined
1:04:22 PM: Object Detection (YOLO): Detecting using ipcam-combined
1:04:22 PM: Object Detection (YOLO): Detecting using ipcam-combined
1:04:22 PM: Object Detection (YOLO): Queue and Processing Object Detection (YOLO) command 'custom' took 5546ms
1:04:22 PM: Object Detection (YOLO): Queue and Processing Object Detection (YOLO) command 'custom' took 5544ms
1:04:22 PM: Object Detection (YOLO): Queue and Processing Object Detection (YOLO) command 'custom' took 5549ms
1:04:22 PM: Object Detection (YOLO): Queue and Processing Object Detection (YOLO) command 'custom' took 5544ms
1:04:22 PM: Object Detection (YOLO): (General Exception) : The size of tensor a (32) must match the size of tensor b (52) at non-singleton dimension 3
1:04:22 PM: Object Detection (YOLO): Queue and Processing Object Detection (YOLO) command 'custom' took 6077ms
1:04:47 PM: Latest version available is 2.0.7-Beta
|
|
|
|
|
If you use the standard model, and not the custom models, does it then work?
cheers
Chris Maunder
|
|
|
|
|
Thanks to the documentation I was referred to a few days ago, I'm working through learning how to do some training and so far having pretty good luck on the command line. I'm about to try training a model with a higher input resolution now as I'm trying to detect small objects. I've read that the resolutions need to match for training and inference. I see the --img option for train.py and val.py, but it's not exactly clear to me how to do the same when the custom model is loaded into the CodeProject.ai server.
I see the MODE argument that says "The detection mode for vision operations. High, medium or low resolution inference." Is that controlling the input image resolution? If so, what do "high", "medium", and "low" correspond to in terms of pixels? Looking at the code on GitHub, I can find the yolov5-3.1 module (I haven't found the yolov5-6.2 module source yet) and it looks like it changes the resolution, but the large setting is still 640px. I'm not sure whether that would apply to a custom model, though.
In short, if I train a model based on yolov5l6 at 1280px, how do I integrate it with the CodeProject AI Server (running in Docker on an Ubuntu host) so it sees input images at the right resolution? Thanks!
|
|
|
|
|
Update on my crashing situation while using any release after v2.0.2.
I kept BI at v5.6.9.7 and rolled back to CPAI v2.0.2. I chose v2.0.2 because it was the most stable of the v2.0.x releases I have tested. I left every BI setting exactly the same, completely uninstalled CPAI v2.0.7, and removed the two CPAI folders in Program Files and ProgramData before installing v2.0.2.
Once done, BI and CPAI have performed perfectly, successfully processing around 300 triggers since I rolled back to CPAI v2.0.2. I'm not sure why my setup has issues with v2.0.7 and other people's don't, but I plan to stick with v2.0.2 and test new releases beyond v2.0.7 as they come out.
Thoughts?
|
|
|
|
|
Can you please post a link to the v2.0.2 Version?
I was late to the party and didn't see that one.
Thanks
|
|
|
|
|
Sorry, I don't have a link. I kept the installer from the first time I downloaded it some time ago.
|
|
|
|
|
Here's 2.0.2. It has bugs (hence the 2.0.7) but feel free to try.
cheers
Chris Maunder
modified 14hrs ago.
|
|
|
|
|
Thanks Chris, as I said, any version beyond v2.0.2 is problematic for my setup. No idea why.
|
|
|
|
|
Hi, I installed the latest version; no problems so far. Solved.
|
|
|
|
|