

General Issues

Can't find custom models

When CodeProject.AI Server is installed it comes with two object detection modules. Both modules work the same way, the difference being that one is a Python implementation that supports CUDA GPUs, and the other is a .NET implementation that supports embedded Intel GPUs. Each comes with the same set of custom models. For custom object detection you need to

  1. Ensure Object Detection is enabled (it is by default)
  2. Use the provided custom models, or: (a) add your own models to the standard custom model folder (C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models or C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionNet\custom-models) if using a Windows install, or (b) specify a directory that will contain the models (handy for Docker)

To specify a different folder to use for custom models, you can

  1. Set the --Modules:ObjectDetectionYolo:EnvironmentVariables:CUSTOM_MODELS_DIR parameter when launching, or
  2. Set the Modules:ObjectDetectionYolo:EnvironmentVariables:CUSTOM_MODELS_DIR environment variable, or
  3. Set the CUSTOM_MODELS_DIR value in the modulesettings.json file in the ObjectDetectionYolo folder (see the sketch below the Docker example), or
  4. Set the global override (to be deprecated!) variable MODELSTORE-DETECTION to point to your custom object folder, or
  5. (For Docker) Map the folder containing your custom models (eg. C:\MyCustomModels) to the Object Detection's custom assets folder (/app/AnalysisLayer/ObjectDetectionYolo/custom-models). An example would be:

    Text Only
    docker run -p 32168:32168 --name CodeProject.AI-Server -d ^
      --mount type=bind,source=C:\ProgramData\CodeProject\AI\docker\data,target=/etc/codeproject/ai ^
      --mount type=bind,source=C:\MyCustomModels,target=/app/AnalysisLayer/ObjectDetectionYolo/custom-models,readonly ^
      codeproject/ai-server
    

    This mounts the C:\MyCustomModels directory on my local system and maps it to the /app/AnalysisLayer/ObjectDetectionYolo/custom-models folder in the Docker container. Now, when CodeProject.AI Server looks for the list of custom models, it will look in C:\MyCustomModels rather than in /app/AnalysisLayer/ObjectDetectionYolo/custom-models.
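
    For options 1 to 3 the setting is the same value expressed three ways. As a sketch of option 3, mirroring the Modules:ObjectDetectionYolo:EnvironmentVariables:CUSTOM_MODELS_DIR path used in options 1 and 2 (the exact nesting in your version's modulesettings.json may differ):

    Text Only
    {
      "Modules": {
        "ObjectDetectionYolo": {
          "EnvironmentVariables": {
            "CUSTOM_MODELS_DIR": "C:\\MyCustomModels"
          }
        }
      }
    }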

    See the API Reference - CodeProject.AI Server

Port already in use

If you see:

Text Only
Unable to start Kestrel.
System.IO.IOException: Failed to bind to address http://127.0.0.1:5000: address 
   already in use.

then either you already have CodeProject.AI Server running, or another application is using port 5000.

Our first suggestion is to stop using port 5000 altogether. It's a reserved port, even if not every operating system actively enforces the reservation. We prefer port 32168: it's easy to remember and well out of harm's way of other commonly used ports.

You can change the external port that CodeProject.AI Server uses by editing the appsettings.json file and changing the value of the CPAI_PORT variable. In the demo app there is a Port setting you will need to edit to match the new port.
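
For example (the exact location of CPAI_PORT within your version's appsettings.json may vary; only the value needs to change):

Text Only
"CPAI_PORT": 32168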

Failing that, shut down any application using port 5000 (including any installed version of CodeProject.AI Server if you're trying to run in Development mode in Visual Studio).
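
To find which process is holding port 5000 on Windows, netstat will show the owning process ID in the final column:

Terminal
netstat -ano | findstr :5000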

GPU is not being used

Please ensure you have the NVidia CUDA drivers installed:

  1. Install the CUDA 11.7 Drivers
  2. Install the CUDA Toolkit 11.7.
  3. Download and run our cuDNN install script.
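
Once installed, a quick way to confirm the drivers are visible is to run nvidia-smi, which reports the driver version, CUDA version and current GPU utilization:

Terminal
nvidia-smi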

Inference randomly fails

Loading AI models can use a lot of memory, so if you have a modest amount of RAM on your GPU, or on your system as a whole, you have a few options:

  1. Disable the modules you don't need. The dashboard (http://localhost:32168) allows you to disable modules individually
  2. If you are using a GPU, disable GPU for those modules that don't necessarily need the power of the GPU.
  3. If you are using a module that offers smaller models (eg Object Detector (YOLO)) then try selecting a smaller model size via the dashboard

Some modules, especially Face comparison, may fail if there is not enough memory. We're working on making the system leaner and meaner.

Module fails to start

If you have a modest hardware setup then each module may need a little more time to start up before the next module can be loaded.

In the modulesettings.json file in each module's folder is the setting PostStartPauseSecs. This specifies a pause between loading the given module and loading the next. Set it to 3 to 5 seconds if your modules are failing to load properly.
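
As a sketch, using the Object Detection (YOLO) module as an example (the surrounding structure of your modulesettings.json may differ between versions; the relevant setting is PostStartPauseSecs):

Text Only
{
  "Modules": {
    "ObjectDetectionYolo": {
      "PostStartPauseSecs": 3
    }
  }
}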

Windows issues

OSError: [WinError 1455] The paging file is too small for this operation to complete

In a nutshell, your paging file is too small. If you've changed (reduced) your paging file settings to save disk space, you may want to increase the size to provide more headroom for the system. Failing that, it could simply be that you have too little RAM installed. We'd recommend at least 8GB for the current versions.

Server fails to start with 'System.Management.ManagementException: Invalid class' error

The error you will see is

Text Only
Description: The process was terminated due to an unhandled exception.
Exception Info: System.TypeInitializationException: The type initializer for
'CodeProject.AI.SDK.Common.SystemInfo' threw an exception.
---> System.Management.ManagementException: Invalid class

This is due to the WMI repository being corrupt. To repair the corrupted repository, do the following:

  • Open an Administrator Command Prompt or PowerShell window.
  • Run winmgmt /verifyrepository. If the repository has an issue, it will respond "repository is not consistent".
  • If the repository is inconsistent it needs to be repaired: run winmgmt /salvagerepository.
  • Run winmgmt /verifyrepository again.
  • If it still reports "repository is not consistent" then you may have to take the more drastic step of resetting the repository (winmgmt /resetrepository). Understand the risks involved before doing this: a reset can affect installed applications that rely on WMI.
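
For reference, the full sequence from an elevated prompt is:

Terminal
winmgmt /verifyrepository
winmgmt /salvagerepository
winmgmt /verifyrepository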


Development Environment

'command not found' during development setup on Linux/macOS

When running the setup script (or any script, for that matter) on Linux or macOS, you see an error of the form:

Text Only
: No such file or directory 1: #!/bin/bash
: No such file or directory 10: ./utils.sh
setup.sh: line 11: $'\r': command not found

This indicates that the .sh script file has been saved with Windows-style CRLF line endings instead of Linux/Unix-style LF line endings.

Open the script in Visual Studio or Visual Studio Code and at the bottom right of the editor window you will see a line-ending hint:

CRLF line ending hint

Click that and choose 'LF' to correct the line endings and re-run the script.
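
Alternatively you can strip the carriage returns from the command line. A minimal example (the file names are illustrative; on macOS use sed -i '' rather than sed -i):

Bash
sed -i 's/\r$//' setup.sh utils.sh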

Models not found

When building you see:

Text Only
error MSB3030: Could not copy the file "<path>\ObjectDetectionNet\assets\yolov5m.onnx"
  because it was not found.

Ensure you've run the development setup scripts before attempting to build.
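
A typical invocation, assuming the setup scripts live in the /src folder of your checkout (adjust the path if your layout differs):

Bash
cd src
bash setup.sh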

Server startup failed

Text Only
System.ComponentModel.Win32Exception: The system cannot find the file specified.
   at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo)

Ensure you've run the development setup scripts before attempting to start the server.

Using Docker

docker: Error response from daemon: invalid mode: /etc/codeproject/ai.

If you're on Windows, ensure you're running Docker from Windows PowerShell or Terminal, and not from a WSL terminal. This error usually means a Windows-style path wasn't parsed the way you expected: the colon after the drive letter (C:\...) is read as a volume-spec separator, pushing the real target path into the mode field.

Raspberry Pi

You must install .NET to run this application

You may have already installed the dev environment on your Pi, but the required version of the .NET runtime or SDK may have been updated since then, in which case you will need to manually update .NET.

Go to /src/Scripts in a terminal on your Pi and run

Bash
sudo bash dotnet-install-rpi.sh

Inference may randomly crash if running Docker in Windows under WSL2.

When Docker is installed on Windows it will, by default, use WSL2 if available. WSL2 will only use a maximum of 50% of available memory, which isn't always enough. To solve this you can create a .wslconfig file to change the limit:

.wslconfig
# Place this file into C:\Users\<username>

# Settings apply across all Linux distros running on WSL 2
[wsl2]

# Limits VM memory to use no more than 12 GB, this can be set as whole numbers 
# using GB or MB. The default is 50% of available RAM and 8GB isn't (currently) 
# enough for CodeProject AI Server GPU
memory=12GB 

# Sets amount of swap storage space to 8GB, default is 25% of available RAM
swap=8GB
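
After saving .wslconfig, restart WSL so the new limits take effect (note this stops any running distros, including Docker's backend, so restart Docker Desktop afterwards):

Terminal
wsl --shutdown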

You have an NVidia card but GPU/CUDA utilization isn't being reported in the CodeProject.AI Server dashboard when running under Docker

Please ensure you start the Docker image with the --gpus all parameter:

Terminal
docker run -d -p 32168:32168 --gpus all codeproject/ai-server:gpu
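
To confirm the GPU is actually visible inside the container, you can run nvidia-smi within it (replace <container> with your container's name or ID):

Terminal
docker exec <container> nvidia-smi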