Posted 16 Jan 2022


CodeProject.AI Server: AI the easy way.

15 Feb 2023 · 9 min read
Version 2.0.8. Our fast, free, self-hosted Artificial Intelligence Server for any platform, any language
CodeProject.AI Server is a locally installed, self-hosted, fast, free and Open Source Artificial Intelligence server. No off-device or out-of-network data transfer, no messing around with dependencies, and it can be used from any platform, in any language. Runs as a Windows Service or a Docker container.



CodeProject.AI Server: An Artificial Intelligence server

Think of CodeProject.AI Server like a database server: you install it, it runs in the background, and it provides AI operations for any application via a simple API.

CodeProject.AI server runs as a Windows service or under Docker. Any language that can make HTTP calls can access the service, and the server does not require an external internet connection. Your data stays in your network.

What's New - 2.0

  • 2.0.8: Improved analysis process management. Stamp out those errant, memory-hogging Python processes!
  • 2.0.7: Improved logging, both file based and in the dashboard, plus module installer/uninstaller bug fixes
  • 2.0.6: Corrected issues with downloadable modules installer
  • Our new Module Registry: download and install modules at runtime via the dashboard
  • Improved performance for the Object Detection modules
  • Optional YOLO 3.1 Object Detection module for older GPUs
  • Optimised RAM use
  • Support for Raspberry Pi 4+. Code and run directly on the Raspberry Pi using VSCode
  • Revamped dashboard
  • New timing reporting for each API call
  • New, simplified setup and install scripts

Please see our CUDA Notes for information on setting up, and restrictions around, Nvidia cards and CUDA support.

If you are upgrading: when the dashboard launches, it might be necessary to force-reload it (Ctrl+R on Windows) to ensure you are viewing the latest version.

Why we built CodeProject.AI Server

  • AI programming is something every single developer should be aware of

    We wanted a fun project we could use to help teach developers and get them involved in AI. We'll be using CodeProject.AI Server as a focus for articles and exploration to make it fun and painless to learn AI programming

    We want your contributions!
  • AI coding examples have too many moving parts

    You need to install packages and languages and extensions to tools, and then updates and libraries (but version X, not version Y) and then you have to configure paths and...Oh, you want to run on Windows not Linux? In that case you need to... It's all too hard. There was much yelling at CodeProject.

    CodeProject.AI Server includes everything you need in a single installer. It also provides an installation script that will set up your dev environment and get you debugging within a couple of clicks.
  • AI solutions often require the use of cloud services

    If you trust the cloud provider, understand the billing structure, and can be sure you aren't sending sensitive data or going over the free tier, this is fine. If you have a webcam inside your house, or can't work out how much AWS will charge you, it's not so fine.

    CodeProject.AI Server can be installed locally. Your machine, your network, no data needs to leave your device.

Supported Platforms

Windows, macOS, macOS (arm64), Ubuntu, Raspberry Pi, Docker, Visual Studio 2019+, Visual Studio Code

Where to start: Quick links

  1. Download and install in Windows or Docker. Fire up the dashboard at http://localhost:32168.
  2. Read the docs (or at least browse through to get a feel)
  3. Learn how to configure the modules if necessary
  4. Download, build and run the code

Quick Introductions

1: Running and playing with the features

  1. Install and Run
    1. For a Windows Service, download the latest version, install, and launch the shortcut to the server's dashboard on your desktop or open a browser to http://localhost:32168.

      If you wish to take advantage of a CUDA-enabled Nvidia GPU, please ensure you have the CUDA drivers installed before you install CodeProject.AI Server.
       
    2. For a Docker container on 64-bit Linux, run
      docker run -p 32168:32168 --name CodeProject.AI-Server -d -v <local directory>:/etc/codeproject/ai codeproject/ai-server
      where <local directory> is some existing directory on the host machine such as
      1. C:\ProgramData\CodeProject\AI on Windows
      2. /usr/share/CodeProject/AI on Linux
    3. For Docker with GPU support (Nvidia CUDA), use
      docker run --gpus all -p 32168:32168 --name CodeProject.AI-Server -d -v <local directory>:/etc/codeproject/ai codeproject/ai-server:gpu
  2. On the dashboard, at the bottom, is a link to the demo playground. Open that and play!
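
Once installed, it can be handy to confirm the server is actually listening before wiring up an application. The sketch below uses only the Python standard library; it assumes the default port of 32168 from the quick links above, so adjust if your install differs.

```python
# Quick reachability check for a local CodeProject.AI Server install.
# Assumes the default 2.x port (32168); change it if yours differs.
import urllib.request

def server_url(host="localhost", port=32168):
    """Build the base URL the dashboard and REST API are served from."""
    return f"http://{host}:{port}"

def is_up(url, timeout=5):
    """Return True if anything answers HTTP at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("Server reachable:", is_up(server_url()))
```

Any language with an HTTP client can perform the same check; the dashboard page itself is a perfectly good target.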

2: Running and debugging the code

  1. Clone the CodeProject CodeProject.AI Server repository.
  2. Make sure you have Visual Studio Code or Visual Studio 2019+ installed.
  3. Run the setup script in /Installers/Dev
  4. Debug the front-end server application (see notes below, but it's easy)

3: Using CodeProject.AI Server in your application

Here's an example of using the API for scene detection using a simple JavaScript call:

HTML
<html>
<body>
Detect the scene in this file: <input id="image" type="file" />
<input type="button" value="Detect Scene" onclick="detectScene(image)" />

<script>
function detectScene(fileChooser) {
    var formData = new FormData();
    formData.append('image', fileChooser.files[0]);

    fetch('http://localhost:32168/v1/vision/detect/scene', {
        method: "POST",
        body: formData
    })
    .then(response => {
        if (response.ok) response.json().then(data => {
            console.log(`Scene is ${data.label}, ${data.confidence} confidence`)
        });
    });
}
</script>
</body>
</html>
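
The same call can be made from any language that speaks HTTP. Below is a sketch of the equivalent request in Python, using only the standard library; the endpoint and the label/confidence response fields mirror the JavaScript sample above, while the port (32168 by default in 2.x installs, 5000 in older releases) may vary with your setup.

```python
# A Python version of the scene-detection call, standard library only.
# Assumes the server's default 2.x port (32168); older installs used 5000.
import json
import mimetypes
import urllib.request
import uuid

def detect_scene(image_path, base="http://localhost:32168"):
    """POST an image to /v1/vision/detect/scene and return the parsed JSON."""
    boundary = uuid.uuid4().hex
    ctype = mimetypes.guess_type(image_path)[0] or "application/octet-stream"
    with open(image_path, "rb") as f:
        file_bytes = f.read()
    # Build a minimal multipart/form-data body with a single 'image' field.
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="image"; filename="{image_path}"\r\n'
        f"Content-Type: {ctype}\r\n\r\n"
    ).encode() + file_bytes + f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(
        f"{base}/v1/vision/detect/scene",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def describe(result):
    """Format the response the way the JavaScript sample logs it."""
    return f"Scene is {result['label']}, {result['confidence']} confidence"
```

With the third-party `requests` package, the multipart plumbing collapses to a single `requests.post(url, files={"image": open(path, "rb")})` call.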

You can include the CodeProject.AI Server installer (or just a link to the latest version of the installer) in your own apps and installers and, voilà, you have an AI-enabled app.

See the API documentation for a complete rundown of functionality

Notes on CUDA and Nvidia support

If you have a CUDA-enabled Nvidia card, please ensure you:

  1. Install the CUDA drivers.
  2. Install the CUDA Toolkit 11.7.
  3. Download and run our cuDNN install script to install cuDNN.

Nvidia downloads and drivers are challenging! Please ensure you download a driver that is compatible with CUDA 11.7, which generally means the CUDA driver version 516.94 or below. Version 522.x or above may not work. You may need to refer to the release notes for each driver to confirm

Since we are using CUDA 11.7 (which supports compute capability 3.7 and above), we can only support Nvidia CUDA cards that are equal to or better than a GK210 or Tesla K80 card. Please refer to this table of supported cards to determine whether your card has compute capability 3.7 or above.

Newer cards, such as the GTX 10xx, 20xx and 30xx series, and the RTX and MX series, are fully supported.

AI is a memory-intensive operation. Some cards with 2GB of RAM or less may struggle in some situations. Using the dashboard, you can either disable modules you don't need, or disable GPU support entirely for one or more modules. This will free up memory and help get you back on track.

What does it include?

CodeProject.AI Server includes

  • An HTTP REST API server. The server listens for requests from other apps, passes them to the backend analysis services for processing, and then passes the results back to the caller. It runs as a simple self-contained web service on your device.
  • Backend analysis services. The brains of the operation are the analysis services sitting behind the front-end API. All processing of data is done on the current machine. No calls to the cloud and no data leaving the device.
  • The Source Code, naturally.

What can it do?

CodeProject.AI Server can currently

  • Detect objects in images
  • Detect faces in images
  • Detect the type of scene represented in an image
  • Recognise faces that have been registered with the service
  • Perform detection on custom models
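
To give a feel for what a detection result looks like in code, here is a sketch of filtering a response by confidence. The endpoint name (/v1/vision/detection) and the per-prediction fields (label, confidence, x_min/y_min/x_max/y_max) follow the DeepStack-style API the server is compatible with; treat them as assumptions and confirm against the API documentation.

```python
# Sketch: filtering an object-detection response by confidence.
# The response shape shown here is an assumption based on the
# DeepStack-compatible API; check the server's API docs to confirm.
def confident_labels(response, threshold=0.5):
    """Return labels of predictions at or above the confidence threshold."""
    return [
        p["label"]
        for p in response.get("predictions", [])
        if p["confidence"] >= threshold
    ]

# A hand-written sample response, for illustration only.
sample = {
    "success": True,
    "predictions": [
        {"label": "person", "confidence": 0.91,
         "x_min": 10, "y_min": 5, "x_max": 120, "y_max": 300},
        {"label": "dog", "confidence": 0.42,
         "x_min": 130, "y_min": 200, "x_max": 220, "y_max": 310},
    ],
}
print(confident_labels(sample))  # only "person" clears the 0.5 threshold
```

Thresholding like this is the usual first step before drawing bounding boxes or triggering alerts, since low-confidence predictions are often noise.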

The development environment also provides modules that can 

  • Remove a background from an image
  • Blur a background from an image
  • Enhance the resolution of an image
  • Pull out the most important sentences in text to generate a text summary
  • Perform sentiment analysis on text

We will be constantly expanding the feature list.

Our Goals

  • To promote AI development and inspire the AI developer community to dive in and have a go. Artificial Intelligence is a huge paradigm change in the industry and all developers owe it to themselves to experiment in and familiarize themselves with the technology. CodeProject.AI Server was built as a learning tool, a demonstration, and a library and service that can be used out of the box.
  • To make AI development easy. It's not that AI development is that hard. It's that there are so, so many options. Our architecture is designed to allow any AI implementation to find a home in our system, and for our service to be callable from any language.
  • To focus on core use-cases. We're deliberately not a solution for everyone. Instead we're a solution for common day-to-day needs. We will be adding dozens of modules and scores of AI capabilities to our system, but our goal is always clarity and simplicity over a 100% solution.
  • To tap the expertise of the Developer Community. We're not experts but we know a developer or two out there who are. The true power of CodeProject.AI Server comes from the contributions and improvements from our AI community.

How-to Guides

License

CodeProject.AI Server is licensed under the Server-Side Public License

Release Notes and Roadmap

Coming up

  • More modules.
  • Chaining modules.

Previous versions

Release 1.6.x Beta

  • Optimised RAM use
  • Ability to enable / disable modules and GPU support via the dashboard
  • REST settings API for updating settings on the fly
  • Apple M1/M2 GPU support
  • Workarounds for some Nvidia cards
  • Async processes and logging for a performance boost
  • Breaking: the CustomObjectDetection is now part of ObjectDetectionYolo
     
  • Performance fix for CPU + video demo.
  • Patch 1.6.7: potential memory leak addressed
  • Patch 1.6.8: image handling improvements on Linux, multi-thread ONNX on .NET

Release 1.5.6.2 Beta

  • Docker nVidia GPU support
  • Further performance improvements
  • cuDNN install script to help with nVidia driver and toolkit installation
  • Bug fixes

Release 1.5.6 Beta

  • nVidia GPU support for Windows
  • Perf improvements to Python modules
  • Work on the Python SDK to make creating modules easier
  • Dev installers now drastically simplified for those creating new modules
  • Added SuperResolution as a demo module

Release 1.5 Beta

  • Support for custom models

Release 1.3.x Beta

  • Refactored and improved setup and module addition system
  • Introduction of modulesettings.json files
  • New analysis modules

Release 1.2.x Beta

  • Support for Apple Silicon for development mode
  • Native Windows installer
  • Runs as Windows Service
  • Run in a Docker Container
  • Installs and Builds using VSCode in Linux (Ubuntu), macOS and Windows, as well as Visual Studio on Windows
  • General optimisation of the download payload sizes

Previous

  • We started with a proof of concept on Windows 10+ only. Installs were via a simple BAT script, and the code was full of exciting sharp edges. A simple dashboard and playground were included. Analysis was Python code only
  • Version checks are enabled to alert users to new versions
  • A new .NET implementation of scene detection using the YOLO model, to ensure the codebase is platform and tech-stack agnostic
  • Blue Iris integration completed

Written By
Software Developer CodeProject Solutions
Canada
The CodeProject team have been writing software, building communities, and hosting CodeProject.com for over 20 years. We are passionate about helping developers share knowledge, learn new skills, and connect. We believe everyone can code, and every contribution, no matter how small, helps.

The CodeProject team is currently focussing on CodeProject.AI Server, a stand-alone, self-hosted server that provides AI inferencing services on any platform for any language. Learn AI by jumping in the deep end with us: codeproject.com/AI.
