After several false starts, some handy updates from the PaddleOCR team, and a custom-built PaddlePaddle wheel from the guys at QEngineering, we have PaddlePaddle and PaddleOCR running natively on the Raspberry Pi.

It's not fast, granted, but it works. The task now is simply to optimise. The entire exercise was more about providing greater flexibility for installing modules on systems such as the Raspberry and Orange Pi or the Jetson boards. By introducing system-specific module settings and requirements files, we can provide that extra level of fine-tuning for systems that need a little more... persuasion.
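The idea looks something like this (an illustrative layout only; the actual file names may differ): each module ships a default requirements file plus platform-specific variants, and the installer picks whichever best matches the hardware it finds.
modules/MyModule/requirements.txt                # default
modules/MyModule/requirements.linux.arm64.txt    # Raspberry Pi / Orange Pi
modules/MyModule/requirements.jetson.txt         # Jetson boards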
This update will be released, with full code as always, in the next few hours.
cheers
Chris Maunder
|
With much expectation we released version 2.2.0 of CodeProject.AI Server. Everything looked great, we'd tested locally, tested Docker, tested our new Ubuntu and Debian installer, our new macOS Intel and Apple Silicon installers, and pounded the Windows version in the debugger till our fingers were raw.
Nothing could go wrong.
So, once we finally tracked down the bug that caused such a kerfuffle, we couldn't work out why the bug was there in the first place.
In short, we changed the way a script worked. Instead of install scripts specifying the version of Python they use in three different places, we now set the version once as a global, and the methods the install script calls just query the global variable. No, it's not technically pretty, but we want a simple coding experience for module install scripts, not prizes for best practices.
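As a sketch (the variable and function names here are illustrative, not the actual script internals), the pattern is simply:
# install.sh - set the version once, globally
pythonVersion="3.9"

# utils.sh - helpers read the global rather than taking it as a parameter
setupPython () {
    if [ -z "${pythonVersion}" ]; then
        echo "pythonVersion not set" >&2
        return 1
    fi
    echo "Setting up Python ${pythonVersion}..."
}
Global state, yes, but it keeps each module's install script down to a handful of lines.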
All good, all very easy, except the installed version was reverting to the previous behaviour. Instead of the utilities grabbing the global variable, they were looking for the passed-in values and failing when they didn't see anything passed in. We'd updated the utility scripts but they were behaving like the previous version.
How is it that we install one version of a script, but what we see on the other end is the script from a different time?
The answer was pretty simple: we forgot to add that script to our new installer. Which then raised the question: How does a script that was removed, and no longer on the system, manage to leap from whatever netherworld it was banished to and appear in this current reality?
That answer was simple too: WiX.
The entire reason we built a new installer was because we ran out of patience with WiX. Doing anything crazy and wild like specifying a new installation location was beyond painful. In moving to Inno Setup we also discovered that WiX was adding the wrong GUID in the wrong place (at least the wrong place as far as we and the docs could tell), and so uninstalling was... a problem.
So we have a new installer. An upgrade is often done by calling the previous installer's uninstall methods, then calling the new installer's install methods. But what if the directions for finding the previous uninstaller were placed incorrectly in the registry? Then you have an installer that can't uninstall, but instead installs on top of the previous installation.
So the combination of a missing file plus an install on top of a previous install means we end up with the previous version's file.
Now if only we can get a reliable .NET SDK install...
cheers
Chris Maunder
|
For those who have spare Macs and Mac minis lying around, we have a macOS installation package almost ready.
Server version: 2.1.12-Beta
Operating System: macOS (Darwin 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:53:44 PDT 2023; root:xnu-8796.121.2~5/RELEASE_ARM64_T8103)
CPUs: Apple M1 (Apple)
1 CPU x 8 cores. 8 logical processors (Arm64)
GPU: Apple Silicon (Apple)
System RAM: 16 GiB
Target: macOS-Arm64
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
.NET framework: .NET 7.0.0
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Video adapter info:
Apple M1:
Driver Version
Video Processor Apple M1
Global Environment variables:
CPAI_APPROOTPATH = /Library/CodeProject.AI Server/2.1.12
CPAI_PORT = 32168
A huge caveat here is that this isn't a real ".app" app. It's a package installer that places the server binaries and resources into /Library/CodeProject.AI Server and provides a .command file for launching. Installation happens through the macOS installer, but uninstalling requires a single bash command. Install takes just a few seconds.
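For the curious, since everything lands under one directory, the uninstall command is presumably of this shape (illustrative only; the real command ships with the package):
sudo rm -rf "/Library/CodeProject.AI Server"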
We're just cleaning up some modules that aren't as happy running natively on macOS as they are in the dev environment or in Docker. Just issues with paths and build/publish quirks, but nothing dramatic.
Separate x64 and arm64 .pkg files will be provided as part of our next minor update (2.2).
Update: Turns out Ubuntu installers are not that hard either:
Server version: 2.1.12-Beta
Operating System: Linux (Linux 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023)
CPUs: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz (Intel)
1 CPU x 8 cores. 16 logical processors (x64)
System RAM: 8 GiB
Target: Linux
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
.NET framework: .NET 7.0.9
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Video adapter info:
Global Environment variables:
CPAI_APPROOTPATH = /usr/bin/codeproject.ai-server-2.1.12
CPAI_PORT = 32168
cheers
Chris Maunder
modified 1-Sep-23 16:17pm.
|
The Coral TPU has always held the promise of cheap, fast AI inferencing, but it's always felt like The Project That Was Left Behind. The documentation references a version of TensorFlow Lite compiled specifically for the Coral, and the PyCoral libraries themselves, both of which stopped being supported on macOS at version 11 for Intel chips and version 12 for Apple Silicon. Python 3.9 is the latest interpreter supported on both these platforms.
Serendipitously, my iMac, which I use purely as a Boot Camp machine, still had macOS 11 installed, so after re-running the setup scripts and tweaking things a little, Coral support on the Macs is a thing. Provided you haven't upgraded your OS, that is.
On a Mac, this is more a theoretical and development curiosity for those working with CodeProject.AI Server, but 11ms inference is still a win.

Update
We've been testing CodeProject.AI server on Linux by using WSL in Windows. Generally this works very well, except in the case of USB devices. Previous efforts to get Coral working under WSL failed, but we've made the switch to testing Linux using Ubuntu 22.04 on bare metal, and we're pretty stoked to see the Coral working perfectly out of the gate.
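For reference, getting the Edge TPU runtime onto a bare-metal Ubuntu box follows Coral's standard recipe (reproduced from their docs for convenience; the -std package can be swapped for -max):
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install libedgetpu1-std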

cheers
Chris Maunder
modified 11-Aug-23 12:02pm.
|
It's working! It's working!!!!!! And much better response speeds than running on Windows! Averaging 148ms on Medium!!!
Thank you Chris. This is huge!
(Is there any way to enable the desk-melting version of libedgetpu1? Apparently it's not really possible to destroy the thing, it just gets hot. Someone ran it for months at maximum and nothing bad happened. Mine's not going to be analyzing at maximum capacity for months at a time, and I'm not touching it, so it should be fine.)
|
That's a great data point (and maybe the reason most AI work is done on Linux)
To get it to install the oven-mitts version you could try
sudo apt-get remove libedgetpu1-std -y
sudo apt-get install libedgetpu1-max -y
cheers
Chris Maunder
|
It worked! Well, I had to use this for the second one, since the install requires a "yes" and the CPAI Docker image sets DEBIAN_FRONTEND to noninteractive, even in an interactive shell inside the container:
DEBIAN_FRONTEND=dialog sudo apt-get install libedgetpu1-max
I made a post on how I got this to work in Docker too. Working great! The Medium speeds went from 148ms to 112ms after doing this.
CodeProject.AI Server: AI the easy way.[^]
Is it possible to add an environmental variable to the Docker image to use the oven-mitt mode for Coral? I can always write a Dockerfile myself on top of yours and use that, but could be cool as an option.
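In the meantime, a derived image is only a couple of lines; something like this should do it (the base image tag is illustrative, so check Docker Hub for the right one):
docker build -t cpai-coral-max - <<'EOF'
FROM codeproject/ai-server:latest
# Swap the standard-clock Edge TPU runtime for the max-clock one
RUN apt-get update && apt-get remove -y libedgetpu1-std \
    && yes | apt-get install -y libedgetpu1-max
EOF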
|
After running perfectly for a day and a half, the Coral suddenly started giving the "The interpreter is in use" error. Blue Iris didn't indicate anything was wrong, just "Nothing was found", which is not good. After restarting the Docker container, the Coral went back to normal. (This error seems to be common, see here)
If there's no good way to fix this problem, maybe a health check can be implemented for the Docker image to restart the process or the entire container whenever the "The interpreter is in use" error is seen.
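As a sketch, Docker can run that check itself, assuming curl exists in the image and that the dashboard answering on port 32168 is a good enough proxy for health (a deeper probe would exercise an actual detection request):
docker run -d -p 32168:32168 --name CodeProject.AI-Server \
    --health-cmd "curl -fs http://localhost:32168/ || exit 1" \
    --health-interval 30s --health-retries 3 \
    codeproject/ai-server
Note that Docker only marks the container unhealthy; actually restarting it needs a watchdog, such as an autoheal container or a cron job polling docker inspect.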
|
CodeProject.AI Server now allows you to train your own models!
Adding Object Detection to your apps is easy with CodeProject.AI Server, but you’ve been limited to the models you could find by hunting around and testing by trial and error. We’ve now added the ability to train your own YOLOv5 Object Detection models with just a couple of clicks.
All you do is choose the types of objects you wish to detect (from one of the 600 classes in Google's Open Images Dataset V7). Select how many images you want to use for training, select the model size, then click to build the dataset automatically, and then another click to train your model. Reuse the same dataset on subsequent model builds in case you want to create larger or smaller models, or fine tune the training parameters. It doesn’t get easier than that.

Open Images is a collection of thousands of images Google has gathered, with each identifiable object in an image tagged with a bounding box and a label; for example, a mechanical fan.

Ideally this is for those who have a decent GPU, because of the processing power required. You can train these models with a decent CPU setup; it will just take much, much longer. And while you can train models on smaller sets of images, say 100, for greater accuracy in your detections you would want 1,000 images or more.
You can get a comprehensive list of all the Open Images object classes here. For those who have searched in vain for a custom object detection model specifically for knives, trucks, swimming pools, falcons, flowerpots, and ostriches in a single model, your dream has come true.
CodeProject.AI Server's Object Detection Training has a number of applications beyond the opportunity to train a more custom model. It could also be used to train models that ignore objects which are always present in your camera feed and interfering with your detections.
So, whether you live in Derry and are interested in knowing if there’s a suspicious clown wandering around, or you really want to exclude that plant from your backyard detections, or you just really want to know how many lynxes are on your driveway, give CodeProject.AI Server's new Object Detection Training a try.
Thanks,
Sean Ewington
CodeProject
modified 4-Aug-23 11:49am.
|
CodeProject.AI Server 2.1.8 is now available!
With 2.1.8 we're hoping to get rid of hangs, have better resource usage, and some other general bug fixes.
If you were experiencing a hang on the CodeProject.AI Dashboard, or noticing some additional CPU, memory, or GPU usage, 2.1.8 should resolve those issues.
In addition, if you are a Blue Iris user and were experiencing error 500, please try the latest Blue Iris release, version 5.7.5.6, which should resolve the problem.
As always, please check our README if you are having issues, and if none of those are your issue, please leave a message on our CodeProject.AI Server forums.
Thanks,
Sean Ewington
CodeProject
|

CodeProject.AI Server is now available on the most popular version of Home Assistant, Home Assistant Operating System!
Technically, CodeProject.AI Server has been available on Home Assistant since March as a custom repository, but that implementation, and the article demonstrating the installation, were for Home Assistant Container.
Home Assistant OS is by far the most popular installation choice of Home Assistant users, according to Home Assistant Analytics, with 68.2% of users opting to use Home Assistant OS.
Now we have an article that walks through, step by step, installing Home Assistant OS on a Raspberry Pi 4, setting up CodeProject.AI Server as a custom repository on Home Assistant OS, and demonstrating a practical use case where CodeProject.AI detects a person and HAOS sends a snapshot of the detected person and a notification to Home Assistant Companion, the Home Assistant mobile app.
If you're looking for an artificial intelligence solution for Home Assistant, CodeProject.AI Server is constantly being updated, and we'll be releasing more articles demonstrating how to set up various detection scenarios and automations in Home Assistant.
Thanks,
Sean Ewington
CodeProject
modified 1-May-23 15:10pm.
|
CodeProject.AI Server 2.1 is released[^]! The big thing in 2.1 is module control. When you first launch CodeProject.AI Server 2.1, Object Detection (Python and .NET) as well as Face Processing are automatically installed (rather than being installed by the installer), and these modules can now be uninstalled. Every other module can be installed, re-installed, uninstalled, or updated from the Modules tab.

We've also added a module for Object Detection on a Raspberry Pi using Coral and a module to Cartoonise images (for fun).
There are a heap of other improvements: better logging, half-precision support checks on CUDA cards, and bug fixes. Modules are also now versioned, so our module registry will only show modules that fit your current server version.
Thanks to everyone for all the support and usage of CodeProject.AI Server so far. We're dedicated to making further improvements so please feel free to give your feedback on our CodeProject.AI Discussions[^] forum. And please give 2.1 a try[^]!
Thanks,
Sean Ewington
CodeProject
modified 21-Apr-23 13:27pm.
|
Inspired by this, I decided to see how hard it would be to add a cartooniser to CodeProject.AI Server.
 
Pretty easy!
Module is due out today.
cheers
Chris Maunder
|

Home Assistant is an IoT home automation tool with a lot of possibilities because it can integrate with various devices and services. With Home Assistant and the right devices you can have things like: a dashboard with all your cameras visible (from your phone, too), an auto-lock for the front door after it's been closed for three minutes, an alarm system that arms when all the registered users are away, and, the ultimate (for some reason), a garage door that automatically closes if it's been open for a period of time.
There are a lot of potential applications of CodeProject.AI Server and Home Assistant. I'm currently finishing an article that, using CodeProject.AI and Home Assistant, detects when a person is in a camera frame, starts recording, takes a snapshot, and sends it to your phone.
For now though, check out the CodeProject.AI-HomeAssist repository[^]. Read up on how to use it, what it can do. Or, you can go straight to the guide that shows, step by step, How to Setup CodeProject.AI Server with Home Assistant Container[^].
Thanks,
Sean Ewington
CodeProject
|

When shutting down CodeProject.AI Server we kept seeing processes remain alive. One of the issues was that if you terminate a Python process from a virtual environment you need to be aware that there are actually two processes running. Here's our solution: Terminating Python Processes Started from a Virtual Environment.
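One shell-level way to attack the same problem (not necessarily the article's approach, and the paths here are hypothetical) is to signal the whole process group, so the child interpreter goes down with its parent:
# Find the module's interpreter, then kill its entire process group
pid=$(pgrep -f "venv/bin/python .*detect.py" | head -n 1)
[ -n "$pid" ] && kill -TERM -- "-$(ps -o pgid= -p "$pid" | tr -d ' ')"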
Thanks,
Sean Ewington
CodeProject
|
One of the great perks of CodeProject.AI Server 2.0 is that it now allows analysis modules to be downloaded and installed at runtime.
In previous versions we would include a number of modules which, if we're honest, made the install quite large. Now that we're expanding the modules we offer (Automated License Plate Reader, Optical Character Recognition, Coral support), it made sense to make this more customizable. As of 2.0, the only modules CodeProject.AI Server installs by default are Object Detection and Face Processing.
In fact, there are actually two Object Detection modules. One is a .NET implementation that uses DirectML to take advantage of a large number of GPUs, including embedded GPUs on Intel chips. The other is a classic Python implementation that targets CPUs or NVIDIA GPUs. Each module performs differently on different systems, so the inclusion of both allows you to easily test which one suits you best.

To install a module you want, simply go to the Install Modules tab, and click the Install button. That's it! The module is automatically downloaded and installed for you.
If you no longer want a module installed, simply click Uninstall, and the module will be removed.

As always, you can still toggle which modules you want active from the Status tab. Simply click on the ... dropdown in the desired module and hit Start or Stop.
In addition, if a module needs an update, simply go to the Install Modules tab and click the Update button.
Our goal is to make AI as accessible and easy to use as possible. If you've struggled to play around with AI in the past, give version 2.0.7 a try.
Thanks,
Sean Ewington
CodeProject
|
CodeProject.AI Server 2.0.5 is officially released! This version has a plethora of additions and, we hope, will be a stronger foundation for AI functionality moving forward.
The biggest part of CodeProject.AI Server 2.0.5 is the runtime ability to install / uninstall modules.

We have a new Module Repository, which allows us to add, update, and remove modules separately from the main server installation. This keeps the main installation more compact, and it also provides more options for module use. For example, version 2.0.5 now includes a module (Object Detection YOLOv5 3.1), based on an older version of PyTorch, which is ideal for those with older, CUDA 10.2 and below, GPUs.
In addition, CodeProject.AI Server now includes an Automatic License Plate Recognition (ALPR) module for detecting and reading license plates and an Optical Character Recognition module.
We have also improved our .NET-based Object Detection module, with increases in detection speeds of up to 20%. This module uses ML.NET, which provides support for a wide range of non-CUDA GPU cards, including embedded Intel GPUs.
If you're interested in learning, or getting involved in, AI, download CodeProject.AI Server version 2.0.5 today.
Thanks,
Sean Ewington
CodeProject
|

Tweaking the install / setup scripts on a Raspberry Pi by running VS Code and our installers on the Raspberry Pi natively. Same experience as a Mac or Windows box. Same tools, same scripts, same models. All in a teeny tiny little not-so-toy computer.
cheers
Chris Maunder
|
The ‘server’ part of CodeProject.AI Server is now .NET 7, with all the wonderful teething problems and workarounds that make a developer’s life so much fun.
While our server is only a very small part of the overall system, it’s crucial that it’s fast, easy to debug, and runs everywhere we need it to. With .NET 7 there are a bunch of performance improvements and some new features that will benefit us in the future. .NET 6 will be a distant memory for everyone soon enough so we’ve gone through the upgrade pain now to save us later on.
The upgrade raised several errors and warnings about null references or using nulls inappropriately, due in part to additional features in C# and .NET 7. A few easy fixes later and ... the application didn't run properly.
It turns out there's a bug in .NET 7.0.0 where loading a dictionary from a configuration file fails. More specifically, option binding for a ConcurrentDictionary works fine in .NET 6, but fails in .NET 7. I have a workaround which I've posted on CodeProject here: Workaround for ServiceCollection.Configure<T> failing where T is ConcurrentDictionary<string, TMyConfig> in .NET 7. This issue will be resolved in .NET 7.0.1.
"Time flies like an arrow. Fruit flies like a banana."
|

Some Blue Iris users who are using CodeProject.AI Server want to know how to detect objects at night. Others are able to detect objects at night, but the detection is unreliable. In one case, a user from the ipcamtalk forum experienced issues detecting cars at night: if the car drives fast enough, CodeProject.AI Server can detect it; if the car drives too slowly, CodeProject.AI Server only scans the headlights.
The user tried changing their Artificial Intelligence settings to check live images at various intervals: 100 ms, 200 ms, 500 ms, and 750 ms, but to no avail. However, the issue is not the interval at which live images are checked, but the number of real-time images to analyze.

Under the setting +real-time images, simply increase this number until detection is no longer a problem, and in the To cancel box put an item like giraffe or banana. Blue Iris normally stops sending CodeProject.AI Server images as soon as it finds something in the To confirm box; a To cancel item that will never match forces Blue Iris to send CodeProject.AI Server every image.
Thanks,
Sean Ewington
CodeProject
|
We've hit 50,000 downloads of CodeProject.AI Server!
Thank you to everyone who downloaded CodeProject.AI Server, and an extra special thank you to those that tried it, are using it, have posted bugs, suggestions, encouragements, ideas and wisdom. Most of all, thanks for giving it a go and supporting us in this crazy fun journey.
CodeProject.AI Server has come a long way since its release in January of this year. We've added support for custom models, GPU support including integrated GPUs and the Apple Silicon M chips, and we're adding more Docker containers, including our latest, which supports Arm64 on Apple and Raspberry Pi devices. We're also proud to be the AI service of choice for Blue Iris. And with the improved integration with Blue Iris, we're now running live for tens of thousands of Blue Iris users.
And this is just the beginning. We're committed to making our fast, free, self-hosted, AI server as easy to use as possible, everywhere you need to use it.
If you're interested in learning, or getting involved in AI, download CodeProject.AI Server today.
Thanks,
Sean Ewington
CodeProject
|
A number of people are using CodeProject.AI Server with Blue Iris, a video security software package. Our server supplies the AI smarts to detect things like vehicles, people, and custom objects from the video feeds.
Last month we got an error from a Blue Iris user who was using GPU processing. The error Blue Iris gave them was "AI: not responding."
In their error report, the user indicated they were using an NVIDIA GeForce GTX 1070 Ti which, we discovered, is not able to use half precision in its image processing for AI.
When doing a prediction (or inference) on an image to determine what's in it, the prediction process uses a model. Within the model there is a set of weights, which are basically coefficients assigned to various points in the model. When you download a model, you are downloading (among other things) a series of weights that have been trained for a particular process or AI task, such as detecting raccoons. The weights are applied to the input, like an image, to help determine the values of a set of outputs, one of which could be whether a raccoon was detected in the image.
This training process uses a set of inputs with known outputs. It runs the detection process and measures the error between the output and the expected, known output. The training process tries to minimize the error by propagating it backwards and adjusting all the weights, then doing it over and over again until it converges to an acceptable error rate. This process takes a lot of time and processing power.
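Schematically, each training step nudges every weight w against the gradient of the error E, scaled by a learning rate η: w ← w − η · ∂E/∂w. Do that across millions of weights, thousands of times over, and the cost becomes obvious.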
The detection process uses the trained model's weights to take an input, in this case an image, and calculate the probability that an object it was trained to detect is in the image, and where it is located.
Part of this processing time has to do with your graphics card. Some graphics cards use half precision floating-point format, or FP16. If the card does not support half precision, or the processing is being done on the CPU, 32-bit floating point (FP32) is used. These are the number formats that are used in image processing.
For image processing, FP16 is preferred. Newer NVIDIA GPUs have specialized cores called Tensor Cores which can process FP16.
But those with graphics cards that do not have Tensor Cores who tried to use Blue Iris with CodeProject.AI Server would get an error message that read "AI: not responding."
In order to address this, we determined which models of NVIDIA GPU actually support FP16 and created a table that says, in effect, "these graphics cards don't support FP16, so don't try to use FP16 on them anymore." Hopefully we've got a comprehensive list. We may not, and we'll adjust it if we find any more, but for now here is our list (a sketch of the check itself follows the list):
- TU102
- TU104
- TU106
- TU116
- TU117
- GeForce GT 1030
- GeForce GTX 1050
- GeForce GTX 1060
- GeForce GTX 1070
- GeForce GTX 1080
- GeForce RTX 2060
- GeForce RTX 2070
- GeForce RTX 2080
- GeForce GTX 1650
- GeForce GTX 1660
- MX550
- MX450
- Quadro RTX 8000
- Quadro RTX 6000
- Quadro RTX 5000
- Quadro RTX 4000
- Quadro P620
- Quadro P400
- T1000
- T600
- T400
- T1200
- T500
- T2000
- Tesla T4
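At runtime the check amounts to comparing the reported device name against that table. A shell sketch of the same idea (the deny-list file name here is hypothetical):
gpu_name=$(nvidia-smi --query-gpu=name --format=csv,noheader | head -n 1)
if grep -qiF "$gpu_name" no_half_precision_gpus.txt; then
    echo "Half precision unsupported on '$gpu_name'; falling back to FP32."
fi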
"Time flies like an arrow. Fruit flies like a banana."
|

Way, way more fiddly than I expected but there you have it.
Here are the quick notes on this:
- The Raspberry Pi 400 is a self-contained, quad-core, 64-bit Arm-based Raspberry Pi built into a keyboard.

It comes preinstalled with a 32-bit OS. Sigh. Download the imager from raspberrypi.com and make yourself a 64-bit OS installer. It's easy, just annoying.
- Building a Docker container for Arm is easy, sort of. Once you have the correct base image, the build targets correct, things compiling, and the image building properly, it's all smooth sailing. We settled on arm64v8/ubuntu:22.04 as the base Docker image. I built using an M1 Mac mini because the thing is so crazy fast for such a seemingly modest machine.
- Fitting everything into a Docker image that will fit on a Pi means making hard calls. For this demo I only included the Python face detector, object detector, and scene classifier. This made it almost pocket-sized at 1.5GB. Build, push, move back to the Pi...
- Back on the Raspberry Pi, you'll note Raspberry Pis don't come with Docker. Update your system and install Docker:
sudo apt update
sudo apt upgrade
curl -fsSL https://get.docker.com -o get-docker.sh
sudo bash get-docker.sh
- Pull the RaspberryPi image:
sudo docker pull codeproject/ai-server:rpi64-1.6.8.0
- Run the Docker image:
docker run -p 32168:32168 --name CodeProject.AI-Server -d codeproject/ai-server:rpi64-1.6.8.0
- Launch the dashboard: http://localhost:32168
Memory is a huge issue and this Docker image is not exactly usable at this point. What you're seeing here is literally the first run: a proof of concept that it all works, give or take resource constraints. The servers are up, the queues are being polled, the logs are being written, the model layers are being fused.
Small steps, but another small hill climbed. Onto the next.
cheers
Chris Maunder
|
Recently for CodeProject.AI Server we were looking to increase the number of frames per second we can handle (throughput), as well as figure out how to analyze frames from a video stream on the backend.
Here's the flow of how CodeProject.AI Server works. Someone writes an application that calls CodeProject.AI Server with a request, which gets put into a queue for the backend module. The backend module requests data from the queue, gets it, figures out what it's supposed to do with it, processes it, and then sends the response back, which gets relayed to the original caller.
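From the caller's side it's just an HTTP POST. For example, asking for object detection on an image looks roughly like this (the route and form-field name can vary by version):
curl -s -X POST http://localhost:32168/v1/vision/detection -F "image=@snapshot.jpg"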

See Adding new modules to CodeProject.AI - CodeProject.AI Server v1.6.5 for more details on the system architecture.
So, what we're doing is looking at the part between the frontend and the backend module, trying to reduce the communication time there.
To help do this we looked at SignalR, Microsoft's library which can establish a permanent or semi-permanent connection. The nice thing about SignalR is that it's got all the infrastructure built in for handling reconnects, doing remote procedure calls, and streaming data.
We hunted around for an existing Python package to handle SignalR for Python using asynchronous calls. Unfortunately, there wasn't much. There was one Python package, signalr-async, that looked promising, but it required Python 3.8 or higher. Currently CodeProject.AI Server is running Python 3.8 for Linux, and Python 3.7 for Windows for some modules.
Ultimately, we ended up pulling the code from that Python package and getting it to work on Python 3.7 and 3.8.
We had some trouble getting it to work properly, so we wrote a little .NET client (because it's easier to debug and understand), which helped us realize we were using the Python package incorrectly.
In our Python code, we're running multiple tasks. And it turns out that each of those tasks needed a separate SignalR connection, as opposed to sharing that connection across all the tasks.
Once we made that change things were working, but it was slower than the existing HTTP request/response method we currently use.
So, we went back to the aiohttp Python package documentation. It turns out we’re already doing the best we can in terms of communication speed between the frontend and the backend module. This is because the session object maintains a pool of connections for reuse, eliminating most of the overhead of making a request.
But that's how it goes. Not all experiments are successful, but the result is important. It tells us where our limits are and helps us move on to doing things better and faster.
We're still going to be looking at getting streaming working in the most efficient manner so we can process video streams at a higher rate. It appears aiohttp can handle streams in addition to standard request/response communication, but more on that later.
"Time flies like an arrow. Fruit flies like a banana."
|