Multi-Stage Docker Builds for AI Object Detection

18 May 2021 · CPOL · 4 min read
In this article, we use a multi-stage Docker build to create a container for running inference on sample images with the TensorFlow Object Detection API.

Introduction

Container technologies, such as Docker, simplify dependency management and improve portability of your software. In this series of articles, we explore Docker usage in Machine Learning (ML) scenarios.

This series assumes that you are familiar with AI/ML, containerization in general, and Docker in particular.

In the previous article, we leveraged an NVIDIA GPU to reduce both training and inference time for a simple TensorFlow model. In this article, we'll build on that solution. We'll use a multi-stage build to create a container for inference with the TensorFlow Object Detection API. You are welcome to download the code used in this article.

In subsequent articles of this series, we'll tackle large models for Natural Language Processing (NLP) tasks using PyTorch and Transformers. First, we will run inference; then we'll serve the inference model via a REST API. Next, we'll debug the REST API service running in a container. Finally, we'll publish the created container in the cloud using Azure Container Instances.

Why Multi-Stage Builds?

In many scenarios, the container build process includes more complex steps than a simple package installation or file copy. It may involve code compilation, sometimes preceded by downloading that code from an external repository such as GitHub.

As a general rule, you should keep build tools out of the final container image to reduce its size and attack surface. Simply removing these tools in a later step doesn't help: they still exist in an earlier image layer, so the final image size is not reduced.
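To illustrate the problem, consider this hypothetical single-stage Dockerfile (a sketch, not part of this article's solution):

Dockerfile
# Hypothetical single-stage build: every RUN creates a new layer
FROM tensorflow/tensorflow:2.4.1-gpu
# Layer with the build tools
RUN apt-get update && apt-get -y install --no-install-recommends protobuf-compiler git
# Layer with the downloaded sources
RUN git clone https://github.com/tensorflow/models.git /tmp/models
# ... compile the protos and pip-install the package here ...
# Removing the tools and sources adds yet another layer, but the earlier layers
# remain in the image history, so the final image does not get any smaller
RUN apt-get -y remove protobuf-compiler git && rm -rf /tmp/models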

One of Docker's best practices in such cases is to use multi-stage builds.

Object Detection API

The TensorFlow Object Detection API resides in the research folder of the TensorFlow Model Garden repository. This folder contains selected code implementations and pre-trained models for detecting objects in images, such as DeepMAC and Context R-CNN. These models may be state-of-the-art, but they are not yet official TensorFlow models.

Installing the Object Detection API involves multiple steps, some of which are cumbersome, so Docker really helps here. In theory, we could use a Dockerfile provided in the TensorFlow repository (for example, this one). We'll create our own, though, because the TensorFlow-provided Dockerfile produces a large image, with all the build tools and resources left inside.

Dockerfile First Stage (Shared)

The first stage will install all shared dependencies required for both build and inference:

Dockerfile
FROM tensorflow/tensorflow:2.4.1-gpu AS buildimg_base
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y install --no-install-recommends \
  python3-cairocffi \
  python3-pil \
  python3-lxml \
  python3-tk \
  && apt-get autoremove -y && apt-get clean -y && rm -rf /var/lib/apt/lists/*

Note the AS buildimg_base clause in the FROM statement. It defines the internal image name, which we will use in the following steps.

Dockerfile Second Stage (Build)

Now we need to download the Object Detection API repository and build it. We start by extending the Dockerfile created previously, referencing the buildimg_base internal image:

Dockerfile
FROM buildimg_base AS buildimg
ARG DEBIAN_FRONTEND=noninteractive

Next, we instruct Docker to install the build tools, download the repository, and build the library:

Dockerfile
RUN apt-get update && apt-get -y install --no-install-recommends \
  protobuf-compiler \
  git 
WORKDIR /tmp/odsetup

RUN git clone https://github.com/tensorflow/models.git \
  && cd /tmp/odsetup/models \
  && git checkout fea1bf9d622f07638767deeb0acd742d3a5d8af7 \
  && (cd /tmp/odsetup/models/research/ && protoc object_detection/protos/*.proto --python_out=.)

WORKDIR /tmp/odsetup/models/research/
RUN cp object_detection/packages/tf2/setup.py ./ \
  && python -m pip install -U pip \
  && pip install .

Note how we use git checkout to ensure that we use a specific version of the Object Detection API code. Unfortunately, we cannot rely on tags here, because this repository doesn’t include the research folder in the tagged (official) releases.

Dockerfile Third Stage (Inference)

Now, we instruct Docker to start from the same shared buildimg_base image and then copy the Python libraries installed in the previous stage:

Dockerfile
FROM buildimg_base
COPY --from=buildimg /usr/local/lib/python3.6 /usr/local/lib/python3.6

For simplicity, we copy all libraries, which include the new object_detection one, along with 100+ other dependencies.

Finally, we add a user to ensure that the container will not be executed as root:

Dockerfile
ARG USERNAME=mluser
ARG USERID=1000
RUN useradd --system --create-home --shell /bin/bash --uid $USERID $USERNAME
USER $USERNAME
WORKDIR /home/$USERNAME

Building the Image

With all the code in our Dockerfile, we can build the image:

$ docker build --build-arg USERID=$(id -u) -t mld06_gpu_tfodapi .

You can skip the --build-arg USERID attribute when running on a Windows host. In practice, though, you won't want to run the Object Detection API on a machine without GPU support. Because Docker GPU support is available only on Linux hosts, we'll focus on Linux commands in this article.

Running Object Detection Inference

With the image ready, we can use the container to run predictions on sample images.

A detailed description of how the Object Detection API is used for inference is beyond the scope of this article; download our code to proceed.

Apart from the Dockerfile we have discussed, the code we provide for download contains a single Python script: app/prediction_tutorial.py. This script downloads sample images and an object detection model, then runs predictions on those sample images. The code was adapted from the Object Detection API tutorial.
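In broad strokes, the script's inference step follows the standard TensorFlow 2 SavedModel flow. The sketch below is a simplified, hypothetical outline (the model path and image file name are placeholders), not the exact contents of prediction_tutorial.py:

Python
import numpy as np
import tensorflow as tf
from PIL import Image

# Placeholder path to an extracted detection model from the TF2 model zoo
detect_fn = tf.saved_model.load("model/saved_model")

# Detection models expect a batched uint8 image tensor
image_np = np.array(Image.open("sample.jpg"))
input_tensor = tf.convert_to_tensor(image_np)[tf.newaxis, ...]

detections = detect_fn(input_tensor)
boxes = detections["detection_boxes"][0].numpy()      # normalized [ymin, xmin, ymax, xmax]
scores = detections["detection_scores"][0].numpy()    # confidence per detection
classes = detections["detection_classes"][0].numpy()  # numeric label IDs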

With all the pieces in place, we can run this code using our container. For easier testing, we’ll map our local folders as container volumes.

On Linux Docker with GPU support, execute:

$ docker run -v $(pwd)/.keras:/home/mluser/.keras -v $(pwd)/app:/home/mluser/app \
 --rm --user $(id -u):$(id -g) --gpus "device=0" \
 mld06_gpu_tfodapi python app/prediction_tutorial.py

To run the same container on a machine without GPU, remove the --gpus "device=0" attribute.
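For example, the CPU-only command looks like this:

$ docker run -v $(pwd)/.keras:/home/mluser/.keras -v $(pwd)/app:/home/mluser/app \
 --rm --user $(id -u):$(id -g) \
 mld06_gpu_tfodapi python app/prediction_tutorial.py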

If everything goes well, you should expect logs similar to the following:

[Screenshot: console log output from the prediction run]

Because we have mapped the container folder /home/mluser/.keras to our local path, we can also examine the annotated images with predictions, saved in the .keras/predictions folder.
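For example, on the host you can list the annotated images like this:

$ ls .keras/predictions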

Summary

In this article, we've used Docker in a moderately complex case. We've run inference on sample images with TensorFlow using a containerized Object Detection API environment. In subsequent articles of this series, we'll move on to large models for Natural Language Processing (NLP) tasks with PyTorch and Transformers. Stay tuned!

This article is part of the series 'Containerized AI and Machine Learning'.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Architect
Poland
Jarek has two decades of professional experience in software architecture and development, machine learning, business and system analysis, logistics, and business process optimization.
He is passionate about creating software solutions with complex logic, especially with the application of AI.
