
Delete Stale Feature Branches in a Kubernetes Cluster

29 Nov 2021 · MIT · 9 min read
The project helps you delete stale feature branches by deleting their namespaces, since a namespace contains all the resources that belong to a feature branch in a Kubernetes cluster.

Delete stale feature branches in your Kubernetes cluster.



Getting Started

End Users

An end user of the project is a Kubernetes cluster administrator who is involved in a continuous integration process and is looking for a solution to the problem of feature branches that keep living in the cluster after their pull requests are merged.

Feature Branch

A feature branch (or deploy preview) means that a pull request is deployed as a separate instance of your application. It helps prevent errors and bugs: responsible people can check a feature before it is merged to production.

One of the ways to create a feature branch in a Kubernetes cluster is to use namespaces to separate the production deployment from any other. A production configuration may look similar to this:

YAML
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: github-back-end
...

Feature branches, in contrast, always have a different namespace, for instance with a -pr- prefix or postfix in the name, as illustrated below:

YAML
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: github-back-end-pr-17
...

To summarize what is described above: the project helps you delete stale feature branches by deleting their namespaces, since a namespace contains all the resources that belong to a feature branch in a Kubernetes cluster.

More information about implementing feature branches using namespaces is available here and here.

Motivation

To understand the motivation behind the project, let's walk through a common continuous integration lifecycle for a pull request:

  1. A new commit is pushed to a branch.
  2. Code style checks and tests pass.
  3. A feature branch's configurations are applied.
  4. The feature branch's namespace and other resources are running in a cluster.
  5. The branch is merged to a production branch (for instance, master).

One important detail is that a good lifecycle deletes all existing feature branch resources for a particular branch before applying the configurations for a new commit. This ensures that each commit's deployment starts from a clean state.

But after the branch is merged to the production branch, all of the feature branch's resources keep running in the cluster and occupying its capacity. What are the ways to delete them? Check the alternatives section.

Alternatives

These are the ways to delete a feature branch's resources after its branch is merged to the production branch. None of them is ideal, and neither is this project; choose the approach that best fits your case.

  1. On each production branch build, detect which branch was merged last and delete its resources.

    • This can be done only by fetching the commit history, in which case each commit should contain the number of its pull request.
    • Sometimes, production branch builds fail at a stage you do not want to rerun. For instance, all important stages pass, but a notification stage fails, and your clean-up stage comes after the notification one, so you are unlikely to rebuild it. Also, deletion logic in the master branch pipeline doesn't logically fit alongside the other instructions, such as deploy.
  2. Integrate a webhook into your continuous integration system (example).

    • It may not fit your development principles. For instance, Jenkins supports only one type of pipeline that lets you keep the pipeline's configuration file in a source code repository (following the infrastructure-as-code process). So, to create a webhook, you would need separate scripts that process the webhook's data; those scripts would not live in the source code repository and would have to be maintained through a user interface.
  3. Create your own CronJob resource in a Kubernetes cluster.

    • That also requires development and maintenance, especially when moving it from one company to another.
    • Furthermore, this project works on almost the same principle as a CronJob resource, so you lose nothing by reusing it.

Installation

Apply the latest release configurations with the command below. It will create the StaleFeatureBranch resource, install the operator into the stale-feature-branch-operator namespace, and create a service account and the necessary RBAC roles.

Shell
$ kubectl apply -f \
      https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/configs/production.yml

If you need any previous release, full list of versions is available here.

Usage

To delete stale feature branches, after following the installation instructions above, create a configuration file with feature-branch.dmytrostriletskyi.com/v1 as apiVersion and StaleFeatureBranch as kind:

YAML
apiVersion: feature-branch.dmytrostriletskyi.com/v1
kind: StaleFeatureBranch
metadata:
  name: stale-feature-branch
spec:
  namespaceSubstring: -pr-
  afterDaysWithoutDeploy: 3

Choose any name in metadata for the resource, then dive into the specification:

  1. namespaceSubstring is needed to find all feature branch namespaces. For instance, if a cluster contains the namespaces github-back-end, github-front-end, github-back-end-pr-17, and github-back-end-pr-33, the example above will grab github-back-end-pr-17 and github-back-end-pr-33, because the -pr- substring occurs in their names.
  2. afterDaysWithoutDeploy is needed to delete only old namespaces. If you set it to 3 days, namespaces created 1 or 2 days ago will not be deleted, but namespaces created 3 days and 1 hour ago, or 4 days ago, will be.

By default, feature branch namespaces are processed every 30 minutes. The last available parameter in the specification is checkEveryMinutes; you can configure the processing frequency in minutes if the default value doesn't fit you.

Check the guideline below if you want to know how it works under the hood.

Guideline

This guideline shows how the deletion of stale feature branches works under the hood. You should not reproduce the instructions below on a production cluster; it is just a detailed example to help you understand the operator's behavior. For this chapter, a test Kubernetes cluster on your personal computer will be used.

Requirements

  1. Docker. Virtualization to run the software in packages called containers.
  2. Minikube. Runs a single-node Kubernetes cluster in a virtual machine (or Docker) on your personal computer.
  3. kubectl. Command-line interface to access Kubernetes cluster.

Running

Start a Kubernetes cluster on your personal computer with the following command:

Shell
$ minikube start --vm-driver=docker
minikube v1.11.0 on Darwin 10.15.5
Using the docker driver based on existing profile.
Starting control plane node minikube in cluster minikube.

Then, choose your cluster as the main one for kubectl. This is needed in case you work with many clusters from a single computer:

Shell
$ kubectl config use-context minikube
Switched to context "minikube".

Apply the configurations the same way you would apply them to a production cluster. But since these are production configurations, they expect old namespaces to be present in your cluster. Our cluster is fresh, and no old resources are present there. For this case, the operator provides a debug parameter: if debug is enabled, all matching namespaces are deleted without checking their age.

Copy the production configurations to your personal computer:

Shell
$ curl https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/configs/production.yml > \
      stale-feature-branch-production-configs.yml

If you need any previous release, full list of versions is available here.

Enable debug by changing the setting. For Linux, it's:

Shell
$ sed -i 's|false|true|g' stale-feature-branch-production-configs.yml

For macOS, it's:

Shell
$ sed -i "" 's|false|true|g' stale-feature-branch-production-configs.yml

Apply the changed production configurations:

Shell
$ kubectl apply -f stale-feature-branch-production-configs.yml

Fetch all resources in the Kubernetes cluster; you will see that the StaleFeatureBranch resource is available to use:

Shell
$ kubectl api-resources | grep stalefeaturebranches
NAME                   SHORTNAMES  APIGROUP                              NAMESPACED  KIND
stalefeaturebranches   sfb         feature-branch.dmytrostriletskyi.com  true        StaleFeatureBranch

Fetch the pods in the stale-feature-branch-operator namespace; you will see the operator that listens for new StaleFeatureBranch resources running there:

Shell
$ kubectl get pods --namespace stale-feature-branch-operator
NAME                                             READY   STATUS    RESTARTS   AGE
stale-feature-branch-operator-6bfbfd4df8-m7sch   1/1     Running   0          38s

Fetch the operator's logs to ensure it's running:

Shell
$ kubectl logs stale-feature-branch-operator-6bfbfd4df8-m7sch \
      -n stale-feature-branch-operator
{"level":"info","ts":1592306900.8200202,"logger":"cmd","msg":"Operator Version: 0.0.1"}
...
{"level":"info","ts":1592306901.5672553,"logger":"controller-runtime.controller",
 "msg":"Starting EventSource","controller":"stalefeaturebranch-controller",
 "source":"kind source: /, Kind="}
{"level":"info","ts":1592306901.6680624,"logger":"controller-runtime.controller",
 "msg":"Starting Controller","controller":"stalefeaturebranch-controller"}
{"level":"info","ts":1592306901.6681142,"logger":"controller-runtime.controller",
 "msg":"Starting workers","controller":"stalefeaturebranch-controller","worker count":1}

Create ready-to-use fixtures that contain two namespaces, project-pr-1 and project-pr-2, along with many other resources (deployments, services, secrets, etc.):

Shell
$ kubectl apply \
      -f https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/fixtures/first-feature-branch.yml \
      -f https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/fixtures/second-feature-branch.yml
namespace/project-pr-1 created
deployment.apps/project-pr-1 created
service/project-pr-1 created
horizontalpodautoscaler.autoscaling/project-pr-1 created
secret/project-pr-1 created
configmap/project-pr-1 created
ingress.extensions/project-pr-1 created
namespace/project-pr-2 created
deployment.apps/project-pr-2 created
service/project-pr-2 created
horizontalpodautoscaler.autoscaling/project-pr-2 created
secret/project-pr-2 created
configmap/project-pr-2 created
ingress.extensions/project-pr-2 created

You can check their existence with the following command:

Shell
$ kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-1 && \
      kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-2
...
NAME                                READY   STATUS    RESTARTS   AGE
pod/project-pr-1-848d5fdff6-rpmzw   1/1     Running   0          67s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/project-pr-1   1/1     1            1           67s
...

As noted above, when debug is enabled, all namespaces are deleted without checking their age. This means that once we create the StaleFeatureBranch configurations, the namespaces will be deleted immediately. The fixture for StaleFeatureBranch checks for namespaces that contain -pr- in their names once a minute.

Shell
$ kubectl apply -f \
      https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/fixtures/stale-feature-branch.yml

Then, check the logs of the operator, and you will see that namespaces are deleted:

Shell
{"level":"info","ts":1592322500.64014,"logger":"stale-feature-branch-controller",
 "msg":"Stale feature branch is being processing.","namespaceSubstring":"-pr-",
 "afterDaysWithoutDeploy":1,"checkEveryMinutes":1,"isDebug":"true"}
{"level":"info","ts":1592322500.7436411,"logger":"stale-feature-branch-controller",
 "msg":"Namespace should be deleted due to debug mode is enabled.",
 "namespaceName":"project-pr-1"}
{"level":"info","ts":1592322500.743676,"logger":"stale-feature-branch-controller",
 "msg":"Namespace is being processing.","namespaceName":"project-pr-1",
 "namespaceCreationTimestamp":"2020-06-16 18:43:58 +0300 EEST"}
{"level":"info","ts":1592322500.752212,"logger":"stale-feature-branch-controller",
 "msg":"Namespace has been deleted.","namespaceName":"project-pr-1"}
{"level":"info","ts":1592322500.752239,"logger":"stale-feature-branch-controller",
 "msg":"Namespace should be deleted due to debug mode is enabled.",
 "namespaceName":"project-pr-2"}
{"level":"info","ts":1592322500.752244,"logger":"stale-feature-branch-controller",
 "msg":"Namespace is being processing.","namespaceName":"project-pr-2",
 "namespaceCreationTimestamp":"2020-06-16 18:43:58 +0300 EEST"}
{"level":"info","ts":1592322500.75804,"logger":"stale-feature-branch-controller",
 "msg":"Namespace has been deleted.","namespaceName":"project-pr-2"}

If you check the resources again, the output will show Terminating or be empty.

Shell
$ kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-1 && \
      kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-2

You can go through the process of creating the resources again; within a minute or less, they will be deleted again.

API

Version One

Use feature-branch.dmytrostriletskyi.com/v1 as apiVersion. The specification arguments are the following:

  1. namespaceSubstring (String, required): substring used to match feature branch namespaces and not other ones.
  2. afterDaysWithoutDeploy (Integer, required, must be greater than 0): delete feature branch namespaces that have had no deploy for the given number of days.
  3. checkEveryMinutes (Integer, optional, must be greater than 0, defaults to 30): process feature branch namespaces every given number of minutes.
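
Putting all three arguments together, a complete resource might look like this (the values are illustrative choices, not defaults):

```yaml
apiVersion: feature-branch.dmytrostriletskyi.com/v1
kind: StaleFeatureBranch
metadata:
  name: stale-feature-branch
spec:
  namespaceSubstring: -pr-
  afterDaysWithoutDeploy: 3
  checkEveryMinutes: 30
```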

Development

Requirements

  1. Docker. Virtualization to run software in packages called containers.
  2. Minikube. Runs a single-node Kubernetes cluster in a virtual machine (or Docker) on your personal computer.
  3. kubectl. Command line interface to access Kubernetes cluster.

Cloning

Clone the project with the following command:

Shell
$ mkdir -p $GOPATH/src/github.com/dmytrostriletskyi
$ cd $GOPATH/src/github.com/dmytrostriletskyi
$ git clone git@github.com:dmytrostriletskyi/stale-feature-branch-operator.git
$ cd stale-feature-branch-operator

Running

Start a Kubernetes cluster on your personal computer with the following command:

Shell
$ minikube start --vm-driver=docker
minikube v1.11.0 on Darwin 10.15.5
Using the docker driver based on existing profile.
Starting control plane node minikube in cluster minikube.

Then, choose your cluster as the main one for kubectl. This is needed in case you work with many clusters from a single computer:

Shell
$ kubectl config use-context minikube
Switched to context "minikube".

Register the StaleFeatureBranch resource with the following command:

Shell
$ kubectl create -f configs/development.yml

By fetching all resources in the Kubernetes cluster, you will see that the StaleFeatureBranch resource is available to use there:

Shell
$ kubectl api-resources | grep stalefeaturebranches
NAME                   SHORTNAMES  APIGROUP                              NAMESPACED  KIND
stalefeaturebranches   sfb         feature-branch.dmytrostriletskyi.com  true        StaleFeatureBranch

Build the operator with the following command:

Shell
$ go build -a -o operator pkg/*.go

Run the operator with the following command:

Shell
$ ./operator
{"level":"info","ts":1592321007.8580391,"logger":"cmd","msg":"Operator Version: 0.0.1"}
...
{"level":"info","ts":1592321008.1686652,"logger":"controller-runtime.controller",
 "msg":"Starting EventSource","controller":"stalefeaturebranch-controller",
 "source":"kind source: /, Kind="}
{"level":"info","ts":1592321008.3716009,"logger":"controller-runtime.controller",
 "msg":"Starting Controller","controller":"stalefeaturebranch-controller"}
{"level":"info","ts":1592321008.3717089,"logger":"controller-runtime.controller",
 "msg":"Starting workers","controller":"stalefeaturebranch-controller","worker count":1}

The following environment variables are supported:

Shell
$ OPERATOR_NAME=stale-feature-branch-operator IS_DEBUG=true ./operator

  1. OPERATOR_NAME (String, required): the operator's name.
  2. IS_DEBUG (String, optional, one of true or false, defaults to false): if debug mode is enabled, all namespaces are deleted without checking their age.
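
In a cluster deployment, these variables would be set on the operator's container. A Deployment fragment setting them might look like this (a sketch only; the exact layout in configs/production.yml may differ):

```yaml
kind: Deployment
apiVersion: apps/v1
...
      containers:
        - name: stale-feature-branch-operator
          env:
            - name: OPERATOR_NAME
              value: stale-feature-branch-operator
            - name: IS_DEBUG
              value: "false"
```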

Create ready-to-use fixtures that contain two namespaces, project-pr-1 and project-pr-2, along with many other resources (deployments, services, secrets, etc.):

Shell
$ kubectl apply \
      -f fixtures/first-feature-branch.yml -f fixtures/second-feature-branch.yml
namespace/project-pr-1 created
deployment.apps/project-pr-1 created
service/project-pr-1 created
horizontalpodautoscaler.autoscaling/project-pr-1 created
secret/project-pr-1 created
configmap/project-pr-1 created
ingress.extensions/project-pr-1 created
namespace/project-pr-2 created
deployment.apps/project-pr-2 created
service/project-pr-2 created
horizontalpodautoscaler.autoscaling/project-pr-2 created
secret/project-pr-2 created
configmap/project-pr-2 created
ingress.extensions/project-pr-2 created

You can check their existence with the following command:

Shell
$ kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-1 && \
      kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-2
...
NAME                                READY   STATUS    RESTARTS   AGE
pod/project-pr-1-848d5fdff6-rpmzw   1/1     Running   0          67s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/project-pr-1   1/1     1            1           67s
...

As noted above, when debug is enabled, all namespaces are deleted without checking their age. This means that once we create the StaleFeatureBranch configurations, the namespaces will be deleted immediately. The fixture for StaleFeatureBranch checks for namespaces that contain -pr- in their names once a minute.

Shell
$ kubectl apply -f fixtures/stale-feature-branch.yml

Later, check the logs of the operator; you will see that the namespaces are deleted:

Shell
{"level":"info","ts":1592322500.64014,"logger":"stale-feature-branch-controller",
 "msg":"Stale feature branch is being processing.","namespaceSubstring":"-pr-",
 "afterDaysWithoutDeploy":1,"checkEveryMinutes":1,"isDebug":"true"}
{"level":"info","ts":1592322500.7436411,"logger":"stale-feature-branch-controller",
 "msg":"Namespace should be deleted due to debug mode is enabled.",
 "namespaceName":"project-pr-1"}
{"level":"info","ts":1592322500.743676,"logger":"stale-feature-branch-controller",
 "msg":"Namespace is being processing.","namespaceName":"project-pr-1",
 "namespaceCreationTimestamp":"2020-06-16 18:43:58 +0300 EEST"}
{"level":"info","ts":1592322500.752212,"logger":"stale-feature-branch-controller",
 "msg":"Namespace has been deleted.","namespaceName":"project-pr-1"}
{"level":"info","ts":1592322500.752239,"logger":"stale-feature-branch-controller",
 "msg":"Namespace should be deleted due to debug mode is enabled.",
 "namespaceName":"project-pr-2"}
{"level":"info","ts":1592322500.752244,"logger":"stale-feature-branch-controller",
 "msg":"Namespace is being processing.","namespaceName":"project-pr-2",
 "namespaceCreationTimestamp":"2020-06-16 18:43:58 +0300 EEST"}
{"level":"info","ts":1592322500.75804,"logger":"stale-feature-branch-controller",
 "msg":"Namespace has been deleted.","namespaceName":"project-pr-2"}

If you check the resources again, the output will show Terminating or be empty.

Shell
$ kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-1 && \
      kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-2

You can go through the process of creating the resources again; within a minute or less, they will be deleted again.

Docker Image

The operator is deployed to a Kubernetes cluster as a pod in a deployment, which can be found in the configs/production.yml file:

YAML
kind: Deployment
apiVersion: apps/v1
...
      containers:
        - name: stale-feature-branch-operator
          image: dmytrostriletskyi/stale-feature-branch-operator:v0.0.1
...

To build the image, use the following command, replacing the registry, project name, and version if needed:

Shell
$ docker build --tag dmytrostriletskyi/stale-feature-branch-operator:v$(cat .project-version) \
      -f ops/Dockerfile .

To push the image, use the following command, replacing the registry, project name, and version if needed:

Shell
$ docker push dmytrostriletskyi/stale-feature-branch-operator:v$(cat .project-version)

If you want to run it locally, use the following command:

Shell
$ docker run --name stale-feature-branch-operator \
      dmytrostriletskyi/stale-feature-branch-operator:v$(cat .project-version)

Contributing

Code Style

Ensure that your code is formatted with the following command:

Shell
$ go fmt ./...

Testing

Ensure that the tests pass using the following command:

Shell
$ go test ./... -v -count=1

Custom Resource Definitions

If you change a custom resource definition schema, such as pkg/apis/featurebranch/v1/stale_feature_branch.go, you should:

  1. Update the corresponding CustomResourceDefinition resources in configs/development.yml and configs/production.yml. To generate a CustomResourceDefinition resource based on your changes, use the following command. It will output the updated configuration:

    Shell
    $ make crds
    go: creating new go.mod: module tmp
    go: found sigs.k8s.io/controller-tools/cmd/controller-gen in 
        sigs.k8s.io/controller-tools v0.2.5
    ../../go/bin/controller-gen crd:trivialVersions=true 
    rbac:roleName=manager-role webhook output:stdout paths="./..."
    
    ---
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      annotations:
        controller-gen.kubebuilder.io/version: v0.2.5
      creationTimestamp: null
      name: stalefeaturebranches.feature-branch.dmytrostriletskyi.com
    ...
  2. Update the deep copies of the schema's structures. The following command updates the file pkg/apis/featurebranch/v1/zz_generated.deepcopy.go automatically:

    Shell
    $ make deep-copy
    go: creating new go.mod: module tmp
    go: found sigs.k8s.io/controller-tools/cmd/controller-gen 
        in sigs.k8s.io/controller-tools v0.2.5
    ../../go/bin/controller-gen object paths="./..."

License

This article, along with any associated source code and files, is licensed under The MIT License.


