
Deploy Persistent Storage on Azure with Kubernetes

12 Dec 2017 · CPOL · 4 min read
How to make some data storage resources available to the cluster
In this article, you will see the steps to deploy persistent storage volumes in Kubernetes on Azure.


In the previous article in this DevOps series, we looked at deploying a production-ready Kubernetes cluster on Azure using the 'KubeSpray' project. Having set that up, the next step is to make some data storage resources available to the cluster. We do this by creating an Azure file storage resource, and then linking it to the Kubernetes cluster using 'Secrets'. This article is a walk-through of the process.


I can't really think of any reasonable project I've been involved in that didn't use some form of data storage. When you head into the world of cloud and containers, having a handy C:\ or D:\ drive at your fingertips becomes elusive. You start having to think about things like persistent volumes, cloud-based blob storage and the like. Using Kubernetes, we can set up a data volume that appears to our container resources just like another large remote drive. This is a key piece of the puzzle when pulling different technologies together in an orchestrated manner, as we are doing in this series of articles.

Setting Up a Kubernetes Data Volume on Azure

  1. In the Azure portal, click the NEW action and select a general storage account resource:

    Image 1

  2. Give the resource a unique name (lowercase), and use the same resource group as your main Kubernetes cluster:

    Image 2

  3. The new resource will become available in the resource group - select it for editing:

    Image 3

  4. In the details page, select the 'file' share section:

    Image 4

  5. In the file share page, click add-new, give the share a unique name, and specify the size of file storage you require (in GB):

    Image 5

    After saving, you should see the newly created share available for use.

    Image 6

  6. Next, we need to get some security keys from the storage account.

    Go back to the main resource group list where the account is located and select it.

    Image 7

  7. Once in the storage account, we then need to navigate to the 'Access keys' section and copy out the first key and the resource name.

    Image 8

  8. We can't use these keys directly in Kubernetes; they need to be Base64 encoded first. We do this by taking each of the items we copied and encoding it. In this example, an online encoder is used:

    Image 9
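    If you would rather not paste credentials into a website, the same encoding can be done locally with the base64 utility found on most Linux systems. The account name below is a hypothetical example; substitute your own values:

    ```shell
    # Base64-encode a value locally. The -n flag suppresses the trailing
    # newline, which would otherwise corrupt the encoded output.
    echo -n 'mystorageacct' | base64
    # Prints: bXlzdG9yYWdlYWNjdA==
    ```

    Repeat the same command for the access key.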

  9. We now need to take this information and give it to Kubernetes in a YAML file.

    SSH into the main Kubernetes master machine, and issue the following commands:

    sudo su -
    apt-get update 
    apt-get install -y cifs-utils

    Image 10

    Now use nano to create a new yaml file:

    nano azure-secret.yaml

    Into this file, add the following contents, replacing the values for accountname and accountkey with the Base64-encoded values from the previous step (be careful with indentation/spacing in YAML files):

    apiVersion: v1
    kind: Secret
    metadata:
      name: azure-secret
    type: Opaque
    data:
      azurestorageaccountname: <your encoded account name>
      azurestorageaccountkey: <your encoded key>

    Image 11

    After making the changes, use CTRL + O <Enter> to write the file, then CTRL + X to exit:

    Image 12
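    A quick way to catch copy/paste mistakes before applying the secret is to decode each value back and check it matches the original. The encoded string below is a hypothetical example:

    ```shell
    # Decode a value back to confirm the encoding round-trips cleanly.
    # The encoded string here is a hypothetical example value.
    echo 'bXlzdG9yYWdlYWNjdA==' | base64 -d
    # Prints: mystorageacct
    ```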

    This has set up the 'key secret' file; now we need to set up the main instruction file. In this case, it has been called 'azure.yaml', but it can be given any name.

    The contents of this file are as follows:

    apiVersion: v1
    kind: Pod
    metadata:
      name: shareddatastore
    spec:
      containers:
        - image: kubernetes/pause
          name: azure
          volumeMounts:
            - name: azure
              mountPath: /mnt/azure
      volumes:
        - name: azure
          azureFile:
            secretName: azure-secret
            shareName: kubedatashare
            readOnly: false
    The important parts that you need to change are:
      • volumeMounts - given the name 'azure' and an internal virtual mount path. The name should match the name of the volume (next entry in the file).
      • volumes → name - this has been set to a default name of 'azure'. The next entry, 'azureFile', defines the type of storage volume to Kubernetes. The 'secretName' refers to the data in the 'azure-secret.yaml' file we created earlier, and the shareName is the name we gave to the file share we created in step (5). Setting readOnly to false makes the volume available as read/write.
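    If you are scripting the setup rather than editing in nano, the whole manifest can be written from the shell in one step. This sketch assumes the names used in this walk-through (pod 'shareddatastore', secret 'azure-secret', share 'kubedatashare'):

    ```shell
    # Write the pod manifest that mounts the Azure file share.
    # shareName must match the file share created in step 5, and
    # secretName must match the secret defined in azure-secret.yaml.
    cat > azure.yaml <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: shareddatastore
    spec:
      containers:
        - image: kubernetes/pause
          name: azure
          volumeMounts:
            - name: azure
              mountPath: /mnt/azure
      volumes:
        - name: azure
          azureFile:
            secretName: azure-secret
            shareName: kubedatashare
            readOnly: false
    EOF
    ```

    The quoted 'EOF' delimiter prevents the shell from expanding anything inside the heredoc, so the YAML is written exactly as shown.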
  10. We now need to pass the secret key to Kubernetes and then set the volume running.

    At the command line, send in the following command:

    kubectl create -f azure-secret.yaml

    Once that has completed, the secret has been set, so we can send in the command to set up the volume itself.

    kubectl create -f azure.yaml

    Once that completes, we can then test that everything has been set up correctly by examining the available pods:

    kubectl get po

    This will show an output like the following:

    Image 13

    Finally, we can examine how the volume has been implemented to confirm it is as specified:

    kubectl describe po shareddatastore

    Looking at the output of 'describe', you can see important information such as the node (VM) the container is hosted on, the fact that it is a 'secret based' volume, and that it is connected to an Azure file service.

    Image 14

  11. The volume can now be directly accessed by any container... we will cover how to do this in detail in a later article.

    If you need them, I have put a shortened version of the instructions in a zip attached to the top of this article. Finally, as usual, if you found the article useful, please give it a vote!

History


  • 7th December, 2017 - Version 1


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Written By
Chief Technology Officer SocialVoice.AI
Ireland
Allen is CTO of SocialVoice, where his company analyses video data at scale and gives global brands knowledge, insights and actions never seen before! Allen is a chartered engineer, a Fellow of the British Computing Society, a Microsoft MVP and Regional Director, and a C-Sharp Corner Community Adviser and MVP. His core technology interests are big data, IoT and machine learning.

When not chained to his desk, he can be found fixing broken things, playing music very badly or trying to shape things out of wood. He is currently completing a PhD in AI and is also a ball-throwing slave for his dogs.
