Install glusterfs to use as local storage for Kubernetes

  • This document is for dev/test environments. Do not use it in a production environment.

    What is GlusterFS?

    Gluster is a scalable network filesystem. Using common off-the-shelf hardware, you can create large, distributed storage solutions for media streaming, data analysis, and other data- and bandwidth-intensive tasks. Gluster is free.

    Why use GlusterFS?

    Usually, when you run your infrastructure on a cloud provider, storage is provisioned automatically. However, when you run your own installation on VMs or bare metal, you don't get any storage provisioner out of the box.

    By using GlusterFS, you get a reliable and fast system for creating your volumes easily.

    Installation

    The process is quite simple. The official documentation states:

    Please use the Gluster Community PPAs for the latest version of GlusterFS:
    https://launchpad.net/~gluster
    

    Let's say we have 3 Kubernetes nodes: our master node, node1 and node2.
    The following commands should be executed on all the nodes.

    The steps that we need to perform are:

    sudo add-apt-repository ppa:gluster/glusterfs-7
    sudo apt-get update
    sudo apt install glusterfs-server
    sudo systemctl start glusterd
    sudo systemctl enable glusterd
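
    To verify that the installation succeeded and that the daemon is running, we can optionally check:

    gluster --version
    sudo systemctl status glusterd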
    

    Adding nodes

    Ensure that all 3 hosts have the corresponding entries in /etc/hosts so that they can resolve each other by name.
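
    For example, /etc/hosts on each machine could contain entries like the following (the IP addresses below are placeholders; replace them with your own):

    192.168.0.10 master
    192.168.0.11 node1
    192.168.0.12 node2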

    From the master node, we need to perform the following actions:

    gluster peer probe node1
    gluster peer probe node2
    

    Then, we need to verify that our nodes have been added by running:

    gluster peer status
    

    which will return:

    Number of Peers: 2
    
    Hostname: node1
    Uuid: 76a57408-5f50-420f-927a-4cfc6843f1d4
    State: Peer in Cluster (Connected)
    
    Hostname: node2
    Uuid: 9e76c3a8-9a48-4a07-a6de-d3a9c3659a43
    State: Peer in Cluster (Connected)

    gluster pool list
    UUID                                    Hostname        State
    76a57408-5f50-420f-927a-4cfc6843f1d4    node1           Connected
    9e76c3a8-9a48-4a07-a6de-d3a9c3659a43    node2           Connected
    df901df3-f164-45c7-afaf-92efc5964f10    localhost       Connected
    

    Set up a replicated GlusterFS volume

    Now that we have the nodes, it's time to add the actual shared storage.
    We need to attach an additional disk to each instance (50-100 GB).

    Then, scan for the new disk with fdisk -l and create the volume group and logical volume on each node as shown below (adjust the -L size to your disk):

    vgcreate glustervg /dev/sdb
    lvcreate -L 51G -n glusterlv glustervg
    mkfs.xfs /dev/glustervg/glusterlv
    mkdir /gluster
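
    Optionally, confirm that the volume group and logical volume were created correctly:

    vgs glustervg
    lvs glustervg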
    

    Then, get the UUID of the new logical volume with blkid and add an entry to /etc/fstab using the UUID returned on your system:

    blkid /dev/glustervg/glusterlv
    
    vi /etc/fstab
    add:
    UUID=99f57d21-8844-44b9-b2e3-85fc80ce5886 /gluster xfs defaults 0 0
    
    mount /gluster
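
    To confirm that the brick filesystem is mounted as expected:

    df -h /gluster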
    

    From the master node, we will create the replicated volume by executing the commands below. Note that the brick paths must match the directories created on each node (in this example, the master brick is /glusterfs, while node1 and node2 use /gluster):

    gluster volume create vol01 replica 3 transport tcp master:/glusterfs node1:/gluster node2:/gluster force
    gluster volume start vol01
    

    And validate by checking the volume information:

    gluster volume info vol01
    
    Volume Name: vol01
    Type: Replicate
    Volume ID: 63f13452-27f6-44a9-a24f-66ec0a4df33e
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 1 x 3 = 3
    Transport-type: tcp
    Bricks:
    Brick1: master:/glusterfs
    Brick2: node1:/gluster
    Brick3: node2:/gluster
    Options Reconfigured:
    transport.address-family: inet
    storage.fips-mode-rchecksum: on
    nfs.disable: on
    performance.client-io-threads: off
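
    Optionally, you can also confirm that all brick processes are online with:

    gluster volume status vol01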
    

    Mount our gluster volume

    On each node, we will create a new mount point:

    mkdir -p /mnt/glusterfs
    

    For redundancy, each node will mount the volume from a different server. For example, on the master node we mount from node2, on node1 we mount from master, and on node2 we mount from node1 (see the note after the mount step below for a mount option that handles failover automatically).

    Edit /etc/fstab and add:

    Master:

    node2:/vol01 /mnt/glusterfs glusterfs defaults,_netdev 0 0
    

    Node1:

    master:/vol01 /mnt/glusterfs glusterfs defaults,_netdev 0 0
    

    Node2:

    node1:/vol01 /mnt/glusterfs glusterfs defaults,_netdev 0 0
    

    Then, on each node, mount the volume by running mount -a.
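
    As an alternative to spreading the mounts across servers, the GlusterFS native client can fail over on its own if you list backup servers in the mount options. A possible fstab entry (the option name may vary slightly between GlusterFS releases; older ones use backupvolfile-server) would be:

    master:/vol01 /mnt/glusterfs glusterfs defaults,_netdev,backup-volfile-servers=node1:node2 0 0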

    Create a Kubernetes PV

    Now that we have our shared storage, we can create a PersistentVolume.
    To do so, on our master node we will create a directory for it:

    mkdir -p /mnt/glusterfs/test
    

    Then, create a YAML file named storage.yaml with the following content:

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: test
      labels:
        type: local
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      hostPath:
        path: "/mnt/glusterfs/test"
    

    Once done, create the PV with:

    kubectl apply -f storage.yaml
    

    And validate with:

    kubectl get pv
    NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    test   5Gi        RWX            Retain           Available                                   4s
    

    Create the PVC

    Once we have our volume created, we need a PersistentVolumeClaim (or PVC) to allow our pods to use it.
    To do so, we will create a file named pvc.yaml with the following content:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test
      namespace: default
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
      volumeName: test
    

    Then, create it by using:

    kubectl create -f pvc.yaml
    

    Finally, validate its creation:

    kubectl get pvc
    NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    test   Bound    test     5Gi        RWX                           115s
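
    To test that pods can actually use the claim, a minimal pod (the pod name and image here are just examples) can be created. Create a file named pod.yaml with the following content:

    apiVersion: v1
    kind: Pod
    metadata:
      name: gluster-test
      namespace: default
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: storage
          mountPath: /usr/share/nginx/html
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: test

    Then create it with kubectl apply -f pod.yaml. Anything the pod writes under /usr/share/nginx/html ends up in /mnt/glusterfs/test and is replicated across the three nodes.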