Play with Kubernetes using IBM Cloud Private CE

  • This is intended only for testing / sandbox purposes.

    There are quite a few options for getting started with Kubernetes, such as minikube and microk8s, but those are meant for testing in small environments such as our laptops or workstations.

    If we want something closer to an actual deployment, we can use IBM Cloud Private CE (Community Edition), which gives us a friendly approach.

    We've followed the official IBM documentation to install it, but here we will recap all the steps to make its deployment as fast as possible.

    IBM Cloud Private Version 3.2.0 uses Kubernetes version 1.13.9

    We will cover the following items:

    • Install Docker CE

    • Download the IBM Cloud Private CE image

    • Configure the IBM Cloud Private templates

    • Install IBM Cloud Private

    • Access IBM Cloud Private

    • Browse the tools (Metrics, Monitoring)

    • Perform a simple deployment

    Specs and requirements

    We will be using the single-node (all-in-one) deployment type.
    As per the official documentation, we will need a VM or cloud instance with the following specs:

    [Image: minimum hardware requirements table from the official documentation]

    In our case, we've selected an instance from our cloud provider with 8 vCPUs and 32 GB of RAM, running Ubuntu Linux 18.04 LTS Bionic Beaver Minimal Install.

    You will need to provision the instance on your cloud provider or hypervisor.
    We can use a single partition (all the disk assigned to / ) or add an additional disk, create a VG/LV on it and mount it on /data, as sketched below.
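
    If you go the additional-disk route, here is a minimal sketch of the VG/LV setup (an assumption for illustration: the new disk shows up as /dev/sdb; the device name on your system may differ):

    # Requires the lvm2 package (sudo apt-get install lvm2)
    sudo pvcreate /dev/sdb
    sudo vgcreate data-vg /dev/sdb
    sudo lvcreate -l 100%FREE -n data-lv data-vg

    # Format the volume and mount it persistently on /data
    sudo mkfs.ext4 /dev/data-vg/data-lv
    sudo mkdir -p /data
    echo '/dev/data-vg/data-lv /data ext4 defaults 0 2' | sudo tee -a /etc/fstab
    sudo mount /data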

    Install Docker CE

    Update the system:

    sudo apt-get update
    

    Once updated, we will install the following dependencies:

    sudo apt-get install \
        apt-transport-https \
        ca-certificates \
        curl \
        software-properties-common
    

    Add the Docker GPG key:

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    

    Add the Docker CE repository and install:

    sudo add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
       $(lsb_release -cs) \
       stable"
    
    sudo apt-get update

    sudo apt-get install docker-ce
    
    

    Once Docker has finished installing, we will need to modify the docker.service file to limit the amount of logs written to the system and avoid running out of disk space.

    To do so, edit /lib/systemd/system/docker.service (with vim, emacs... or your preferred editor) and append the following to the ExecStart line:

    --log-opt max-size=10m --log-opt max-file=10
    

    The resulting entry will look similar to the sketch below.
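
    A minimal illustration, assuming the stock dockerd unit shipped with Docker CE on Ubuntu (the exact ExecStart flags may differ with your Docker version):

    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --log-opt max-size=10m --log-opt max-file=10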

    Then, we will reload the systemd daemon and restart the Docker service:

    sudo systemctl daemon-reload
    sudo systemctl restart docker
    

    We can run a standard check to ensure that Docker is up and running:

    docker ps
    

    If you are using your own VM or a cloud provider instance, it's very likely that root login is disabled (and it is required for the installation). In that case, you will need to perform some additional steps.

    First, update the sshd_config file to allow root login

    sudo sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
    
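    Optionally, validate the configuration before restarting (sshd -t performs a syntax check and prints nothing if the file is valid):

    sudo sshd -t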

    Then, restart the ssh service:

    sudo service ssh restart
    

    You will need to set a password for the root user, as by default it doesn't have one

    sudo passwd
    [sudo] password for xxxx: 
    Enter new UNIX password: 
    Retype new UNIX password: 
    passwd: password updated successfully
    

    Before moving on, we need to set up root SSH key authentication so that IBM Cloud Private can run the installation.

    To do so, we will generate an SSH key by running:

    ssh-keygen
    

    Then copy the generated key to the server itself by executing (where xxxxxx is your server IP):

    ssh-copy-id -i ~/.ssh/id_rsa.pub root@xxxxxx
    
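    Before continuing, it's worth confirming that key-based root SSH works, since the installer relies on it (xxxxxx is your server IP, as above):

    ssh -i ~/.ssh/id_rsa root@xxxxxx hostname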

    Download the IBM Cloud Private CE image

    Download the CE image from Docker Hub by running the following command:

    docker pull ibmcom/icp-inception:3.2.0
    

    Once done, and before moving forward, we need to comment out the following entries in our /etc/hosts file:

    #::1    ip6-localhost   ip6-loopback
    #127.0.1.1      virtualserver01.xxx.xx       virtualserver01
    

    We can proceed and generate the installation folder by running the following commands:

    cd /data

    sudo docker run -e LICENSE=accept \
       -v "$(pwd)":/data ibmcom/icp-inception:3.2.0 cp -r cluster /data
    

    Go to the /data/cluster folder and edit the hosts file, enabling the different sections and adding your instance IP:

    [master]
    10.112.117.204
    
    [worker]
    10.112.117.204
    
    [proxy]
    10.112.117.204
    
    [management]
    10.112.117.204
    
    [va]
    10.112.117.204
    

    Once modified, edit the config.yaml file and add the following below the --- line:

    ansible_python_interpreter: /usr/bin/python3
    


    Also, under the "## Advanced Settings" section, modify and add as per below:

    default_admin_user: admin
    default_admin_password: mJgFFnAMQGjGQyMYQuFA2CP9DTSkvppy (replace with your own generated 32-character password; see the sketch below)
    ansible_user: root
    ansible_become: true
    ansible_become_password: xxxxxxx (your root password)
    
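    One simple way to produce such a password (a sketch using openssl, which ships with Ubuntu; 16 random bytes hex-encode to exactly 32 characters):

    openssl rand -hex 16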

    Finally, copy the SSH key generated earlier to the installation folder, renaming it to ssh_key:

    sudo cp /root/.ssh/id_rsa /data/cluster/ssh_key
    

    Install IBM Cloud Private CE by running the installer from the /data/cluster folder (the command mounts the current directory as the cluster configuration):

    sudo docker run --net=host -t -e LICENSE=accept \
    -v "$(pwd)":/installer/cluster ibmcom/icp-inception:3.2.0 install
    

    During the installation we might see some errors like the one below, due to lack of space in some partitions. As this is a PoC, we can ignore them:

    fatal: [172.16.0.10]: FAILED! => changed=true
      cmd: |-
        disk=$(df -lh -BG --output=avail /var | sed '1d' | grep -oP '\d+')
         [[ $disk -ge 240 ]] || (echo "/var directory available disk space ${disk}GB, it should be greater than or equal to 240 GB" 1>&2 && exit 1)
    

    Once the installation finishes (it takes around 30 minutes), we will see output like:

    [Screenshot: installation summary showing the UI access URL]

    Accessing the Cloud UI

    As you can see, the installation output includes the URL for our IBM Cloud Private UI.
    Unless we're on the same network, for security reasons we won't be able to access it directly.

    In order to access it, we will need to create an SSH tunnel to our instance and then configure our web browser to use it as a SOCKS proxy.

    If we are using Windows, we will open PuTTY --> load our connection --> go to SSH --> Tunnels and create a new dynamic port forward:

    [Screenshot: PuTTY SSH tunnel configuration]

    If we are using Linux, we can open a dynamic (SOCKS) tunnel on local port 4444 with:

    ssh -D 4444 our-remote-host
    
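    With the tunnel up, we can quickly confirm the proxy works before touching the browser (a sketch using curl's SOCKS support; -k skips certificate validation and x.x.x.x is the instance IP from the install output):

    curl -k --socks5-hostname localhost:4444 https://x.x.x.x:8443/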

    Once the connection is open, we will need to modify our web browser proxy options (in our case, Firefox) and configure a manual proxy connection with SOCKS host localhost and port 4444, as shown below:

    [Screenshot: Firefox manual proxy configuration]

    This will allow us to browse to our Cloud UI:

    https://x.x.x.x:8443/

    [Screenshot: IBM Cloud Private login page]

    In order to log in, we will use the admin user and the 32-character password we generated during the install process.

    Browsing the Cloud UI

    Once logged in, if we click on the menu and select Overview, we can see the system and resource overview:

    [Screenshot: system and resource overview]

    If we go back to the menu and select Platform and then Monitoring, a new tab will open with Grafana, where we can browse the different default dashboards available:

    Node Performance

    [Screenshot: Node Performance dashboard]

    Cluster Monitoring

    [Screenshot: Cluster Monitoring dashboard]

    Also, we can access the integrated Elasticsearch by going back to the menu and selecting Platform and then Logging:

    [Screenshot: integrated logging (Kibana)]

    Using kubectl

    We can play around and get information about our cluster by running the following commands:

    kubectl config view
    kubectl get nodes
    kubectl get pods --all-namespaces -o wide
    kubectl get services --sort-by=.metadata.name --all-namespaces
    

    Check the official Kubernetes cheat sheet for more.

    Creating a Persistent Volume

    In a production / cloud environment we will usually have auto-provisioned storage, but in our case we will be using the local filesystem for our test.

    Create a new folder under /data and then an additional folder for our app

    mkdir /data/storage
    chmod -R 777 /data/storage
    mkdir /data/storage/mariadb
    

    Then, create a new file named pv_test.yaml with the following content, replacing 172.16.0.10 with your server IP:

    {
        "kind": "PersistentVolume",
        "apiVersion": "v1",
        "metadata": {
          "name": "mariadb",
          "labels": {
            "volumename": "mariadb",
            "app": "mariadb"
          }
        },
        "spec": {
          "storageClassName": "mariadb",
          "capacity": {
            "storage": "20Gi"
          },
          "accessModes": [
            "ReadWriteMany"
          ],
          "persistentVolumeReclaimPolicy": "Retain",
          "local": {
            "path": "/data/storage/mariadb"
          },
          "persistentVolumeReclaimPolicy": "Retain",
          "volumeMode": "Filesystem",
          "nodeAffinity": {
            "required": {
              "nodeSelectorTerms": [
                {
                  "matchExpressions": [
                    {
                      "key": "kubernetes.io/hostname",
                      "operator": "In",
                      "values": [
                        "172.16.0.10"
                      ]
                    }
                  ]
                }
              ]
            }
        }
      }
    }
    

    Then run the following commands, which will:

    • Create the namespace mariadb
    • Add the specific IBM Cloud rolebinding to allow running privileged containers
    • Create the persistent volume
    • Verify that the persistent volume has been created

    kubectl create namespace mariadb
    kubectl -n mariadb create rolebinding ibm-anyuid-clusterrole-rolebinding --clusterrole=ibm-privileged-clusterrole --group=system:serviceaccounts:mariadb
    kubectl apply -f  pv_test.yaml
    kubectl get pv
    

    Creating a Persistent Volume Claim

    Once we have a volume, we will create a PersistentVolumeClaim in order to satisfy our deployment's storage needs.
    To do so, create a new file named pvc.yaml with the following content:

    {
      "apiVersion": "v1",
      "kind": "PersistentVolumeClaim",
      "metadata": {
        "name": "mariadb",
        "namespace": "mariadb",
        "labels": {
          "app": "mariadb",
          "volumename": "mariadb"
        },
        "finalizers": [
          "kubernetes.io/pvc-protection"
        ]
      },
      "spec": {
        "accessModes": [
          "ReadWriteMany"
        ],
        "resources": {
          "requests": {
            "storage": "20Gi"
          }
        },
        "volumeName": "mariadb",
        "storageClassName": "mariadb",
        "volumeMode": "Filesystem",
        "dataSource": null
      }
    }
    

    Then create the PVC by typing:

    kubectl apply -f pvc.yaml
    

    You can verify that it has been created by using:

    root@pocprivatecloud:/data/cluster# kubectl  get pvc --all-namespaces
    NAMESPACE     NAME                            STATUS   VOLUME                         CAPACITY   ACCESS MODES   STORAGECLASS               AGE
    kube-system   data-logging-elk-data-0         Bound    logging-datanode-172.16.0.10   20Gi       RWO            logging-storage-datanode   94m
    kube-system   image-manager-image-manager-0   Bound    image-manager-172.16.0.10      20Gi       RWO            image-manager-storage      102m
    kube-system   mongodbdir-icp-mongodb-0        Bound    mongodb-172.16.0.10            20Gi       RWO            mongodb-storage            101m
    mariadb       mariadb                         Bound    mariadb                        20Gi       RWX            mariadb                    9s
    root@pocprivatecloud:/data/cluster# kubectl describe pvc mariadb -n mariadb
    

    Or, you can go in the UI to Platform, then Storage, and select PersistentVolumeClaim to validate that it has been created.

    Selecting our Docker image for deployment

    We have our cluster up and running, we have created a volume to store persistent data, so now it's time to perform our first deployment. As you might have noticed, we've decided to perform a mariadb deployment.

    First we need to choose our Docker image. Nothing easier than going to Docker Hub and searching for it:
    https://hub.docker.com/_/mariadb

    Read the notes. They will help us identify which environment variables we need to set and which internal path we need to mount our volume on.

    Additionally, by default, IBM Cloud Private defines a policy regarding which images are authorized to be used for deployments.

    We don't recommend disabling it, as it's a way to guarantee that only the images we trust will be allowed in our environment. Instead, you can either create a cluster or namespace policy from the UI (under Manage, Resource security, Image Policies) or edit the existing policy to allow the Docker Hub registry.

    To do so, perform:

    kubectl  get ClusterImagePolicy
    NAME                                    AGE
    ibmcloud-default-cluster-image-policy   2h
    
    kubectl edit ClusterImagePolicy ibmcloud-default-cluster-image-policy
    

    And add the following under repositories. Then save and quit:

      - name: docker.io/*
    
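    If you prefer a non-interactive change, a JSON patch can append the same entry (a sketch that assumes the repositories list lives under spec.repositories, as in the ICP 3.2 ClusterImagePolicy):

    kubectl patch clusterimagepolicy ibmcloud-default-cluster-image-policy --type=json \
      -p '[{"op": "add", "path": "/spec/repositories/-", "value": {"name": "docker.io/*"}}]'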

    Creating Kubernetes secrets

    As you know, Kubernetes stores secrets encoded in base64. In order to create the necessary secrets for our deployment (mariadb), we will need to run:

    root@pocprivatecloud:/data/cluster# echo -n 'password' | base64
    cGFzc3dvcmQ=
    root@pocprivatecloud:/data/cluster# echo -n 'user' | base64
    dXNlcg==
    root@pocprivatecloud:/data/cluster# echo -n 'database' | base64
    ZGF0YWJhc2U=
    
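    To double-check a value, you can decode it back (base64 -d reverses the encoding):

    echo 'cGFzc3dvcmQ=' | base64 -d
    # prints: password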

    Take note of the values and create a file named secrets.yaml with the following content:

    apiVersion: v1
    kind: Secret
    metadata:
      name: mariadb
      namespace: mariadb
    type: Opaque
    data:
      rootpw: cGFzc3dvcmQ=
      mysqluser: dXNlcg==
      mysqlpassword: cGFzc3dvcmQ=
      mysqldatabase: ZGF0YWJhc2U=
    

    Then, create the secret and validate by typing:

    kubectl apply -f secrets.yaml
    
    kubectl get secrets -n mariadb
    NAME                  TYPE                                  DATA   AGE
    default-token-g7pqt   kubernetes.io/service-account-token   3      20m
    mariadb               Opaque                                4      8s
    
    root@pocprivatecloud:/data/cluster# kubectl describe secret mariadb -n mariadb
    Name:         mariadb
    Namespace:    mariadb
    Labels:       <none>
    Annotations:
    Type:         Opaque
    
    Data
    ====
    mysqldatabase:   8 bytes
    mysqlpassword:   8 bytes
    mysqluser:       4 bytes
    rootpw:          8 bytes
    

    Deploying our app

    Now that we have our storage and secrets created, it's time to deploy our application.
    We will create a basic deployment file named deploy.yaml, which will use the namespace, secrets and volume claim we created:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name:  mariadb
      namespace: mariadb
      labels:
        name: mariadb
        app: mariadb
    spec:
      selector:
        matchLabels:
          app: mariadb
          name: mariadb
      template:
        metadata:
          labels:
            name: mariadb
            app: mariadb
        spec:
          containers:
          - image:  mariadb:latest
            name:  mariadb
            env:
            - name:  MYSQL_ROOT_PASSWORD
              valueFrom:
                 secretKeyRef:
                   name: mariadb
                   key: rootpw
            - name:  MYSQL_USER
              valueFrom:
                 secretKeyRef:
                   name: mariadb
                   key: mysqluser
            - name:  MYSQL_PASSWORD
              valueFrom:
                 secretKeyRef:
                   name: mariadb
                   key: mysqlpassword
            - name:  MYSQL_DATABASE
              valueFrom:
                 secretKeyRef:
                   name: mariadb
                   key: mysqldatabase
            ports:
            - containerPort:  3306
              name:  mariadb
            securityContext:
              privileged: true
            volumeMounts:
            - mountPath: "/var/lib/mysql"
              name: mariadb
          volumes:
          - name: mariadb
            persistentVolumeClaim:
              claimName: mariadb
    

    Then, perform the following command to create our deployment:

    kubectl apply -f deploy.yaml
    
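    Optionally, we can wait for the rollout to complete before validating (kubectl rollout status blocks until the deployment is ready):

    kubectl rollout status deployment/mariadb -n mariadb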

    Validate our app and get live logs

    You can validate that our deployment is up and running by typing:

    root@pocprivatecloud:~# kubectl get pods -n mariadb
    NAME                        READY   STATUS             RESTARTS   AGE
    mariadb-7d597bfbc4-gftb4    1/1     Running            0          3m46s
    

    If you run the following command, you can follow the logs for our application in real time (the pod name after logs is the one discovered with the previous command; the -f flag keeps the stream open):

    kubectl logs -f mariadb-7d597bfbc4-gftb4 -n mariadb
    

    Which returns:

    2020-01-14 15:30:53+00:00 [Note] [Entrypoint]: Database files initialized
    2020-01-14 15:30:53+00:00 [Note] [Entrypoint]: Starting temporary server
    2020-01-14 15:30:53+00:00 [Note] [Entrypoint]: Waiting for server startup
    2020-01-14 15:30:53 0 [Note] mysqld (mysqld 10.4.11-MariaDB-1:10.4.11+maria~bionic) starting as process 123 ...
    2020-01-14 15:30:53 0 [Note] InnoDB: Using Linux native AIO
    2020-01-14 15:30:53 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
    
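    As a final sanity check (a sketch assuming the example secret values used above: user, password and database), we can open a MySQL session inside the pod and confirm the database is reachable:

    kubectl exec -it mariadb-7d597bfbc4-gftb4 -n mariadb -- mysql -uuser -ppassword database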

    Now you can see the difference between using the CLI and going back to the UI (Platform, then Logging) to access Kibana / Elasticsearch.

    Once in Kibana, apply the following filter:

    [Screenshot: Kibana filter]

    This will return all the live logs for your deployment. You can also filter and search for specific log entries:

    [Screenshot: filtered log entries in Kibana]