Install Kubernetes 1.18 in Ubuntu 20

  • This is an exercise for testing purposes


    In order to create our Kubernetes test cluster, we will need 3 nodes (1 master and 2 workers) with the following specs.
    You can go down to 4 GB of RAM for this PoC, but we recommend having at least 8 GB:

    • 2 vCPUs
    • 8 GB RAM
    • Ubuntu 20.04 on each node

    You can find really cheap cloud providers to run this kind of test; some offer a similar instance for around $20 per month per instance.
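A quick way to sanity-check each node against these specs (run on every node; the thresholds in the comments reflect the minimums above):

```shell
# Print CPU and memory so you can check them against the recommended specs
nproc                                # number of vCPUs, expect 2 or more
free -g | awk '/^Mem:/ {print $2}'   # total RAM in GB, expect at least 4 (8 recommended)
```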

    Pre-installation requirements

    You will need to set a password for the root user, as by default it doesn't have one (run this on each node):

    sudo -i
    sudo passwd
    [sudo] password for xxxx: 
    Enter new UNIX password: 
    Retype new UNIX password: 
    passwd: password updated successfully

    Also, on each node edit the file /etc/ssh/sshd_config and change

    #PermitRootLogin prohibit-password

    to

    PermitRootLogin yes

    Then restart the ssh service by running

    systemctl restart ssh

    Then, we will need to generate an ssh key-pair on each node

    root@node1:~# ssh-keygen
    root@node2:~# ssh-keygen
    root@node3:~# ssh-keygen
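ssh-keygen will prompt for a file path and a passphrase; a non-interactive sketch (the ./demo_key path is just for illustration — on the nodes, accept the default /root/.ssh/id_rsa):

```shell
# Generate an RSA key pair without prompts; -N "" sets an empty passphrase
ssh-keygen -q -t rsa -N "" -f ./demo_key

# Both the private and public key files should now exist
ls demo_key demo_key.pub
```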

    Once done, and before moving forward, we need to comment out the following entries in the /etc/hosts file on each node:

    #::1    ip6-localhost   ip6-loopback
    #       virtualserver01

    Then get your node IPs and add entries for node1, node2 and node3 to /etc/hosts on each node.
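For example, assuming private IPs 10.0.0.1-10.0.0.3 (placeholders — substitute your real node IPs), the entries added to /etc/hosts would look like:

```text
10.0.0.1    node1
10.0.0.2    node2
10.0.0.3    node3
```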

    Then copy the public key to every node (this needs to be done from each node):

    ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
    ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2
    ssh-copy-id -i /root/.ssh/id_rsa.pub root@node3

    Update the packages on each node

    sudo apt-get update
    sudo apt-get install \
        apt-transport-https \
        ca-certificates \
        curl \
        software-properties-common -y

    Install Docker

    We need to install Docker on each node:

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
       $(lsb_release -cs) stable"
    sudo apt-get update
    sudo apt-get install docker-ce -y

    Install Kubernetes

    We need to add the Kubernetes signing key on each node.
    To do so, run the following command:

    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

    Also, we need to add the Xenial Kubernetes repository on each node.
    Run the following command to do so and then update the package list:

    sudo apt-add-repository "deb https://apt.kubernetes.io/ kubernetes-xenial main"
    sudo apt-get update

    The next step in the installation process is to install Kubeadm on all the nodes through the following command:

    sudo apt install kubeadm -y

    Once the installation finishes, you can check the version by running

    kubeadm version

    Before moving on, we need to disable swap on all the nodes, as Kubernetes does not perform properly on a system that is using swap memory.

    Run the following command to do so:

    sudo swapoff -a
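Note that swapoff -a only lasts until the next reboot; to make the change permanent you also need to comment out the swap entry in /etc/fstab. A sketch using a sample file (on the real nodes, run the sed against /etc/fstab itself; the /swap.img entry is just an example):

```shell
# Sample fstab with a swap entry (stand-in for /etc/fstab)
printf '%s\n' 'UUID=abcd-1234 / ext4 defaults 0 1' '/swap.img none swap sw 0 0' > fstab.sample

# Comment out every line that mounts swap so it stays disabled after reboot
sed -i '/\sswap\s/ s/^/#/' fstab.sample
cat fstab.sample
```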

    Then, only on our master node (node1), initialize the cluster by typing the following command (the CIDR block below is Calico's default pool; adjust it if it overlaps your network):

    kubeadm init --pod-network-cidr=192.168.0.0/16

    Once the process finishes, take note of the output, as it's really important.
    We will use this information later to join the worker nodes to our cluster.
    It will look like the following (don't execute the join command yet):

    Your Kubernetes control-plane has initialized successfully!
    To start using your cluster, you need to run the following as a regular user:
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    Then you can join any number of worker nodes by running the following on each as root:
    kubeadm join --token ahlqlw.bk3iqojsp519rf7d \
        --discovery-token-ca-cert-hash sha256:ebf7c6b895b52a059872d198a44b942267bef89d8d1d6802bbd3cc8082ebe600

    Run the following commands on the master node

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Then, on the master node, download the Calico network manifest using the following command:

    curl -O https://docs.projectcalico.org/manifests/calico.yaml

    Edit the file calico.yaml and uncomment the CALICO_IPV4POOL_CIDR variable, defining the CIDR block that we configured during the kubeadm init. Change

            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"

    to

            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
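If you prefer to script that edit, a sed sketch on a sample of the relevant lines (the indentation and the 192.168.0.0/16 default mirror the manifest; on the master, point the commands at calico.yaml itself):

```shell
# Sample of the commented-out CIDR lines as they appear in calico.yaml
cat > calico-snippet.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF

# Uncomment both lines while keeping the YAML indentation intact
sed -i 's/# \(- name: CALICO_IPV4POOL_CIDR\)/\1/; s/#   \(value:\)/  \1/' calico-snippet.yaml
cat calico-snippet.yaml
```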

    **In Ubuntu 20.04 we need to adjust the configuration of the net.ipv4.conf.all.rp_filter parameter**

    Edit the file /etc/sysctl.d/10-network-security.conf and set the rp_filter entries to 0:

    net.ipv4.conf.default.rp_filter=0
    net.ipv4.conf.all.rp_filter=0
    Also, to make the change on the fly, run

    sysctl -w net.ipv4.conf.all.rp_filter=0

    Then apply the manifest by running:

    kubectl apply -f calico.yaml

    After a few minutes, you can query the status of the pods by using:

    kubectl get pods --all-namespaces


    And also verify that your master node is now ready by using:

    kubectl get nodes


    Joining worker nodes to the cluster

    Now you can go to node2 and node3 and run the command generated during the installation to join the worker nodes to our cluster:

    kubeadm join --token ahlqlw.bk3iqojsp519rf7d \
        --discovery-token-ca-cert-hash sha256:ebf7c6b895b52a059872d198a44b942267bef89d8d1d6802bbd3cc8082ebe600

    After a couple of minutes, you can check the node status:

    kubectl get nodes


    And get additional node info

    kubectl get nodes -o wide


    Access the cluster from your workstation

    In order to access the cluster from our laptop or a remote machine, we need to create a cluster role and a cluster role binding.
    On the master node, create a yaml file with the content below.

    We are creating a role with full admin permissions. In a PROD environment we should follow the best practices and limit the permissions.

    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      name: remote
    rules:
    - apiGroups:
      - '*'
      resources:
      - '*'
      verbs:
      - '*'
    - nonResourceURLs:
      - '*'
      verbs:
      - '*'

    Apply that file (kubectl apply -f your-file.yaml), then create the cluster role binding by executing the following command from the master node:

    kubectl create clusterrolebinding remote --clusterrole=remote --user=remote --group=system:serviceaccounts

    Get the cluster token by running:

    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'

    You will get a long token string as output; take note of it.
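The long pipeline above first finds the deployment-controller token secret and then prints only the token: field. A quick illustration of the awk extraction on fabricated describe output (the secret name and token value are made up):

```shell
# Fake 'kubectl describe secret' output to demonstrate the awk filter
describe_output='Name:         deployment-controller-token-abcde
Namespace:    kube-system

Data
====
token:        eyJhbGciOiJSUzI1NiJ9.fake.payload'

# awk prints the second field of the line whose first field is exactly "token:"
printf '%s\n' "$describe_output" | awk '$1=="token:"{print $2}'
```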


    Then, from your workstation, open a shell or PowerShell and run the following commands:

    kubectl config set-cluster kubernetes --server=https://your_node1_public_ip:6443 --insecure-skip-tls-verify=true
    kubectl config set-context kubernetes --cluster=kubernetes
    kubectl config set-credentials remote --token=your_token
    kubectl config set-context kubernetes --user=remote
    kubectl config use-context kubernetes
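After these five commands, your workstation's ~/.kube/config should contain entries roughly like the following (the server address and token are the same placeholders used in the commands above):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://your_node1_public_ip:6443
    insecure-skip-tls-verify: true
users:
- name: remote
  user:
    token: your_token
contexts:
- name: kubernetes
  context:
    cluster: kubernetes
    user: remote
current-context: kubernetes
```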

    You can then run the usual commands to check your cluster info:

    kubectl get nodes
    kubectl cluster-info

    Deploy a stateless app

    In order to test our cluster, we can deploy a stateless example app based on the official Kubernetes docs. Create a yaml file with the following content on your workstation:

    apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      selector:
        matchLabels:
          app: nginx
      replicas: 2 # tells deployment to run 2 pods matching the template
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.7.9
            ports:
            - containerPort: 80

    Then, apply it:

    kubectl apply -f my-file.yaml

    And then verify that your deployment is running and on which nodes:

    kubectl.exe get pods --all-namespaces -o wide
    NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
    default       nginx-deployment-54f57cf6bf-n7nfc          1/1     Running   0          2m11s     node2   <none>           <none>
    default       nginx-deployment-54f57cf6bf-wrtsd          1/1     Running   0          2m11s     node3   <none>           <none>

    Access your app

    Now, you can access your nginx server by doing a kubectl port-forward.
    Replace the pod name with the value returned by the previous command:

    kubectl port-forward nginx-deployment-54f57cf6bf-n7nfc 8080:80
    Forwarding from 127.0.0.1:8080 -> 80
    Forwarding from [::1]:8080 -> 80
    Handling connection for 8080
    Handling connection for 8080

    Then, open a web browser to http://localhost:8080
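Port-forwarding is handy for a quick test, but it only lasts while the command runs. As an alternative sketch, a NodePort Service (the name nginx-nodeport and port 30080 are arbitrary choices, not part of the original walkthrough) would expose the deployment on every node's IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport      # hypothetical name
spec:
  type: NodePort
  selector:
    app: nginx              # matches the pod label from the deployment above
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080         # must be free and within the 30000-32767 range
```

Apply it with kubectl apply -f and browse to http://your_node_ip:30080.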

    Using Octant

    As you know, you can deploy and use the Kubernetes dashboard.
    Nevertheless, we recommend you take a look at Octant.

    Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer's toolkit for gaining insight and approaching complexity found in Kubernetes.

    The benefit is that it's installed on your workstation and access is based on your permissions, meaning that no additional deployments are needed in the cluster.
    You can download the latest release from the project's GitHub releases page.

    Once installed, just run the octant command, and a tab in your web browser will open with the console.
    You can check the deployment and pods that we just triggered before.


