Install Kubernetes 1.17 in Ubuntu 18.04

  • This is an exercise for testing purposes

    Specs

    In order to create our Kubernetes test cluster, we will need 3 nodes (1 master and 2 workers) with the following specs.
    You can go down to 4 GB of RAM for this POC, but we recommend having at least 8 GB:

    • 2 vCPUs
    • 8 GB RAM
    • Ubuntu 18.04 on each node

    You can find really cheap cloud providers to run this kind of test. For example, https://www.vultr.com/products/cloud-compute/ offers a similar instance for $20 per month.

    Pre-installation requirements

    First, we will need to generate an SSH key pair on each node:

    root@node1:~# ssh-keygen
    root@node2:~# ssh-keygen
    root@node3:~# ssh-keygen
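
    If you want to skip the interactive prompts, a non-interactive variant is shown below (a sketch; it assumes you are running as root and want the default key path):

    # generate an RSA key pair with an empty passphrase at the default location
    ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa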
    

    You will need to set a password for the root user, as by default it doesn't have one (run this on each node):

    sudo passwd
    [sudo] password for xxxx: 
    Enter new UNIX password: 
    Retype new UNIX password: 
    passwd: password updated successfully
    

    Once done and before moving forward, we need to comment out the following entries in our /etc/hosts file on each node:

    #::1    ip6-localhost   ip6-loopback
    #127.0.1.1      virtualserver01.xxx.xx       virtualserver01
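
    You can comment these out by hand in an editor, or with a quick sed one-liner (a sketch; the exact 127.0.1.1 entry varies per host, so review the file afterwards):

    # prefix the ::1 and 127.0.1.1 entries with '#' in place
    sudo sed -i -e 's/^::1/#::1/' -e 's/^127.0.1.1/#127.0.1.1/' /etc/hosts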
    

    Get your node IPs and add them to /etc/hosts on each node:

    172.16.0.15 node1
    172.16.0.16 node2
    172.16.0.17 node3
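
    For example, you can append all three entries in one step (assuming the IPs above match your environment):

    # append the node entries to /etc/hosts
    # (the EOF terminator must start at the beginning of the line)
    cat <<EOF | sudo tee -a /etc/hosts
    172.16.0.15 node1
    172.16.0.16 node2
    172.16.0.17 node3
    EOF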
    

    Then copy the keys to each node (this needs to be done from each node)

    ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
    ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2
    ssh-copy-id -i /root/.ssh/id_rsa.pub root@node3
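
    You can then verify passwordless access with a quick remote command, e.g. from node1:

    # should print "node2" without asking for a password
    ssh root@node2 hostname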
    

    Update the packages on each node

    sudo apt-get update
    sudo apt-get install \
        apt-transport-https \
        ca-certificates \
        curl \
        software-properties-common -y
    

    Install Docker

    We need to install Docker on each node

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    
    sudo add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
       $(lsb_release -cs) \
       stable"

    sudo apt-get update

    sudo apt-get install docker-ce -y
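
    To confirm Docker is up and will start on boot, you can run:

    # make sure the service starts now and on every boot
    sudo systemctl enable --now docker
    # print client and server versions
    sudo docker version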
    
    

    Install Kubernetes

    We need to add the Kubernetes signing key on each node.
    To do so, run the following command:

    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    

    Also, we need to add the Xenial Kubernetes repository on each node.
    Run the following command to do so, and then update the package list:

    sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
    sudo apt-get update
    

    The next step in the installation process is to install kubeadm on all the nodes (this also pulls in kubelet and kubectl as dependencies) through the following command:

    sudo apt install kubeadm -y
    

    Once the installation finishes, you can check the version by running

    kubeadm version
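
    Optionally, you can pin the packages so an unattended upgrade doesn't bump them later:

    # hold kubelet, kubeadm and kubectl at their current versions
    sudo apt-mark hold kubelet kubeadm kubectl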
    

    Before moving on, we need to disable swap memory on all the nodes, as Kubernetes does not perform properly on a system that is using swap.

    Run the following command to do so:

    sudo swapoff -a
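
    Note that swapoff -a only disables swap until the next reboot. To make the change permanent, comment out the swap entry in /etc/fstab (a sketch; review the file afterwards):

    # comment out any fstab line that mounts swap
    sudo sed -i '/ swap / s/^/#/' /etc/fstab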
    

    Then, only on our master node (node1), initialize the control plane by typing the following command:

    kubeadm init --pod-network-cidr=10.244.0.0/16
    

    Once the process finishes, take note of the output, as it's really important.
    We will use this information later to join the worker nodes to our cluster.

    Don't execute the join command yet

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 172.16.0.15:6443 --token ahlqlw.bk3iqojsp519rf7d \
        --discovery-token-ca-cert-hash sha256:ebf7c6b895b52a059872d198a44b942267bef89d8d1d6802bbd3cc8082ebe600
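
    If you lose this output, you can generate a fresh join command on the master node at any time:

    # prints a ready-to-run kubeadm join command with a new token
    kubeadm token create --print-join-command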
    

    Run the following commands on the master node

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
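
    At this point the master node will report as NotReady, which is expected until we deploy a pod network in the next step:

    # STATUS will show NotReady until the CNI plugin is installed
    kubectl get nodes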
    

    Then, on the master node, download the Calico network manifest using the following command:

    curl https://docs.projectcalico.org/v3.11/manifests/calico.yaml -O
    

    Edit the file calico.yaml, defining the CIDR block that we configured during the kubeadm init. Change

            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
    

    to

            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
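
    If you prefer a one-liner, the same edit can be done with sed (assuming the v3.11 manifest still ships with 192.168.0.0/16 as its default):

    # swap the default pod CIDR for the one passed to kubeadm init
    sed -i 's#192.168.0.0/16#10.244.0.0/16#' calico.yaml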
    


    Then apply the manifest by running:

    kubectl apply -f calico.yaml
    

    After a few minutes, you can query the status of the pods by using:

    kubectl get pods --all-namespaces
    


    And also verify that your master node is now ready by using:

    kubectl get nodes
    


    Joining worker nodes to the cluster

    Now you can go to node2 and node3 and run the command generated during the installation to join the worker nodes to our cluster:

    kubeadm join 172.16.0.15:6443 --token ahlqlw.bk3iqojsp519rf7d \
        --discovery-token-ca-cert-hash sha256:ebf7c6b895b52a059872d198a44b942267bef89d8d1d6802bbd3cc8082ebe600
    

    After a couple of minutes, you can check the node status from the master node:

    kubectl get nodes
    


    And get additional node info

    kubectl get nodes -o wide
    


    Access the cluster from your workstation

    In order to access the cluster from our laptop or a remote machine, we need to create a cluster role and a cluster role binding.
    On the master node, create a YAML file with the following content.

    We are creating a role with full admin permissions. In a production environment, we should follow best practices and limit the permissions.

    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      name: remote
    rules:
    - apiGroups:
      - '*'
      resources:
      - '*'
      verbs:
      - '*'
    - nonResourceURLs:
      - '*'
      verbs:
      - '*'
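
    Save the file and apply it on the master node so the role actually exists in the cluster (the file name remote-role.yaml is just an example; use whatever you saved it as):

    # remote-role.yaml is the example file name used here
    kubectl apply -f remote-role.yaml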
    

    Then create the cluster role binding by executing the following command on the master node:

    kubectl create clusterrolebinding remote --clusterrole=remote --user=remote --group=system:serviceaccounts
    

    Get the cluster token by running:

    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
    

    You will get a long token string such as the following output.
    Take note of it.

    eyJhbGciOiJSUzI1NiIsImtpZCI6ImxnRlJxeFpOZUF2bWFKSktXSzNLVGdvYjRkQk5rU2tyT0JNWVdMbWtvYzAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZXBsb3ltZW50LWNvbnRyb2xsZXItdG9rZW4tZ2pxenEiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVwbG95bWVudC1jb250cm9sbGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2NkMWI0ZTItM2NlOC00MzhjLWIzNmItMjMxYzI1ZTU2ZGU4Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRlcGxveW1lbnQtY29udHJvbGxlciJ9.EV06Ljf3jZH3iH7Fi4YHFjKoSFPK8tLQIoR68GEzt1CGNTwKWKsDSkp6VhvZ3LJO9k4H60bvm6GIfj8aU8qXfE1JRw70hp83_oVsyj-0D-r1THlBiXazeAcw8bP02bWp0_005cNsZZ88UEVfWq1f1cFWUS4aqESfo8w5LFmudEnrA1j2tIikR61u__Mr0WJJrzvc7HhIt79RC2_A1004tjaUWJRhXJM1U52WC6o_B4Iyv1k9xs6rNeFwW3C6vSsfyB381ZN3ItiHGBDPPIZnWhsQb7m1HTRX3kCWi949ngdFtbo0_fJttgob8YIkQaTm77gZcdAERNMwZsfr6NtDiw
    

    Then, from your workstation, open a shell or PowerShell and perform the following commands (we skip TLS verification here because this is a test cluster; don't do this in production):

    kubectl config set-cluster kubernetes --server=https://your_node1_public_ip:6443 --insecure-skip-tls-verify=true
    
    kubectl config set-context kubernetes --cluster=kubernetes

    kubectl config set-credentials remote --token=your_token

    kubectl config set-context kubernetes --user=remote

    kubectl config use-context kubernetes
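
    You can confirm that the new context is active before querying the cluster:

    # should print "kubernetes"
    kubectl config current-context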
    

    You can then run the usual commands to check your cluster info:

    kubectl get nodes
    kubectl cluster-info
    

    Deploy a stateless app

    In order to test our cluster, we can deploy a stateless example app based on the official Kubernetes docs. Create a YAML file with the following content on your workstation:

    apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      selector:
        matchLabels:
          app: nginx
      replicas: 2 # tells deployment to run 2 pods matching the template
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.7.9
            ports:
            - containerPort: 80
    

    Then, apply the file:

    kubectl apply -f my-file.yaml
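
    You can also wait for the rollout to complete:

    # blocks until all replicas are available
    kubectl rollout status deployment/nginx-deployment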
    

    Then verify that your deployment is running, and on which nodes:

    kubectl.exe get pods --all-namespaces -o wide
    NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
    default       nginx-deployment-54f57cf6bf-n7nfc          1/1     Running   0          2m11s   10.244.104.1     node2   <none>           <none>
    default       nginx-deployment-54f57cf6bf-wrtsd          1/1     Running   0          2m11s   10.244.135.1     node3   <none>           <none>
    

    Access your app

    Now, you can access your nginx server by doing a Kubernetes port-forward.
    Replace the pod name with one of the values returned by the previous command:

    kubectl port-forward nginx-deployment-54f57cf6bf-n7nfc 8080:80
    
    Forwarding from 127.0.0.1:8080 -> 80
    Forwarding from [::1]:8080 -> 80
    Handling connection for 8080
    Handling connection for 8080
    

    Then, open a web browser to http://localhost:8080
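
    Or, from another shell, check it with curl while the port-forward is running:

    # should return the nginx welcome page
    curl http://localhost:8080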

    Using Octant

    As you know, you can deploy and use the Kubernetes dashboard.
    Nevertheless, we recommend taking a look at Octant.

    Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer's toolkit for gaining insight and approaching complexity found in Kubernetes.

    The benefit is that it's installed on your workstation and access is based on your permissions, meaning that no additional deployments are needed in the cluster.
    You can download the latest release from the Octant GitHub releases page.

    Once installed, just execute it by running the following command; a tab will open in your web browser with the Octant console.
    There you can check the deployment and pods that we just created.

    octant
    

