1 - Introduction

Overview

To better understand Kubernetes and expand my knowledge of it, I decided to follow the excellent Kubernetes The Hard Way walkthrough put together by Kelsey Hightower over on GitHub.

The repository has over 28.3k stars at the time of writing, so it is a very popular guide to follow.

I had already learned how to use Kubernetes, but Kelsey's guide demonstrates how to set up a production-like Kubernetes cluster.

Building a cluster from the ground up gave me a better insight into how each component works.

The guide is written solely for Google Cloud Platform, but the steps can be adapted to any other public cloud platform if required.

After getting acquainted with running the steps manually, I decided to automate the process of setting up the Kubernetes cluster.

Scripting languages used

  • Python
  • Bash

Automation tools used

Cloud infrastructure setup

  • Terraform

Server configuration

  • Ansible, to configure the Compute instances

Infrastructure components

  • x3 Worker nodes | Debian instances
  • x3 Controllers | Debian instances
  • x1 VPC
  • x1 Private subnet
  • x1 Firewall for internal traffic [allow: tcp, udp, icmp]
  • x1 Firewall for external traffic [allow: tcp, ssh, icmp]
  • x1 Firewall for HTTP health checks [associated with an external network load balancer]
  • x1 External IP address
  • x1 HTTP health check
  • x1 Compute target pool
  • x1 Forwarding rule
  • x3 Compute routes [one per Worker node]

Guide

Key | Topic | Tool/Script
1 | Prerequisites | N/A
2 | Client tools | Bash
3 | Network resources | Terraform
4.1 | Compute instances | Terraform
4.2 | Terraform outputs | Terraform
4.3 | Terraform variables | Terraform
5 | Certificate Authority | Python
6.1 | Kubeconfig | Python
6.2 | Transfer config | Ansible
7 | Data encryption keys | Python
8 | Bootstrap etcd | Ansible
9 | Bootstrap Kubernetes controllers | Ansible
10 | Bootstrap Kubernetes worker nodes | Ansible
11 | Kubernetes Pod network routing | Terraform
12 | Kubectl remote configuration | Bash
13 | Kubernetes DNS addon | Bash
14 | Kubernetes smoke test | Bash
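
At a high level the automation chains these steps together. The sketch below is illustrative only: the Terraform commands are the standard workflow, the Python script name is a placeholder (the actual script is not named in this section), and the playbook/inventory paths match those referenced in the configuration file further down, assumed to be relative to the repository root.

# 1. Provision the network resources and Compute instances (Terraform)
terraform init
terraform apply

# 2. Generate the CA, certificates, kubeconfigs and encryption config (Python)
#    generate-k8s-config.py is a placeholder name for the Python automation
python3 generate-k8s-config.py

# 3. Transfer the configuration and bootstrap etcd, controllers and workers (Ansible)
ansible-playbook -i src/ansible-hosts src/k8s-thw-cluster-playbook.yml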

Ansible structure

The Ansible playbook runs through the 4 roles below.

  1. Transfer files across to the Compute instances
  2. Bootstrap etcd
  3. Bootstrap Controllers
  4. Bootstrap Workers

Roles

Four roles were created to support automating the Kubernetes cluster setup:

  • k8s-transfer-conf » Transfer certificates and configuration
  • k8s-bootstrap-etcd » Bootstrap etcd on Controllers
  • k8s-bootstrap-control-plane » Bootstrap Controllers
  • k8s-bootstrap-workers » Bootstrap Worker nodes

Variables

Role: k8s-bootstrap-control-plane

kube_api_dl: https://dl.k8s.io/v{{ kubernetes_version }}/bin/linux/amd64/kube-apiserver
kube_controller_manager_dl: https://dl.k8s.io/v{{ kubernetes_version }}/bin/linux/amd64/kube-controller-manager
kube_scheduler_dl: https://dl.k8s.io/v{{ kubernetes_version }}/bin/linux/amd64/kube-scheduler
kubectl_dl: https://dl.k8s.io/v{{ kubernetes_version }}/bin/linux/amd64/kubectl
kube_api_service_path: /etc/systemd/system/kube-apiserver.service
kube_service_cluster_ip_range: 10.32.0.0/24
kube_controller_manager_cidr: 10.200.0.0/16
kube_controller_manager_service_cluster_ip_range: 10.32.0.0/24
kube_controller_manager_service_path: /etc/systemd/system/kube-controller-manager.service
kube_scheduler_config_path: /etc/kubernetes/config/kube-scheduler.yaml
kube_scheduler_service_path: /etc/systemd/system/kube-scheduler.service
nginx_healthcheck_path: /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
nginx_sites_enabled_path: /etc/nginx/sites-enabled
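
As an illustration of what these variables resolve to, here is a rough, hand-run equivalent for a controller, assuming kubernetes_version is 1.22.2 as set in the playbook below. The role itself performs these steps with Ansible modules and templates; the /usr/local/bin destination is an assumption based on the upstream guide.

# Download the control plane binaries (URLs built from the *_dl variables)
curl -LO https://dl.k8s.io/v1.22.2/bin/linux/amd64/kube-apiserver
curl -LO https://dl.k8s.io/v1.22.2/bin/linux/amd64/kube-controller-manager
curl -LO https://dl.k8s.io/v1.22.2/bin/linux/amd64/kube-scheduler
curl -LO https://dl.k8s.io/v1.22.2/bin/linux/amd64/kubectl
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
# The systemd units are then rendered to kube_api_service_path,
# kube_controller_manager_service_path and kube_scheduler_service_path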

Role: k8s-bootstrap-etcd

etcd_version: 3.4.17
etcdtl_api: 3
etcd_dl: https://github.com/etcd-io/etcd/releases/download/v{{ etcd_version }}/etcd-v{{ etcd_version }}-linux-amd64.tar.gz
etcd_file: etcd-v{{ etcd_version }}-linux-amd64.tar.gz
etcd_service_path: /etc/systemd/system/etcd.service
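
A rough, hand-run equivalent of what this role automates on each controller; the /usr/local/bin destination is an assumption based on the upstream guide.

# Download and unpack etcd (URL built from etcd_dl, with etcd_version 3.4.17)
curl -LO https://github.com/etcd-io/etcd/releases/download/v3.4.17/etcd-v3.4.17-linux-amd64.tar.gz
tar -xzf etcd-v3.4.17-linux-amd64.tar.gz
sudo mv etcd-v3.4.17-linux-amd64/etcd* /usr/local/bin/
# The systemd unit is rendered from etcd.service.j2 to etcd_service_path
sudo systemctl daemon-reload
sudo systemctl enable --now etcd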

Role: k8s-bootstrap-workers

cri_tools_version: 1.22.0
runc_version: 1.0.2
cni_plugins_version: 1.0.1
containerd_version: 1.5.7

cri_tools_dl: https://github.com/kubernetes-sigs/cri-tools/releases/download/v{{ cri_tools_version }}/crictl-v{{ cri_tools_version }}-linux-amd64.tar.gz
runc_dl: https://github.com/opencontainers/runc/releases/download/v{{ runc_version }}/runc.amd64
cni_plugins_dl: https://github.com/containernetworking/plugins/releases/download/v{{ cni_plugins_version }}/cni-plugins-linux-amd64-v{{ cni_plugins_version }}.tgz
containerd_dl: https://github.com/containerd/containerd/releases/download/v{{ containerd_version }}/containerd-{{ containerd_version }}-linux-amd64.tar.gz
kube_proxy_dl: https://dl.k8s.io/v{{ kubernetes_version }}/bin/linux/amd64/kube-proxy
kubelet_dl: https://dl.k8s.io/v{{ kubernetes_version }}/bin/linux/amd64/kubelet
kubectl_dl: https://dl.k8s.io/v{{ kubernetes_version }}/bin/linux/amd64/kubectl
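
The worker role follows the same pattern; the main difference is that runc is shipped as a single binary while crictl, the CNI plugins and containerd are tarballs. A rough sketch, with destination directories assumed from the upstream guide:

# runc is a single binary
curl -LO https://github.com/opencontainers/runc/releases/download/v1.0.2/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/bin/runc

# crictl and the CNI plugins are tarballs
curl -LO https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.22.0/crictl-v1.22.0-linux-amd64.tar.gz
curl -LO https://github.com/containernetworking/plugins/releases/download/v1.0.1/cni-plugins-linux-amd64-v1.0.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -xzf crictl-v1.22.0-linux-amd64.tar.gz -C /usr/local/bin/
sudo tar -xzf cni-plugins-linux-amd64-v1.0.1.tgz -C /opt/cni/bin/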

Role: k8s-transfer-conf

remote_path: /home/{{ ansible_ssh_user }}/k8s-thw
certs_archive: ~/Documents/utils/terraform/02-k8s-gcp-cluster/k8s-certs/k8s-certs.tar.gz
kubeconfig_archive: ~/Documents/utils/terraform/02-k8s-gcp-cluster/k8s-conf/k8s-kubeconfig.tar.gz
encryption_config: ~/Documents/utils/terraform/02-k8s-gcp-cluster/k8s-conf/encryption-config.yaml

Templates

Role: k8s-bootstrap-control-plane

  • kube-apiserver.service.j2
  • kube-controller-manager.j2

Role: k8s-bootstrap-etcd

  • etcd.service.j2

Role: k8s-bootstrap-workers

  • 10-bridge.conf.j2
  • kubelet-config.yaml.j2
  • kube-proxy-config.yaml.j2

Defaults

Role: k8s-bootstrap-control-plane

  • cluster-role-bind-rbac.yml
  • cluster-role-rbac.yml
  • kubernetes.default.svc.cluster.local
  • kube-scheduler.service
  • kube-scheduler.yaml

Role: k8s-bootstrap-etcd

N/A

Role: k8s-bootstrap-workers

  • 99-loopback.conf
  • containerd-config.toml
  • containerd.service
  • kubelet.service
  • kube-proxy.service

Playbook

---
- hosts: controllers:workers
  roles:
    - ansible/roles/k8s-transfer-conf
    - ansible/roles/k8s-bootstrap-etcd
    - ansible/roles/k8s-bootstrap-control-plane
    - ansible/roles/k8s-bootstrap-workers
  vars:
    etcd_servers:
      - controller-1: 10.240.0.11
      - controller-2: 10.240.0.12
      - controller-3: 10.240.0.13
    pod_cidr:
      - worker-1: 10.200.1.0/24
      - worker-2: 10.200.2.0/24
      - worker-3: 10.200.3.0/24
    kubernetes_version: 1.22.2
    remote_path: /home/{{ ansible_ssh_user }}/k8s-thw
    temp_dir: /tmp
    kubernetes_public_address: <static external IP-Address>
    kube_apiserver_count: 3
    cluster_dns: "10.32.0.10"
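
A hedged example of invoking the playbook; the paths match those referenced in the configuration file below (assumed relative to the repository root), and the -e flag simply shows one way of supplying the placeholder values above at runtime (203.0.113.10 is a documentation IP address used purely for illustration).

ansible-playbook -i src/ansible-hosts src/k8s-thw-cluster-playbook.yml \
  -e kubernetes_public_address=203.0.113.10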

Inventory

---
controllers:
  hosts:
    controller-1:
      ansible_host: <external-IP-Address>
      ansible_ssh_user: <ssh-username>
    controller-2:
      ansible_host: <external-IP-Address>
      ansible_ssh_user: <ssh-username>
    controller-3:
      ansible_host: <external-IP-Address>
      ansible_ssh_user: <ssh-username>
workers:
  hosts:
    worker-1:
      ansible_host: <external-IP-Address>
      ansible_ssh_user: <ssh-username>
    worker-2:
      ansible_host: <external-IP-Address>
      ansible_ssh_user: <ssh-username>
    worker-3:
      ansible_host: <external-IP-Address>
      ansible_ssh_user: <ssh-username>
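
Before running the playbook it is worth confirming that Ansible can reach every host in the inventory, for example with the built-in ping module (the inventory path is the one referenced in the configuration file below):

ansible all -i src/ansible-hosts -m ping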

Naming convention of Compute instances

Worker node hostnames follow the pattern “worker-n” where n starts from 1.

  • e.g. worker-1, worker-2, worker-3

Controller hostnames follow the pattern “controller-n” where n starts from 1.

  • e.g. controller-1, controller-2, controller-3

Configuration file

Various settings that the automation uses to set up the Kubernetes cluster are defined in a JSON-formatted file.

Note: this configuration file is not used by Terraform, which has its own separate variables file.

The file contains settings such as the number of worker nodes and controllers, IP addresses, SSH usernames, and paths to files, e.g.

[
    {
        "workersCount": 3,
        "controllersCount": 3,
        "ansibleSettings": {
            "etcdServers": {
                "controller-1": "10.240.0.11",
                "controller-2": "10.240.0.12",
                "controller-3": "10.240.0.13"
            },
            "podCIDR": {
                "worker-1": "10.200.1.0/24",
                "worker-2": "10.200.2.0/24",
                "worker-3": "10.200.3.0/24"
            },
            "clusterCIDR": "10.200.0.0/16",
            "kubernetesVersion": "1.22.2",
            "kubeAPIServerCount": 3,
            "clusterDNS": "10.32.0.10"
        },
        "staticExternalIP": "",
        "clusterName": "kubernetes-the-hard-way",
        "certificatesPath": "",
        "k8sConfPath": "",
        "templatesPath": "/templates",
        "ansiblePlaybook": "/src/k8s-thw-cluster-playbook.yml",
        "ansibleInventory": "/src/ansible-hosts",
        "controllers": [
            {
                "name": "controller-1",
                "internalIP": "10.240.0.11",
                "externalIP": "",
                "sshUser": ""
            },
            {
                "name": "controller-2",
                "internalIP": "10.240.0.12",
                "externalIP": "",
                "sshUser": ""
            },
            {
                "name": "controller-3",
                "internalIP": "10.240.0.13",
                "externalIP": "",
                "sshUser": ""
            }
        ],
        "workers": [
            {
                "name": "worker-1",
                "internalIP": "10.240.0.21",
                "externalIP": "",
                "sshUser": ""
            },
            {
                "name": "worker-2",
                "internalIP": "10.240.0.22",
                "externalIP": "",
                "sshUser": ""
            },
            {
                "name": "worker-3",
                "internalIP": "10.240.0.23",
                "externalIP": "",
                "sshUser": ""
            }
        ]
    }
]
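
As a quick illustration of pulling values out of this file, a jq one-liner per setting works well; config.json is a placeholder name, as the actual file name is not shown here.

jq -r '.[0].ansibleSettings.kubernetesVersion' config.json   # 1.22.2
jq -r '.[0].workersCount' config.json                        # 3
jq -r '.[0].controllers[].internalIP' config.json            # prints one internal IP per line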