garutilorenzo.ansible-role-linux-kubernetes
Installing and Setting Up a Highly Available Kubernetes Cluster
This Ansible role simplifies the installation and configuration of a highly available Kubernetes cluster, automating the process with kubeadm.
This repository serves as an example of how to use Ansible for automating the setup of a Kubernetes cluster. For production use, consider using Kubespray.
Requirements
Install the necessary packages (ansible, ipaddr, netaddr) using:
pip install -r requirements.txt
To install this role from Ansible Galaxy, run:
ansible-galaxy install garutilorenzo.ansible-role-linux-kubernetes
Alternatively, to install it directly from GitHub, run:
ansible-galaxy install git+https://github.com/garutilorenzo/ansible-role-linux-kubernetes.git
Role Variables
This role accepts the following variables:
| Variable | Required | Default | Description |
|---|---|---|---|
| kubernetes_subnet | Yes | 192.168.25.0/24 | The subnet where the Kubernetes cluster is deployed. |
| disable_firewall | No | no | Set to yes to disable the firewall. |
| kubernetes_version | No | 1.25.0 | Version of Kubernetes to install. |
| kubernetes_cri | No | containerd | Container runtime interface (CRI) to install. |
| kubernetes_cni | No | flannel | Container network interface (CNI) to install. |
| kubernetes_dns_domain | No | cluster.local | Default DNS domain of the cluster. |
| kubernetes_pod_subnet | No | 10.244.0.0/16 | Subnet used for Kubernetes pods. |
| kubernetes_service_subnet | No | 10.96.0.0/12 | Subnet used for Kubernetes services. |
| kubernetes_api_port | No | 6443 | Listening port of the kube-apiserver. |
| setup_vip | No | no | Set to yes to set up a virtual IP (VIP) for the control plane using kube-vip. |
| kubernetes_vip_ip | No | 192.168.25.225 | Required if setup_vip is yes. The VIP of the control plane. |
| kubevip_version | No | v0.4.3 | Version of the kube-vip container. |
| install_longhorn | No | no | Install Longhorn, a cloud-native distributed storage solution for Kubernetes. |
| longhorn_version | No | v1.3.1 | Longhorn release version. |
| install_nginx_ingress | No | no | Install the NGINX ingress controller. |
| nginx_ingress_controller_version | No | controller-v1.3.0 | Version of the NGINX ingress controller. |
| nginx_ingress_controller_http_nodeport | No | 30080 | NodePort used for HTTP traffic. |
| nginx_ingress_controller_https_nodeport | No | 30443 | NodePort used for HTTPS traffic. |
| enable_nginx_ingress_proxy_protocol | No | no | Enable PROXY protocol mode for the NGINX ingress controller. |
| enable_nginx_real_ip | No | no | Enable the real IP module for the NGINX ingress controller. |
| nginx_ingress_real_ip_cidr | No | 0.0.0.0/0 | Required if enable_nginx_real_ip is yes. Trusted CIDR for the real IP module. |
| nginx_ingress_proxy_body_size | No | 20m | Maximum proxy body size for the NGINX ingress controller. |
| sans_base | No | [list of values, see defaults/main.yml] | IPs and FQDNs used to sign the kube-apiserver certificate. |
Extra Variables
You can also set an additional variable called kubernetes_init_host, used for the initial cluster setup. It should be the hostname of one of the master nodes.
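For reference, here is a minimal sketch of how these variables can be overridden in a vars file instead of on the command line. The file path group_vars/k8s.yml and the chosen values are illustrative assumptions; only the variable names and defaults come from the table above.

```yaml
# group_vars/k8s.yml -- hypothetical vars file; variable names are from the table above
kubernetes_subnet: 192.168.25.0/24   # subnet where the cluster nodes live
kubernetes_version: 1.25.0           # Kubernetes release installed by kubeadm
setup_vip: yes                       # expose the control plane behind a kube-vip VIP
kubernetes_vip_ip: 192.168.25.225    # required whenever setup_vip is yes
disable_firewall: yes                # let the role disable the host firewall
```

Variables set this way are merged with the defaults in defaults/main.yml, and anything passed with -e on the command line still takes precedence.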
Resources Deployed
This role can optionally deploy the NGINX ingress controller and Longhorn, controlled by the install_nginx_ingress and install_longhorn variables above.
Nginx Ingress Controller
On bare-metal installations, the NGINX ingress controller is exposed via a NodePort service. You can customize the NodePort values through the role variables listed above.
Longhorn
Longhorn is a lightweight and reliable distributed storage system for Kubernetes. It implements distributed block storage using containers and microservices, creating a dedicated storage controller for each volume and synchronously replicating it across multiple replicas.
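Both components are optional and disabled by default. As a sketch, enabling them through the role variables could look like the following; the values shown simply repeat the defaults from the table above.

```yaml
# Hypothetical vars snippet; variable names and defaults are taken from the table above
install_nginx_ingress: yes
nginx_ingress_controller_http_nodeport: 30080    # NodePort used for HTTP traffic
nginx_ingress_controller_https_nodeport: 30443   # NodePort used for HTTPS traffic
install_longhorn: yes
longhorn_version: v1.3.1                         # Longhorn release deployed by the role
```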
Testing with Vagrant
To test this role, use Vagrant and VirtualBox to set up a test environment. After downloading this repo, run:
vagrant up
You can insert your public SSH key into the authorized_keys of the vagrant user by changing the CHANGE_ME placeholder in the Vagrantfile. You can also adjust the number of VMs by modifying the NNODES variable (default: 6).
Using the Role
Use the examples from the examples/ directory. Modify the hosts.ini file with your hosts and run the playbook:
ansible-playbook -i hosts-ubuntu.ini site.yml -e kubernetes_init_host=k8s-ubuntu-0
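If you prefer to write your own playbook instead of using the one in examples/, a minimal sketch could look like the following. The inventory group name k8s is an assumption, and the role name matches the Ansible Galaxy install command from the Requirements section.

```yaml
# site.yml -- minimal sketch, not the playbook shipped in examples/
- hosts: k8s                                        # hypothetical group containing all cluster nodes
  become: yes                                       # root privileges are needed to install packages
  roles:
    - garutilorenzo.ansible-role-linux-kubernetes   # this role, installed via ansible-galaxy
```

Run it with the same ansible-playbook command shown above, passing kubernetes_init_host to select the master node that bootstraps the cluster.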
Check Cluster Status
After deployment, check the status of your Kubernetes cluster:
kubectl get nodes
To check the status of your pods:
kubectl get pods --all-namespaces
Inspect Nginx Ingress Controller Service
You can view the Nginx ingress controller services with:
kubectl get svc -n ingress-nginx
To test the ingress controller from an external machine, use:
curl -v http://192.168.25.110:30080
You should receive an HTTP 404 response, which indicates the ingress controller is reachable and working.