ansible-role-kubernetes-worker
This Ansible role is used to set up Kubernetes worker nodes in the tutorial Kubernetes the not so hard way with Ansible - Worker. For more details, refer to that tutorial.
Versions
Each release is tagged, and I follow semantic versioning. It's best to use the latest tag. The master branch is for ongoing development, while the tags mark stable versions. For example, the tag 24.0.0+1.27.8 means it's version 24.0.0 of this role, compatible with Kubernetes version 1.27.8. If the role itself changes, the version before the + will increase; if the Kubernetes version changes, the version after the + will increase. This helps in managing changes, especially for major Kubernetes releases.
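For example, to pin a checked-out copy of this role to a specific release (a minimal sketch; the tag shown is just the one from the example above, substitute whatever release you actually need):

git clone https://github.com/githubixx/ansible-role-kubernetes-worker.git githubixx.kubernetes_worker
cd githubixx.kubernetes_worker
git checkout 24.0.0+1.27.8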
Requirements
This playbook assumes you have already set up the Kubernetes controller components (see kubernetes-controller and Kubernetes the not so hard way with Ansible - Control plane).
You also need to have containerd, CNI plugins, and runc installed. To allow Kubernetes Pods to communicate across different hosts, consider installing Cilium later on. Other networking solutions like Calico, WeaveNet, kube-router, or flannel are also valid options.
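If you use my companion roles for these prerequisites, a requirements.yml along these lines pulls everything in at once (a sketch assuming the Galaxy role names githubixx.containerd, githubixx.cni, githubixx.runc and githubixx.cilium_kubernetes fit your setup; verify them before use):

---
# Hypothetical requirements.yml for the prerequisites mentioned above.
roles:
  - name: githubixx.containerd
  - name: githubixx.cni
  - name: githubixx.runc
  - name: githubixx.cilium_kubernetes

Install them with ansible-galaxy role install -r requirements.yml.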
Supported Operating Systems
- Ubuntu 20.04 (Focal Fossa)
- Ubuntu 22.04 (Jammy Jellyfish)
Changelog
For a detailed change history, see the CHANGELOG.md.
IMPORTANT: Version 24.0.0+1.27.8 included several potentially breaking changes. If you upgrade from a version below this one, also check the CHANGELOG of that version!
Recent changes:

26.0.0+1.29.4

UPDATE
- Updated k8s_release to 1.29.4.

MOLECULE
- Changed to use alvistack instead of generic Vagrant boxes.

25.0.1+1.28.8

UPDATE
- Updated k8s_release to 1.28.8.

25.0.0+1.28.5

UPDATE
- Updated k8s_release to 1.28.5.

OTHER CHANGES
- Adjusted GitHub actions due to Ansible Galaxy updates.
- Increased max line length in .yamllint from 200 to 300.

MOLECULE
- Changed test assets VM to Ubuntu 22.04.
- Updated IP addresses.
- Updated certificate names and encryption method to ecdsa.
- Removed collections.yml.
Installation
- You can download directly from GitHub (make sure to navigate to the Ansible roles directory first):
git clone https://github.com/githubixx/ansible-role-kubernetes-worker.git githubixx.kubernetes_worker
- Alternatively, you can install it using the ansible-galaxy command:
ansible-galaxy role install githubixx.kubernetes_worker
- You can also create a requirements.yml file to download the role from GitHub:

---
roles:
  - name: githubixx.kubernetes_worker
    src: https://github.com/githubixx/ansible-role-kubernetes-worker.git
    version: 26.0.0+1.29.4

and install it with:
ansible-galaxy role install -r requirements.yml
Role Variables
# Directory for Kubernetes configuration and certificate files for worker nodes.
k8s_worker_conf_dir: "/etc/kubernetes/worker"
# Directory for all certificate files related to the worker nodes.
k8s_worker_pki_dir: "{{ k8s_worker_conf_dir }}/pki"
# Directory for Kubernetes binaries.
k8s_worker_bin_dir: "/usr/local/bin"
# Kubernetes release version to be used.
k8s_worker_release: "1.29.4"
# Network interface for Kubernetes services communication.
k8s_interface: "eth0"
# Directory to copy K8s certificates from the user's local home directory.
k8s_ca_conf_directory: "{{ '~/k8s/certs' | expanduser }}"
# IP address or hostname of the Kubernetes API endpoint.
k8s_worker_api_endpoint_host: "{% set controller_host = groups['k8s_controller'][0] %}{{ hostvars[controller_host]['ansible_' + hostvars[controller_host]['k8s_interface']].ipv4.address }}"
# The port on which the Kubernetes API servers are accessible.
k8s_worker_api_endpoint_port: "6443"
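# Example (hypothetical hostnames): with an inventory like
#
#   [k8s_controller]
#   controller01.example.com
#
#   [k8s_worker]
#   worker01.example.com
#
# the "k8s_worker_api_endpoint_host" default above resolves to the IPv4
# address of controller01's "k8s_interface" network interface. For a highly
# available setup, point this variable at a load balancer in front of all
# kube-apiserver instances instead.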
# Required OS packages for a Kubernetes worker node.
k8s_worker_os_packages:
  - ebtables
  - ethtool
  - ipset
  - conntrack
  - iptables
  - iptstate
  - netstat-nat
  - socat
  - netbase
# Directory for kubelet configuration.
k8s_worker_kubelet_conf_dir: "{{ k8s_worker_conf_dir }}/kubelet"
# Kubelet settings.
k8s_worker_kubelet_settings:
  "config": "{{ k8s_worker_kubelet_conf_dir }}/kubelet-config.yaml"
  "node-ip": "{{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}"
  "kubeconfig": "{{ k8s_worker_kubelet_conf_dir }}/kubeconfig"
# Kubelet configuration file (kubelet-config.yaml) structure.
k8s_worker_kubelet_conf_yaml: |
  # Configuration for kubelet
  kind: KubeletConfiguration
  apiVersion: kubelet.config.k8s.io/v1beta1
  address: {{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}
  authentication:
    anonymous:
      enabled: false
    webhook:
      enabled: true
    x509:
      clientCAFile: "{{ k8s_worker_pki_dir }}/ca-k8s-apiserver.pem"
  authorization:
    mode: Webhook
  clusterDomain: "cluster.local"
  clusterDNS:
    - "10.32.0.254"
  failSwapOn: true
  healthzBindAddress: "{{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}"
  healthzPort: 10248
  runtimeRequestTimeout: "15m"
  serializeImagePulls: false
  tlsCertFile: "{{ k8s_worker_pki_dir }}/cert-{{ inventory_hostname }}.pem"
  tlsPrivateKeyFile: "{{ k8s_worker_pki_dir }}/cert-{{ inventory_hostname }}-key.pem"
  cgroupDriver: "systemd"
  registerNode: true
  containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"
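# Example (hypothetical): kubelet command line flags are rendered from the
# "k8s_worker_kubelet_settings" dictionary above. Overriding that variable
# (e.g. in "host_vars") replaces the whole dictionary, so re-specify the
# default keys and then add extra flags, such as "v": "2" for more verbose
# kubelet logging.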
# Directory for kube-proxy configuration.
k8s_worker_kubeproxy_conf_dir: "{{ k8s_worker_conf_dir }}/kube-proxy"
# Kube-proxy settings.
k8s_worker_kubeproxy_settings:
  "config": "{{ k8s_worker_kubeproxy_conf_dir }}/kubeproxy-config.yaml"
# Kube-proxy configuration file (kubeproxy-config.yaml) structure.
k8s_worker_kubeproxy_conf_yaml: |
  # Configuration for kube-proxy
  kind: KubeProxyConfiguration
  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  bindAddress: {{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}
  clientConnection:
    kubeconfig: "{{ k8s_worker_kubeproxy_conf_dir }}/kubeconfig"
  healthzBindAddress: {{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}:10256
  mode: "ipvs"
  ipvs:
    minSyncPeriod: 0s
    scheduler: ""
    syncPeriod: 2s
  iptables:
    masqueradeAll: true
  clusterCIDR: "10.200.0.0/16"
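All of the variables above can be overridden per group or host as usual in Ansible. A minimal sketch (the file name group_vars/k8s_worker.yml, the interface name and the kube-proxy mode switch are assumptions, not recommendations):

---
# group_vars/k8s_worker.yml (hypothetical): use a different network
# interface and run kube-proxy in iptables mode instead of ipvs.
k8s_interface: "ens3"
k8s_worker_kubeproxy_conf_yaml: |
  kind: KubeProxyConfiguration
  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  bindAddress: {{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}
  clientConnection:
    kubeconfig: "{{ k8s_worker_kubeproxy_conf_dir }}/kubeconfig"
  mode: "iptables"
  iptables:
    masqueradeAll: true
  clusterCIDR: "10.200.0.0/16"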
Dependencies
Example Playbook
- hosts: k8s_worker
  roles:
    - githubixx.kubernetes_worker
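Assuming the playbook above is saved as k8s.yml (a hypothetical file name) and your inventory file is called hosts, you would apply the role with:

ansible-playbook -i hosts k8s.yml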
Testing
This role includes a test setup using Molecule, libvirt (vagrant-libvirt), and QEMU/KVM. For setup instructions, check my blog post Testing Ansible roles with Molecule, libvirt (vagrant-libvirt) and QEMU/KVM. The test configuration can be found here.
You can run the following command to set up virtual machines (VMs) with a supported Ubuntu OS and install a Kubernetes cluster:
molecule converge
At this point, the cluster isn't fully functional because a network plugin is missing, so Pod-to-Pod communication isn't possible yet. You can install Cilium for network communication and CoreDNS for DNS services with this command:
molecule converge -- --extra-vars k8s_worker_setup_networking=install
This will set up a functional Kubernetes cluster.
A verification step is also included:
molecule verify
To clean up afterward, run:
molecule destroy
License
GNU GENERAL PUBLIC LICENSE Version 3
Author Information
You can find more on my blog: http://www.tauceti.blog