RKE2 Ansible Role
This Ansible role deploys an RKE2 Kubernetes cluster. It installs RKE2 using the tarball method.
You can install RKE2 in three ways:

- Single Node: a single node acts as both server and agent.
- Cluster Mode: one server (master) node with one or more agent (worker) nodes.
- High Availability (HA) Mode: multiple server (master) nodes, using an odd number of servers (three recommended), and zero or more agent (worker) nodes. This mode requires a highly available etcd and Kubernetes API endpoint (via a Keepalived VIP or kube-vip).

You can also install RKE2 in an air-gapped scenario by using local artifacts.
You can upgrade RKE2 by updating the `rke2_version` variable and re-running the playbook. The RKE2 service on each node restarts one at a time, and the role waits for each node to become ready before restarting the next one.
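For example, a minimal upgrade playbook might look like this (a sketch; the target version below is illustrative):

```yaml
- name: Upgrade RKE2
  hosts: k8s_cluster
  become: yes
  vars:
    rke2_version: v1.26.0+rke2r1  # illustrative target release
  roles:
    - role: lablabs.rke2
```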
Requirements
- Ansible 2.10+
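You can install the role from Ansible Galaxy:

```bash
ansible-galaxy install lablabs.rke2
```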
Tested on
- Rocky Linux 9
- Ubuntu 24.04 LTS
Role Variables
These are the default settings from `defaults/main.yml`:

```yaml
---
rke2_type: server # Node type: server or agent
rke2_ha_mode: false # Run the control plane in HA mode
rke2_ha_mode_keepalived: true # Install Keepalived on servers (disable if using a pre-configured load balancer)
rke2_ha_mode_kubevip: false # Use kube-vip as the load balancer (requires rke2_ha_mode_keepalived to be false)
rke2_api_ip: "{{ hostvars[groups[rke2_servers_group_name].0]['ansible_default_ipv4']['address'] }}" # API address (defaults to the first server's IPv4)
rke2_loadbalancer_ip_range: {} # Range of IPs for LoadBalancer services
rke2_kubevip_cloud_provider_enable: true # Enable kube-vip as a cloud provider
rke2_kubevip_svc_enable: true # Watch LoadBalancer services
rke2_kubevip_image: ghcr.io/kube-vip/kube-vip:v0.6.4 # kube-vip container image
rke2_token: defaultSecret12345 # Pre-shared token for node registration
rke2_version: v1.25.3+rke2r1 # RKE2 version
# ... other configuration options ...
```
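For example, to run the control plane in HA mode behind kube-vip instead of Keepalived, an override could look like the following sketch (the VIP and address range are illustrative, and the `range-global` key is an assumption about the format the role expects):

```yaml
rke2_ha_mode: true
rke2_ha_mode_keepalived: false
rke2_ha_mode_kubevip: true
rke2_api_ip: 192.168.123.100 # illustrative virtual IP for the Kubernetes API
rke2_loadbalancer_ip_range:
  range-global: 192.168.123.50-192.168.123.99 # assumed kube-vip range format
```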
Inventory File Example
This role requires nodes to be grouped into `masters` and `workers`. Master nodes should go into the `masters` group and worker nodes into the `workers` group. Both groups should be children of the `k8s_cluster` group.
```ini
[masters]
master-01 ansible_host=192.168.123.1 rke2_type=server
master-02 ansible_host=192.168.123.2 rke2_type=server
master-03 ansible_host=192.168.123.3 rke2_type=server

[workers]
worker-01 ansible_host=192.168.123.11 rke2_type=agent
worker-02 ansible_host=192.168.123.12 rke2_type=agent
worker-03 ansible_host=192.168.123.13 rke2_type=agent

[k8s_cluster:children]
masters
workers
```
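With an inventory like this saved to a file, a run might look like the following (the file names are illustrative):

```bash
ansible-playbook -i hosts.ini deploy-rke2.yml
```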
Playbook Examples
- Deploy RKE2 to a Single Node (both server and agent):

```yaml
- name: Deploy RKE2
  hosts: node
  become: yes
  roles:
    - role: lablabs.rke2
```
- Deploy RKE2 to a Cluster (one server and several agents):

```yaml
- name: Deploy RKE2
  hosts: all
  become: yes
  roles:
    - role: lablabs.rke2
```
- Deploy RKE2 in Air-Gapped Mode (using Multus and Calico):

```yaml
- name: Deploy RKE2
  hosts: all
  become: yes
  vars:
    rke2_airgap_mode: true
    rke2_cni:
      - multus
      - calico
  roles:
    - role: lablabs.rke2
```
- Deploy RKE2 with HA Configuration:

```yaml
- name: Deploy RKE2
  hosts: all
  become: yes
  vars:
    rke2_ha_mode: true
    rke2_download_kubeconf: true
  roles:
    - role: lablabs.rke2
```
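When `rke2_download_kubeconf` is enabled, the role fetches the cluster kubeconfig to the Ansible controller. Assuming the default download location (an assumption; check `rke2_download_kubeconf_path` in the role defaults), you could then talk to the cluster like this:

```bash
export KUBECONFIG=/tmp/rke2.yaml # assumed default download path
kubectl get nodes
```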
Separate Token for Agent Nodes
You can set a different token for agent nodes so they have limited access. Here’s how:
- Remove `rke2_token` from global vars.
- In `group_vars/masters.yml`, add:

```yaml
rke2_token: defaultSecret12345
rke2_agent_token: agentSecret54321
```

- In `group_vars/workers.yml`, add:

```yaml
rke2_token: agentSecret54321
```
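In RKE2 itself, these values map to the `token` and `agent-token` settings; a node joining with only the agent token can register as an agent but cannot join as a server. Assuming the role passes the variables straight through, the rendered config on a server node would contain something like this sketch:

```yaml
# /etc/rancher/rke2/config.yaml on a server node (illustrative)
token: defaultSecret12345
agent-token: agentSecret54321
```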
Troubleshooting
Playbook Stuck While Starting RKE2 Service
If the playbook hangs while starting the RKE2 service, the nodes may not be able to reach each other on the required ports. Check the inbound network rules required for RKE2 server nodes in the RKE2 documentation.
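A quick way to verify connectivity from an agent to a server is to probe the RKE2 supervisor port (9345) and the Kubernetes API port (6443); the server address below is a placeholder:

```bash
nc -vz 192.168.123.1 9345 # RKE2 supervisor API
nc -vz 192.168.123.1 6443 # Kubernetes API
```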
License
MIT
Author Information
Created in 2021 by Labyrinth Labs