Ansible KubeVirt Modules
Ansible KubeVirt modules help automate the management of different Kubernetes objects, including:
- Virtual Machines (and their templates and presets),
- VM Replica Sets,
- and Persistent Volume Claims (with Containerized Data Importer features).
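As an illustration, a minimal playbook using the kubevirt_vm module might look like the following sketch. The VM name, namespace, and sizing values are arbitrary examples, not values mandated by the modules:

```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Create a virtual machine and request that it be started
      kubevirt_vm:
        state: running      # create the VM and start it
        name: example-vm    # placeholder name
        namespace: default
        memory: 64Mi
        cpu_cores: 1
```

Check `ansible-doc kubevirt_vm` for the full list of supported parameters.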
Since Ansible 2.8, these modules, along with the inventory plugin and related tests, are available in the main Ansible git repository. This repository contains only integration tests and example playbooks.
Quickstart
For an easy introduction, see the blog posts about these modules on kubevirt.io.
Requirements
- Ansible version 2.8 or newer, installed with:
  pip3 install --user ansible
- openshift-restclient-python version 0.8.2 or newer, installed with:
  pip3 install --user openshift
Source Code
Testing
There are two types of tests available here:
- Module tests in tests/playbooks
- Role tests in tests/roles
To run all tests for a specific target, use the all.yml playbook.
Automatic Testing
Unit Tests
The upstream Ansible repository includes unit tests for the KubeVirt modules.
Integration Tests
Module tests (tests/playbooks/all.yml) are executed on actual clusters that have both KubeVirt and CDI deployed. These tests run on:
- TravisCI (Ubuntu VMs that support only minikube; KubeVirt VMs cannot use KVM acceleration)
- oVirt Jenkins (Physical servers that support any cluster with kubevirtci)
Module tests are conducted using:
- the latest released version of Ansible (whatever is installed with pip install ansible),
- the stable branches of Ansible, and
- the development branch of Ansible.
Role tests (tests/roles/all.yml) are only run on TravisCI using the development branch.
To catch issues quickly, Travis runs all tests every 24 hours using a fresh clone of ansible.git and notifies KubeVirt module developers if any tests fail.
Manual Testing
Clone this repository to a machine where you can run oc login against your cluster:
$ git clone https://github.com/kubevirt/ansible-kubevirt-modules.git
$ cd ./ansible-kubevirt-modules
(Optional) Set up a virtual environment for dependency isolation:
$ python3 -m venv env
$ source env/bin/activate
Install necessary dependencies:
$ pip install openshift
If you skipped the previous step, you might need to run that command with sudo.
Install Ansible in one of the following ways:
Install the latest released version (you may need sudo here):
$ pip install ansible
Or build an RPM from the development branch:
$ git clone https://github.com/ansible/ansible.git
$ cd ./ansible
$ make rpm
$ sudo rpm -Uvh ./rpm-build/ansible-*.noarch.rpm
Run the tests:
$ ansible-playbook tests/playbooks/all.yml
Note: The playbook examples include cloud-init configuration to access the created VMIs.
To use SSH:
$ kubectl get all
NAME                             READY     STATUS    RESTARTS   AGE
po/virt-launcher-bbecker-jw5kk   1/1       Running   0          22m
$ kubectl expose pod virt-launcher-bbecker-jw5kk --port=27017 --target-port=22 --name=vmservice
$ kubectl get svc vmservice
NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
vmservice   ClusterIP   172.30.133.9   <none>        27017/TCP   19m
$ ssh -i tests/test_rsa -p 27017 kubevirt@172.30.133.9
It might take some time for the VM to start up before SSH is available.
To use virtctl:
$ virtctl console <vmi_name>
Or:
$ virtctl vnc <vmi_name>
Use the username kubevirt and the password kubevirt.
(Optional) Exit the virtual environment and delete it:
$ deactivate
$ rm -rf env/
Notes on the kubevirt_cdi_upload Module
To upload an image from your local machine using the kubevirt_cdi_upload module, your system must be able to connect to the CDI upload proxy pod. You can do this by:
- exposing the cdi-uploadproxy Service from the cdi namespace, or
- using kubectl port-forward to set up temporary port forwarding through the Kubernetes API server:
  kubectl port-forward -n cdi service/cdi-uploadproxy 9443:443
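With the upload proxy reachable (for example on localhost:9443 via kubectl port-forward), an upload task might look roughly like the sketch below. The PVC name and image path are placeholders, and the parameter names are taken from the module's documentation for Ansible 2.8; verify them with `ansible-doc kubevirt_cdi_upload` before use:

```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Upload a local image into an existing PVC via the CDI upload proxy
      kubevirt_cdi_upload:
        pvc_name: example-pvc                  # placeholder PVC name
        pvc_namespace: default
        upload_host: https://localhost:9443    # the port-forwarded upload proxy
        upload_host_validate_certs: no         # proxy uses a self-signed cert
        path: /tmp/example.img                 # placeholder local image path
```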
Notes on the k8s_facts Module
Use the following command to collect information about existing VMs, if there are any, and print it in a JSON format based on the KubeVirt VM spec:
$ ansible-playbook examples/playbooks/k8s_facts_vm.yml
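Internally, such a playbook boils down to a k8s_facts task plus a debug print. A minimal sketch, using the KubeVirt API group and version current as of Ansible 2.8 (kubevirt.io/v1alpha3):

```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Gather all VirtualMachine objects in the default namespace
      k8s_facts:
        api_version: kubevirt.io/v1alpha3
        kind: VirtualMachine
        namespace: default
      register: vm_facts

    - name: Print the collected VM resources as JSON
      debug:
        var: vm_facts.resources
```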
Notes on the KubeVirt Inventory Plugin
Inventory plugins allow users to link data sources to create the list of hosts that Ansible targets for tasks. Sources can be specified with the command-line parameter -i /path/to/file or -i 'host1, host2', or they can come from other configuration sources.
Enabling the KubeVirt Inventory Plugin
To enable the KubeVirt plugin, add the following section to the tests/ansible.cfg file:
[inventory]
enable_plugins = kubevirt
Configuring the KubeVirt Inventory Plugin
Define the plugin configuration in tests/playbooks/plugin/kubevirt.yaml like this:
plugin: kubevirt
connections:
  - namespaces:
      - default
    interface_name: default
In this example, the KubeVirt plugin lists all VMIs from the default namespace and uses the default interface name.
Using the KubeVirt Inventory Plugin
To use the plugin in a playbook, run:
$ ansible-playbook -i kubevirt.yaml <playbook>
Note: The KubeVirt inventory plugin is designed to work with Multus. It can only be used for VMIs that are connected to the bridge and have an IP address in the Status field. For VMIs available through Kubernetes services, use the k8s Ansible module.
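For instance, once the inventory file is in place, a playbook can target the discovered VMIs directly. This sketch simply pings every VMI the plugin found, assuming the guests are reachable over their bridged addresses and have Python available:

```yaml
---
- hosts: all          # every VMI returned by the kubevirt inventory plugin
  gather_facts: no
  tasks:
    - name: Check that each discovered VMI responds
      ping:
```

Run it with the inventory file shown above, e.g. `ansible-playbook -i kubevirt.yaml ping.yml`.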
To install these modules from Ansible Galaxy, run:
ansible-galaxy install kubevirt.kubevirt-modules