ansible-role-containerd

Ansible role to install containerd. containerd is an industry-standard container runtime with an emphasis on simplicity, robustness and portability. It is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments, etc.

Changelog

Change history:

See full CHANGELOG

Recent changes:

0.13.1+1.7.20

  • UPDATE
    • update containerd to v1.7.20

0.13.0+1.7.19

  • FEATURE
    • add support for Ubuntu 24.04
  • UPDATE
    • update containerd to v1.7.19

Installation

  • Directly download from GitHub (change into the Ansible role directory before cloning): git clone https://github.com/githubixx/ansible-role-containerd.git githubixx.containerd

  • Via ansible-galaxy command and download directly from Ansible Galaxy: ansible-galaxy role install githubixx.containerd

  • Create a requirements.yml file with the following content (this will download the role from GitHub) and install with ansible-galaxy role install -r requirements.yml:

---
roles:
  - name: githubixx.containerd
    src: https://github.com/githubixx/ansible-role-containerd.git
    version: 0.13.1+1.7.20
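
If the role should be installed from Ansible Galaxy instead of GitHub, the src line can simply be dropped from requirements.yml (pinning the version is optional; the tag below is just the release shown above):

---
roles:
  - name: githubixx.containerd
    version: 0.13.1+1.7.20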

Role Variables

# Only value "base" is currently supported
containerd_flavor: "base"

# containerd version to install
containerd_version: "1.7.20"

# Directory where to store "containerd" binaries
containerd_binary_directory: "/usr/local/bin"

# Location of containerd configuration file
containerd_config_directory: "/etc/containerd"

# Directory to store the archive
containerd_tmp_directory: "{{ lookup('env', 'TMPDIR') | default('/tmp', true) }}"

# Owner/group of "containerd" binaries. If these variables are not set
# the resulting binaries will be owned by the current user.
containerd_owner: "root"
containerd_group: "root"

# Specifies the permissions of the "containerd" binaries
containerd_binary_mode: "0755"

# Operating system
# Possible options: "linux", "windows"
containerd_os: "linux"

# Processor architecture "containerd" should run on.
# Other possible values: "arm64", "arm"
containerd_arch: "amd64"

# Name of the archive file
containerd_archive_base: "containerd-{{ containerd_version }}-{{ containerd_os }}-{{ containerd_arch }}.tar.gz"

# The containerd download URL (normally no need to change it)
containerd_url: "https://github.com/containerd/containerd/releases/download/v{{ containerd_version }}/{{ containerd_archive_base }}"

# containerd systemd service settings
containerd_service_settings:
  "ExecStartPre": "{{ modprobe_location }} overlay"
  "ExecStart": "{{ containerd_binary_directory }}/containerd"
  "Restart": "always"
  "RestartSec": "5"
  "Type": "notify"
  "Delegate": "yes"
  "KillMode": "process"
  "OOMScoreAdjust": "-999"
  "LimitNOFILE": "1048576"
  "LimitNPROC": "infinity"
  "LimitCORE": "infinity"

# Content of the "containerd" configuration file. The settings below are the
# ones that differ from the default "containerd" settings.
#
# The default "containerd" configuration can be generated with this command:
#
# containerd config default
#
# Differences to the default configuration:
#
# - The configuration file contains a few role variables that will be replaced when
#   the configuration template is processed.
# - In 'plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options' the
#   setting "SystemdCgroup" is set to "true" instead of "false". This is relevant for
#   Kubernetes e.g. Also see:
#   https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd)
#
containerd_config: |
  version = 2
  [plugins]
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
            runtime_type = "io.containerd.runc.v2"
            [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
              BinaryName = "/usr/local/sbin/runc"
              SystemdCgroup = true
  [stream_processors]
    [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
      args = ["--decryption-keys-path", "{{ containerd_config_directory }}/ocicrypt/keys"]
      env = ["OCICRYPT_KEYPROVIDER_CONFIG={{ containerd_config_directory }}/ocicrypt/ocicrypt_keyprovider.conf"]
    [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
      args = ["--decryption-keys-path", "{{ containerd_config_directory }}/ocicrypt/keys"]
      env = ["OCICRYPT_KEYPROVIDER_CONFIG={{ containerd_config_directory }}/ocicrypt/ocicrypt_keyprovider.conf"]

Dependencies

Optional dependencies (e.g. needed for Kubernetes) are a runc binary and the CNI plugins. You can of course use any runc and CNI role that provides them.
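
If Ansible should download those roles as well, they can be listed together with this role in a requirements.yml. The names githubixx.runc and githubixx.cni below refer to the author's companion roles and are assumptions here; replace them with whatever runc/CNI roles you prefer:

---
roles:
  - name: githubixx.containerd
  - name: githubixx.runc   # assumed companion role providing the runc binary
  - name: githubixx.cni    # assumed companion role providing the CNI plugins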

Example Playbook

- hosts: your-host
  roles:
    - githubixx.containerd

More examples are available in the Molecule tests.
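
For a Kubernetes node the role is typically combined with runc and CNI roles as mentioned under Dependencies. A short sketch (the host group and the role names other than githubixx.containerd are assumptions):

- hosts: k8s_worker
  roles:
    - githubixx.runc        # assumed runc role, see Dependencies
    - githubixx.cni         # assumed CNI role, see Dependencies
    - githubixx.containerd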

Testing

This role has a small test setup that is created using Molecule, libvirt (vagrant-libvirt) and QEMU/KVM. Please see my blog post Testing Ansible roles with Molecule, libvirt (vagrant-libvirt) and QEMU/KVM on how to set this up. The test configuration is here.

Afterwards molecule can be executed:

molecule converge

This will set up a few virtual machines (VMs) with different supported Linux operating systems and install containerd, runc and the CNI plugins (which are needed e.g. by Kubernetes).

A small verification step is also included. It pulls an nginx image and runs the container to make sure that containerd is set up correctly and is able to run container images:

molecule verify

To clean up run:

molecule destroy

License

GNU GENERAL PUBLIC LICENSE Version 3

Author Information

http://www.tauceti.blog
