zorun.garage
Ansible Role for Garage
This Ansible role installs and configures Garage, which is an open-source, distributed object storage service designed for self-hosting.
It downloads the Garage software, creates a system user, sets up directories for data and metadata, generates a configuration file, and installs a systemd service to run Garage.
Right now, this role does not automatically connect all nodes to each other, but this is a feature that will be added in the future.
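In the meantime, nodes can be connected by hand with the Garage CLI once the role has run. Below is a rough sketch, assuming the default binary and configuration paths used by this role; see the official Garage documentation for the full cluster layout workflow.
# Print this node's identifier (run on every node)
garage -c /etc/garage.toml node id
# From any node, connect to the others (substitute the real identifier and address)
garage -c /etc/garage.toml node connect <node_id>@<other_node_ip>:3901
# Verify that all nodes see each other
garage -c /etc/garage.toml status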
Installation
You can find this role on Ansible Galaxy.
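A minimal sketch of installing it with the ansible-galaxy CLI (you can also vendor the role or pin it in a requirements.yml file):
ansible-galaxy install zorun.garage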
Basic Configuration
To get started, you need:
- A template for Garage's configuration file.
- Four variables: garage_version, garage_local_template, garage_metadata_dir, garage_data_dir.
Here’s an example playbook:
- hosts: mycluster
  roles:
    - garage
  vars:
    garage_version: "0.8.0"
    garage_local_template: "garage.toml.j2"
    garage_metadata_dir: "/var/lib/garage"
    garage_data_dir: "/mnt/data"
    my_rpc_secret: "130458bfce56b518db49e5f72029070b5e0fcbe514052c108036d361a087643f"
    my_admin_token: "7b3e91b552089363ab94eb95f62324fb4138c9a6d71a69daefae0c5047b33bb7"
You'll also need to create a file named templates/garage.toml.j2 in your Ansible directory with the content below:
# Managed by Ansible
metadata_dir = "{{ garage_metadata_dir }}"
data_dir = "{{ garage_data_dir }}"
db_engine = "lmdb"
replication_mode = "3"
block_size = 1048576
compression_level = 1
rpc_bind_addr = "{{ ansible_default_ipv4.address }}:3901"
rpc_public_addr = "{{ ansible_default_ipv4.address }}:3901"
rpc_secret = "{{ my_rpc_secret }}"
bootstrap_peers = []
[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".s3.garage.localhost"
[s3_web]
bind_addr = "[::]:3902"
root_domain = ".web.garage.localhost"
index = "index.html"
[admin]
api_bind_addr = "[::1]:3903"
admin_token = "{{ my_admin_token }}"
In this example, each node uses its main IPv4 address as the RPC address. If your nodes are behind NAT, set rpc_public_addr to the public IP address. For an IPv6 cluster, use {{ ansible_default_ipv6.address }} instead.
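For instance, here is a sketch of the two relevant template lines for a NAT setup, assuming a hypothetical garage_public_ip variable that you would define per host in your inventory:
rpc_bind_addr = "{{ ansible_default_ipv4.address }}:3901"   # listen on the private address
rpc_public_addr = "{{ garage_public_ip }}:3901"             # address advertised to other nodes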
This example also uses two custom variables: my_rpc_secret and my_admin_token. You can use your own custom variables for any configuration setting you wish to manage.
While having to provide a template is a bit of a hassle, it offers a lot of flexibility. Check the official documentation for more details on configuring Garage.
Variable Reference
Here's a summary of all the variables you can use in this role, along with brief explanations. Some of these variables are required, while others have default values.
- garage_version (required): Version of Garage to download and use (e.g., 0.8.0).
- garage_local_template (required): Local path to the Garage configuration file template.
- garage_metadata_dir (required): Directory where Garage will store metadata. This will be created with the right permissions.
- garage_data_dir (required): Directory where Garage will store actual data. This will also be created with the right permissions.
- garage_config_file (default: /etc/garage.toml): Path to the configuration file created on the host.
- garage_systemd_service (default: garage): Name of the systemd service. Useful if running multiple Garage daemons on the same host.
- garage_binaries_dir (default: /usr/local/bin): Directory for storing downloaded Garage binaries (e.g., /usr/local/bin/garage-0.8.0).
- garage_main_binary (default: /usr/local/bin/garage): Path to the main binary for the systemd service; this will be a symlink to the desired version of Garage.
- garage_system_user (default: garage): Name of the system user created to run Garage. All files created by Garage will belong to this user.
- garage_system_group (default: garage): Name of the system group created for Garage.
- garage_logging (default: netapp=info,garage=info): Logging configuration for Garage.
- garage_architecture (default: {{ ansible_architecture }}): CPU architecture for the downloaded binary. It is set automatically to the target host's architecture but can be overridden if necessary.
Advanced Setup: Multiple Garage Daemons
If you want to run multiple Garage daemons on the same machine, for example because some hosts belong to several clusters, you can do so as follows:
Example Ansible inventory:
[cluster1]
host1
host2
host3
[cluster2]
host1
host2
host3
host4
host5
Here’s how you can manage this situation with a playbook:
- hosts: cluster1
  roles:
    - garage
  vars:
    garage_version: "0.8.0"
    garage_local_template: "garage.toml.j2"
    garage_config_file: /etc/garage-cluster1.toml
    garage_metadata_dir: "/var/lib/garage/cluster1"
    garage_data_dir: "/mnt/data/cluster1"
    garage_systemd_service: garage-cluster1
    garage_main_binary: /usr/local/bin/garage-cluster1
    garage_system_user: garage-cluster1
    garage_system_group: garage-cluster1

- hosts: cluster2
  roles:
    - garage
  vars:
    garage_version: "0.8.1"
    garage_local_template: "garage.toml.j2"
    garage_metadata_dir: "/var/lib/garage/cluster2"
    garage_data_dir: "/mnt/data/cluster2"
    garage_config_file: /etc/garage-cluster2.toml
    garage_systemd_service: garage-cluster2
    garage_main_binary: /usr/local/bin/garage-cluster2
    garage_system_user: garage-cluster2
    garage_system_group: garage-cluster2
It is fine to use the same garage_version and garage_local_template. You can also share the same system user and group depending on your security needs.
If you want to use the Garage command-line interface, you can run it like this: garage-cluster1 -c /etc/garage-cluster1.toml status, or restart the service with systemctl restart garage-cluster1.
Upgrading a Cluster
Before upgrading, ensure you read the official upgrade documentation.
Straightforward Upgrades
For a simple upgrade:
- Read the release notes.
- Update garage_version.
- Add serial: 1 to your playbook (see the Ansible documentation); a sketch follows this list.
- Run your ansible-playbook with --step.
- After each host upgrades, verify everything is working and prompt Ansible to (c)ontinue.
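As a minimal sketch of that rolling upgrade, reusing the playbook from the basic configuration above (the file name garage.yaml is just an example):
# garage.yaml: upgrade one host at a time
- hosts: mycluster
  serial: 1                     # finish all tasks on a host before moving to the next one
  roles:
    - garage
  vars:
    garage_version: "0.8.1"     # the new, compatible version
    garage_local_template: "garage.toml.j2"
    garage_metadata_dir: "/var/lib/garage"
    garage_data_dir: "/mnt/data"
    # ...plus my_rpc_secret, my_admin_token, and anything else your template references
Running it with ansible-playbook garage.yaml --step makes Ansible pause before each task, so you can check each host before letting it (c)ontinue.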
Advanced Upgrades
For upgrades between incompatible versions, the strategy will vary by version, so consult the official documentation and migration guides.
To download a new version of Garage while keeping the existing setup intact, you can run the download task by itself:
ansible-playbook garage.yaml -e garage_version=0.9.0 --tags download
After this, the new binary will be available on your hosts at its full versioned path (e.g., /usr/local/bin/garage-0.9.0), allowing you to perform offline migrations.
