There are several variables you can define to configure a machine's response to the borg role:
- borg_server: a string that contains the borg server's hostname
- borg_jobs: a list of dicts, one item per job, with the following keys:
  - name: the name of the borg job
  - path: an optional path containing the files to back up
  - command_to_pipe: an optional command to pipe the backup data from
  - pre_command: an optional command to run before the job
  - post_command: an optional command to run after the job
To be valid, a borg job entry needs to have exactly one of the path or command_to_pipe keys.
Here are some job examples:
- { name: etc, path: "/etc" }
- { name: mysqldump, command_to_pipe: "/usr/bin/mysqldump -h {{ mysql_server }} -u{{ ansible_hostname }} -p{{ ansible_local.mysql_client.password }} --single-transaction --add-drop-database -B {{ ansible_hostname }}" }
- { name: gitea, path: "/tmp/gitea.zip", pre_command: "echo '/usr/local/sbin/gitea -C /etc/gitea -c /etc/gitea/app.ini dump -f /tmp/gitea.zip' | su -l _gitea", post_command: "rm -f /tmp/gitea.zip" }
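Putting these together, a machine's host_vars could look like the following minimal sketch (the hostname backup.example.com is an illustrative assumption; the jobs are the examples above):

  borg_server: "backup.example.com"  # assumed hostname of the backup target
  borg_jobs:
    - name: etc                      # simple path based job
      path: "/etc"
    - name: mysqldump                # piped job: backup data comes from the command's stdout
      command_to_pipe: "/usr/bin/mysqldump -h {{ mysql_server }} -u{{ ansible_hostname }} -p{{ ansible_local.mysql_client.password }} --single-transaction --add-drop-database -B {{ ansible_hostname }}"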
There is an action plugin that parses the borg_server entries from all hosts and sets a flag to True in adyxax['is_borg_server'] for each machine specified as a backup target.
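Tasks that should only run on a backup target can then test this flag. Here is a hypothetical sketch (the task name, path and mode are assumptions, not part of the role):

  - name: Create the borg repositories directory   # hypothetical task
    file:
      path: /srv/borg                              # assumed storage location
      state: directory
      mode: "0700"
    when: adyxax.is_borg_server | default(false)   # flag set by the action plugin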