Diffstat
-rw-r--r-- | content/blog/ansible/borg-ansible-role-2.md | 303
-rw-r--r-- | content/blog/ansible/factorio.md | 265
-rw-r--r-- | content/blog/ansible/nginx-ansible-role.md | 336
-rw-r--r-- | content/blog/ansible/podman-ansible-role.md | 307
-rw-r--r-- | content/blog/ansible/postgresql-ansible-role.md | 261
5 files changed, 1472 insertions, 0 deletions
diff --git a/content/blog/ansible/borg-ansible-role-2.md b/content/blog/ansible/borg-ansible-role-2.md
new file mode 100644
index 0000000..54198cc
--- /dev/null
+++ b/content/blog/ansible/borg-ansible-role-2.md
@@ -0,0 +1,303 @@
---
title: 'Borg ansible role (continued)'
description: 'The ansible role I rewrote to manage my borg backups'
date: '2024-10-07'
tags:
- ansible
- backups
- borg
---

## Introduction

I initially wrote about my borg ansible role in [a blog article three and a half years ago]({{< ref "borg-ansible-role.md" >}}). I released a second version two years ago (time flies!) and it still works well, but I am no longer using it.

I set ansible aside when I got infatuated with nixos a little more than a year ago. Now that I am dialing back my use of nixos, I am reviewing and changing some of my design choices.

## Borg repositories changes

One of the main breaking changes is that I no longer want to use one borg repository per host, as my old role did: I want one per job/application, so that backups are agnostic of the hosts they run on.

The main advantages are:
- one private ssh key per job
- no more data expiration when a job stops running on a host for a while
- easier monitoring of job runs: checking whether a repository has new data is now enough, whereas before I had to check the number of jobs that wrote to it in a specific time frame.

The main drawback is that I lose the ability to automatically clean a borg server's `authorized_keys` file when I completely stop using an application or service. Migrating from host to host is handled properly, but complete removal will be manual. I tolerate this because each job now has its own private ssh key, generated on the fly when the job is deployed to a host.

## The new role

### Tasks

The `main.yaml` contains:

``` yaml
---
- name: 'Install borg'
  package:
    name:
      - 'borgbackup'
    # This use attribute is a workaround for https://github.com/ansible/ansible/issues/82598
    # Invoking the package module without this fails in a delegate_to context
    use: '{{ ansible_facts["pkg_mgr"] }}'
```

It will be included in a `delegate_to` context when a client configures its server. For the client itself, this task file will run normally and be invoked from a `meta` dependency.
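For example, an application role that consumes borg can declare the dependency in its `meta/main.yaml` (a minimal sketch, matching the usage examples later in this article):

``` yaml
---
# meta/main.yaml of an application role that needs borg backups
dependencies:
  - role: 'borg'
```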
+ +The meat of the role is in the client.yaml: + +``` yaml +--- +# Inputs: +# client: +# name: string +# jobs: list(job) +# server: string +# With: +# job: +# command_to_pipe: optional(string) +# exclude: optional(list(string)) +# name: string +# paths: optional(list(string)) +# post_command: optional(string) +# pre_command: optional(string) + +- name: 'Ensure borg directories exists on server' + file: + state: 'directory' + path: '{{ item }}' + owner: 'root' + mode: '0700' + loop: + - '/etc/borg' + - '/root/.cache/borg' + - '/root/.config/borg' + +- name: 'Generate openssh key pair' + openssh_keypair: + path: '/etc/borg/{{ client.name }}.key' + type: 'ed25519' + owner: 'root' + mode: '0400' + +- name: 'Read the public key' + ansible.builtin.slurp: + src: '/etc/borg/{{ client.name }}.key.pub' + register: 'borg_public_key' + +- include_role: + name: 'borg' + tasks_from: 'server' + args: + apply: + delegate_to: '{{ client.server }}' + vars: + server: + name: '{{ client.name }}' + pubkey: '{{ borg_public_key.content | b64decode | trim }}' + +- name: 'Deploy the jobs script' + template: + src: 'jobs.sh' + dest: '/etc/borg/{{ client.name }}.sh' + owner: 'root' + mode: '0500' + +- name: 'Deploy the systemd service and timer' + template: + src: '{{ item.src }}' + dest: '{{ item.dest }}' + owner: 'root' + mode: '0444' + notify: 'systemctl daemon-reload' + loop: + - { src: 'jobs.service', dest: '/etc/systemd/system/borg-job-{{ client.name }}.service' } + - { src: 'jobs.timer', dest: '/etc/systemd/system/borg-job-{{ client.name }}.timer' } + +- name: 'Activate job' + service: + name: 'borg-job-{{ client.name }}.timer' + enabled: true + state: 'started' + +``` + +The server.yaml contains: + +``` yaml +--- +# Inputs: +# server: +# name: string +# pubkey: string + +- name: 'Run common tasks' + include_tasks: 'main.yaml' + +- name: 'Create borg group on server' + group: + name: 'borg' + system: 'yes' + +- name: 'Create borg user on server' + user: + name: 'borg' + group: 'borg' + shell: '/bin/sh' + home: '/srv/borg' + createhome: 'yes' + system: 'yes' + password: '*' + +- name: 'Ensure borg directories exist on server' + file: + state: 'directory' + path: '{{ item }}' + owner: 'borg' + mode: '0700' + loop: + - '/srv/borg/.ssh' + - '/srv/borg/{{ server.name }}' + +- name: 'Authorize client public key' + lineinfile: + path: '/srv/borg/.ssh/authorized_keys' + line: '{{ line }}{{ server.pubkey }}' + search_string: '{{ line }}' + create: true + owner: 'borg' + group: 'borg' + mode: '0400' + vars: + line: 'command="borg serve --restrict-to-path /srv/borg/{{ server.name }}",restrict ' +``` + +### Handlers + +I have a single handler: + +``` yaml +--- +- name: 'systemctl daemon-reload' + shell: + cmd: 'systemctl daemon-reload' +``` + +### Templates + +The `jobs.sh` script contains: + +``` shell +#!/usr/bin/env bash +############################################################################### +# \_o< WARNING : This file is being managed by ansible! >o_/ # +# ~~~~ ~~~~ # +############################################################################### +set -euo pipefail + +archiveSuffix=".failed" + +# Run borg init if the repo doesn't exist yet +if ! 
borg list > /dev/null; then
    borg init --encryption none
fi

{% for job in client.jobs %}
archiveName="{{ ansible_fqdn }}-{{ client.name }}-{{ job.name }}-$(date +%Y-%m-%dT%H:%M:%S)"
{% if job.pre_command is defined %}
{{ job.pre_command }}
{% endif %}
{% if job.command_to_pipe is defined %}
{{ job.command_to_pipe }} \
    | borg create \
        --compression auto,zstd \
        "::${archiveName}${archiveSuffix}" \
        -
{% else %}
borg create \
    {% for exclude in job.exclude|default([]) %} --exclude {{ exclude }}{% endfor %} \
    --compression auto,zstd \
    "::${archiveName}${archiveSuffix}" \
    {{ job.paths | join(" ") }}
{% endif %}
{% if job.post_command is defined %}
{{ job.post_command }}
{% endif %}
borg rename "::${archiveName}${archiveSuffix}" "${archiveName}"
borg prune \
    --keep-daily=14 --keep-monthly=3 --keep-weekly=4 \
    --glob-archives '*-{{ client.name }}-{{ job.name }}-*'
{% endfor %}

borg compact
```

The `jobs.service` systemd unit file contains:

``` ini
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################

[Unit]
Description=BorgBackup job {{ client.name }}

[Service]
Environment="BORG_REPO=ssh://borg@{{ client.server }}/srv/borg/{{ client.name }}"
Environment="BORG_RSH=ssh -i /etc/borg/{{ client.name }}.key -o StrictHostKeyChecking=accept-new"
CPUSchedulingPolicy=idle
ExecStart=/etc/borg/{{ client.name }}.sh
Group=root
IOSchedulingClass=idle
PrivateTmp=true
ProtectSystem=strict
ReadWritePaths=/root/.cache/borg
ReadWritePaths=/root/.config/borg
User=root
```

Finally the `jobs.timer` systemd timer file contains:

``` ini
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################

[Unit]
Description=BorgBackup job {{ client.name }} timer

[Timer]
FixedRandomDelay=true
OnCalendar=daily
Persistent=true
RandomizedDelaySec=3600

[Install]
WantedBy=timers.target
```

## Invoking the role

The role can be invoked with:

``` yaml
- include_role:
    name: 'borg'
    tasks_from: 'client'
  vars:
    client:
      jobs:
        - name: 'data'
          paths:
            - '/srv/vaultwarden'
        - name: 'postgres'
          command_to_pipe: "su - postgres -c '/usr/bin/pg_dump -b -c -C -d vaultwarden'"
      name: 'vaultwarden'
      server: '{{ vaultwarden.borg }}'
```

## Conclusion

I am happy with this new design! The immediate consequence is that I am archiving my old role, since I do not intend to maintain it anymore.

diff --git a/content/blog/ansible/factorio.md b/content/blog/ansible/factorio.md
new file mode 100644
index 0000000..08e2827
--- /dev/null
+++ b/content/blog/ansible/factorio.md
@@ -0,0 +1,265 @@
---
title: 'How to self host a Factorio headless server'
description: 'Automated with ansible'
date: '2024-09-25'
tags:
- ansible
- Debian
- Factorio
---

## Introduction

With the upcoming v2.0 release next month, we decided to try a [seablock](https://mods.factorio.com/mod/SeaBlock) run with a friend and see how far we get in that time frame. Here is the small ansible role I wrote to deploy this. It is written for a Debian server, but any Linux distribution with systemd will do. And if you ignore the service unit file, any Linux or even [FreeBSD](factorio-server-in-a-linux-jail.md) will work.
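For orientation, the role ends up with roughly this layout (a sketch assembled from the files described below; only the names used in this article are shown):

``` text
roles/factorio/
├── files/
│   ├── factorio.service
│   └── server-adminlist.json
├── handlers/
│   └── main.yaml
├── tasks/
│   └── main.yaml
└── templates/
    └── server-settings.json
```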
## Tasks

This role has a single `tasks/main.yaml` file containing the following.

### User

This is fairly standard:
``` yaml
- name: 'Create factorio group'
  group:
    name: 'factorio'
    system: 'yes'

- name: 'Create factorio user'
  user:
    name: 'factorio'
    group: 'factorio'
    shell: '/usr/bin/bash'
    home: '/srv/factorio'
    createhome: 'yes'
    system: 'yes'
    password: '*'
```

### Factorio

Factorio has an API endpoint that provides information about its latest releases. I query and parse it with:
``` yaml
- name: 'Retrieve factorio latest release number'
  shell:
    cmd: "curl -s https://factorio.com/api/latest-releases | jq -r '.stable.headless'"
  register: 'factorio_version_info'
  changed_when: False

- set_fact:
    factorio_version: '{{ factorio_version_info.stdout_lines[0] }}'
```

Afterwards, it is just a matter of downloading and extracting factorio:
``` yaml
- name: 'Download factorio'
  get_url:
    url: "https://www.factorio.com/get-download/{{ factorio_version }}/headless/linux64"
    dest: '/srv/factorio/headless-{{ factorio_version }}.zip'
    mode: '0444'
  register: 'factorio_downloaded'

- name: 'Extract new factorio version'
  ansible.builtin.unarchive:
    src: '/srv/factorio/headless-{{ factorio_version }}.zip'
    dest: '/srv/factorio'
    owner: 'factorio'
    group: 'factorio'
    remote_src: 'yes'
  notify: 'restart factorio'
  when: 'factorio_downloaded.changed'
```

I also create the saves directory with:
``` yaml
- name: 'Make factorio saves directory'
  file:
    path: '/srv/factorio/factorio/saves'
    owner: 'factorio'
    group: 'factorio'
    mode: '0750'
    state: 'directory'
```

### Configuration files

There are two configuration files to copy from the `files` folder:
``` yaml
- name: 'Deploy configuration files'
  copy:
    src: '{{ item.src }}'
    dest: '{{ item.dest }}'
    owner: 'factorio'
    group: 'factorio'
    mode: '0440'
  notify:
    - 'systemctl daemon-reload'
    - 'restart factorio'
  loop:
    - { src: 'factorio.service', dest: '/etc/systemd/system/' }
    - { src: 'server-adminlist.json', dest: '/srv/factorio/factorio/' }
```

The systemd service unit file contains:
``` ini
[Unit]
Description=Factorio Headless Server
After=network.target
After=systemd-user-sessions.service
After=network-online.target

[Service]
Type=simple
User=factorio
ExecStart=/srv/factorio/factorio/bin/x64/factorio --start-server game.zip
WorkingDirectory=/srv/factorio/factorio

[Install]
WantedBy=multi-user.target
```

The admin list is simply:

``` json
["adyxax"]
```

I generate the factorio game password with terraform/OpenTofu using a resource like:

``` hcl
resource "random_password" "factorio" {
  length = 16

  lifecycle {
    ignore_changes = [
      length,
      lower,
    ]
  }
}
```

This allows it to persist in the terraform state, which is a good thing.
For simplification, let's say that this state (which is a JSON file) lives in a local file that I can load with:
``` yaml
- name: 'Load the tofu state to read the factorio game password'
  include_vars:
    file: '../../../../adyxax.org/01-legacy/terraform.tfstate'
    name: 'tofu_state_legacy'
```

Given this template file:
``` json
{
  "name": "Normalians",
  "description": "C'est sur ce serveur que jouent les beaux gosses",
  "tags": ["game", "tags"],
  "max_players": 0,
  "visibility": {
    "public": false,
    "lan": false
  },
  "username": "",
  "password": "",
  "token": "",
  "game_password": "{{ factorio_game_password[0] }}",
  "require_user_verification": false,
  "max_upload_in_kilobytes_per_second": 0,
  "max_upload_slots": 5,
  "minimum_latency_in_ticks": 0,
  "max_heartbeats_per_second": 60,
  "ignore_player_limit_for_returning_players": false,
  "allow_commands": "admins-only",
  "autosave_interval": 10,
  "autosave_slots": 5,
  "afk_autokick_interval": 0,
  "auto_pause": true,
  "only_admins_can_pause_the_game": true,
  "autosave_only_on_server": true,
  "non_blocking_saving": true,
  "minimum_segment_size": 25,
  "minimum_segment_size_peer_count": 20,
  "maximum_segment_size": 100,
  "maximum_segment_size_peer_count": 10
}
```

Note the use of `[0]` in the variable expansion: it is a disappointing trick you have to remember when parsing JSON with ansible's `json_query` filter, which always returns a list. The template invocation is:
``` yaml
- name: 'Deploy configuration templates'
  template:
    src: 'server-settings.json'
    dest: '/srv/factorio/factorio/'
    owner: 'factorio'
    group: 'factorio'
    mode: '0440'
  notify: 'restart factorio'
  vars:
    factorio_game_password: "{{ tofu_state_legacy | json_query(\"resources[?type=='random_password'&&name=='factorio'].instances[0].attributes.result\") }}"
```

### Service

Finally I start and activate the factorio service on boot:
``` yaml
- name: 'Start factorio and activate it on boot'
  service:
    name: 'factorio'
    enabled: 'yes'
    state: 'started'
```

### Backups

I invoke a personal borg role to configure my backups. I will detail the workings of this role in a future article:
``` yaml
- include_role:
    name: 'borg'
    tasks_from: 'client'
  vars:
    client:
      jobs:
        - name: 'save'
          paths:
            - '/srv/factorio/factorio/saves/game.zip'
      name: 'factorio'
      server: '{{ factorio.borg }}'
```

## Handlers

I have these two handlers:

``` yaml
---
- name: 'systemctl daemon-reload'
  shell:
    cmd: 'systemctl daemon-reload'

- name: 'restart factorio'
  service:
    name: 'factorio'
    state: 'restarted'
```

## Generating a map and starting the game

If you just followed this guide, factorio will have failed to start on the server because it does not have a map in its saves folder. If that is not the case for you because you are coming back to this article after some time, remember to stop factorio with `systemctl stop factorio` before continuing. If you do not, factorio will overwrite your newly uploaded save when you later restart it.

Launch factorio locally, install any mods you want, then go to single player and generate a new map with your chosen settings. Save the game, then quit and go back to your terminal.

Find the save file (if playing on steam it will be in `~/.factorio/saves/`) and upload it to `/srv/factorio/factorio/saves/game.zip`.
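For example with `scp` (the local save file name is a placeholder for whatever you named your map):

``` shell
scp ~/.factorio/saves/my-map.zip root@factorio.adyxax.org:/srv/factorio/factorio/saves/game.zip
```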
If you are using mods, `rsync` the mods folder that lives next to your saves directory to the server with:

``` shell
rsync -r ~/.factorio/mods/ root@factorio.adyxax.org:/srv/factorio/factorio/mods/
```

Then give these files to the factorio user on your server before restarting the game:

``` shell
chown -R factorio:factorio /srv/factorio
systemctl start factorio
```

## Conclusion

Good luck and have fun!

diff --git a/content/blog/ansible/nginx-ansible-role.md b/content/blog/ansible/nginx-ansible-role.md
new file mode 100644
index 0000000..0c465a9
--- /dev/null
+++ b/content/blog/ansible/nginx-ansible-role.md
@@ -0,0 +1,336 @@
---
title: 'Nginx ansible role'
description: 'The ansible role I use to manage my nginx web servers'
date: '2024-10-28'
tags:
- ansible
- nginx
---

## Introduction

Before succumbing to nixos, I had been using an ansible role to manage my nginx web servers. Now that I am in need of it again, I refined it a bit: here is the result.

## The role

### Vars

The role has OS-specific vars in files named after the operating system. For example, in `vars/Debian.yaml` I have:

``` yaml
---
nginx:
  etc_dir: '/etc/nginx'
  pid_file: '/run/nginx.pid'
  www_user: 'www-data'
```

While in `vars/FreeBSD.yaml` I have:

``` yaml
---
nginx:
  etc_dir: '/usr/local/etc/nginx'
  pid_file: '/var/run/nginx.pid'
  www_user: 'www'
```

### Tasks

The main tasks file sets up nginx and the global configuration common to all virtual hosts:

``` yaml
---
- include_vars: '{{ ansible_distribution }}.yaml'

- name: 'Install nginx'
  package:
    name:
      - 'nginx'

- name: 'Make nginx vhost directory'
  file:
    path: '{{ nginx.etc_dir }}/vhost.d'
    mode: '0755'
    owner: 'root'
    state: 'directory'

- name: 'Deploy nginx configuration files'
  copy:
    src: '{{ item }}'
    dest: '{{ nginx.etc_dir }}/{{ item }}'
  notify: 'reload nginx'
  loop:
    - 'headers_base.conf'
    - 'headers_secure.conf'
    - 'headers_static.conf'
    - 'headers_unsafe_inline_csp.conf'

- name: 'Deploy nginx configuration template'
  template:
    src: 'nginx.conf'
    dest: '{{ nginx.etc_dir }}/'
  notify: 'reload nginx'

- name: 'Deploy nginx certificates'
  copy:
    src: '{{ item }}'
    dest: '{{ nginx.etc_dir }}/'
  notify: 'reload nginx'
  loop:
    - 'adyxax.org.fullchain'
    - 'adyxax.org.key'
    - 'dh4096.pem'

- name: 'Start nginx and activate it on boot'
  service:
    name: 'nginx'
    enabled: true
    state: 'started'
```

I have a `vhost.yaml` task file which currently simply deploys a file and reloads nginx:

``` yaml
- name: 'Deploy {{ vhost.name }} vhost {{ vhost.path }}'
  template:
    src: '{{ vhost.path }}'
    dest: '{{ nginx.etc_dir }}/vhost.d/{{ vhost.name }}.conf'
  notify: 'reload nginx'
```

### Handlers

There is a single `main.yaml` handler:

``` yaml
---
- name: 'reload nginx'
  service:
    name: 'nginx'
    state: 'reloaded'
```

### Files

I deploy four configuration files in this role. They are all variants of the same theme and their purpose is just to avoid duplicating statements in the virtual hosts configuration files.

`headers_base.conf`:

``` nginx
###############################################################################
# \_o< WARNING : This file is being managed by ansible!
>o_/ # +# ~~~~ ~~~~ # +############################################################################### + +add_header X-Frame-Options deny; +add_header X-XSS-Protection "1; mode=block"; +add_header X-Content-Type-Options nosniff; +add_header Referrer-Policy strict-origin; +add_header Cache-Control no-transform; +add_header Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=()"; +# 6 months HSTS pinning +add_header Strict-Transport-Security max-age=16000000; +``` + +`headers_secure.conf`: + +``` nginx +############################################################################### +# \_o< WARNING : This file is being managed by ansible! >o_/ # +# ~~~~ ~~~~ # +############################################################################### + +include headers_base.conf; +add_header Content-Security-Policy "script-src 'self'"; +``` + +`headers_static.conf`: + +``` nginx +############################################################################### +# \_o< WARNING : This file is being managed by ansible! >o_/ # +# ~~~~ ~~~~ # +############################################################################### + +include headers_secure.conf; +# Infinite caching +add_header Cache-Control "public, max-age=31536000, immutable"; +``` + +`headers_unsafe_inline_csp.conf`: + +``` nginx +############################################################################### +# \_o< WARNING : This file is being managed by ansible! >o_/ # +# ~~~~ ~~~~ # +############################################################################### + +include headers_base.conf; +add_header Content-Security-Policy "script-src 'self' 'unsafe-inline'"; +``` + +### Templates + +I have a single template for `nginx.conf`: + +``` nginx +############################################################################### +# \_o< WARNING : This file is being managed by ansible! 
>o_/ # +# ~~~~ ~~~~ # +############################################################################### + +user {{ nginx.www_user }}; +worker_processes auto; +pid {{ nginx.pid_file }}; +error_log /var/log/nginx/error.log; + +events { + worker_connections 1024; +} + +http { + include mime.types; + types_hash_max_size 4096; + sendfile on; + tcp_nopush on; + tcp_nodelay on; + keepalive_timeout 65; + + ssl_protocols TLSv1.2 TLSv1.3; + ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384; + ssl_prefer_server_ciphers on; + + gzip on; + gzip_static on; + gzip_vary on; + gzip_comp_level 5; + gzip_min_length 256; + gzip_proxied expired no-cache no-store private auth; + gzip_types application/atom+xml application/geo+json application/javascript application/json application/ld+json application/manifest+json application/rdf+xml application/vnd.ms-fontobject application/wasm application/x-rss+xml application/x-web-app-manifest+json application/xhtml+xml application/xliff+xml application/xml font/collection font/otf font/ttf image/bmp image/svg+xml image/vnd.microsoft.icon text/cache-manifest text/calendar text/css text/csv text/javascript text/markdown text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/xml; + + proxy_redirect off; + proxy_connect_timeout 60s; + proxy_send_timeout 60s; + proxy_read_timeout 60s; + proxy_http_version 1.1; + proxy_set_header "Connection" ""; + proxy_set_header Host $host; + proxy_set_header X-Real-IP $remote_addr; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header X-Forwarded-Proto $scheme; + proxy_set_header X-Forwarded-Host $host; + proxy_set_header X-Forwarded-Server $host; + + map $http_upgrade $connection_upgrade { + default upgrade; + '' close; + } + + client_max_body_size 40M; + server_tokens off; + default_type application/octet-stream; + access_log /var/log/nginx/access.log; + + fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; + fastcgi_param QUERY_STRING $query_string; + fastcgi_param REQUEST_METHOD $request_method; + fastcgi_param CONTENT_TYPE $content_type; + fastcgi_param CONTENT_LENGTH $content_length; + + fastcgi_param SCRIPT_NAME $fastcgi_script_name; + fastcgi_param REQUEST_URI $request_uri; + fastcgi_param DOCUMENT_URI $document_uri; + fastcgi_param DOCUMENT_ROOT $document_root; + fastcgi_param SERVER_PROTOCOL $server_protocol; + fastcgi_param REQUEST_SCHEME $scheme; + fastcgi_param HTTPS $https if_not_empty; + + fastcgi_param GATEWAY_INTERFACE CGI/1.1; + fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; + + fastcgi_param REMOTE_ADDR $remote_addr; + fastcgi_param REMOTE_PORT $remote_port; + fastcgi_param REMOTE_USER $remote_user; + fastcgi_param SERVER_ADDR $server_addr; + fastcgi_param SERVER_PORT $server_port; + fastcgi_param SERVER_NAME $server_name; + + # PHP only, required if PHP was built with --enable-force-cgi-redirect + fastcgi_param REDIRECT_STATUS 200; + + uwsgi_param QUERY_STRING $query_string; + uwsgi_param REQUEST_METHOD $request_method; + uwsgi_param CONTENT_TYPE $content_type; + uwsgi_param CONTENT_LENGTH $content_length; + + uwsgi_param REQUEST_URI $request_uri; + uwsgi_param PATH_INFO $document_uri; + uwsgi_param DOCUMENT_ROOT $document_root; + uwsgi_param SERVER_PROTOCOL $server_protocol; + uwsgi_param REQUEST_SCHEME $scheme; + uwsgi_param HTTPS $https if_not_empty; + + 
    uwsgi_param REMOTE_ADDR $remote_addr;
    uwsgi_param REMOTE_PORT $remote_port;
    uwsgi_param SERVER_PORT $server_port;
    uwsgi_param SERVER_NAME $server_name;

    ssl_dhparam dh4096.pem;
    ssl_session_cache shared:SSL:2m;
    ssl_session_timeout 1h;
    ssl_session_tickets off;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        access_log off;
        server_name_in_redirect off;
        return 444;
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name _;
        access_log off;
        server_name_in_redirect off;
        return 444;
        ssl_certificate adyxax.org.fullchain;
        ssl_certificate_key adyxax.org.key;
    }

    include vhost.d/*.conf;
}
```

## Usage example

I do not call the role from a playbook; I prefer running the setup from an application's role that relies on nginx, using a `meta/main.yaml` containing something like:

``` yaml
---
dependencies:
  - role: 'borg'
  - role: 'nginx'
  - role: 'postgresql'
```

Then from a tasks file:

``` yaml
- include_role:
    name: 'nginx'
    tasks_from: 'vhost'
  vars:
    vhost:
      name: 'www'
      path: 'roles/www.adyxax.org/files/nginx-vhost.conf'
```

I did not find an elegant way to pass a file path local to one role to another role. Because of that, I just specify the full vhost file path here, complete with the `roles/` prefix.

## Conclusion

If you have an elegant idea for passing the local file path from one role to another, do not hesitate to ping me!

diff --git a/content/blog/ansible/podman-ansible-role.md b/content/blog/ansible/podman-ansible-role.md
new file mode 100644
index 0000000..37cdabf
--- /dev/null
+++ b/content/blog/ansible/podman-ansible-role.md
@@ -0,0 +1,307 @@
---
title: 'Podman ansible role'
description: 'The ansible role I use to manage my podman containers'
date: '2024-11-08'
tags:
- ansible
- podman
---

## Introduction

Before succumbing to nixos, I was running all my containers on k3s. This time I am migrating things to podman and trying to achieve a lighter setup. This article presents the ansible role I wrote to manage podman containers.
## The role

### Tasks

The main tasks file sets up podman and the required network configuration with:

``` yaml
---
- name: 'Run OS specific tasks for the podman role'
  include_tasks: '{{ ansible_distribution }}.yaml'

- name: 'Make podman scripts directory'
  file:
    path: '/etc/podman'
    mode: '0700'
    owner: 'root'
    state: 'directory'

- name: 'Deploy podman configuration files'
  copy:
    src: 'cni-podman0'
    dest: '/etc/network/interfaces.d/'
    owner: 'root'
    mode: '444'
```

My OS specific task file `Debian.yaml` looks like this:

``` yaml
---
- name: 'Install podman dependencies'
  ansible.builtin.apt:
    name:
      - 'buildah'
      - 'podman'
      - 'rootlesskit'
      - 'slirp4netns'

- name: 'Deploy podman configuration files'
  copy:
    src: 'podman-bridge.json'
    dest: '/etc/cni/net.d/87-podman-bridge.conflist'
    owner: 'root'
    mode: '444'
```

The entry point of this role is the `container.yaml` task file:

``` yaml
---
# Inputs:
#   container:
#     cmd: optional(list(string))
#     env_vars: list(env_var)
#     image: string
#     name: string
#     publishs: list(publish)
#     volumes: list(volume)
# With:
#   env_var:
#     name: string
#     value: string
#   publish:
#     container_port: string
#     host_port: string
#     ip: string
#   volume:
#     dest: string
#     src: string

- name: 'Deploy podman systemd service for {{ container.name }}'
  template:
    src: 'container.service'
    dest: '/etc/systemd/system/podman-{{ container.name }}.service'
    owner: 'root'
    mode: '0444'
  notify: 'systemctl daemon-reload'

- name: 'Deploy podman scripts for {{ container.name }}'
  template:
    src: 'container-{{ item }}.sh'
    dest: '/etc/podman/{{ container.name }}-{{ item }}.sh'
    owner: 'root'
    mode: '0500'
  register: 'deploy_podman_scripts'
  loop:
    - 'start'
    - 'stop'

- name: 'Restart podman container {{ container.name }}'
  shell:
    cmd: "systemctl restart podman-{{ container.name }}"
  when: 'deploy_podman_scripts.changed'

- name: 'Start podman container {{ container.name }} and activate it on boot'
  service:
    name: 'podman-{{ container.name }}'
    enabled: true
    state: 'started'
```

### Handlers

There is a single `main.yaml` handler:

``` yaml
---
- name: 'systemctl daemon-reload'
  shell:
    cmd: 'systemctl daemon-reload'
```

### Files

Here is the `cni-podman0` file I deploy on Debian hosts. It is required for the bridge to be up on boot so that other services can bind ports on it. Without this, the bridge would only come up when the first container starts, which is too late in the boot process.

``` text
###############################################################################
# \_o< WARNING : This file is being managed by ansible!
>o_/ # +# ~~~~ ~~~~ # +############################################################################### + +auto cni-podman0 +iface cni-podman0 inet static +address 10.88.0.1/16 +pre-up brctl addbr cni-podman0 +post-down brctl delbr cni-podman0 +``` + +Here is the JSON cni bridge configuration file I use, customized to add IPv6 support: + +``` json +{ + "cniVersion": "0.4.0", + "name": "podman", + "plugins": [ + { + "type": "bridge", + "bridge": "cni-podman0", + "isGateway": true, + "ipMasq": true, + "hairpinMode": true, + "ipam": { + "type": "host-local", + "routes": [ + { + "dst": "0.0.0.0/0" + }, { + "dst": "::/0" + } + ], + "ranges": [ + [{ + "subnet": "10.88.0.0/16", + "gateway": "10.88.0.1" + }], [{ + "subnet": "fd42::/48", + "gateway": "fd42::1" + }] + ] + } + }, { + "type": "portmap", + "capabilities": { + "portMappings": true + } + }, { + "type": "firewall" + }, { + "type": "tuning" + } + ] +} +``` + +### Templates + +Here is the jinja templated start bash script: + +``` shell +#!/usr/bin/env bash +############################################################################### +# \_o< WARNING : This file is being managed by ansible! >o_/ # +# ~~~~ ~~~~ # +############################################################################### +set -euo pipefail + +podman rm -f {{ container.name }} || true +rm -f /run/podman-{{ container.name }}.ctr-id + +exec podman run \ + --rm \ + --name={{ container.name }} \ + --log-driver=journald \ + --cidfile=/run/podman-{{ container.name }}.ctr-id \ + --cgroups=no-conmon \ + --sdnotify=conmon \ + -d \ +{% for env_var in container.env_vars | default([]) %} + -e {{ env_var.name }}={{ env_var.value }} \ +{% endfor %} +{% for publish in container.publishs | default([]) %} + -p {{ publish.ip }}:{{ publish.host_port }}:{{ publish.container_port }} \ +{% endfor %} +{% for volume in container.volumes | default([]) %} + -v {{ volume.src }}:{{ volume.dest }} \ +{% endfor %} + {{ container.image }} {% for cmd in container.cmd | default([]) %}{{ cmd }} {% endfor %} +``` + +Here is the jinja templated stop bash script: + +``` shell +#!/usr/bin/env bash +############################################################################### +# \_o< WARNING : This file is being managed by ansible! >o_/ # +# ~~~~ ~~~~ # +############################################################################### +set -euo pipefail + +if [[ ! "$SERVICE_RESULT" = success ]]; then + podman stop --ignore --cidfile=/run/podman-{{ container.name }}.ctr-id +fi + +podman rm -f --ignore --cidfile=/run/podman-{{ container.name }}.ctr-id +``` + +Here is the jinja templated systemd unit service: + +``` shell +############################################################################### +# \_o< WARNING : This file is being managed by ansible! 
>o_/ # +# ~~~~ ~~~~ # +############################################################################### + +[Unit] +After=network-online.target +Description=Podman container {{ container.name }} + +[Service] +ExecStart=/etc/podman/{{ container.name }}-start.sh +ExecStop=/etc/podman/{{ container.name }}-stop.sh +NotifyAccess=all +Restart=always +TimeoutStartSec=0 +TimeoutStopSec=120 +Type=notify + +[Install] +WantedBy=multi-user.target +``` + +## Usage example + +I do not call the role from a playbook, I prefer running the setup from an application’s role that relies on podman using a meta/main.yaml containing something like: + +``` yaml +--- +dependencies: + - role: 'borg' + - role: 'nginx' + - role: 'podman' +``` + +Then from a tasks file: + +``` yaml +- include_role: + name: 'podman' + tasks_from: 'container' + vars: + container: + cmd: ['--config-path', '/srv/cfg/conf.php'] + name: 'privatebin' + env_vars: + - name: 'PHP_TZ' + value: 'Europe/Paris' + - name: 'TZ' + value: 'Europe/Paris' + image: 'docker.io/privatebin/nginx-fpm-alpine:1.7.4' + publishs: + - container_port: '8080' + host_port: '8082' + ip: '127.0.0.1' + volumes: + - dest: '/srv/cfg/conf.php:ro' + src: '/etc/privatebin.conf.php' + - dest: '/srv/data' + src: '/srv/privatebin' +``` + +## Conclusion + +I enjoy this design, it works really well. I am missing a task for deprovisioning a container but I have not needed it yet. diff --git a/content/blog/ansible/postgresql-ansible-role.md b/content/blog/ansible/postgresql-ansible-role.md new file mode 100644 index 0000000..02614c0 --- /dev/null +++ b/content/blog/ansible/postgresql-ansible-role.md @@ -0,0 +1,261 @@ +--- +title: 'PostgreSQL ansible role' +description: 'The ansible role I use to manage my PostgreSQL databases' +date: '2024-10-09' +tags: +- ansible +- PostgreSQL +--- + +## Introduction + +Before succumbing to nixos, I had been using an ansible role to manage my PostgreSQL databases. Now that I am in need of it again I refined it a bit: here is the result. 
## The role

### Tasks

My `main.yaml` relies on OS-specific tasks:

``` yaml
---
- name: 'Generate postgres user password'
  include_tasks: 'generate_password.yaml'
  vars:
    name: 'postgres'
  when: '(ansible_local["postgresql_postgres"]|default({})).password is undefined'

- name: 'Run OS tasks'
  include_tasks: '{{ ansible_distribution }}.yaml'

- name: 'Start postgresql and activate it on boot'
  service:
    name: 'postgresql'
    enabled: true
    state: 'started'
```

Here is an example in `Debian.yaml`:

``` yaml
---
- name: 'Install postgresql'
  package:
    name:
      - 'postgresql'
      - 'python3-psycopg2' # necessary for the ansible postgresql modules

- name: 'Configure postgresql'
  template:
    src: 'pg_hba.conf'
    dest: '/etc/postgresql/15/main/'
    owner: 'root'
    group: 'postgres'
    mode: '0440'
  notify: 'reload postgresql'

- name: 'Configure postgresql (files that require a restart when modified)'
  template:
    src: 'postgresql.conf'
    dest: '/etc/postgresql/15/main/'
    owner: 'root'
    group: 'postgres'
    mode: '0440'
  notify: 'restart postgresql'

- meta: 'flush_handlers'

- name: 'Set postgres admin password'
  shell:
    cmd: "printf \"ALTER USER postgres WITH PASSWORD '%s';\" \"{{ ansible_local.postgresql_postgres.password }}\" | su -c psql - postgres"
  when: 'postgresql_password_postgres is defined'
```

My `generate_password.yaml` will persist a password with a custom fact:

``` yaml
---
# Inputs:
#   name: string
# Outputs:
#   ansible_local["postgresql_" + name].password
- name: 'Generate a password'
  set_fact: { "postgresql_password_{{ name }}": "{{ lookup('password', '/dev/null length=32 chars=ascii_letters') }}" }

- name: 'Deploy ansible fact to persist the password'
  template:
    src: 'postgresql.fact'
    dest: '/etc/ansible/facts.d/postgresql_{{ name }}.fact'
    owner: 'root'
    mode: '0500'
  vars:
    password: "{{ lookup('vars', 'postgresql_password_' + name) }}"

- name: 'reload ansible_local'
  setup: 'filter=ansible_local'
```
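To check that the fact was persisted, the custom facts can be queried ad hoc from the controller (a quick sketch; the host name is a placeholder):

``` shell
ansible db1.adyxax.org -m setup -a 'filter=ansible_local'
```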
The main entry point of the role is the `database.yaml` task:

``` yaml
---
# Inputs:
#   postgresql:
#     name: string
#     extensions: optional(list(string))
# Outputs:
#   ansible_local["postgresql_" + postgresql.name].password
- name: 'Generate {{ postgresql.name }} password'
  include_tasks: 'generate_password.yaml'
  vars:
    name: '{{ postgresql.name }}'
  when: '(ansible_local["postgresql_" + postgresql.name]|default({})).password is undefined'

- name: 'Create {{ postgresql.name }} user'
  community.postgresql.postgresql_user:
    login_host: 'localhost'
    login_password: '{{ ansible_local.postgresql_postgres.password }}'
    name: '{{ postgresql.name }}'
    password: '{{ ansible_local["postgresql_" + postgresql.name].password }}'

- name: 'Create {{ postgresql.name }} database'
  community.postgresql.postgresql_db:
    login_host: 'localhost'
    login_password: '{{ ansible_local.postgresql_postgres.password }}'
    name: '{{ postgresql.name }}'
    owner: '{{ postgresql.name }}'

- name: 'Activate {{ postgresql.name }} extensions'
  community.postgresql.postgresql_ext:
    db: '{{ postgresql.name }}'
    login_host: 'localhost'
    login_password: '{{ ansible_local.postgresql_postgres.password }}'
    name: '{{ item }}'
  loop: '{{ postgresql.extensions | default([]) }}'
```

### Handlers

Here are the two handlers:

``` yaml
---
- name: 'reload postgresql'
  service:
    name: 'postgresql'
    state: 'reloaded'

- name: 'restart postgresql'
  service:
    name: 'postgresql'
    state: 'restarted'
```

### Templates

Here is my usual `pg_hba.conf`:

``` text
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################

local all all peer # unix socket

host all all 127.0.0.0/8  scram-sha-256
host all all ::1/128      scram-sha-256
host all all 10.88.0.0/16 scram-sha-256 # podman
```

Here is my `postgresql.conf` for Debian:

``` ini
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################

data_directory = '/var/lib/postgresql/15/main' # use data in another directory
hba_file = '/etc/postgresql/15/main/pg_hba.conf' # host-based authentication file
ident_file = '/etc/postgresql/15/main/pg_ident.conf' # ident configuration file
external_pid_file = '/var/run/postgresql/15-main.pid' # write an extra PID file

port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)

unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
listen_addresses = 'localhost,10.88.0.1'

shared_buffers = 128MB # min 128kB
dynamic_shared_memory_type = posix # the default is usually the first option
max_wal_size = 1GB
min_wal_size = 80MB
log_line_prefix = '%m [%p] %q%u@%d ' # special values:
log_timezone = 'Europe/Paris'
cluster_name = '15/main' # added to process titles if nonempty
datestyle = 'iso, mdy'
timezone = 'Europe/Paris'
lc_messages = 'en_US.UTF-8' # locale for system error message
lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
lc_numeric = 'en_US.UTF-8' # locale for number formatting
lc_time = 'en_US.UTF-8' # locale for time formatting
default_text_search_config = 'pg_catalog.english'
include_dir = 'conf.d' # include files ending in '.conf' from
```

And here is the simple fact script:

``` shell
#!/bin/sh
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################
set -eu

printf '{"password": "%s"}' "{{ password }}"
```

## Usage example

I do not call the role from a playbook; I prefer running the setup from an application's role that relies on postgresql, using a `meta/main.yaml` containing something like:

``` yaml
---
dependencies:
  - role: 'borg'
  - role: 'postgresql'
```

Then from a tasks file:

``` yaml
- include_role:
    name: 'postgresql'
    tasks_from: 'database'
  vars:
    postgresql:
      extensions:
        - 'pgcrypto'
      name: 'eventline'
```

Backup jobs can be set up with:

``` yaml
- include_role:
    name: 'borg'
    tasks_from: 'client'
  vars:
    client:
      jobs:
        - name: 'postgres'
          command_to_pipe: "su - postgres -c '/usr/bin/pg_dump -b -c -C -d eventline'"
      name: 'eventline'
      server: '{{ eventline_adyxax_org.borg }}'
```

## Conclusion

I enjoy this design; it has served me well.