Diffstat (limited to 'content/blog')
-rw-r--r--  content/blog/ansible/borg-ansible-role-2.md       303
-rw-r--r--  content/blog/ansible/factorio.md                    17
-rw-r--r--  content/blog/ansible/nginx-ansible-role.md         336
-rw-r--r--  content/blog/ansible/podman-ansible-role.md        307
-rw-r--r--  content/blog/ansible/postgresql-ansible-role.md    261
-rw-r--r--  content/blog/aws/ansible-fact-metadata.md           88
-rw-r--r--  content/blog/aws/defaults.md                         4
-rw-r--r--  content/blog/aws/secrets.md                          4
-rw-r--r--  content/blog/cloudflare/importing-terraform.md       6
-rw-r--r--  content/blog/kubernetes/dev-shm.md                  36
-rw-r--r--  content/blog/terraform/acme.md                       6
-rw-r--r--  content/blog/terraform/caa.md                        2
-rw-r--r--  content/blog/terraform/chart-http-datasources.md     8
-rw-r--r--  content/blog/terraform/email-dns-unused-zone.md      4
-rw-r--r--  content/blog/terraform/tofu.md                      18
15 files changed, 1374 insertions, 26 deletions
diff --git a/content/blog/ansible/borg-ansible-role-2.md b/content/blog/ansible/borg-ansible-role-2.md
new file mode 100644
index 0000000..54198cc
--- /dev/null
+++ b/content/blog/ansible/borg-ansible-role-2.md
@@ -0,0 +1,303 @@
+---
+title: 'Borg ansible role (continued)'
+description: 'The ansible role I rewrote to manage my borg backups'
+date: '2024-10-07'
+tags:
+- ansible
+- backups
+- borg
+---
+
+## Introduction
+
+I initially wrote about my borg ansible role in [a blog article three and a half years ago]({{< ref "borg-ansible-role.md" >}}). I released a second version two years ago (time flies!) and it still works well, but I am no longer using it.
+
+I put down ansible when I got infatuated with nixos a little more than a year ago. Now that I am dialing back my use of nixos, I am reviewing and changing some of my design choices.
+
+## Borg repositories changes
+
+One of the main breaking changes is that I no longer want to use one borg repository per host as my old role did: I want one per job/application, so that backups are agnostic of the hosts they run on.
+
+The main advantages are:
+- one private ssh key per job
+- no more data expiration when a job stops running on a host for a while
+- easier monitoring of job runs: checking whether a repository has new data is now enough, whereas before I had to check the number of jobs that wrote to it in a specific time frame.
+
+The main drawback is that I lose the ability to automatically clean a borg server's `authorized_keys` file when I completely stop using an application or service. Migrating from host to host is properly handled, but complete removal will be manual. I tolerate this because now each job has its own private ssh key, generated on the fly when the job is deployed to a host.
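+
+When the time eventually comes, a one-off task run against the borg server should handle the cleanup; a minimal sketch, with a hypothetical job name:
+
+``` yaml
+- name: 'Revoke the ssh key of a decommissioned job'
+  lineinfile:
+    path: '/srv/borg/.ssh/authorized_keys'
+    search_string: 'borg serve --restrict-to-path /srv/borg/some-retired-job'
+    state: 'absent'
+```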
+
+## The new role
+
+### Tasks
+
+The `main.yaml` contains:
+
+``` yaml
+---
+- name: 'Install borg'
+  package:
+    name:
+      - 'borgbackup'
+    # This use attribute is a work around for https://github.com/ansible/ansible/issues/82598
+    # Invoking the package module without this fails in a delegate_to context
+    use: '{{ ansible_facts["pkg_mgr"] }}'
+```
+
+It will be included in a `delegate_to` context when a client configures its server. For the client itself, this tasks file will run normally, invoked from a `meta` dependency.
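+
+For reference, an application role that needs backups declares that dependency in its `meta/main.yaml`, the same pattern my other roles use:
+
+``` yaml
+---
+dependencies:
+  - role: 'borg'
+```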
+
+The meat of the role is in `client.yaml`:
+
+``` yaml
+---
+# Inputs:
+#   client:
+#     name: string
+#     jobs: list(job)
+#     server: string
+# With:
+#   job:
+#     command_to_pipe: optional(string)
+#     exclude: optional(list(string))
+#     name: string
+#     paths: optional(list(string))
+#     post_command: optional(string)
+#     pre_command: optional(string)
+
+- name: 'Ensure borg directories exist on the client'
+  file:
+    state: 'directory'
+    path: '{{ item }}'
+    owner: 'root'
+    mode: '0700'
+  loop:
+    - '/etc/borg'
+    - '/root/.cache/borg'
+    - '/root/.config/borg'
+
+- name: 'Generate openssh key pair'
+  openssh_keypair:
+    path: '/etc/borg/{{ client.name }}.key'
+    type: 'ed25519'
+    owner: 'root'
+    mode: '0400'
+
+- name: 'Read the public key'
+  ansible.builtin.slurp:
+    src: '/etc/borg/{{ client.name }}.key.pub'
+  register: 'borg_public_key'
+
+- include_role:
+    name: 'borg'
+    tasks_from: 'server'
+  args:
+    apply:
+      delegate_to: '{{ client.server }}'
+  vars:
+    server:
+      name: '{{ client.name }}'
+      pubkey: '{{ borg_public_key.content | b64decode | trim }}'
+
+- name: 'Deploy the jobs script'
+  template:
+    src: 'jobs.sh'
+    dest: '/etc/borg/{{ client.name }}.sh'
+    owner: 'root'
+    mode: '0500'
+
+- name: 'Deploy the systemd service and timer'
+  template:
+    src: '{{ item.src }}'
+    dest: '{{ item.dest }}'
+    owner: 'root'
+    mode: '0444'
+  notify: 'systemctl daemon-reload'
+  loop:
+    - { src: 'jobs.service', dest: '/etc/systemd/system/borg-job-{{ client.name }}.service' }
+    - { src: 'jobs.timer', dest: '/etc/systemd/system/borg-job-{{ client.name }}.timer' }
+
+- name: 'Activate job'
+  service:
+    name: 'borg-job-{{ client.name }}.timer'
+    enabled: true
+    state: 'started'
+```
+
+The `server.yaml` contains:
+
+``` yaml
+---
+# Inputs:
+#   server:
+#     name: string
+#     pubkey: string
+
+- name: 'Run common tasks'
+  include_tasks: 'main.yaml'
+
+- name: 'Create borg group on server'
+  group:
+    name: 'borg'
+    system: 'yes'
+
+- name: 'Create borg user on server'
+  user:
+    name: 'borg'
+    group: 'borg'
+    shell: '/bin/sh'
+    home: '/srv/borg'
+    createhome: 'yes'
+    system: 'yes'
+    password: '*'
+
+- name: 'Ensure borg directories exist on server'
+  file:
+    state: 'directory'
+    path: '{{ item }}'
+    owner: 'borg'
+    mode: '0700'
+  loop:
+    - '/srv/borg/.ssh'
+    - '/srv/borg/{{ server.name }}'
+
+- name: 'Authorize client public key'
+  lineinfile:
+    path: '/srv/borg/.ssh/authorized_keys'
+    line: '{{ line }}{{ server.pubkey }}'
+    search_string: '{{ line }}'
+    create: true
+    owner: 'borg'
+    group: 'borg'
+    mode: '0400'
+  vars:
+    line: 'command="borg serve --restrict-to-path /srv/borg/{{ server.name }}",restrict '
+```
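+
+For illustration, the resulting `authorized_keys` line looks something like this (hypothetical job name, truncated key):
+
+``` text
+command="borg serve --restrict-to-path /srv/borg/vaultwarden",restrict ssh-ed25519 AAAAC3NzaC1lZDI1...
+```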
+
+### Handlers
+
+I have a single handler:
+
+``` yaml
+---
+- name: 'systemctl daemon-reload'
+  shell:
+    cmd: 'systemctl daemon-reload'
+```
+
+### Templates
+
+The `jobs.sh` script contains:
+
+``` shell
+#!/usr/bin/env bash
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+set -euo pipefail
+
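+# Archives are first created with a .failed suffix; it only gets stripped by
+# the borg rename at the end of each job, after every step has succeeded.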
+archiveSuffix=".failed"
+
+# Run borg init if the repo doesn't exist yet
+if ! borg list > /dev/null; then
+ borg init --encryption none
+fi
+
+{% for job in client.jobs %}
+archiveName="{{ ansible_fqdn }}-{{ client.name }}-{{ job.name }}-$(date +%Y-%m-%dT%H:%M:%S)"
+{% if job.pre_command is defined %}
+{{ job.pre_command }}
+{% endif %}
+{% if job.command_to_pipe is defined %}
+{{ job.command_to_pipe }} \
+ | borg create \
+ --compression auto,zstd \
+ "::${archiveName}${archiveSuffix}" \
+ -
+{% else %}
+borg create \
+ {% for exclude in job.exclude|default([]) %} --exclude {{ exclude }}{% endfor %} \
+ --compression auto,zstd \
+ "::${archiveName}${archiveSuffix}" \
+ {{ job.paths | join(" ") }}
+{% endif %}
+{% if job.post_command is defined %}
+{{ job.post_command }}
+{% endif %}
+borg rename "::${archiveName}${archiveSuffix}" "${archiveName}"
+borg prune \
+ --keep-daily=14 --keep-monthly=3 --keep-weekly=4 \
+ --glob-archives '*-{{ client.name }}-{{ job.name }}-*'
+{% endfor %}
+
+borg compact
+```
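+
+To manually inspect a repository from the client host, the settings the systemd service below injects can be provided by hand; a sketch with hypothetical job and server names:
+
+``` shell
+BORG_RSH='ssh -i /etc/borg/vaultwarden.key' \
+    borg list 'ssh://borg@backup.example.com/srv/borg/vaultwarden'
+```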
+
+The `jobs.service` systemd unit file contains:
+
+``` ini
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+[Unit]
+Description=BorgBackup job {{ client.name }}
+
+[Service]
+Environment="BORG_REPO=ssh://borg@{{ client.server }}/srv/borg/{{ client.name }}"
+Environment="BORG_RSH=ssh -i /etc/borg/{{ client.name }}.key -o StrictHostKeyChecking=accept-new"
+CPUSchedulingPolicy=idle
+ExecStart=/etc/borg/{{ client.name }}.sh
+Group=root
+IOSchedulingClass=idle
+PrivateTmp=true
+ProtectSystem=strict
+ReadWritePaths=/root/.cache/borg
+ReadWritePaths=/root/.config/borg
+User=root
+```
+
+Finally the `jobs.timer` systemd timer file contains:
+
+``` ini
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+[Unit]
+Description=BorgBackup job {{ client.name }} timer
+
+[Timer]
+FixedRandomDelay=true
+OnCalendar=daily
+Persistent=true
+RandomizedDelaySec=3600
+
+[Install]
+WantedBy=timers.target
+```
+
+## Invoking the role
+
+The role can be invoked with:
+
+``` yaml
+- include_role:
+    name: 'borg'
+    tasks_from: 'client'
+  vars:
+    client:
+      jobs:
+        - name: 'data'
+          paths:
+            - '/srv/vaultwarden'
+        - name: 'postgres'
+          command_to_pipe: "su - postgres -c '/usr/bin/pg_dump -b -c -C -d vaultwarden'"
+      name: 'vaultwarden'
+      server: '{{ vaultwarden.borg }}'
+```
+
+## Conclusion
+
+I am happy with this new design! The immediate consequence is that I am archiving my old role since I do not intend to maintain it anymore.
diff --git a/content/blog/ansible/factorio.md b/content/blog/ansible/factorio.md
index c4fad35..08e2827 100644
--- a/content/blog/ansible/factorio.md
+++ b/content/blog/ansible/factorio.md
@@ -208,6 +208,23 @@ Finally I start and activate the factorio service on boot:
state: 'started'
```
+### Backups
+
+I invoke a personal borg role to configure my backups. I will detail the inner workings of this role in a future article:
+``` yaml
+- include_role:
+    name: 'borg'
+    tasks_from: 'client'
+  vars:
+    client:
+      jobs:
+        - name: 'save'
+          paths:
+            - '/srv/factorio/factorio/saves/game.zip'
+      name: 'factorio'
+      server: '{{ factorio.borg }}'
+```
+
## Handlers
I have these two handlers:
diff --git a/content/blog/ansible/nginx-ansible-role.md b/content/blog/ansible/nginx-ansible-role.md
new file mode 100644
index 0000000..0c465a9
--- /dev/null
+++ b/content/blog/ansible/nginx-ansible-role.md
@@ -0,0 +1,336 @@
+---
+title: 'Nginx ansible role'
+description: 'The ansible role I use to manage my nginx web servers'
+date: '2024-10-28'
+tags:
+- ansible
+- nginx
+---
+
+## Introduction
+
+Before succumbing to nixos, I had been using an ansible role to manage my nginx web servers. Now that I am in need of it again, I refined it a bit: here is the result.
+
+## The role
+
+### Vars
+
+The role has OS-specific vars in files named after the operating system. For example, in `vars/Debian.yaml` I have:
+
+``` yaml
+---
+nginx:
+  etc_dir: '/etc/nginx'
+  pid_file: '/run/nginx.pid'
+  www_user: 'www-data'
+```
+
+While in `vars/FreeBSD.yaml` I have:
+
+``` yaml
+---
+nginx:
+  etc_dir: '/usr/local/etc/nginx'
+  pid_file: '/var/run/nginx.pid'
+  www_user: 'www'
+```
+
+### Tasks
+
+The main tasks file sets up nginx and the global configuration common to all virtual hosts:
+
+``` yaml
+---
+- include_vars: '{{ ansible_distribution }}.yaml'
+
+- name: 'Install nginx'
+  package:
+    name:
+      - 'nginx'
+
+- name: 'Make nginx vhost directory'
+  file:
+    path: '{{ nginx.etc_dir }}/vhost.d'
+    mode: '0755'
+    owner: 'root'
+    state: 'directory'
+
+- name: 'Deploy nginx configuration files'
+  copy:
+    src: '{{ item }}'
+    dest: '{{ nginx.etc_dir }}/{{ item }}'
+  notify: 'reload nginx'
+  loop:
+    - 'headers_base.conf'
+    - 'headers_secure.conf'
+    - 'headers_static.conf'
+    - 'headers_unsafe_inline_csp.conf'
+
+- name: 'Deploy nginx configuration template'
+  template:
+    src: 'nginx.conf'
+    dest: '{{ nginx.etc_dir }}/'
+  notify: 'reload nginx'
+
+- name: 'Deploy nginx certificates'
+  copy:
+    src: '{{ item }}'
+    dest: '{{ nginx.etc_dir }}/'
+  notify: 'reload nginx'
+  loop:
+    - 'adyxax.org.fullchain'
+    - 'adyxax.org.key'
+    - 'dh4096.pem'
+
+- name: 'Start nginx and activate it on boot'
+  service:
+    name: 'nginx'
+    enabled: true
+    state: 'started'
+```
+
+I have a `vhost.yaml` tasks file which currently just deploys a template and reloads nginx:
+
+``` yaml
+- name: 'Deploy {{ vhost.name }} vhost {{ vhost.path }}'
+  template:
+    src: '{{ vhost.path }}'
+    dest: '{{ nginx.etc_dir }}/vhost.d/{{ vhost.name }}.conf'
+  notify: 'reload nginx'
+```
+
+### Handlers
+
+There is a single `main.yaml` handler:
+
+``` yaml
+---
+- name: 'reload nginx'
+  service:
+    name: 'nginx'
+    state: 'reloaded'
+```
+
+### Files
+
+I deploy four configuration files in this role. These are all variations on the same theme; their purpose is just to avoid duplicating statements in the virtual hosts configuration files.
+
+`headers_base.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+add_header X-Frame-Options deny;
+add_header X-XSS-Protection "1; mode=block";
+add_header X-Content-Type-Options nosniff;
+add_header Referrer-Policy strict-origin;
+add_header Cache-Control no-transform;
+add_header Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=()";
+# 6 months HSTS pinning
+add_header Strict-Transport-Security max-age=16000000;
+```
+
+`headers_secure.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+include headers_base.conf;
+add_header Content-Security-Policy "script-src 'self'";
+```
+
+`headers_static.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+include headers_secure.conf;
+# Infinite caching
+add_header Cache-Control "public, max-age=31536000, immutable";
+```
+
+`headers_unsafe_inline_csp.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+include headers_base.conf;
+add_header Content-Security-Policy "script-src 'self' 'unsafe-inline'";
+```
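+
+For illustration, here is how a vhost is meant to consume these includes; a sketch for a hypothetical static site:
+
+``` nginx
+server {
+    listen 443 ssl;
+    listen [::]:443 ssl;
+    server_name static.example.com;
+    root /srv/static;
+    include headers_static.conf;
+    ssl_certificate adyxax.org.fullchain;
+    ssl_certificate_key adyxax.org.key;
+}
+```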
+
+### Templates
+
+I have a single template for `nginx.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+user {{ nginx.www_user }};
+worker_processes auto;
+pid {{ nginx.pid_file }};
+error_log /var/log/nginx/error.log;
+
+events {
+    worker_connections 1024;
+}
+
+http {
+    include mime.types;
+    types_hash_max_size 4096;
+    sendfile on;
+    tcp_nopush on;
+    tcp_nodelay on;
+    keepalive_timeout 65;
+
+    ssl_protocols TLSv1.2 TLSv1.3;
+    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
+    ssl_prefer_server_ciphers on;
+
+    gzip on;
+    gzip_static on;
+    gzip_vary on;
+    gzip_comp_level 5;
+    gzip_min_length 256;
+    gzip_proxied expired no-cache no-store private auth;
+    gzip_types application/atom+xml application/geo+json application/javascript application/json application/ld+json application/manifest+json application/rdf+xml application/vnd.ms-fontobject application/wasm application/x-rss+xml application/x-web-app-manifest+json application/xhtml+xml application/xliff+xml application/xml font/collection font/otf font/ttf image/bmp image/svg+xml image/vnd.microsoft.icon text/cache-manifest text/calendar text/css text/csv text/javascript text/markdown text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/xml;
+
+    proxy_redirect off;
+    proxy_connect_timeout 60s;
+    proxy_send_timeout 60s;
+    proxy_read_timeout 60s;
+    proxy_http_version 1.1;
+    proxy_set_header "Connection" "";
+    proxy_set_header Host $host;
+    proxy_set_header X-Real-IP $remote_addr;
+    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+    proxy_set_header X-Forwarded-Proto $scheme;
+    proxy_set_header X-Forwarded-Host $host;
+    proxy_set_header X-Forwarded-Server $host;
+
+    map $http_upgrade $connection_upgrade {
+        default upgrade;
+        '' close;
+    }
+
+    client_max_body_size 40M;
+    server_tokens off;
+    default_type application/octet-stream;
+    access_log /var/log/nginx/access.log;
+
+    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
+    fastcgi_param QUERY_STRING $query_string;
+    fastcgi_param REQUEST_METHOD $request_method;
+    fastcgi_param CONTENT_TYPE $content_type;
+    fastcgi_param CONTENT_LENGTH $content_length;
+
+    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
+    fastcgi_param REQUEST_URI $request_uri;
+    fastcgi_param DOCUMENT_URI $document_uri;
+    fastcgi_param DOCUMENT_ROOT $document_root;
+    fastcgi_param SERVER_PROTOCOL $server_protocol;
+    fastcgi_param REQUEST_SCHEME $scheme;
+    fastcgi_param HTTPS $https if_not_empty;
+
+    fastcgi_param GATEWAY_INTERFACE CGI/1.1;
+    fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
+
+    fastcgi_param REMOTE_ADDR $remote_addr;
+    fastcgi_param REMOTE_PORT $remote_port;
+    fastcgi_param REMOTE_USER $remote_user;
+    fastcgi_param SERVER_ADDR $server_addr;
+    fastcgi_param SERVER_PORT $server_port;
+    fastcgi_param SERVER_NAME $server_name;
+
+    # PHP only, required if PHP was built with --enable-force-cgi-redirect
+    fastcgi_param REDIRECT_STATUS 200;
+
+    uwsgi_param QUERY_STRING $query_string;
+    uwsgi_param REQUEST_METHOD $request_method;
+    uwsgi_param CONTENT_TYPE $content_type;
+    uwsgi_param CONTENT_LENGTH $content_length;
+
+    uwsgi_param REQUEST_URI $request_uri;
+    uwsgi_param PATH_INFO $document_uri;
+    uwsgi_param DOCUMENT_ROOT $document_root;
+    uwsgi_param SERVER_PROTOCOL $server_protocol;
+    uwsgi_param REQUEST_SCHEME $scheme;
+    uwsgi_param HTTPS $https if_not_empty;
+
+    uwsgi_param REMOTE_ADDR $remote_addr;
+    uwsgi_param REMOTE_PORT $remote_port;
+    uwsgi_param SERVER_PORT $server_port;
+    uwsgi_param SERVER_NAME $server_name;
+
+    ssl_dhparam dh4096.pem;
+    ssl_session_cache shared:SSL:2m;
+    ssl_session_timeout 1h;
+    ssl_session_tickets off;
+
+    server {
+        listen 80 default_server;
+        listen [::]:80 default_server;
+        server_name _;
+        access_log off;
+        server_name_in_redirect off;
+        return 444;
+    }
+
+    server {
+        listen 443 ssl;
+        listen [::]:443 ssl;
+        server_name _;
+        access_log off;
+        server_name_in_redirect off;
+        return 444;
+        ssl_certificate adyxax.org.fullchain;
+        ssl_certificate_key adyxax.org.key;
+    }
+
+    include vhost.d/*.conf;
+}
+```
+
+## Usage example
+
+I do not call the role from a playbook; I prefer running the setup from an application's role that relies on nginx, using a `meta/main.yaml` containing something like:
+
+``` yaml
+---
+dependencies:
+  - role: 'borg'
+  - role: 'nginx'
+  - role: 'postgresql'
+```
+
+Then from a tasks file:
+
+``` yaml
+- include_role:
+    name: 'nginx'
+    tasks_from: 'vhost'
+  vars:
+    vhost:
+      name: 'www'
+      path: 'roles/www.adyxax.org/files/nginx-vhost.conf'
+```
+
+I did not find an elegant way to pass a file path local to one role to another. Because of that, here I just specify the full vhost file path complete with the `roles/` prefix.
+
+## Conclusion
+
+If you have an elegant idea for passing the local file path from one role to another, do not hesitate to ping me!
diff --git a/content/blog/ansible/podman-ansible-role.md b/content/blog/ansible/podman-ansible-role.md
new file mode 100644
index 0000000..37cdabf
--- /dev/null
+++ b/content/blog/ansible/podman-ansible-role.md
@@ -0,0 +1,307 @@
+---
+title: 'Podman ansible role'
+description: 'The ansible role I use to manage my podman containers'
+date: '2024-11-08'
+tags:
+- ansible
+- podman
+---
+
+## Introduction
+
+Before succumbing to nixos, I was running all my containers on k3s. This time I am migrating things to podman, trying to achieve a lighter setup. This article presents the ansible role I wrote to manage podman containers.
+
+## The role
+
+### Tasks
+
+The main tasks file sets up podman and the required network configuration:
+
+``` yaml
+---
+- name: 'Run OS specific tasks for the podman role'
+  include_tasks: '{{ ansible_distribution }}.yaml'
+
+- name: 'Make podman scripts directory'
+  file:
+    path: '/etc/podman'
+    mode: '0700'
+    owner: 'root'
+    state: 'directory'
+
+- name: 'Deploy podman configuration files'
+  copy:
+    src: 'cni-podman0'
+    dest: '/etc/network/interfaces.d/'
+    owner: 'root'
+    mode: '444'
+```
+
+My OS-specific tasks file `Debian.yaml` looks like this:
+
+``` yaml
+---
+- name: 'Install podman dependencies'
+  ansible.builtin.apt:
+    name:
+      - 'buildah'
+      - 'podman'
+      - 'rootlesskit'
+      - 'slirp4netns'
+
+- name: 'Deploy podman configuration files'
+  copy:
+    src: 'podman-bridge.json'
+    dest: '/etc/cni/net.d/87-podman-bridge.conflist'
+    owner: 'root'
+    mode: '444'
+```
+
+The entry point of this role is the `container.yaml` tasks file:
+
+``` yaml
+---
+# Inputs:
+#   container:
+#     cmd: optional(list(string))
+#     env_vars: list(env_var)
+#     image: string
+#     name: string
+#     publishs: list(publish)
+#     volumes: list(volume)
+# With:
+#   env_var:
+#     name: string
+#     value: string
+#   publish:
+#     container_port: string
+#     host_port: string
+#     ip: string
+#   volume:
+#     dest: string
+#     src: string
+
+- name: 'Deploy podman systemd service for {{ container.name }}'
+  template:
+    src: 'container.service'
+    dest: '/etc/systemd/system/podman-{{ container.name }}.service'
+    owner: 'root'
+    mode: '0444'
+  notify: 'systemctl daemon-reload'
+
+- name: 'Deploy podman scripts for {{ container.name }}'
+  template:
+    src: 'container-{{ item }}.sh'
+    dest: '/etc/podman/{{ container.name }}-{{ item }}.sh'
+    owner: 'root'
+    mode: '0500'
+  register: 'deploy_podman_scripts'
+  loop:
+    - 'start'
+    - 'stop'
+
+- name: 'Restart podman container {{ container.name }}'
+  shell:
+    cmd: "systemctl restart podman-{{ container.name }}"
+  when: 'deploy_podman_scripts.changed'
+
+- name: 'Start podman container {{ container.name }} and activate it on boot'
+  service:
+    name: 'podman-{{ container.name }}'
+    enabled: true
+    state: 'started'
+```
+
+### Handlers
+
+There is a single `main.yaml` handler:
+
+``` yaml
+---
+- name: 'systemctl daemon-reload'
+  shell:
+    cmd: 'systemctl daemon-reload'
+```
+
+### Files
+
+Here is the `cni-podman0` interfaces file I deploy on Debian hosts. It is required for the bridge to be up on boot so that other services can bind ports on it. Without it, the bridge would only come up when the first container starts, which is too late in the boot process.
+
+``` text
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+auto cni-podman0
+iface cni-podman0 inet static
+address 10.88.0.1/16
+pre-up brctl addbr cni-podman0
+post-down brctl delbr cni-podman0
+```
+
+Here is the JSON CNI bridge configuration file I use, customized to add IPv6 support:
+
+``` json
+{
+    "cniVersion": "0.4.0",
+    "name": "podman",
+    "plugins": [
+        {
+            "type": "bridge",
+            "bridge": "cni-podman0",
+            "isGateway": true,
+            "ipMasq": true,
+            "hairpinMode": true,
+            "ipam": {
+                "type": "host-local",
+                "routes": [
+                    {
+                        "dst": "0.0.0.0/0"
+                    }, {
+                        "dst": "::/0"
+                    }
+                ],
+                "ranges": [
+                    [{
+                        "subnet": "10.88.0.0/16",
+                        "gateway": "10.88.0.1"
+                    }], [{
+                        "subnet": "fd42::/48",
+                        "gateway": "fd42::1"
+                    }]
+                ]
+            }
+        }, {
+            "type": "portmap",
+            "capabilities": {
+                "portMappings": true
+            }
+        }, {
+            "type": "firewall"
+        }, {
+            "type": "tuning"
+        }
+    ]
+}
+```
+
+### Templates
+
+Here is the jinja templated start bash script:
+
+``` shell
+#!/usr/bin/env bash
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+set -euo pipefail
+
+podman rm -f {{ container.name }} || true
+rm -f /run/podman-{{ container.name }}.ctr-id
+
+exec podman run \
+    --rm \
+    --name={{ container.name }} \
+    --log-driver=journald \
+    --cidfile=/run/podman-{{ container.name }}.ctr-id \
+    --cgroups=no-conmon \
+    --sdnotify=conmon \
+    -d \
+{% for env_var in container.env_vars | default([]) %}
+    -e {{ env_var.name }}={{ env_var.value }} \
+{% endfor %}
+{% for publish in container.publishs | default([]) %}
+    -p {{ publish.ip }}:{{ publish.host_port }}:{{ publish.container_port }} \
+{% endfor %}
+{% for volume in container.volumes | default([]) %}
+    -v {{ volume.src }}:{{ volume.dest }} \
+{% endfor %}
+    {{ container.image }} {% for cmd in container.cmd | default([]) %}{{ cmd }} {% endfor %}
+```
+
+Here is the jinja templated stop bash script:
+
+``` shell
+#!/usr/bin/env bash
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+set -euo pipefail
+
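+# SERVICE_RESULT is set by systemd for ExecStop processes; it holds "success"
+# when the main process exited cleanly.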
+if [[ ! "$SERVICE_RESULT" = success ]]; then
+    podman stop --ignore --cidfile=/run/podman-{{ container.name }}.ctr-id
+fi
+
+podman rm -f --ignore --cidfile=/run/podman-{{ container.name }}.ctr-id
+```
+
+Here is the jinja templated systemd unit service:
+
+``` shell
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+[Unit]
+After=network-online.target
+Description=Podman container {{ container.name }}
+
+[Service]
+ExecStart=/etc/podman/{{ container.name }}-start.sh
+ExecStop=/etc/podman/{{ container.name }}-stop.sh
+NotifyAccess=all
+Restart=always
+TimeoutStartSec=0
+TimeoutStopSec=120
+Type=notify
+
+[Install]
+WantedBy=multi-user.target
+```
+
+## Usage example
+
+I do not call the role from a playbook; I prefer running the setup from an application's role that relies on podman, using a `meta/main.yaml` containing something like:
+
+``` yaml
+---
+dependencies:
+ - role: 'borg'
+ - role: 'nginx'
+ - role: 'podman'
+```
+
+Then from a tasks file:
+
+``` yaml
+- include_role:
+    name: 'podman'
+    tasks_from: 'container'
+  vars:
+    container:
+      cmd: ['--config-path', '/srv/cfg/conf.php']
+      name: 'privatebin'
+      env_vars:
+        - name: 'PHP_TZ'
+          value: 'Europe/Paris'
+        - name: 'TZ'
+          value: 'Europe/Paris'
+      image: 'docker.io/privatebin/nginx-fpm-alpine:1.7.4'
+      publishs:
+        - container_port: '8080'
+          host_port: '8082'
+          ip: '127.0.0.1'
+      volumes:
+        - dest: '/srv/cfg/conf.php:ro'
+          src: '/etc/privatebin.conf.php'
+        - dest: '/srv/data'
+          src: '/srv/privatebin'
+```
+
+## Conclusion
+
+I enjoy this design; it works really well. I am missing a task for deprovisioning a container, but I have not needed one yet.
diff --git a/content/blog/ansible/postgresql-ansible-role.md b/content/blog/ansible/postgresql-ansible-role.md
new file mode 100644
index 0000000..02614c0
--- /dev/null
+++ b/content/blog/ansible/postgresql-ansible-role.md
@@ -0,0 +1,261 @@
+---
+title: 'PostgreSQL ansible role'
+description: 'The ansible role I use to manage my PostgreSQL databases'
+date: '2024-10-09'
+tags:
+- ansible
+- PostgreSQL
+---
+
+## Introduction
+
+Before succumbing to nixos, I had been using an ansible role to manage my PostgreSQL databases. Now that I am in need of it again, I refined it a bit: here is the result.
+
+## The role
+
+### Tasks
+
+My `main.yaml` relies on OS-specific tasks:
+
+``` yaml
+---
+- name: 'Generate postgres user password'
+  include_tasks: 'generate_password.yaml'
+  vars:
+    name: 'postgres'
+  when: '(ansible_local["postgresql_postgres"]|default({})).password is undefined'
+
+- name: 'Run OS tasks'
+  include_tasks: '{{ ansible_distribution }}.yaml'
+
+- name: 'Start postgresql and activate it on boot'
+  service:
+    name: 'postgresql'
+    enabled: true
+    state: 'started'
+```
+
+Here is an example in `Debian.yaml`:
+
+``` yaml
+---
+- name: 'Install postgresql'
+  package:
+    name:
+      - 'postgresql'
+      - 'python3-psycopg2' # necessary for the ansible postgresql modules
+
+- name: 'Configure postgresql'
+  template:
+    src: 'pg_hba.conf'
+    dest: '/etc/postgresql/15/main/'
+    owner: 'root'
+    group: 'postgres'
+    mode: '0440'
+  notify: 'reload postgresql'
+
+- name: 'Configure postgresql (files that require a restart when modified)'
+  template:
+    src: 'postgresql.conf'
+    dest: '/etc/postgresql/15/main/'
+    owner: 'root'
+    group: 'postgres'
+    mode: '0440'
+  notify: 'restart postgresql'
+
+# Flush handlers now so that postgresql restarts before we try to set the
+# admin password below
+- meta: 'flush_handlers'
+
+- name: 'Set postgres admin password'
+  shell:
+    cmd: "printf \"ALTER USER postgres WITH PASSWORD '%s';\" \"{{ ansible_local.postgresql_postgres.password }}\" | su -c psql - postgres"
+  when: 'postgresql_password_postgres is defined'
+```
+
+My `generate_password.yaml` will persist a password with a custom fact:
+
+``` yaml
+---
+# Inputs:
+#   name: string
+# Outputs:
+#   ansible_local["postgresql_" + name].password
+- name: 'Generate a password'
+  set_fact: { "postgresql_password_{{ name }}": "{{ lookup('password', '/dev/null length=32 chars=ascii_letters') }}" }
+
+- name: 'Deploy ansible fact to persist the password'
+  template:
+    src: 'postgresql.fact'
+    dest: '/etc/ansible/facts.d/postgresql_{{ name }}.fact'
+    owner: 'root'
+    mode: '0500'
+  vars:
+    password: "{{ lookup('vars', 'postgresql_password_' + name) }}"
+
+- name: 'reload ansible_local'
+  setup: 'filter=ansible_local'
+```
+
+The main entry point of the role is the `database.yaml` task:
+
+``` yaml
+---
+# Inputs:
+#   postgresql:
+#     name: string
+#     extensions: optional(list(string))
+# Outputs:
+#   ansible_local["postgresql_" + postgresql.name].password
+- name: 'Generate {{ postgresql.name }} password'
+  include_tasks: 'generate_password.yaml'
+  vars:
+    name: '{{ postgresql.name }}'
+  when: '(ansible_local["postgresql_" + postgresql.name]|default({})).password is undefined'
+
+- name: 'Create {{ postgresql.name }} user'
+  community.postgresql.postgresql_user:
+    login_host: 'localhost'
+    login_password: '{{ ansible_local.postgresql_postgres.password }}'
+    name: '{{ postgresql.name }}'
+    password: '{{ ansible_local["postgresql_" + postgresql.name].password }}'
+
+- name: 'Create {{ postgresql.name }} database'
+  community.postgresql.postgresql_db:
+    login_host: 'localhost'
+    login_password: '{{ ansible_local.postgresql_postgres.password }}'
+    name: '{{ postgresql.name }}'
+    owner: '{{ postgresql.name }}'
+
+- name: 'Activate {{ postgresql.name }} extensions'
+  community.postgresql.postgresql_ext:
+    db: '{{ postgresql.name }}'
+    login_host: 'localhost'
+    login_password: '{{ ansible_local.postgresql_postgres.password }}'
+    name: '{{ item }}'
+  loop: '{{ postgresql.extensions | default([]) }}'
+```
+
+### Handlers
+
+Here are the two handlers:
+
+``` yaml
+---
+- name: 'reload postgresql'
+  service:
+    name: 'postgresql'
+    state: 'reloaded'
+
+- name: 'restart postgresql'
+  service:
+    name: 'postgresql'
+    state: 'restarted'
+```
+
+### Templates
+
+Here is my usual `pg_hba.conf`:
+
+``` text
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+local   all   all                  peer            # unix socket
+
+host    all   all   127.0.0.0/8    scram-sha-256
+host    all   all   ::1/128        scram-sha-256
+host    all   all   10.88.0.0/16   scram-sha-256   # podman
+```
+
+Here is my `postgresql.conf` for Debian:
+
+``` ini
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+data_directory = '/var/lib/postgresql/15/main' # use data in another directory
+hba_file = '/etc/postgresql/15/main/pg_hba.conf' # host-based authentication file
+ident_file = '/etc/postgresql/15/main/pg_ident.conf' # ident configuration file
+external_pid_file = '/var/run/postgresql/15-main.pid' # write an extra PID file
+
+port = 5432 # (change requires restart)
+max_connections = 100 # (change requires restart)
+
+unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
+listen_addresses = 'localhost,10.88.0.1'
+
+shared_buffers = 128MB # min 128kB
+dynamic_shared_memory_type = posix # the default is usually the first option
+max_wal_size = 1GB
+min_wal_size = 80MB
+log_line_prefix = '%m [%p] %q%u@%d ' # special values:
+log_timezone = 'Europe/Paris'
+cluster_name = '15/main' # added to process titles if nonempty
+datestyle = 'iso, mdy'
+timezone = 'Europe/Paris'
+lc_messages = 'en_US.UTF-8' # locale for system error message
+lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
+lc_numeric = 'en_US.UTF-8' # locale for number formatting
+lc_time = 'en_US.UTF-8' # locale for time formatting
+default_text_search_config = 'pg_catalog.english'
+include_dir = 'conf.d' # include files ending in '.conf' from
+```
+
+And here is the simple fact script:
+
+``` shell
+#!/bin/sh
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+set -eu
+
+printf '{"password": "%s"}' "{{ password }}"
+```
+
+## Usage example
+
+I do not call the role from a playbook; I prefer running the setup from an application's role that relies on postgresql, using a `meta/main.yaml` containing something like:
+
+``` yaml
+---
+dependencies:
+ - role: 'borg'
+ - role: 'postgresql'
+```
+
+Then from a tasks file:
+
+``` yaml
+- include_role:
+    name: 'postgresql'
+    tasks_from: 'database'
+  vars:
+    postgresql:
+      extensions:
+        - 'pgcrypto'
+      name: 'eventline'
+```
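+
+The application role can then read the generated credentials back from the persisted fact; a sketch (hypothetical template and destination) for deploying a configuration file:
+
+``` yaml
+- name: 'Deploy the eventline configuration'
+  template:
+    src: 'eventline.yaml'
+    dest: '/etc/eventline/eventline.yaml'
+    owner: 'root'
+    mode: '0400'
+  vars:
+    db_password: '{{ ansible_local.postgresql_eventline.password }}'
+```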
+
+Backup jobs can be set up with:
+
+``` yaml
+- include_role:
+    name: 'borg'
+    tasks_from: 'client'
+  vars:
+    client:
+      jobs:
+        - name: 'postgres'
+          command_to_pipe: "su - postgres -c '/usr/bin/pg_dump -b -c -C -d eventline'"
+      name: 'eventline'
+      server: '{{ eventline_adyxax_org.borg }}'
+```
+
+## Conclusion
+
+I enjoy this design, it has served me well.
diff --git a/content/blog/aws/ansible-fact-metadata.md b/content/blog/aws/ansible-fact-metadata.md
new file mode 100644
index 0000000..3c48f1c
--- /dev/null
+++ b/content/blog/aws/ansible-fact-metadata.md
@@ -0,0 +1,88 @@
+---
+title: 'Shell script for gathering IMDSv2 instance metadata on AWS ec2'
+description: 'An ansible fact I wrote'
+date: '2024-10-12'
+tags:
+- ansible
+- aws
+---
+
+## Introduction
+
+I wrote a shell script that gathers ec2 instance metadata and exposes it as an ansible fact.
+
+## The script
+
+I am using POSIX `/bin/sh` because I wanted to support a variety of operating systems. Besides that, the only dependency is `curl`:
+
+``` shell
+#!/bin/sh
+set -eu
+
+metadata() {
+    local METHOD=$1
+    local URI_PATH=$2
+    local TOKEN="${3:-}"
+    local HEADER
+    if [ -z "${TOKEN}" ]; then
+        HEADER='X-aws-ec2-metadata-token-ttl-seconds: 21600' # request a 6 hours token
+    else
+        HEADER="X-aws-ec2-metadata-token: ${TOKEN}"
+    fi
+    curl -sSfL --request "${METHOD}" \
+        "http://169.254.169.254/latest${URI_PATH}" \
+        --header "${HEADER}"
+}
+
+METADATA_TOKEN=$(metadata PUT /api/token)
+KEYS=$(metadata GET /meta-data/tags/instance "${METADATA_TOKEN}")
+PREFIX='{'
+for KEY in $KEYS; do
+    VALUE=$(metadata GET "/meta-data/tags/instance/${KEY}" "${METADATA_TOKEN}")
+    printf '%s"%s":"%s"' "${PREFIX}" "${KEY}" "${VALUE}"
+    PREFIX=','
+done
+printf '}'
+```
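+
+The script prints a single JSON object mapping instance tag names to their values, for example (made-up tags):
+
+``` json
+{"Name":"web1","environment":"production"}
+```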
+
+## Bonus version without depending on curl
+
+Depending on curl can be avoided. If you are willing to use netcat instead and be declared a madman by your colleagues, you can rewrite the function with:
+
+``` shell
+metadata() {
+    local METHOD=$1
+    local URI_PATH=$2
+    local TOKEN="${3:-}"
+    local HEADER
+    if [ -z "${TOKEN}" ]; then
+        HEADER='X-aws-ec2-metadata-token-ttl-seconds: 21600' # request a 6 hours token
+    else
+        HEADER="X-aws-ec2-metadata-token: ${TOKEN}"
+    fi
+    printf "${METHOD} /latest${URI_PATH} HTTP/1.0\r\n%s\r\n\r\n" \
+        "${HEADER}" \
+        | nc -w 5 169.254.169.254 80 | tail -n 1 # the body is the last line of the response
+}
+```
+
+## Deploying an ansible fact
+
+I deploy the script this way:
+``` yaml
+- name: 'Deploy ec2 metadata fact gathering script'
+  copy:
+    src: 'ec2_metadata.sh'
+    dest: '/etc/ansible/facts.d/ec2_metadata.fact'
+    owner: 'root'
+    mode: '0500'
+  register: 'ec2_metadata_fact'
+
+- name: 'reload facts'
+  setup: 'filter=ansible_local'
+  when: 'ec2_metadata_fact.changed'
+```
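+
+Once deployed, every instance tag becomes reachable through `ansible_local`; a minimal sketch, assuming a hypothetical `environment` tag:
+
+``` yaml
+- name: 'Show the environment tag gathered from the instance metadata'
+  debug:
+    msg: '{{ ansible_local.ec2_metadata.environment }}'
+```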
+
+## Conclusion
+
+It works, it is simple, and I like it. I am happy!
diff --git a/content/blog/aws/defaults.md b/content/blog/aws/defaults.md
index 9fdbfa3..454b325 100644
--- a/content/blog/aws/defaults.md
+++ b/content/blog/aws/defaults.md
@@ -1,10 +1,10 @@
---
title: Securing AWS default VPCs
-description: With terraform/opentofu
+description: With terraform/OpenTofu
date: 2024-09-10
tags:
- aws
-- opentofu
+- OpenTofu
- terraform
---
diff --git a/content/blog/aws/secrets.md b/content/blog/aws/secrets.md
index 476d235..a25f9ef 100644
--- a/content/blog/aws/secrets.md
+++ b/content/blog/aws/secrets.md
@@ -1,10 +1,10 @@
---
title: Managing AWS secrets
-description: with the CLI and with terraform/opentofu
+description: with the CLI and with terraform/OpenTofu
date: 2024-08-13
tags:
- aws
-- opentofu
+- OpenTofu
- terraform
---
diff --git a/content/blog/cloudflare/importing-terraform.md b/content/blog/cloudflare/importing-terraform.md
index 7fc5dfd..1ddb635 100644
--- a/content/blog/cloudflare/importing-terraform.md
+++ b/content/blog/cloudflare/importing-terraform.md
@@ -1,16 +1,16 @@
---
-title: Importing cloudflare DNS records in terraform/opentofu
+title: Importing cloudflare DNS records in terraform/OpenTofu
description: a way to get the records IDs
date: 2024-07-16
tags:
- cloudflare
-- opentofu
+- OpenTofu
- terraform
---
## Introduction
-Managing cloudflare DNS records using terraform/opentofu is easy enough, but importing existing records into your automation is not straightforward.
+Managing cloudflare DNS records using terraform/OpenTofu is easy enough, but importing existing records into your automation is not straightforward.
## The problem
diff --git a/content/blog/kubernetes/dev-shm.md b/content/blog/kubernetes/dev-shm.md
new file mode 100644
index 0000000..9369052
--- /dev/null
+++ b/content/blog/kubernetes/dev-shm.md
@@ -0,0 +1,36 @@
+---
+title: 'How to increase /dev/shm size on kubernetes'
+description: "the equivalent to docker's shm-size flag"
+date: '2024-10-02'
+tags:
+- kubernetes
+---
+
+## Introduction
+
+Today I had to find a way to increase the size of the shared memory filesystem offered to containers for a specific workload. `/dev/shm` is a Linux specific `tmpfs` filesystem that some applications use for inter-process communication. The default size of this filesystem on kubernetes nodes is 64MiB.
+
+Docker has a `--shm-size 1g` flag to specify that. While kubernetes does not offer a direct equivalent, we can replicate this with volumes.
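+
+Assuming a running pod named `mypod`, the effective size can be checked with:
+
+``` shell
+kubectl exec mypod -- df -h /dev/shm
+```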
+
+## Configuration in pod specification
+
+Here are the relevant sections of the spec we need to set:
+``` yaml
+spec:
+  template:
+    spec:
+      containers:
+        - name: app # the container that needs the bigger /dev/shm
+          volumeMounts:
+            - mountPath: /dev/shm
+              name: dev-shm
+              readOnly: false
+      volumes:
+        - name: dev-shm
+          emptyDir:
+            medium: Memory
+            sizeLimit: 1Gi
+```
+
+## Conclusion
+
+Well it works!
diff --git a/content/blog/terraform/acme.md b/content/blog/terraform/acme.md
index f19302b..37045fd 100644
--- a/content/blog/terraform/acme.md
+++ b/content/blog/terraform/acme.md
@@ -1,16 +1,16 @@
---
-title: Certificate management with opentofu and eventline
+title: Certificate management with OpenTofu and eventline
description: How I manage for my personal infrastructure
date: 2024-03-06
tags:
- Eventline
-- opentofu
+- OpenTofu
- terraform
---
## Introduction
-In this article, I will explain how I handle the management and automatic renewal of SSL certificates on my personal infrastructure using opentofu (the fork of terraform) and [eventline](https://www.exograd.com/products/eventline/). I chose to centralise the renewal on my single host running eventline and to generate a single wildcard certificate for each domain I manage.
+In this article, I will explain how I handle the management and automatic renewal of SSL certificates on my personal infrastructure using OpenTofu (the fork of terraform) and [eventline](https://www.exograd.com/products/eventline/). I chose to centralise the renewal on my single host running eventline and to generate a single wildcard certificate for each domain I manage.
## Wildcard certificates
diff --git a/content/blog/terraform/caa.md b/content/blog/terraform/caa.md
index defcd6a..ce6ff37 100644
--- a/content/blog/terraform/caa.md
+++ b/content/blog/terraform/caa.md
@@ -3,7 +3,7 @@ title: CAA DNS records with OpenTofu
description: How I manage which acme CA can issue certificates for me
date: 2024-05-27
tags:
-- opentofu
+- OpenTofu
- terraform
---
diff --git a/content/blog/terraform/chart-http-datasources.md b/content/blog/terraform/chart-http-datasources.md
index ebf0aba..5c4108d 100644
--- a/content/blog/terraform/chart-http-datasources.md
+++ b/content/blog/terraform/chart-http-datasources.md
@@ -1,18 +1,18 @@
---
-title: Manage helm charts extras with opentofu
+title: Manage helm charts extras with OpenTofu
description: a use case for the http datasource
date: 2024-04-25
tags:
- aws
-- opentofu
+- OpenTofu
- terraform
---
## Introduction
-When managing helm charts with opentofu (terraform), you often have to hard code correlated settings for versioning (like app version and chart version). Sometimes it goes even further and you need to fetch a policy or a manifest with some CRDs that the chart will depend on.
+When managing helm charts with OpenTofu (terraform), you often have to hard code correlated settings for versioning (like app version and chart version). Sometimes it goes even further and you need to fetch a policy or a manifest with some CRDs that the chart will depend on.
-Here is an example of how to manage that with opentofu and an http datasource for the AWS load balancer controller.
+Here is an example of how to manage that with OpenTofu and an http datasource for the AWS load balancer controller.
## A word about the AWS load balancer controller
diff --git a/content/blog/terraform/email-dns-unused-zone.md b/content/blog/terraform/email-dns-unused-zone.md
index cc8dc77..e1f9b81 100644
--- a/content/blog/terraform/email-dns-unused-zone.md
+++ b/content/blog/terraform/email-dns-unused-zone.md
@@ -1,11 +1,11 @@
---
title: Email DNS records for zones that do not send emails
-description: Automated with terraform/opentofu
+description: Automated with terraform/OpenTofu
date: 2024-09-03
tags:
- cloudflare
- DNS
-- opentofu
+- OpenTofu
- terraform
---
diff --git a/content/blog/terraform/tofu.md b/content/blog/terraform/tofu.md
index 48ec621..b52b97f 100644
--- a/content/blog/terraform/tofu.md
+++ b/content/blog/terraform/tofu.md
@@ -1,20 +1,20 @@
---
-title: Testing opentofu
+title: Testing OpenTofu
description: Little improvements and what it means for small providers like mine
date: 2024-01-31
tags:
- Eventline
-- opentofu
+- OpenTofu
- terraform
---
## Introduction
-This January, the opentofu project announced the general availability of their terraform fork. Not much changes for now between terraform and opentofu (and that is a good thing!), as far as I can tell the announcement was mostly about the new provider registry and of course the truly open source license.
+This January, the OpenTofu project announced the general availability of their terraform fork. Not much changes for now between terraform and OpenTofu (and that is a good thing!), as far as I can tell the announcement was mostly about the new provider registry and of course the truly open source license.
## Registry change
-The opentofu registry already has all the providers you are accustomed to, but your state will need to be migrated with:
+The OpenTofu registry already has all the providers you are accustomed to, but your state will need to be migrated with:
```sh
tofu init -upgrade
```
@@ -24,19 +24,19 @@ For some providers you might encounter the following warning:
- Installed cloudflare/cloudflare v4.23.0. Signature validation was skipped due to the registry not containing GPG keys for this provider
```
-This is harmless and will resolve itself when the providers' developers provide the public GPG key used to sign their releases to the opentofu registry. The process is very simple thanks to their GitHub workflow automation.
+This is harmless and will resolve itself when the providers' developers provide the public GPG key used to sign their releases to the OpenTofu registry. The process is very simple thanks to their GitHub workflow automation.
## Little improvements
- `tofu init` seems significantly faster than `terraform init`.
-- You never could interrupt a terraform plan with `C-C`. I am so very glad to see that it is not a problem with opentofu! This really needs more advertising: proper Unix signal handling is like a superpower that is too often ignored by modern software.
-- `tofu test` can be used to assert things about your state and your configuration. I did not play with it yet but it opens [a whole new realm of possibilities](https://opentofu.org/docs/cli/commands/test/)!
+- You never could interrupt a terraform plan with `C-C`. I am so very glad to see that it is not a problem with OpenTofu! This really needs more advertising: proper Unix signal handling is like a superpower that is too often ignored by modern software.
+- `tofu test` can be used to assert things about your state and your configuration. I did not play with it yet but it opens [a whole new realm of possibilities](https://opentofu.org/docs/cli/commands/test/)!
- `tofu import` can use expressions referencing other values or resources attributes, this is a big deal when handling massive imports!
## Eventline terraform provider
-I did the required pull requests on the [opentofu registry](https://github.com/opentofu/registry) to have my [Eventline provider](https://github.com/adyxax/terraform-provider-eventline) all fixed up and ready to rock!
+I did the required pull requests on the [OpenTofu registry](https://github.com/opentofu/registry) to have my [Eventline provider](https://github.com/adyxax/terraform-provider-eventline) all fixed up and ready to rock!
## Conclusion
-I hope opentofu really takes off, the little improvements they made already feel like a breath of fresh air. Terraform could be so much more!
+I hope OpenTofu really takes off, the little improvements they made already feel like a breath of fresh air. Terraform could be so much more!