Diffstat (limited to 'content/blog')
-rw-r--r--  content/blog/ansible/borg-ansible-role-2.md  | 303
-rw-r--r--  content/blog/ansible/factorio.md  | 265
-rw-r--r--  content/blog/ansible/nginx-ansible-role.md  | 336
-rw-r--r--  content/blog/ansible/podman-ansible-role.md  | 307
-rw-r--r--  content/blog/ansible/postgresql-ansible-role.md  | 261
-rw-r--r--  content/blog/aws/ansible-fact-metadata.md  | 88
-rw-r--r--  content/blog/aws/defaults.md  | 254
-rw-r--r--  content/blog/aws/secrets.md  | 4
-rw-r--r--  content/blog/cloudflare/importing-terraform.md  | 6
-rw-r--r--  content/blog/debian/ovh-rescue.md  | 116
-rw-r--r--  content/blog/kubernetes/dev-shm.md  | 36
-rw-r--r--  content/blog/miscellaneous/generate-github-access-token-for-github-app.md  | 67
-rw-r--r--  content/blog/terraform/acme.md  | 6
-rw-r--r--  content/blog/terraform/caa.md  | 2
-rw-r--r--  content/blog/terraform/chart-http-datasources.md  | 8
-rw-r--r--  content/blog/terraform/email-dns-unused-zone.md  | 104
-rw-r--r--  content/blog/terraform/tofu.md  | 18
17 files changed, 2159 insertions(+), 22 deletions(-)
diff --git a/content/blog/ansible/borg-ansible-role-2.md b/content/blog/ansible/borg-ansible-role-2.md
new file mode 100644
index 0000000..54198cc
--- /dev/null
+++ b/content/blog/ansible/borg-ansible-role-2.md
@@ -0,0 +1,303 @@
+---
+title: 'Borg ansible role (continued)'
+description: 'The ansible role I rewrote to manage my borg backups'
+date: '2024-10-07'
+tags:
+- ansible
+- backups
+- borg
+---
+
+## Introduction
+
+I initially wrote about my borg ansible role in [a blog article three and a half years ago]({{< ref "borg-ansible-role.md" >}}). I released a second version two years ago (time flies!) and it still works well, but I am no longer using it.
+
+I put down ansible when I got infatuated with nixos a little more than a year ago. As I am now dialing back my use of nixos, I am reviewing and changing some of my design choices.
+
+## Borg repositories changes
+
+One of the main breaking changes is that I no longer want to use one borg repository per host as my old role did: I want one per job/application so that backups are agnostic of the hosts they run on.
+
+The main advantages are:
+- one private ssh key per job
+- no more data expiration when a job stops running on a host for a while
+- easier monitoring of job runs: checking that a repository received new data is now enough; before, I had to check the number of jobs that wrote to it in a specific time frame (see the sketch below).
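+
+For example, checking that a repository received a recent archive can be done with something like this (a minimal sketch; it assumes `BORG_REPO` and `BORG_RSH` are set as in the systemd service shown later in this article):
+
+``` shell
+# list the most recent archive in the repository along with its date
+borg list --last 1
+```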
+
+The main drawback is that I lose the ability to automatically clean a borg server's `authorized_keys` file when I completely stop using an application or service. Migrating from host to host is properly handled, but complete removal will be manual. I tolerate this because now each job has its own private ssh key, generated on the fly when the job is deployed to a host.
+
+## The new role
+
+### Tasks
+
+The main.yaml contains:
+
+``` yaml
+---
+- name: 'Install borg'
+ package:
+ name:
+ - 'borgbackup'
+    # This use attribute is a workaround for https://github.com/ansible/ansible/issues/82598
+ # Invoking the package module without this fails in a delegate_to context
+ use: '{{ ansible_facts["pkg_mgr"] }}'
+```
+
+It will be included in a `delegate_to` context when a client configures its server. For the client itself, this tasks file will run normally and be invoked from a `meta` dependency.
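+
+For example, an application role that relies on this one can pull it in through a `meta/main.yaml` dependency (a minimal sketch):
+
+``` yaml
+---
+dependencies:
+  - role: 'borg'
+```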
+
+The meat of the role is in the client.yaml:
+
+``` yaml
+---
+# Inputs:
+# client:
+# name: string
+# jobs: list(job)
+# server: string
+# With:
+# job:
+# command_to_pipe: optional(string)
+# exclude: optional(list(string))
+# name: string
+# paths: optional(list(string))
+# post_command: optional(string)
+# pre_command: optional(string)
+
+- name: 'Ensure borg directories exist on the client'
+ file:
+ state: 'directory'
+ path: '{{ item }}'
+ owner: 'root'
+ mode: '0700'
+ loop:
+ - '/etc/borg'
+ - '/root/.cache/borg'
+ - '/root/.config/borg'
+
+- name: 'Generate openssh key pair'
+ openssh_keypair:
+ path: '/etc/borg/{{ client.name }}.key'
+ type: 'ed25519'
+ owner: 'root'
+ mode: '0400'
+
+- name: 'Read the public key'
+ ansible.builtin.slurp:
+ src: '/etc/borg/{{ client.name }}.key.pub'
+ register: 'borg_public_key'
+
+- include_role:
+ name: 'borg'
+ tasks_from: 'server'
+ args:
+ apply:
+ delegate_to: '{{ client.server }}'
+ vars:
+ server:
+ name: '{{ client.name }}'
+ pubkey: '{{ borg_public_key.content | b64decode | trim }}'
+
+- name: 'Deploy the jobs script'
+ template:
+ src: 'jobs.sh'
+ dest: '/etc/borg/{{ client.name }}.sh'
+ owner: 'root'
+ mode: '0500'
+
+- name: 'Deploy the systemd service and timer'
+ template:
+ src: '{{ item.src }}'
+ dest: '{{ item.dest }}'
+ owner: 'root'
+ mode: '0444'
+ notify: 'systemctl daemon-reload'
+ loop:
+ - { src: 'jobs.service', dest: '/etc/systemd/system/borg-job-{{ client.name }}.service' }
+ - { src: 'jobs.timer', dest: '/etc/systemd/system/borg-job-{{ client.name }}.timer' }
+
+- name: 'Activate job'
+ service:
+ name: 'borg-job-{{ client.name }}.timer'
+ enabled: true
+ state: 'started'
+
+```
+
+The server.yaml contains:
+
+``` yaml
+---
+# Inputs:
+# server:
+# name: string
+# pubkey: string
+
+- name: 'Run common tasks'
+ include_tasks: 'main.yaml'
+
+- name: 'Create borg group on server'
+ group:
+ name: 'borg'
+ system: 'yes'
+
+- name: 'Create borg user on server'
+ user:
+ name: 'borg'
+ group: 'borg'
+ shell: '/bin/sh'
+ home: '/srv/borg'
+ createhome: 'yes'
+ system: 'yes'
+ password: '*'
+
+- name: 'Ensure borg directories exist on server'
+ file:
+ state: 'directory'
+ path: '{{ item }}'
+ owner: 'borg'
+ mode: '0700'
+ loop:
+ - '/srv/borg/.ssh'
+ - '/srv/borg/{{ server.name }}'
+
+- name: 'Authorize client public key'
+ lineinfile:
+ path: '/srv/borg/.ssh/authorized_keys'
+ line: '{{ line }}{{ server.pubkey }}'
+ search_string: '{{ line }}'
+ create: true
+ owner: 'borg'
+ group: 'borg'
+ mode: '0400'
+ vars:
+ line: 'command="borg serve --restrict-to-path /srv/borg/{{ server.name }}",restrict '
+```
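+
+For reference, the resulting `authorized_keys` entry looks something like this (the key material and comment here are placeholders):
+
+``` text
+command="borg serve --restrict-to-path /srv/borg/vaultwarden",restrict ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... root@client
+```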
+
+### Handlers
+
+I have a single handler:
+
+``` yaml
+---
+- name: 'systemctl daemon-reload'
+ shell:
+ cmd: 'systemctl daemon-reload'
+```
+
+### Templates
+
+The `jobs.sh` script contains:
+
+``` shell
+#!/usr/bin/env bash
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+set -euo pipefail
+
+archiveSuffix=".failed"
+
+# Run borg init if the repo doesn't exist yet
+if ! borg list > /dev/null; then
+ borg init --encryption none
+fi
+
+{% for job in client.jobs %}
+archiveName="{{ ansible_fqdn }}-{{ client.name }}-{{ job.name }}-$(date +%Y-%m-%dT%H:%M:%S)"
+{% if job.pre_command is defined %}
+{{ job.pre_command }}
+{% endif %}
+{% if job.command_to_pipe is defined %}
+{{ job.command_to_pipe }} \
+ | borg create \
+ --compression auto,zstd \
+ "::${archiveName}${archiveSuffix}" \
+ -
+{% else %}
+borg create \
+ {% for exclude in job.exclude|default([]) %} --exclude {{ exclude }}{% endfor %} \
+ --compression auto,zstd \
+ "::${archiveName}${archiveSuffix}" \
+ {{ job.paths | join(" ") }}
+{% endif %}
+{% if job.post_command is defined %}
+{{ job.post_command }}
+{% endif %}
+borg rename "::${archiveName}${archiveSuffix}" "${archiveName}"
+borg prune \
+ --keep-daily=14 --keep-monthly=3 --keep-weekly=4 \
+ --glob-archives '*-{{ client.name }}-{{ job.name }}-*'
+{% endfor %}
+
+borg compact
+```
+
+The `jobs.service` systemd unit file contains:
+
+``` ini
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+[Unit]
+Description=BorgBackup job {{ client.name }}
+
+[Service]
+Environment="BORG_REPO=ssh://borg@{{ client.server }}/srv/borg/{{ client.name }}"
+Environment="BORG_RSH=ssh -i /etc/borg/{{ client.name }}.key -o StrictHostKeyChecking=accept-new"
+CPUSchedulingPolicy=idle
+ExecStart=/etc/borg/{{ client.name }}.sh
+Group=root
+IOSchedulingClass=idle
+PrivateTmp=true
+ProtectSystem=strict
+ReadWritePaths=/root/.cache/borg
+ReadWritePaths=/root/.config/borg
+User=root
+```
+
+Finally the `jobs.timer` systemd timer file contains:
+
+``` ini
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+[Unit]
+Description=BorgBackup job {{ client.name }} timer
+
+[Timer]
+FixedRandomDelay=true
+OnCalendar=daily
+Persistent=true
+RandomizedDelaySec=3600
+
+[Install]
+WantedBy=timers.target
+```
+
+## Invoking the role
+
+The role can be invoked by:
+
+``` yaml
+- include_role:
+ name: 'borg'
+ tasks_from: 'client'
+ vars:
+ client:
+ jobs:
+ - name: 'data'
+ paths:
+ - '/srv/vaultwarden'
+ - name: 'postgres'
+ command_to_pipe: "su - postgres -c '/usr/bin/pg_dump -b -c -C -d vaultwarden'"
+ name: 'vaultwarden'
+ server: '{{ vaultwarden.borg }}'
+```
+
+## Conclusion
+
+I am happy with this new design! The immediate consequence is that I am archiving my old role since I do not intend to maintain it anymore.
diff --git a/content/blog/ansible/factorio.md b/content/blog/ansible/factorio.md
new file mode 100644
index 0000000..08e2827
--- /dev/null
+++ b/content/blog/ansible/factorio.md
@@ -0,0 +1,265 @@
+---
+title: 'How to self host a Factorio headless server'
+description: 'Automated with ansible'
+date: '2024-09-25'
+tags:
+- ansible
+- Debian
+- Factorio
+---
+
+## Introduction
+
+With the upcoming v2.0 release next month, we decided to try a [seablock](https://mods.factorio.com/mod/SeaBlock) run with a friend and see how far we can get in this time frame. Here is the small ansible role I wrote to deploy this. It is for a Debian server but any Linux distribution with systemd will do. And if you ignore the service unit file, any Linux or even [FreeBSD](factorio-server-in-a-linux-jail.md) host will do.
+
+## Tasks
+
+This role has a single `tasks/main.yaml` file containing the following.
+
+### User
+
+This is fairly standard:
+``` yaml
+- name: 'Create factorio group'
+ group:
+ name: 'factorio'
+ system: 'yes'
+
+- name: 'Create factorio user'
+ user:
+ name: 'factorio'
+ group: 'factorio'
+ shell: '/usr/bin/bash'
+ home: '/srv/factorio'
+ createhome: 'yes'
+ system: 'yes'
+ password: '*'
+```
+
+### Factorio
+
+Factorio has an API endpoint that provides information about its latest releases, which I query and parse with:
+``` yaml
+- name: 'Retrieve factorio latest release number'
+ shell:
+ cmd: "curl -s https://factorio.com/api/latest-releases | jq -r '.stable.headless'"
+ register: 'factorio_version_info'
+ changed_when: False
+
+- set_fact:
+ factorio_version: '{{ factorio_version_info.stdout_lines[0] }}'
+```
+
+Afterwards, it is just a question of downloading and extracting factorio:
+``` yaml
+- name: 'Download factorio'
+ get_url:
+ url: "https://www.factorio.com/get-download/{{ factorio_version }}/headless/linux64"
+ dest: '/srv/factorio/headless-{{ factorio_version }}.zip'
+ mode: '0444'
+ register: 'factorio_downloaded'
+
+- name: 'Extract new factorio version'
+ ansible.builtin.unarchive:
+ src: '/srv/factorio/headless-{{ factorio_version }}.zip'
+ dest: '/srv/factorio'
+ owner: 'factorio'
+ group: 'factorio'
+ remote_src: 'yes'
+ notify: 'restart factorio'
+ when: 'factorio_downloaded.changed'
+```
+
+I also create the saves directory with:
+``` yaml
+- name: 'Make factorio saves directory'
+ file:
+ path: '/srv/factorio/factorio/saves'
+ owner: 'factorio'
+ group: 'factorio'
+ mode: '0750'
+ state: 'directory'
+```
+
+### Configuration files
+
+There are two configuration files to copy from the `files` folder:
+``` yaml
+- name: 'Deploy configuration files'
+ copy:
+ src: '{{ item.src }}'
+ dest: '{{ item.dest }}'
+ owner: 'factorio'
+ group: 'factorio'
+ mode: '0440'
+ notify:
+ - 'systemctl daemon-reload'
+ - 'restart factorio'
+ loop:
+ - { src: 'factorio.service', dest: '/etc/systemd/system/' }
+ - { src: 'server-adminlist.json', dest: '/srv/factorio/factorio/' }
+```
+
+The systemd service unit file contains:
+``` ini
+[Unit]
+Description=Factorio Headless Server
+After=network.target
+After=systemd-user-sessions.service
+After=network-online.target
+
+[Service]
+Type=simple
+User=factorio
+ExecStart=/srv/factorio/factorio/bin/x64/factorio --start-server game.zip
+WorkingDirectory=/srv/factorio/factorio
+
+[Install]
+WantedBy=multi-user.target
+```
+
+The admin list is simply:
+
+``` json
+["adyxax"]
+```
+
+I generate the factorio game password with terraform/OpenTofu using a resource like:
+
+``` hcl
+resource "random_password" "factorio" {
+ length = 16
+
+ lifecycle {
+ ignore_changes = [
+ length,
+ lower,
+ ]
+ }
+}
+```
+
+This allows me to have it persist in the terraform state, which is a good thing. For simplification, let's say that this state (which is a json file) is in a local file that I can load with:
+``` yaml
+- name: 'Load the tofu state to read the factorio game password'
+ include_vars:
+ file: '../../../../adyxax.org/01-legacy/terraform.tfstate'
+ name: 'tofu_state_legacy'
+```
+
+Given this template file:
+``` json
+{
+ "name": "Normalians",
+ "description": "C'est sur ce serveur que jouent les beaux gosses",
+ "tags": ["game", "tags"],
+ "max_players": 0,
+ "visibility": {
+ "public": false,
+ "lan": false
+ },
+ "username": "",
+ "password": "",
+ "token": "",
+ "game_password": "{{ factorio_game_password[0] }}",
+ "require_user_verification": false,
+ "max_upload_in_kilobytes_per_second": 0,
+ "max_upload_slots": 5,
+ "minimum_latency_in_ticks": 0,
+ "max_heartbeats_per_second": 60,
+ "ignore_player_limit_for_returning_players": false,
+ "allow_commands": "admins-only",
+ "autosave_interval": 10,
+ "autosave_slots": 5,
+ "afk_autokick_interval": 0,
+ "auto_pause": true,
+ "only_admins_can_pause_the_game": true,
+ "autosave_only_on_server": true,
+ "non_blocking_saving": true,
+ "minimum_segment_size": 25,
+ "minimum_segment_size_peer_count": 20,
+ "maximum_segment_size": 100,
+ "maximum_segment_size_peer_count": 10
+}
+```
+
+Note the usage of `[0]` in the variable expansion: it is a disappointing trick you have to remember when parsing the state with ansible's `json_query` filter, since it always returns an array. The template invocation is:
+``` yaml
+- name: 'Deploy configuration templates'
+ template:
+ src: 'server-settings.json'
+ dest: '/srv/factorio/factorio/'
+ owner: 'factorio'
+ group: 'factorio'
+ mode: '0440'
+ notify: 'restart factorio'
+ vars:
+ factorio_game_password: "{{ tofu_state_legacy | json_query(\"resources[?type=='random_password'&&name=='factorio'].instances[0].attributes.result\") }}"
+```
+
+### Service
+
+Finally I start and activate the factorio service on boot:
+``` yaml
+- name: 'Start factorio and activate it on boot'
+ service:
+ name: 'factorio'
+ enabled: 'yes'
+ state: 'started'
+```
+
+### Backups
+
+I invoke a personal borg role to configure my backups. I will detail the workings of this role in a next article:
+``` yaml
+- include_role:
+ name: 'borg'
+ tasks_from: 'client'
+ vars:
+ client:
+ jobs:
+ - name: 'save'
+ paths:
+ - '/srv/factorio/factorio/saves/game.zip'
+ name: 'factorio'
+ server: '{{ factorio.borg }}'
+```
+
+## Handlers
+
+I have these two handlers:
+
+``` yaml
+---
+- name: 'systemctl daemon-reload'
+ shell:
+ cmd: 'systemctl daemon-reload'
+
+- name: 'restart factorio'
+ service:
+ name: 'factorio'
+ state: 'restarted'
+```
+
+## Generating a map and starting the game
+
+If you just followed this guide, factorio will have failed to start on the server because it does not yet have a map in its saves folder. If that is not the case for you because you are coming back to this article after some time, remember to stop factorio with `systemctl stop factorio` before continuing. If you do not, factorio will overwrite your newly uploaded save when you later restart it.
+
+Launch factorio locally, install any mod you want then go to single player and generate a new map with your chosen settings. Save the game then quit and go back to your terminal.
+
+Find the save file (if playing on steam it will be in `~/.factorio/saves/`) and upload it to `/srv/factorio/factorio/saves/game.zip` (an `scp` example follows below). If you are using mods, `rsync` the mods folder that lives next to your saves directory to the server with:
+
+``` shell
+rsync -r ~/.factorio/mods/ root@factorio.adyxax.org:/srv/factorio/factorio/mods/
+```
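+
+The save file itself can be uploaded with a plain `scp`, for example (the local save name is just a placeholder):
+
+``` shell
+scp ~/.factorio/saves/my-map.zip root@factorio.adyxax.org:/srv/factorio/factorio/saves/game.zip
+```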
+
+Then give these files to the factorio user on your server before restarting the game:
+
+``` shell
+chown -R factorio:factorio /srv/factorio
+systemctl start factorio
+```
+
+## Conclusion
+
+Good luck and have fun!
diff --git a/content/blog/ansible/nginx-ansible-role.md b/content/blog/ansible/nginx-ansible-role.md
new file mode 100644
index 0000000..0c465a9
--- /dev/null
+++ b/content/blog/ansible/nginx-ansible-role.md
@@ -0,0 +1,336 @@
+---
+title: 'Nginx ansible role'
+description: 'The ansible role I use to manage my nginx web servers'
+date: '2024-10-28'
+tags:
+- ansible
+- nginx
+---
+
+## Introduction
+
+Before succumbing to nixos, I had been using an ansible role to manage my nginx web servers. Now that I am in need of it again I refined it a bit: here is the result.
+
+## The role
+
+### Vars
+
+The role has OS specific vars in files named after the operating system. For example in `vars/Debian.yaml` I have:
+
+``` yaml
+---
+nginx:
+ etc_dir: '/etc/nginx'
+ pid_file: '/run/nginx.pid'
+ www_user: 'www-data'
+```
+
+While in `vars/FreeBSD.yaml` I have:
+
+``` yaml
+---
+nginx:
+ etc_dir: '/usr/local/etc/nginx'
+ pid_file: '/var/run/nginx.pid'
+ www_user: 'www'
+```
+
+### Tasks
+
+The main tasks file sets up nginx and the global configuration common to all virtual hosts:
+
+``` yaml
+---
+- include_vars: '{{ ansible_distribution }}.yaml'
+
+- name: 'Install nginx'
+ package:
+ name:
+ - 'nginx'
+
+- name: 'Make nginx vhost directory'
+ file:
+ path: '{{ nginx.etc_dir }}/vhost.d'
+ mode: '0755'
+ owner: 'root'
+ state: 'directory'
+
+- name: 'Deploy nginx configuration files'
+ copy:
+ src: '{{ item }}'
+ dest: '{{ nginx.etc_dir }}/{{ item }}'
+ notify: 'reload nginx'
+ loop:
+ - 'headers_base.conf'
+ - 'headers_secure.conf'
+ - 'headers_static.conf'
+ - 'headers_unsafe_inline_csp.conf'
+
+- name: 'Deploy nginx configuration template'
+ template:
+ src: 'nginx.conf'
+ dest: '{{ nginx.etc_dir }}/'
+ notify: 'reload nginx'
+
+- name: 'Deploy nginx certificates'
+ copy:
+ src: '{{ item }}'
+ dest: '{{ nginx.etc_dir }}/'
+ notify: 'reload nginx'
+ loop:
+ - 'adyxax.org.fullchain'
+ - 'adyxax.org.key'
+ - 'dh4096.pem'
+
+- name: 'Start nginx and activate it on boot'
+ service:
+ name: 'nginx'
+ enabled: true
+ state: 'started'
+```
+
+I have a `vhost.yaml` task file which currently simply deploys a file and reload nginx:
+
+``` yaml
+- name: 'Deploy {{ vhost.name }} vhost {{ vhost.path }}'
+ template:
+ src: '{{ vhost.path }}'
+ dest: '{{ nginx.etc_dir }}/vhost.d/{{ vhost.name }}.conf'
+ notify: 'reload nginx'
+```
+
+### Handlers
+
+There is a single `main.yaml` handler:
+
+``` yaml
+---
+- name: 'reload nginx'
+ service:
+ name: 'nginx'
+ state: 'reloaded'
+```
+
+### Files
+
+I deploy four configuration files in this role. These are all variants of the same theme and their purpose is just to prevent duplicating statements in the virtual hosts configuration files.
+
+`headers_base.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+add_header X-Frame-Options deny;
+add_header X-XSS-Protection "1; mode=block";
+add_header X-Content-Type-Options nosniff;
+add_header Referrer-Policy strict-origin;
+add_header Cache-Control no-transform;
+add_header Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=()";
+# 6 months HSTS pinning
+add_header Strict-Transport-Security max-age=16000000;
+```
+
+`headers_secure.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+include headers_base.conf;
+add_header Content-Security-Policy "script-src 'self'";
+```
+
+`headers_static.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+include headers_secure.conf;
+# Infinite caching
+add_header Cache-Control "public, max-age=31536000, immutable";
+```
+
+`headers_unsafe_inline_csp.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+include headers_base.conf;
+add_header Content-Security-Policy "script-src 'self' 'unsafe-inline'";
+```
+
+### Templates
+
+I have a single template for `nginx.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+user {{ nginx.www_user }};
+worker_processes auto;
+pid {{ nginx.pid_file }};
+error_log /var/log/nginx/error.log;
+
+events {
+ worker_connections 1024;
+}
+
+http {
+ include mime.types;
+ types_hash_max_size 4096;
+ sendfile on;
+ tcp_nopush on;
+ tcp_nodelay on;
+ keepalive_timeout 65;
+
+ ssl_protocols TLSv1.2 TLSv1.3;
+ ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
+ ssl_prefer_server_ciphers on;
+
+ gzip on;
+ gzip_static on;
+ gzip_vary on;
+ gzip_comp_level 5;
+ gzip_min_length 256;
+ gzip_proxied expired no-cache no-store private auth;
+ gzip_types application/atom+xml application/geo+json application/javascript application/json application/ld+json application/manifest+json application/rdf+xml application/vnd.ms-fontobject application/wasm application/x-rss+xml application/x-web-app-manifest+json application/xhtml+xml application/xliff+xml application/xml font/collection font/otf font/ttf image/bmp image/svg+xml image/vnd.microsoft.icon text/cache-manifest text/calendar text/css text/csv text/javascript text/markdown text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/xml;
+
+ proxy_redirect off;
+ proxy_connect_timeout 60s;
+ proxy_send_timeout 60s;
+ proxy_read_timeout 60s;
+ proxy_http_version 1.1;
+ proxy_set_header "Connection" "";
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+ proxy_set_header X-Forwarded-Host $host;
+ proxy_set_header X-Forwarded-Server $host;
+
+ map $http_upgrade $connection_upgrade {
+ default upgrade;
+ '' close;
+ }
+
+ client_max_body_size 40M;
+ server_tokens off;
+ default_type application/octet-stream;
+ access_log /var/log/nginx/access.log;
+
+ fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
+ fastcgi_param QUERY_STRING $query_string;
+ fastcgi_param REQUEST_METHOD $request_method;
+ fastcgi_param CONTENT_TYPE $content_type;
+ fastcgi_param CONTENT_LENGTH $content_length;
+
+ fastcgi_param SCRIPT_NAME $fastcgi_script_name;
+ fastcgi_param REQUEST_URI $request_uri;
+ fastcgi_param DOCUMENT_URI $document_uri;
+ fastcgi_param DOCUMENT_ROOT $document_root;
+ fastcgi_param SERVER_PROTOCOL $server_protocol;
+ fastcgi_param REQUEST_SCHEME $scheme;
+ fastcgi_param HTTPS $https if_not_empty;
+
+ fastcgi_param GATEWAY_INTERFACE CGI/1.1;
+ fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
+
+ fastcgi_param REMOTE_ADDR $remote_addr;
+ fastcgi_param REMOTE_PORT $remote_port;
+ fastcgi_param REMOTE_USER $remote_user;
+ fastcgi_param SERVER_ADDR $server_addr;
+ fastcgi_param SERVER_PORT $server_port;
+ fastcgi_param SERVER_NAME $server_name;
+
+ # PHP only, required if PHP was built with --enable-force-cgi-redirect
+ fastcgi_param REDIRECT_STATUS 200;
+
+ uwsgi_param QUERY_STRING $query_string;
+ uwsgi_param REQUEST_METHOD $request_method;
+ uwsgi_param CONTENT_TYPE $content_type;
+ uwsgi_param CONTENT_LENGTH $content_length;
+
+ uwsgi_param REQUEST_URI $request_uri;
+ uwsgi_param PATH_INFO $document_uri;
+ uwsgi_param DOCUMENT_ROOT $document_root;
+ uwsgi_param SERVER_PROTOCOL $server_protocol;
+ uwsgi_param REQUEST_SCHEME $scheme;
+ uwsgi_param HTTPS $https if_not_empty;
+
+ uwsgi_param REMOTE_ADDR $remote_addr;
+ uwsgi_param REMOTE_PORT $remote_port;
+ uwsgi_param SERVER_PORT $server_port;
+ uwsgi_param SERVER_NAME $server_name;
+
+ ssl_dhparam dh4096.pem;
+ ssl_session_cache shared:SSL:2m;
+ ssl_session_timeout 1h;
+ ssl_session_tickets off;
+
+ server {
+ listen 80 default_server;
+ listen [::]:80 default_server;
+ server_name _;
+ access_log off;
+ server_name_in_redirect off;
+ return 444;
+ }
+
+ server {
+ listen 443 ssl;
+ listen [::]:443 ssl;
+ server_name _;
+ access_log off;
+ server_name_in_redirect off;
+ return 444;
+ ssl_certificate adyxax.org.fullchain;
+ ssl_certificate_key adyxax.org.key;
+ }
+
+ include vhost.d/*.conf;
+}
+```
+
+## Usage example
+
+I do not call the role from a playbook; I prefer running the setup from an application's role that relies on nginx, using a `meta/main.yaml` containing something like:
+
+``` yaml
+---
+dependencies:
+ - role: 'borg'
+ - role: 'nginx'
+ - role: 'postgresql'
+```
+
+Then from a tasks file:
+
+``` yaml
+- include_role:
+ name: 'nginx'
+ tasks_from: 'vhost'
+ vars:
+ vhost:
+ name: 'www'
+ path: 'roles/www.adyxax.org/files/nginx-vhost.conf'
+```
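+
+For reference, such a vhost file could look something like this minimal sketch (the document root is an assumption):
+
+``` nginx
+server {
+    listen 443 ssl;
+    listen [::]:443 ssl;
+    server_name www.adyxax.org;
+
+    ssl_certificate adyxax.org.fullchain;
+    ssl_certificate_key adyxax.org.key;
+
+    include headers_static.conf;
+
+    root /srv/www;
+}
+```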
+
+I did not find an elegant way to pass a file path local to one role to another. Because of that, here I just specify the full vhost file path complete with the `roles/` prefix.
+
+## Conclusion
+
+If you have an elegant idea for passing the local file path from one role to another do not hesitate to ping me!
diff --git a/content/blog/ansible/podman-ansible-role.md b/content/blog/ansible/podman-ansible-role.md
new file mode 100644
index 0000000..37cdabf
--- /dev/null
+++ b/content/blog/ansible/podman-ansible-role.md
@@ -0,0 +1,307 @@
+---
+title: 'Podman ansible role'
+description: 'The ansible role I use to manage my podman containers'
+date: '2024-11-08'
+tags:
+- ansible
+- podman
+---
+
+## Introduction
+
+Before succumbing to nixos, I was running all my containers on k3s. This time I am migrating things to podman and trying to achieve a lighter setup. This article presents the ansible role I wrote to manage podman containers.
+
+## The role
+
+### Tasks
+
+The main tasks file sets up podman and the required network configurations with:
+
+``` yaml
+---
+- name: 'Run OS specific tasks for the podman role'
+ include_tasks: '{{ ansible_distribution }}.yaml'
+
+- name: 'Make podman scripts directory'
+ file:
+ path: '/etc/podman'
+ mode: '0700'
+ owner: 'root'
+ state: 'directory'
+
+- name: 'Deploy podman configuration files'
+ copy:
+ src: 'cni-podman0'
+ dest: '/etc/network/interfaces.d/'
+ owner: 'root'
+ mode: '444'
+```
+
+My OS specific task file `Debian.yaml` looks like this:
+
+``` yaml
+---
+- name: 'Install podman dependencies'
+ ansible.builtin.apt:
+ name:
+ - 'buildah'
+ - 'podman'
+ - 'rootlesskit'
+ - 'slirp4netns'
+
+- name: 'Deploy podman configuration files'
+ copy:
+ src: 'podman-bridge.json'
+ dest: '/etc/cni/net.d/87-podman-bridge.conflist'
+ owner: 'root'
+ mode: '444'
+```
+
+The entry point of this role is the `container.yaml` tasks file:
+
+``` yaml
+---
+# Inputs:
+# container:
+# cmd: optional(list(string))
+# env_vars: list(env_var)
+# image: string
+# name: string
+# publishs: list(publish)
+# volumes: list(volume)
+# With:
+# env_var:
+# name: string
+# value: string
+# publish:
+# container_port: string
+# host_port: string
+# ip: string
+# volume:
+# dest: string
+# src: string
+
+- name: 'Deploy podman systemd service for {{ container.name }}'
+ template:
+ src: 'container.service'
+ dest: '/etc/systemd/system/podman-{{ container.name }}.service'
+ owner: 'root'
+ mode: '0444'
+ notify: 'systemctl daemon-reload'
+
+- name: 'Deploy podman scripts for {{ container.name }}'
+ template:
+ src: 'container-{{ item }}.sh'
+ dest: '/etc/podman/{{ container.name }}-{{ item }}.sh'
+ owner: 'root'
+ mode: '0500'
+ register: 'deploy_podman_scripts'
+ loop:
+ - 'start'
+ - 'stop'
+
+- name: 'Restart podman container {{ container.name }}'
+ shell:
+ cmd: "systemctl restart podman-{{ container.name }}"
+ when: 'deploy_podman_scripts.changed'
+
+- name: 'Start podman container {{ container.name }} and activate it on boot'
+ service:
+ name: 'podman-{{ container.name }}'
+ enabled: true
+ state: 'started'
+```
+
+### Handlers
+
+There is a single `main.yaml` handler:
+
+``` yaml
+---
+- name: 'systemctl daemon-reload'
+ shell:
+ cmd: 'systemctl daemon-reload'
+```
+
+### Files
+
+Here is the `cni-podman0` interfaces file I deploy on Debian hosts. It is required for the bridge to be up on boot so that other services can bind ports on it. Without this, the bridge would only come up when the first container starts, which is too late in the boot process.
+
+``` text
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+auto cni-podman0
+iface cni-podman0 inet static
+address 10.88.0.1/16
+pre-up brctl addbr cni-podman0
+post-down brctl delbr cni-podman0
+```
+
+Here is the JSON cni bridge configuration file I use, customized to add IPv6 support:
+
+``` json
+{
+ "cniVersion": "0.4.0",
+ "name": "podman",
+ "plugins": [
+ {
+ "type": "bridge",
+ "bridge": "cni-podman0",
+ "isGateway": true,
+ "ipMasq": true,
+ "hairpinMode": true,
+ "ipam": {
+ "type": "host-local",
+ "routes": [
+ {
+ "dst": "0.0.0.0/0"
+ }, {
+ "dst": "::/0"
+ }
+ ],
+ "ranges": [
+ [{
+ "subnet": "10.88.0.0/16",
+ "gateway": "10.88.0.1"
+ }], [{
+ "subnet": "fd42::/48",
+ "gateway": "fd42::1"
+ }]
+ ]
+ }
+ }, {
+ "type": "portmap",
+ "capabilities": {
+ "portMappings": true
+ }
+ }, {
+ "type": "firewall"
+ }, {
+ "type": "tuning"
+ }
+ ]
+}
+```
+
+### Templates
+
+Here is the jinja templated start bash script:
+
+``` shell
+#!/usr/bin/env bash
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+set -euo pipefail
+
+podman rm -f {{ container.name }} || true
+rm -f /run/podman-{{ container.name }}.ctr-id
+
+exec podman run \
+ --rm \
+ --name={{ container.name }} \
+ --log-driver=journald \
+ --cidfile=/run/podman-{{ container.name }}.ctr-id \
+ --cgroups=no-conmon \
+ --sdnotify=conmon \
+ -d \
+{% for env_var in container.env_vars | default([]) %}
+ -e {{ env_var.name }}={{ env_var.value }} \
+{% endfor %}
+{% for publish in container.publishs | default([]) %}
+ -p {{ publish.ip }}:{{ publish.host_port }}:{{ publish.container_port }} \
+{% endfor %}
+{% for volume in container.volumes | default([]) %}
+ -v {{ volume.src }}:{{ volume.dest }} \
+{% endfor %}
+ {{ container.image }} {% for cmd in container.cmd | default([]) %}{{ cmd }} {% endfor %}
+```
+
+Here is the jinja templated stop bash script:
+
+``` shell
+#!/usr/bin/env bash
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+set -euo pipefail
+
+if [[ ! "$SERVICE_RESULT" = success ]]; then
+ podman stop --ignore --cidfile=/run/podman-{{ container.name }}.ctr-id
+fi
+
+podman rm -f --ignore --cidfile=/run/podman-{{ container.name }}.ctr-id
+```
+
+Here is the jinja templated systemd service unit:
+
+``` ini
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+[Unit]
+After=network-online.target
+Description=Podman container {{ container.name }}
+
+[Service]
+ExecStart=/etc/podman/{{ container.name }}-start.sh
+ExecStop=/etc/podman/{{ container.name }}-stop.sh
+NotifyAccess=all
+Restart=always
+TimeoutStartSec=0
+TimeoutStopSec=120
+Type=notify
+
+[Install]
+WantedBy=multi-user.target
+```
+
+## Usage example
+
+I do not call the role from a playbook; I prefer running the setup from an application's role that relies on podman, using a `meta/main.yaml` containing something like:
+
+``` yaml
+---
+dependencies:
+ - role: 'borg'
+ - role: 'nginx'
+ - role: 'podman'
+```
+
+Then from a tasks file:
+
+``` yaml
+- include_role:
+ name: 'podman'
+ tasks_from: 'container'
+ vars:
+ container:
+ cmd: ['--config-path', '/srv/cfg/conf.php']
+ name: 'privatebin'
+ env_vars:
+ - name: 'PHP_TZ'
+ value: 'Europe/Paris'
+ - name: 'TZ'
+ value: 'Europe/Paris'
+ image: 'docker.io/privatebin/nginx-fpm-alpine:1.7.4'
+ publishs:
+ - container_port: '8080'
+ host_port: '8082'
+ ip: '127.0.0.1'
+ volumes:
+ - dest: '/srv/cfg/conf.php:ro'
+ src: '/etc/privatebin.conf.php'
+ - dest: '/srv/data'
+ src: '/srv/privatebin'
+```
+
+## Conclusion
+
+I enjoy this design, it works really well. I am missing a task for deprovisioning a container but I have not needed it yet.
diff --git a/content/blog/ansible/postgresql-ansible-role.md b/content/blog/ansible/postgresql-ansible-role.md
new file mode 100644
index 0000000..02614c0
--- /dev/null
+++ b/content/blog/ansible/postgresql-ansible-role.md
@@ -0,0 +1,261 @@
+---
+title: 'PostgreSQL ansible role'
+description: 'The ansible role I use to manage my PostgreSQL databases'
+date: '2024-10-09'
+tags:
+- ansible
+- PostgreSQL
+---
+
+## Introduction
+
+Before succumbing to nixos, I had been using an ansible role to manage my PostgreSQL databases. Now that I am in need of it again I refined it a bit: here is the result.
+
+## The role
+
+### Tasks
+
+My `main.yaml` relies on OS specific tasks:
+
+``` yaml
+---
+- name: 'Generate postgres user password'
+ include_tasks: 'generate_password.yaml'
+ vars:
+ name: 'postgres'
+ when: '(ansible_local["postgresql_postgres"]|default({})).password is undefined'
+
+- name: 'Run OS tasks'
+ include_tasks: '{{ ansible_distribution }}.yaml'
+
+- name: 'Start postgresql and activate it on boot'
+ service:
+ name: 'postgresql'
+ enabled: true
+ state: 'started'
+```
+
+Here is an example in `Debian.yaml`:
+
+``` yaml
+---
+- name: 'Install postgresql'
+ package:
+ name:
+ - 'postgresql'
+ - 'python3-psycopg2' # necessary for the ansible postgresql modules
+
+- name: 'Configure postgresql'
+ template:
+ src: 'pg_hba.conf'
+ dest: '/etc/postgresql/15/main/'
+ owner: 'root'
+ group: 'postgres'
+ mode: '0440'
+ notify: 'reload postgresql'
+
+- name: 'Configure postgresql (file that require a restart when modified)'
+ template:
+ src: 'postgresql.conf'
+ dest: '/etc/postgresql/15/main/'
+ owner: 'root'
+ group: 'postgres'
+ mode: '0440'
+ notify: 'restart postgresql'
+
+- meta: 'flush_handlers'
+
+- name: 'Set postgres admin password'
+ shell:
+ cmd: "printf \"ALTER USER postgres WITH PASSWORD '%s';\" \"{{ ansible_local.postgresql_postgres.password }}\" | su -c psql - postgres"
+ when: 'postgresql_password_postgres is defined'
+```
+
+My `generate_password.yaml` will persist a password with a custom fact:
+
+``` yaml
+---
+# Inputs:
+# name: string
+# Outputs:
+# ansible_local["postgresql_" + name].password
+- name: 'Generate a password'
+ set_fact: { "postgresql_password_{{ name }}": "{{ lookup('password', '/dev/null length=32 chars=ascii_letters') }}" }
+
+- name: 'Deploy ansible fact to persist the password'
+ template:
+ src: 'postgresql.fact'
+ dest: '/etc/ansible/facts.d/postgresql_{{ name }}.fact'
+ owner: 'root'
+ mode: '0500'
+ vars:
+ password: "{{ lookup('vars', 'postgresql_password_' + name) }}"
+
+- name: 'reload ansible_local'
+ setup: 'filter=ansible_local'
+```
+
+The main entry point of the role is the `database.yaml` task:
+
+``` yaml
+---
+# Inputs:
+# postgresql:
+# name: string
+# extensions: optional(list(string))
+# Outputs:
+# ansible_local["postgresql_" + postgresql.name].password
+- name: 'Generate {{ postgresql.name }} password'
+ include_tasks: 'generate_password.yaml'
+ vars:
+ name: '{{ postgresql.name }}'
+ when: '(ansible_local["postgresql_" + postgresql.name]|default({})).password is undefined'
+
+- name: 'Create {{ postgresql.name }} user'
+ community.postgresql.postgresql_user:
+ login_host: 'localhost'
+ login_password: '{{ ansible_local.postgresql_postgres.password }}'
+ name: '{{ postgresql.name }}'
+ password: '{{ ansible_local["postgresql_" + postgresql.name].password }}'
+
+- name: 'Create {{ postgresql.name }} database'
+ community.postgresql.postgresql_db:
+ login_host: 'localhost'
+ login_password: '{{ ansible_local.postgresql_postgres.password }}'
+ name: '{{ postgresql.name }}'
+ owner: '{{ postgresql.name }}'
+
+- name: 'Activate {{ postgresql.name }} extensions'
+ community.postgresql.postgresql_ext:
+ db: '{{ postgresql.name }}'
+ login_host: 'localhost'
+ login_password: '{{ ansible_local.postgresql_postgres.password }}'
+ name: '{{ item }}'
+ loop: '{{ postgresql.extensions | default([]) }}'
+```
+
+### Handlers
+
+Here are the two handlers:
+
+``` yaml
+---
+- name: 'reload postgresql'
+ service:
+ name: 'postgresql'
+ state: 'reloaded'
+
+- name: 'restart postgresql'
+ service:
+ name: 'postgresql'
+ state: 'restarted'
+```
+
+### Templates
+
+Here is my usual `pg_hba.conf`:
+
+``` text
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+local all all peer #unix socket
+
+host all all 127.0.0.0/8 scram-sha-256
+host all all ::1/128 scram-sha-256
+host all all 10.88.0.0/16 scram-sha-256 # podman
+```
+
+Here is my `postgresql.conf` for Debian:
+
+``` ini
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+data_directory = '/var/lib/postgresql/15/main' # use data in another directory
+hba_file = '/etc/postgresql/15/main/pg_hba.conf' # host-based authentication file
+ident_file = '/etc/postgresql/15/main/pg_ident.conf' # ident configuration file
+external_pid_file = '/var/run/postgresql/15-main.pid' # write an extra PID file
+
+port = 5432 # (change requires restart)
+max_connections = 100 # (change requires restart)
+
+unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
+listen_addresses = 'localhost,10.88.0.1'
+
+shared_buffers = 128MB # min 128kB
+dynamic_shared_memory_type = posix # the default is usually the first option
+max_wal_size = 1GB
+min_wal_size = 80MB
+log_line_prefix = '%m [%p] %q%u@%d ' # special values:
+log_timezone = 'Europe/Paris'
+cluster_name = '15/main' # added to process titles if nonempty
+datestyle = 'iso, mdy'
+timezone = 'Europe/Paris'
+lc_messages = 'en_US.UTF-8' # locale for system error message
+lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
+lc_numeric = 'en_US.UTF-8' # locale for number formatting
+lc_time = 'en_US.UTF-8' # locale for time formatting
+default_text_search_config = 'pg_catalog.english'
+include_dir = 'conf.d' # include files ending in '.conf' from
+```
+
+And here is the simple fact script:
+
+``` shell
+#!/bin/sh
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+set -eu
+
+printf '{"password": "%s"}' "{{ password }}"
+```
+
+## Usage example
+
+I do not call the role from a playbook; I prefer running the setup from an application's role that relies on postgresql, using a `meta/main.yaml` containing something like:
+
+``` yaml
+---
+dependencies:
+ - role: 'borg'
+ - role: 'postgresql'
+```
+
+Then from a tasks file:
+
+``` yaml
+- include_role:
+ name: 'postgresql'
+ tasks_from: 'database'
+ vars:
+ postgresql:
+ extensions:
+ - 'pgcrypto'
+ name: 'eventline'
+```
+
+Backup jobs can be setup with:
+
+``` yaml
+- include_role:
+ name: 'borg'
+ tasks_from: 'client'
+ vars:
+ client:
+ jobs:
+ - name: 'postgres'
+ command_to_pipe: "su - postgres -c '/usr/bin/pg_dump -b -c -C -d eventline'"
+ name: 'eventline'
+ server: '{{ eventline_adyxax_org.borg }}'
+```
+
+## Conclusion
+
+I enjoy this design, it has served me well.
diff --git a/content/blog/aws/ansible-fact-metadata.md b/content/blog/aws/ansible-fact-metadata.md
new file mode 100644
index 0000000..3c48f1c
--- /dev/null
+++ b/content/blog/aws/ansible-fact-metadata.md
@@ -0,0 +1,88 @@
+---
+title: 'Shell script for gathering imdsv2 instance metadata on AWS ec2'
+description: 'An ansible fact I wrote'
+date: '2024-10-12'
+tags:
+- ansible
+- aws
+---
+
+## Introduction
+
+I wrote a shell script that gathers ec2 instance metadata and exposes it as an ansible fact.
+
+## The script
+
+I am using POSIX `/bin/sh` because I wanted to support a variety of operating systems. Besides that, the only dependency is `curl`:
+
+``` shell
+#!/bin/sh
+set -eu
+
+metadata() {
+ local METHOD=$1
+ local URI_PATH=$2
+ local TOKEN="${3:-}"
+ local HEADER
+ if [ -z "${TOKEN}" ]; then
+ HEADER='X-aws-ec2-metadata-token-ttl-seconds: 21600' # request a 6 hours token
+ else
+        HEADER="X-aws-ec2-metadata-token: ${TOKEN}"
+ fi
+ curl -sSfL --request "${METHOD}" \
+ "http://169.254.169.254/latest${URI_PATH}" \
+ --header "${HEADER}"
+}
+
+METADATA_TOKEN=$(metadata PUT /api/token)
+KEYS=$(metadata GET /meta-data/tags/instance "${METADATA_TOKEN}")
+PREFIX='{'
+for KEY in $KEYS; do
+ VALUE=$(metadata GET "/meta-data/tags/instance/${KEY}" "${METADATA_TOKEN}")
+ printf '%s"%s":"%s"' "${PREFIX}" "${KEY}" "${VALUE}"
+ PREFIX=','
+done
+printf '}'
+```
+
+## Bonus version without depending on curl
+
+Depending on curl can be avoided. If you are willing to use netcat instead and be declared a madman by your colleagues, you can rewrite the function with:
+
+``` shell
+metadata() {
+ local METHOD=$1
+ local URI_PATH=$2
+ local TOKEN="${3:-}"
+ local HEADER
+ if [ -z "${TOKEN}" ]; then
+ HEADER='X-aws-ec2-metadata-token-ttl-seconds: 21600' # request a 6 hours token
+ else
+        HEADER="X-aws-ec2-metadata-token: ${TOKEN}"
+ fi
+ printf "${METHOD} /latest${URI_PATH} HTTP/1.0\r\n%s\r\n\r\n" \
+ "${HEADER}" \
+ | nc -w 5 169.254.169.254 80 | tail -n 1
+}
+```
+
+## Deploying an ansible fact
+
+I deploy the script this way:
+``` yaml
+- name: 'Deploy ec2 metadata fact gathering script'
+ copy:
+ src: 'ec2_metadata.sh'
+ dest: '/etc/ansible/facts.d/ec2_metadata.fact'
+ owner: 'root'
+ mode: '0500'
+ register: 'ec2_metadata_fact'
+
+- name: 'reload facts'
+ setup: 'filter=ansible_local'
+ when: 'ec2_metadata_fact.changed'
+```
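+
+The instance tags are then available to other tasks under `ansible_local.ec2_metadata`; for example (assuming a hypothetical `environment` instance tag):
+
+``` yaml
+- name: 'Show the environment tag of this instance'
+  debug:
+    msg: '{{ ansible_local.ec2_metadata.environment }}'
+```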
+
+## Conclusion
+
+It works, is simple and I like it. I am happy!
diff --git a/content/blog/aws/defaults.md b/content/blog/aws/defaults.md
new file mode 100644
index 0000000..454b325
--- /dev/null
+++ b/content/blog/aws/defaults.md
@@ -0,0 +1,254 @@
+---
+title: Securing AWS default VPCs
+description: With terraform/OpenTofu
+date: 2024-09-10
+tags:
+- aws
+- OpenTofu
+- terraform
+---
+
+## Introduction
+
+AWS offers some network conveniences in the form of a default VPC, default security group (allowing access to the internet) and default routing table. These exist in all AWS regions your accounts have access to, even if you never plan to deploy anything there. And yes, most AWS regions cannot be disabled entirely; only the most recent ones can be.
+
+I feel the need to clean up these resources in order to prevent any misuse. Most people do not understand networking and some could inadvertently spawn instances with public IP addresses. Making the default VPC inoperative forces these people to come to someone more knowledgeable before they do anything foolish.
+
+## Module
+
+The special default variants of the following AWS terraform resources are quirky: defining them does not create anything but automatically imports the built-in aws resources and then edits their attributes to match your configuration. Furthermore, destroying these resources only removes them from your state.
+
+``` hcl
+resource "aws_default_vpc" "default" {
+ tags = { Name = "default" }
+}
+
+resource "aws_default_security_group" "default" {
+ ingress = []
+ egress = []
+ tags = { Name = "default" }
+ vpc_id = aws_default_vpc.default.id
+}
+
+resource "aws_default_route_table" "default" {
+ default_route_table_id = aws_default_vpc.default.default_route_table_id
+ route = []
+ tags = { Name = "default - empty" }
+}
+```
+
+The key here (and initial motivation for this article) is the `ingress = []` expression syntax (or `egress` or `route`): while these attributes are normally block attributes, you can also use them in a `= []` expression in order to express that you want to enforce the resource not having any ingress, egress or route rules. Defining the resources without any block rules would just leave these attributes untouched.
+
+## Iterating through all the default regions
+
+As I said, most AWS regions cannot be disabled entirely; only the most recent ones can be. It is currently not possible to instantiate terraform providers on the fly, but thankfully it is coming in a future OpenTofu release! In the meantime, we need to resort to these kinds of horrors:
+
+``` hcl
+provider "aws" {
+ alias = "ap-northeast-1"
+ profile = var.environment
+ region = "ap-northeast-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-northeast-2"
+ profile = var.environment
+ region = "ap-northeast-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-northeast-3"
+ profile = var.environment
+ region = "ap-northeast-3"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-south-1"
+ profile = var.environment
+ region = "ap-south-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-southeast-1"
+ profile = var.environment
+ region = "ap-southeast-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-southeast-2"
+ profile = var.environment
+ region = "ap-southeast-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ca-central-1"
+ profile = var.environment
+ region = "ca-central-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-central-1"
+ profile = var.environment
+ region = "eu-central-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-north-1"
+ profile = var.environment
+ region = "eu-north-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-west-1"
+ profile = var.environment
+ region = "eu-west-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-west-2"
+ profile = var.environment
+ region = "eu-west-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-west-3"
+ profile = var.environment
+ region = "eu-west-3"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "sa-east-1"
+ profile = var.environment
+ region = "sa-east-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "us-east-1"
+ profile = var.environment
+ region = "us-east-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "us-east-2"
+ profile = var.environment
+ region = "us-east-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "us-west-1"
+ profile = var.environment
+ region = "us-west-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "us-west-2"
+ profile = var.environment
+ region = "us-west-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+module "ap-northeast-1" {
+ providers = { aws = aws.ap-northeast-1 }
+ source = "../modules/defaults"
+}
+
+module "ap-northeast-2" {
+ providers = { aws = aws.ap-northeast-2 }
+ source = "../modules/defaults"
+}
+
+module "ap-northeast-3" {
+ providers = { aws = aws.ap-northeast-3 }
+ source = "../modules/defaults"
+}
+
+module "ap-south-1" {
+ providers = { aws = aws.ap-south-1 }
+ source = "../modules/defaults"
+}
+
+module "ap-southeast-1" {
+ providers = { aws = aws.ap-southeast-1 }
+ source = "../modules/defaults"
+}
+
+module "ap-southeast-2" {
+ providers = { aws = aws.ap-southeast-2 }
+ source = "../modules/defaults"
+}
+
+module "ca-central-1" {
+ providers = { aws = aws.ca-central-1 }
+ source = "../modules/defaults"
+}
+
+module "eu-central-1" {
+ providers = { aws = aws.eu-central-1 }
+ source = "../modules/defaults"
+}
+
+module "eu-north-1" {
+ providers = { aws = aws.eu-north-1 }
+ source = "../modules/defaults"
+}
+
+module "eu-west-1" {
+ providers = { aws = aws.eu-west-1 }
+ source = "../modules/defaults"
+}
+
+module "eu-west-2" {
+ providers = { aws = aws.eu-west-2 }
+ source = "../modules/defaults"
+}
+
+module "eu-west-3" {
+ providers = { aws = aws.eu-west-3 }
+ source = "../modules/defaults"
+}
+
+module "sa-east-1" {
+ providers = { aws = aws.sa-east-1 }
+ source = "../modules/defaults"
+}
+
+module "us-east-1" {
+ providers = { aws = aws.us-east-1 }
+ source = "../modules/defaults"
+}
+
+module "us-east-2" {
+ providers = { aws = aws.us-east-2 }
+ source = "../modules/defaults"
+}
+
+module "us-west-1" {
+ providers = { aws = aws.us-west-1 }
+ source = "../modules/defaults"
+}
+
+module "us-west-2" {
+ providers = { aws = aws.us-west-2 }
+ source = "../modules/defaults"
+}
+```
+
+## Conclusion
+
+Terraform is absolutely quirky at times, but it is not its fault here: the AWS provider and their magical default resources are.
diff --git a/content/blog/aws/secrets.md b/content/blog/aws/secrets.md
index 476d235..a25f9ef 100644
--- a/content/blog/aws/secrets.md
+++ b/content/blog/aws/secrets.md
@@ -1,10 +1,10 @@
---
title: Managing AWS secrets
-description: with the CLI and with terraform/opentofu
+description: with the CLI and with terraform/OpenTofu
date: 2024-08-13
tags:
- aws
-- opentofu
+- OpenTofu
- terraform
---
diff --git a/content/blog/cloudflare/importing-terraform.md b/content/blog/cloudflare/importing-terraform.md
index 7fc5dfd..1ddb635 100644
--- a/content/blog/cloudflare/importing-terraform.md
+++ b/content/blog/cloudflare/importing-terraform.md
@@ -1,16 +1,16 @@
---
-title: Importing cloudflare DNS records in terraform/opentofu
+title: Importing cloudflare DNS records in terraform/OpenTofu
description: a way to get the records IDs
date: 2024-07-16
tags:
- cloudflare
-- opentofu
+- OpenTofu
- terraform
---
## Introduction
-Managing cloudflare DNS records using terraform/opentofu is easy enough, but importing existing records into your automation is not straightforward.
+Managing cloudflare DNS records using terraform/OpenTofu is easy enough, but importing existing records into your automation is not straightforward.
## The problem
diff --git a/content/blog/debian/ovh-rescue.md b/content/blog/debian/ovh-rescue.md
new file mode 100644
index 0000000..0fefd4d
--- /dev/null
+++ b/content/blog/debian/ovh-rescue.md
@@ -0,0 +1,116 @@
+---
+title: 'Fixing an encrypted Debian system boot'
+description: 'From booting in UEFI mode to legacy BIOS mode'
+date: '2024-09-19'
+tags:
+- Debian
+---
+
+## Introduction
+
+Some time ago, I reinstalled one of my OVH vps instances. I used a virtual machine image of a Debian Linux that I had initially prepared for a GCP host a few months ago. It was set up to boot with UEFI, and I discovered that OVH does not offer UEFI booting (at least on its small VPS offering).
+
+It is a problem because this is a system with an encrypted root partition. In order to boot with an encrypted partition in BIOS mode, grub needs some extra space that it does not need in UEFI mode.
+
+I could rebuild an image from scratch, or I could hop onto an OVH rescue image and fix it. I took the latter approach in order to refresh my rescue skills.
+
+## Mounting the partitions from the rescue image
+
+This system has an encrypted block device holding an LVM set of volumes. Since the rescue image does not have the necessary tools, I installed them with:
+``` shell
+apt update -qq
+apt install -y cryptsetup lvm2
+```
+
+I refreshed my knowledge of the layout with:
+``` shell
+blkid
+fdisk -l /dev/sdb
+```
+
+Opening the encrypted block device is done with:
+``` shell
+cryptsetup luksOpen /dev/sdb3 sda3_crypt
+```
+
+Note that I am working with the sdb device because we are in the OVH rescue image, but it was known as sda during installation. I need to keep the same sda3_crypt mapping name, otherwise grub will mess up when I regenerate its configuration and the system will not reboot properly.
+
+The LVM subsystem now needs to be activated with:
+``` shell
+vgchange -ay vg
+```
+
+Now to mount the partitions and chroot into our system:
+
+``` shell
+mount /dev/vg/root /mnt
+cd /mnt
+mount -R /dev dev
+mount -R /proc proc
+mount -R /sys sys
+chroot ./
+mount /boot
+```
+
+## Replacing the EFI partition with a BIOS boot partition
+
+My system had an EFI partition in /dev/sdb1: this is not suitable for booting a grub2 system to an encrypted volume directly from BIOS. I replaced it with a BIOS boot partition with:
+``` shell
+fdisk /dev/sdb
+Command (m for help): d
+Partition number (1-3, default 3): 1
+Partition 1 has been deleted.
+
+Command (m for help): n
+Partition number (1,4-128, default 1): 1
+First sector (34-41943006, default 2048):
+Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-1050623, default 1050623):
+
+Created a new partition 1 of type 'Linux filesystem' and of size 512 MiB.
+
+Command (m for help): t
+Partition number (1-3, default 3): 1
+Partition type or alias (type L to list all): 4
+w
+```
+
+Reinstalling grub was a matter of:
+``` shell
+apt install grub-pc
+update-grub
+grub-install /dev/sdb
+```
+
+I am not sure whether it was necessary or not but I rebuilt the initramfs in case the set of modules needed by the kernel would be different:
+``` shell
+update-initramfs -u
+```
+
+## Cleanup
+
+Close the chroot session with either `C-d` or the `exit` command. Then unmount all the partitions with:
+``` shell
+cd /
+umount -R -l /mnt
+```
+
+Deactivate the LVM subsystem with:
+``` shell
+vgchange -an
+```
+
+Close the luks volume with:
+``` shell
+cryptsetup luksClose sda3_crypt
+```
+
+Sync all data to disks just in case:
+``` shell
+sync
+```
+
+Then reboot in normal mode from the OVH management webui.
+
+## Conclusion
+
+This was a fun repair operation!
diff --git a/content/blog/kubernetes/dev-shm.md b/content/blog/kubernetes/dev-shm.md
new file mode 100644
index 0000000..9369052
--- /dev/null
+++ b/content/blog/kubernetes/dev-shm.md
@@ -0,0 +1,36 @@
+---
+title: 'How to increase /dev/shm size on kubernetes'
+description: "the equivalent to docker's shm-size flag"
+date: '2024-10-02'
+tags:
+- kubernetes
+---
+
+## Introduction
+
+Today I had to find a way to increase the size of the shared memory filesystem offered to containers for a specific workload. `/dev/shm` is a Linux specific `tmpfs` filesystem that some applications use for inter-process communication. The default size of this filesystem on kubernetes nodes is 64MiB.
+
+Docker has a `--shm-size 1g` flag to specify that. Kubernetes does not offer a direct equivalent, but we can replicate this with volumes.
+
+## Configuration in pod specification
+
+Here are the relevant sections of the spec we need to set:
+``` yaml
+spec:
+  template:
+    spec:
+      containers:
+        - volumeMounts:
+            - mountPath: '/dev/shm'
+              name: 'dev-shm'
+              readOnly: false
+      volumes:
+        - emptyDir:
+            medium: 'Memory'
+            sizeLimit: '1Gi'
+          name: 'dev-shm'
+```
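+
+Once the workload is redeployed, a quick way to verify the new size from inside a running pod (assuming the image ships a shell and `df`; `<pod-name>` is a placeholder):
+``` shell
+kubectl exec -it <pod-name> -- df -h /dev/shm
+```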
+
+## Conclusion
+
+Well it works!
diff --git a/content/blog/miscellaneous/generate-github-access-token-for-github-app.md b/content/blog/miscellaneous/generate-github-access-token-for-github-app.md
new file mode 100644
index 0000000..c08b92f
--- /dev/null
+++ b/content/blog/miscellaneous/generate-github-access-token-for-github-app.md
@@ -0,0 +1,67 @@
+---
+title: Generating a github access token for a github app in bash
+description: A useful script
+date: 2024-08-24
+tags:
+- bash
+- github
+---
+
+## Introduction
+
+Last week I had to find a way to generate a github access token for a github app.
+
+## The problem
+
+Github apps are the newest and recommended way to provide programmatic access to things that need to interact with github. You get some credentials that allow you to authenticate, then generate a JWT, which you can in turn exchange for an access token... Lovely!
+
+When developing an "app", all this complexity mostly makes sense, but when all you want is to run some script it really gets in the way. From my research, most people in this situation give up on github apps and either create a robot account, or bite the bullet and create personal access tokens. The people who resist and try to do the right thing mostly end up with some nodejs and quite a few dependencies.
+
+I needed something simpler.
+
+## The script
+
+I took a lot of inspiration from [this script](https://github.com/Nastaliss/get-github-app-pat/blob/main/generate_github_access_token.sh), cleaned it up and ended up with:
+
+``` shell
+#!/usr/bin/env bash
+# This script generates a github access token. It requires the following
+# environment variables:
+# - GITHUB_APP_ID
+# - GITHUB_APP_INSTALLATION_ID
+# - GITHUB_APP_PRIVATE_KEY
+set -euo pipefail
+
+b64enc() { openssl enc -base64 -A | tr '+/' '-_' | tr -d '='; }
+NOW=$(date +%s)
+
+HEADER=$(printf '{
+ "alg": "RS256",
+ "exp": %d,
+ "iat": %d,
+ "iss": "adyxax",
+ "kid": "0001",
+ "typ": "JWT"
+}' "$((NOW+10))" "${NOW}" | jq -r -c .)
+
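+# Github caps the JWT validity at 10 minutes and tolerates an iat up to 60
+# seconds in the past to account for clock drift, hence the margins below.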
+PAYLOAD=$(printf '{
+ "exp": %s,
+ "iat": %s,
+ "iss": %s
+}' "$((NOW + 10 * 59))" "$((NOW - 10))" "${GITHUB_APP_ID}" | jq -r -c .)
+
+SIGNED_CONTENT=$(printf '%s' "${HEADER}" | b64enc).$(printf '%s' "${PAYLOAD}" | b64enc)
+SIG=$(printf '%s' "${SIGNED_CONTENT}" | \
+ openssl dgst -binary -sha256 -sign <(printf "%s" "${GITHUB_APP_PRIVATE_KEY}") | b64enc)
+JWT=$(printf '%s.%s' "${SIGNED_CONTENT}" "${SIG}")
+
+curl -s --location --request POST \
+ "https://api.github.com/app/installations/${GITHUB_APP_INSTALLATION_ID}/access_tokens" \
+ --header "Authorization: Bearer $JWT" \
+ --header 'Accept: application/vnd.github+json' \
+ --header 'X-GitHub-Api-Version: 2022-11-28' | jq -r '.token'
+```
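+
+For reference, here is how I would invoke it; the identifiers are placeholders and the script name is arbitrary:
+
+``` shell
+export GITHUB_APP_ID='123456'
+export GITHUB_APP_INSTALLATION_ID='12345678'
+export GITHUB_APP_PRIVATE_KEY="$(cat github-app-private-key.pem)"
+GITHUB_TOKEN="$(./generate_github_access_token.sh)"
+# the token can then be used against the API or to clone over https
+git clone "https://x-access-token:${GITHUB_TOKEN}@github.com/<owner>/<repo>.git"
+```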
+
+## Conclusion
+
+It works, is simple and only requires bash, curl, jq and openssl.
diff --git a/content/blog/terraform/acme.md b/content/blog/terraform/acme.md
index f19302b..37045fd 100644
--- a/content/blog/terraform/acme.md
+++ b/content/blog/terraform/acme.md
@@ -1,16 +1,16 @@
---
-title: Certificate management with opentofu and eventline
+title: Certificate management with OpenTofu and eventline
description: How I manage certificates for my personal infrastructure
date: 2024-03-06
tags:
- Eventline
-- opentofu
+- OpenTofu
- terraform
---
## Introduction
-In this article, I will explain how I handle the management and automatic renewal of SSL certificates on my personal infrastructure using opentofu (the fork of terraform) and [eventline](https://www.exograd.com/products/eventline/). I chose to centralise the renewal on my single host running eventline and to generate a single wildcard certificate for each domain I manage.
+In this article, I will explain how I handle the management and automatic renewal of SSL certificates on my personal infrastructure using OpenTofu (the fork of terraform) and [eventline](https://www.exograd.com/products/eventline/). I chose to centralise the renewal on my single host running eventline and to generate a single wildcard certificate for each domain I manage.
## Wildcard certificates
diff --git a/content/blog/terraform/caa.md b/content/blog/terraform/caa.md
index defcd6a..ce6ff37 100644
--- a/content/blog/terraform/caa.md
+++ b/content/blog/terraform/caa.md
@@ -3,7 +3,7 @@ title: CAA DNS records with OpenTofu
description: How I manage which acme CA can issue certificates for me
date: 2024-05-27
tags:
-- opentofu
+- OpenTofu
- terraform
---
diff --git a/content/blog/terraform/chart-http-datasources.md b/content/blog/terraform/chart-http-datasources.md
index ebf0aba..5c4108d 100644
--- a/content/blog/terraform/chart-http-datasources.md
+++ b/content/blog/terraform/chart-http-datasources.md
@@ -1,18 +1,18 @@
---
-title: Manage helm charts extras with opentofu
+title: Manage helm charts extras with OpenTofu
description: a use case for the http datasource
date: 2024-04-25
tags:
- aws
-- opentofu
+- OpenTofu
- terraform
---
## Introduction
-When managing helm charts with opentofu (terraform), you often have to hard code correlated settings for versioning (like app version and chart version). Sometimes it goes even further and you need to fetch a policy or a manifest with some CRDs that the chart will depend on.
+When managing helm charts with OpenTofu (terraform), you often have to hard code correlated settings for versioning (like app version and chart version). Sometimes it goes even further and you need to fetch a policy or a manifest with some CRDs that the chart will depend on.
-Here is an example of how to manage that with opentofu and an http datasource for the AWS load balancer controller.
+Here is an example of how to manage that with OpenTofu and an http datasource for the AWS load balancer controller.
## A word about the AWS load balancer controller
diff --git a/content/blog/terraform/email-dns-unused-zone.md b/content/blog/terraform/email-dns-unused-zone.md
new file mode 100644
index 0000000..e1f9b81
--- /dev/null
+++ b/content/blog/terraform/email-dns-unused-zone.md
@@ -0,0 +1,104 @@
+---
+title: Email DNS records for zones that do not send emails
+description: Automated with terraform/OpenTofu
+date: 2024-09-03
+tags:
+- cloudflare
+- DNS
+- OpenTofu
+- terraform
+---
+
+## Introduction
+
+There are multiple DNS records one needs to configure in order to set up and securely use a domain to send or receive emails: MX, DKIM, DMARC and SPF.
+
+An often overlooked fact is that you also need to configure some of these records even if you do not intend to use a domain to send emails. If you do not, scammers will spoof your domain to send fraudulent emails and your domain's reputation will suffer.
+
+## DNS email records you need
+
+### SPF
+
+The most important record (and the only strictly required one) is a TXT record on the apex of your domain that advertises that no server is allowed to send emails on its behalf:
+```
+"v=spf1 -all"
+```
+
+### MX
+
+If you do not intend to ever send emails, you most likely do not intend to receive emails either. Therefore you should consider removing all MX records from your zone. Oftentimes your registrar will provision some, pointing to a free email infrastructure they provide along with your domain.
+
+### DKIM
+
+You do not need DKIM records if you are not sending emails.
+
+### DMARC
+
+While not strictly necessary, I strongly recommend setting a DMARC record that instructs the world to explicitly reject all emails not matching the SPF policy:
+
+```
+"v=DMARC1;p=reject;sp=reject;pct=100"
+```
+
+## Terraform / OpenTofu code
+
+### Zones
+
+I use a map of simple objects to specify email profiles for my DNS zones:
+``` hcl
+locals {
+ zones = {
+ "adyxax.eu" = { emails = "adyxax" }
+ "adyxax.org" = { emails = "adyxax" }
+ "anne-so-et-julien.fr" = { emails = "no" }
+ }
+}
+
+data "cloudflare_zone" "main" {
+ for_each = local.zones
+
+ name = each.key
+}
+```
+
+### SPF
+
+Then I map each profile to SPF records:
+``` hcl
+locals {
+ spf = {
+ "adyxax" = "v=spf1 mx -all"
+ "no" = "v=spf1 -all"
+ }
+}
+
+resource "cloudflare_record" "spf" {
+ for_each = local.zones
+
+ name = "@"
+ type = "TXT"
+ value = local.spf[each.value.emails]
+ zone_id = data.cloudflare_zone.main[each.key].id
+}
+```
+
+### DMARC
+
+The same mapping system used for SPF could be applied here too, but I chose to keep things simple and within the scope of this article. My real setup has some clever tricks to centralize DMARC reports on a single domain; that will be the subject of another post:
+
+``` hcl
+resource "cloudflare_record" "dmarc" {
+ for_each = { for name, info in local.zones :
+ name => info if info.emails == "no"
+ }
+
+ name = "_dmarc"
+ type = "TXT"
+ value = "v=DMARC1;p=reject;sp=reject;pct=100"
+ zone_id = data.cloudflare_zone.main[each.key].id
+}
+```
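+
+Once applied, a quick sanity check with `dig` (using one of the zones above as an example):
+``` shell
+dig +short TXT anne-so-et-julien.fr
+dig +short TXT _dmarc.anne-so-et-julien.fr
+```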
+
+## Conclusion
+
+Please keep your email DNS records tight and secure!
diff --git a/content/blog/terraform/tofu.md b/content/blog/terraform/tofu.md
index 48ec621..b52b97f 100644
--- a/content/blog/terraform/tofu.md
+++ b/content/blog/terraform/tofu.md
@@ -1,20 +1,20 @@
---
-title: Testing opentofu
+title: Testing OpenTofu
description: Little improvements and what it means for small providers like mine
date: 2024-01-31
tags:
- Eventline
-- opentofu
+- OpenTofu
- terraform
---
## Introduction
-This January, the opentofu project announced the general availability of their terraform fork. Not much changes for now between terraform and opentofu (and that is a good thing!), as far as I can tell the announcement was mostly about the new provider registry and of course the truly open source license.
+This January, the OpenTofu project announced the general availability of their terraform fork. Not much changes for now between terraform and OpenTofu (and that is a good thing!); as far as I can tell, the announcement was mostly about the new provider registry and of course the truly open source license.
## Registry change
-The opentofu registry already has all the providers you are accustomed to, but your state will need to be migrated with:
+The OpenTofu registry already has all the providers you are accustomed to, but your state will need to be migrated with:
```sh
tofu init -upgrade
```
@@ -24,19 +24,19 @@ For some providers you might encounter the following warning:
- Installed cloudflare/cloudflare v4.23.0. Signature validation was skipped due to the registry not containing GPG keys for this provider
```
-This is harmless and will resolve itself when the providers' developers provide the public GPG key used to sign their releases to the opentofu registry. The process is very simple thanks to their GitHub workflow automation.
+This is harmless and will resolve itself when the providers' developers provide the public GPG key used to sign their releases to the OpenTofu registry. The process is very simple thanks to their GitHub workflow automation.
## Little improvements
- `tofu init` seems significantly faster than `terraform init`.
-- You never could interrupt a terraform plan with `C-C`. I am so very glad to see that it is not a problem with opentofu! This really needs more advertising: proper Unix signal handling is like a superpower that is too often ignored by modern software.
-- `tofu test` can be used to assert things about your state and your configuration. I did not play with it yet but it opens [a whole new realm of possibilities](https://opentofu.org/docs/cli/commands/test/)!
+- You never could interrupt a terraform plan with `C-C`. I am so very glad to see that it is not a problem with OpenTofu! This really needs more advertising: proper Unix signal handling is like a superpower that is too often ignored by modern software.
+- `tofu test` can be used to assert things about your state and your configuration. I did not play with it yet but it opens [a whole new realm of possibilities](https://opentofu.org/docs/cli/commands/test/)!
- `tofu import` can use expressions referencing other values or resources attributes, this is a big deal when handling massive imports!
## Eventline terraform provider
-I did the required pull requests on the [opentofu registry](https://github.com/opentofu/registry) to have my [Eventline provider](https://github.com/adyxax/terraform-provider-eventline) all fixed up and ready to rock!
+I did the required pull requests on the [OpenTofu registry](https://github.com/opentofu/registry) to have my [Eventline provider](https://github.com/adyxax/terraform-provider-eventline) all fixed up and ready to rock!
## Conclusion
-I hope opentofu really takes off, the little improvements they made already feel like a breath of fresh air. Terraform could be so much more!
+I hope OpenTofu really takes off, the little improvements they made already feel like a breath of fresh air. Terraform could be so much more!