Diffstat
-rw-r--r-- assets/base.css | 2
-rw-r--r-- content/blog/ansible/factorio.md | 265
-rw-r--r-- content/blog/aws/defaults.md | 254
-rw-r--r-- content/blog/aws/secrets.md | 136
-rw-r--r-- content/blog/cloudflare/importing-terraform.md | 59
-rw-r--r-- content/blog/debian/ovh-rescue.md | 116
-rw-r--r-- content/blog/miscellaneous/generate-github-access-token-for-github-app.md | 67
-rw-r--r-- content/blog/terraform/caa.md | 20
-rw-r--r-- content/blog/terraform/email-dns-unused-zone.md | 104
-rw-r--r-- content/books/misc/making-it-so.md | 7
-rw-r--r-- content/books/stormlight_archive/the-way-of-kings-audiobook.md | 9
-rw-r--r-- content/books/stormlight_archive/words-of-radiance-audiobook.md | 9
-rw-r--r-- search/go.mod | 2
13 files changed, 1039 insertions, 11 deletions
diff --git a/assets/base.css b/assets/base.css
index 94cfb9c..24774b6 100644
--- a/assets/base.css
+++ b/assets/base.css
@@ -147,7 +147,9 @@ body header nav ul li a,
body header nav ul li a:visited,
a:hover {
color: var(--red);
+ text-wrap: balance;
}
h2, h3, h4, h5, h6 {
color: var(--green);
+ text-wrap: balance;
}
diff --git a/content/blog/ansible/factorio.md b/content/blog/ansible/factorio.md
new file mode 100644
index 0000000..08e2827
--- /dev/null
+++ b/content/blog/ansible/factorio.md
@@ -0,0 +1,265 @@
+---
+title: 'How to self host a Factorio headless server'
+description: 'Automated with ansible'
+date: '2024-09-25'
+tags:
+- ansible
+- Debian
+- Factorio
+---
+
+## Introduction
+
+With the upcoming v2.0 release next month, we decided to try a [seablock](https://mods.factorio.com/mod/SeaBlock) run with a friend and see how far we get in that time frame. Here is the small ansible role I wrote to deploy this. It targets a Debian server, but any Linux distribution with systemd will do. And if you ignore the service unit file, any Linux or even [FreeBSD](factorio-server-in-a-linux-jail.md) will do.
+
+## Tasks
+
+This role has a single `tasks/main.yaml` file containing the following.
+
+### User
+
+This is fairly standard:
+``` yaml
+- name: 'Create factorio group'
+ group:
+ name: 'factorio'
+ system: 'yes'
+
+- name: 'Create factorio user'
+ user:
+ name: 'factorio'
+ group: 'factorio'
+ shell: '/usr/bin/bash'
+ home: '/srv/factorio'
+ createhome: 'yes'
+ system: 'yes'
+ password: '*'
+```
+
+### Factorio
+
+Factorio has an API endpoint that provides information about its latest releases, which I query and then parse with:
+``` yaml
+- name: 'Retrieve factorio latest release number'
+ shell:
+ cmd: "curl -s https://factorio.com/api/latest-releases | jq -r '.stable.headless'"
+ register: 'factorio_version_info'
+ changed_when: False
+
+- set_fact:
+ factorio_version: '{{ factorio_version_info.stdout_lines[0] }}'
+```
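+
+If you would rather not shell out to curl and jq, the same lookup can be done with ansible's uri module. A minimal sketch, assuming a json response identical to the one above:
+``` yaml
+- name: 'Retrieve factorio latest release number'
+  ansible.builtin.uri:
+    url: 'https://factorio.com/api/latest-releases'
+    return_content: true
+  register: 'factorio_releases'
+
+- set_fact:
+    factorio_version: '{{ factorio_releases.json.stable.headless }}'
+```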
+
+Afterwards, it is just a question of downloading and extracting factorio:
+``` yaml
+- name: 'Download factorio'
+ get_url:
+ url: "https://www.factorio.com/get-download/{{ factorio_version }}/headless/linux64"
+ dest: '/srv/factorio/headless-{{ factorio_version }}.zip'
+ mode: '0444'
+ register: 'factorio_downloaded'
+
+- name: 'Extract new factorio version'
+ ansible.builtin.unarchive:
+ src: '/srv/factorio/headless-{{ factorio_version }}.zip'
+ dest: '/srv/factorio'
+ owner: 'factorio'
+ group: 'factorio'
+ remote_src: 'yes'
+ notify: 'restart factorio'
+ when: 'factorio_downloaded.changed'
+```
+
+I also create the saves directory with:
+``` yaml
+- name: 'Make factorio saves directory'
+ file:
+ path: '/srv/factorio/factorio/saves'
+ owner: 'factorio'
+ group: 'factorio'
+ mode: '0750'
+ state: 'directory'
+```
+
+### Configuration files
+
+There are two configuration files to copy from the `files` folder:
+``` yaml
+- name: 'Deploy configuration files'
+ copy:
+ src: '{{ item.src }}'
+ dest: '{{ item.dest }}'
+ owner: 'factorio'
+ group: 'factorio'
+ mode: '0440'
+ notify:
+ - 'systemctl daemon-reload'
+ - 'restart factorio'
+ loop:
+ - { src: 'factorio.service', dest: '/etc/systemd/system/' }
+ - { src: 'server-adminlist.json', dest: '/srv/factorio/factorio/' }
+```
+
+The systemd service unit file contains:
+``` ini
+[Unit]
+Description=Factorio Headless Server
+After=network.target
+After=systemd-user-sessions.service
+After=network-online.target
+
+[Service]
+Type=simple
+User=factorio
+ExecStart=/srv/factorio/factorio/bin/x64/factorio --start-server game.zip
+WorkingDirectory=/srv/factorio/factorio
+
+[Install]
+WantedBy=multi-user.target
+```
+
+The admin list is simply:
+
+``` json
+["adyxax"]
+```
+
+I generate the factorio game password with terraform/OpenTofu using a resource like:
+
+``` hcl
+resource "random_password" "factorio" {
+ length = 16
+
+ lifecycle {
+ ignore_changes = [
+ length,
+ lower,
+ ]
+ }
+}
+```
+
+This allows the password to persist in the terraform state, which is a good thing. For simplicity, let's say that this state (which is a json file) lives in a local file that I can load with:
+``` yaml
+- name: 'Load the tofu state to read the factorio game password'
+ include_vars:
+ file: '../../../../adyxax.org/01-legacy/terraform.tfstate'
+ name: 'tofu_state_legacy'
+```
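+
+For reference, the part of the state file that matters here looks roughly like this (abridged and illustrative, the real file holds much more):
+``` json
+{
+  "resources": [
+    {
+      "type": "random_password",
+      "name": "factorio",
+      "instances": [
+        { "attributes": { "result": "..." } }
+      ]
+    }
+  ]
+}
+```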
+
+Given this template file:
+``` json
+{
+ "name": "Normalians",
+ "description": "C'est sur ce serveur que jouent les beaux gosses",
+ "tags": ["game", "tags"],
+ "max_players": 0,
+ "visibility": {
+ "public": false,
+ "lan": false
+ },
+ "username": "",
+ "password": "",
+ "token": "",
+ "game_password": "{{ factorio_game_password[0] }}",
+ "require_user_verification": false,
+ "max_upload_in_kilobytes_per_second": 0,
+ "max_upload_slots": 5,
+ "minimum_latency_in_ticks": 0,
+ "max_heartbeats_per_second": 60,
+ "ignore_player_limit_for_returning_players": false,
+ "allow_commands": "admins-only",
+ "autosave_interval": 10,
+ "autosave_slots": 5,
+ "afk_autokick_interval": 0,
+ "auto_pause": true,
+ "only_admins_can_pause_the_game": true,
+ "autosave_only_on_server": true,
+ "non_blocking_saving": true,
+ "minimum_segment_size": 25,
+ "minimum_segment_size_peer_count": 20,
+ "maximum_segment_size": 100,
+ "maximum_segment_size_peer_count": 10
+}
+```
+
+Note the usage of `[0]` for the variable expansion: it is a disappointing trick you have to remember when parsing json with ansible's filters, as these always return an array. The template invocation is:
+``` yaml
+- name: 'Deploy configuration templates'
+ template:
+ src: 'server-settings.json'
+ dest: '/srv/factorio/factorio/'
+ owner: 'factorio'
+ group: 'factorio'
+ mode: '0440'
+ notify: 'restart factorio'
+ vars:
+ factorio_game_password: "{{ tofu_state_legacy | json_query(\"resources[?type=='random_password'&&name=='factorio'].instances[0].attributes.result\") }}"
+```
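+
+If you want to see this list behaviour for yourself, a quick debug task (a sketch) prints the raw, unindexed filter output:
+``` yaml
+- debug:
+    msg: "{{ tofu_state_legacy | json_query(\"resources[?type=='random_password'&&name=='factorio'].instances[0].attributes.result\") }}"
+```
+It outputs a single-element list, hence the `[0]` in the template.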
+
+### Service
+
+Finally I start and activate the factorio service on boot:
+``` yaml
+- name: 'Start factorio and activate it on boot'
+ service:
+ name: 'factorio'
+ enabled: 'yes'
+ state: 'started'
+```
+
+### Backups
+
+I invoke a personal borg role to configure my backups. I will detail the workings of this role in a future article:
+``` yaml
+- include_role:
+ name: 'borg'
+ tasks_from: 'client'
+ vars:
+ client:
+ jobs:
+ - name: 'save'
+ paths:
+ - '/srv/factorio/factorio/saves/game.zip'
+ name: 'factorio'
+ server: '{{ factorio.borg }}'
+```
+
+## Handlers
+
+I have these two handlers:
+
+``` yaml
+---
+- name: 'systemctl daemon-reload'
+ shell:
+ cmd: 'systemctl daemon-reload'
+
+- name: 'restart factorio'
+ service:
+ name: 'factorio'
+ state: 'restarted'
+```
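+
+If you prefer avoiding shell, the daemon-reload handler could also use the systemd module; a sketch, assuming a reasonably recent ansible:
+``` yaml
+- name: 'systemctl daemon-reload'
+  ansible.builtin.systemd:
+    daemon_reload: true
+```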
+
+## Generating a map and starting the game
+
+If you just followed this guide, factorio failed to start on the server because it does not have a map in its saves folder. If that is not the case for you because you are coming back to this article after some time, remember to stop factorio with `systemctl stop factorio` before continuing. If you do not, factorio will overwrite your newly uploaded save when it next restarts.
+
+Launch factorio locally, install any mod you want then go to single player and generate a new map with your chosen settings. Save the game then quit and go back to your terminal.
+
+Find the save file (if playing on steam it will be in `~/.factorio/saves/`) and upload it to `/srv/factorio/factorio/saves/game.zip`. If you are using mods, `rsync` the mods folder that lives next to your saves directory to the server with:
+
+``` shell
+rsync -r ~/.factorio/mods/ root@factorio.adyxax.org:/srv/factorio/factorio/mods/
+```
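+
+For the save file itself, a single `scp` does the job; a sketch, assuming a local save named `seablock.zip`:
+
+``` shell
+scp ~/.factorio/saves/seablock.zip root@factorio.adyxax.org:/srv/factorio/factorio/saves/game.zip
+```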
+
+Then give these files to the factorio user on your server before restarting the game:
+
+``` shell
+chown -R factorio:factorio /srv/factorio
+systemctl start factorio
+```
+
+## Conclusion
+
+Good luck and have fun!
diff --git a/content/blog/aws/defaults.md b/content/blog/aws/defaults.md
new file mode 100644
index 0000000..9fdbfa3
--- /dev/null
+++ b/content/blog/aws/defaults.md
@@ -0,0 +1,254 @@
+---
+title: Securing AWS default VPCs
+description: With terraform/opentofu
+date: 2024-09-10
+tags:
+- aws
+- opentofu
+- terraform
+---
+
+## Introduction
+
+AWS offers some network conveniences in the form of a default VPC, a default security group (allowing access to the internet) and a default routing table. These exist in all AWS regions your accounts have access to, even if you never plan to deploy anything there. And yes, most AWS regions cannot be disabled entirely; only the most recent ones can be.
+
+I feel the need to clean up these resources in order to prevent any misuse. Most people do not understand networking, and some could inadvertently spawn instances with public IP addresses. By making the default VPC inoperative, I force these people to come to someone more knowledgeable before they do anything foolish.
+
+## Module
+
+The special default variants of the following AWS terraform resources are quirky: defining them does not create anything, but instead automatically imports the built-in aws resources and then edits their attributes to match your configuration. Furthermore, destroying these resources only removes them from your state.
+
+``` hcl
+resource "aws_default_vpc" "default" {
+ tags = { Name = "default" }
+}
+
+resource "aws_default_security_group" "default" {
+ ingress = []
+ egress = []
+ tags = { Name = "default" }
+ vpc_id = aws_default_vpc.default.id
+}
+
+resource "aws_default_route_table" "default" {
+ default_route_table_id = aws_default_vpc.default.default_route_table_id
+ route = []
+ tags = { Name = "default - empty" }
+}
+```
+
+The key here (and the initial motivation for this article) is the `ingress = []` expression syntax (same for `egress` and `route`): while these attributes are normally block attributes, you can also assign them an empty list to enforce that the resource has no ingress, egress or route rules at all. Defining the resources without any such blocks would simply leave these attributes untouched.
+
+## Iterating through all the default regions
+
+As I said, most AWS regions cannot be disabled entirely; only the most recent ones can be. It is currently not possible to instantiate terraform providers on the fly, but thankfully this is coming in a future OpenTofu release! In the meantime, we need to resort to these kinds of horrors:
+
+``` hcl
+provider "aws" {
+ alias = "ap-northeast-1"
+ profile = var.environment
+ region = "ap-northeast-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-northeast-2"
+ profile = var.environment
+ region = "ap-northeast-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-northeast-3"
+ profile = var.environment
+ region = "ap-northeast-3"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-south-1"
+ profile = var.environment
+ region = "ap-south-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-southeast-1"
+ profile = var.environment
+ region = "ap-southeast-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-southeast-2"
+ profile = var.environment
+ region = "ap-southeast-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ca-central-1"
+ profile = var.environment
+ region = "ca-central-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-central-1"
+ profile = var.environment
+ region = "eu-central-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-north-1"
+ profile = var.environment
+ region = "eu-north-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-west-1"
+ profile = var.environment
+ region = "eu-west-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-west-2"
+ profile = var.environment
+ region = "eu-west-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-west-3"
+ profile = var.environment
+ region = "eu-west-3"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "sa-east-1"
+ profile = var.environment
+ region = "sa-east-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "us-east-1"
+ profile = var.environment
+ region = "us-east-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "us-east-2"
+ profile = var.environment
+ region = "us-east-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "us-west-1"
+ profile = var.environment
+ region = "us-west-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "us-west-2"
+ profile = var.environment
+ region = "us-west-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+module "ap-northeast-1" {
+ providers = { aws = aws.ap-northeast-1 }
+ source = "../modules/defaults"
+}
+
+module "ap-northeast-2" {
+ providers = { aws = aws.ap-northeast-2 }
+ source = "../modules/defaults"
+}
+
+module "ap-northeast-3" {
+ providers = { aws = aws.ap-northeast-3 }
+ source = "../modules/defaults"
+}
+
+module "ap-south-1" {
+ providers = { aws = aws.ap-south-1 }
+ source = "../modules/defaults"
+}
+
+module "ap-southeast-1" {
+ providers = { aws = aws.ap-southeast-1 }
+ source = "../modules/defaults"
+}
+
+module "ap-southeast-2" {
+ providers = { aws = aws.ap-southeast-2 }
+ source = "../modules/defaults"
+}
+
+module "ca-central-1" {
+ providers = { aws = aws.ca-central-1 }
+ source = "../modules/defaults"
+}
+
+module "eu-central-1" {
+ providers = { aws = aws.eu-central-1 }
+ source = "../modules/defaults"
+}
+
+module "eu-north-1" {
+ providers = { aws = aws.eu-north-1 }
+ source = "../modules/defaults"
+}
+
+module "eu-west-1" {
+ providers = { aws = aws.eu-west-1 }
+ source = "../modules/defaults"
+}
+
+module "eu-west-2" {
+ providers = { aws = aws.eu-west-2 }
+ source = "../modules/defaults"
+}
+
+module "eu-west-3" {
+ providers = { aws = aws.eu-west-3 }
+ source = "../modules/defaults"
+}
+
+module "sa-east-1" {
+ providers = { aws = aws.sa-east-1 }
+ source = "../modules/defaults"
+}
+
+module "us-east-1" {
+ providers = { aws = aws.us-east-1 }
+ source = "../modules/defaults"
+}
+
+module "us-east-2" {
+ providers = { aws = aws.us-east-2 }
+ source = "../modules/defaults"
+}
+
+module "us-west-1" {
+ providers = { aws = aws.us-west-1 }
+ source = "../modules/defaults"
+}
+
+module "us-west-2" {
+ providers = { aws = aws.us-west-2 }
+ source = "../modules/defaults"
+}
+```
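+
+Until provider `for_each` lands, one way to avoid hand-maintaining this file is to generate it. A sketch, assuming you redirect the output to a `.tf` file:
+
+``` shell
+#!/usr/bin/env bash
+set -euo pipefail
+
+# The regions enabled by default on an AWS account
+REGIONS="ap-northeast-1 ap-northeast-2 ap-northeast-3 ap-south-1 ap-southeast-1
+ap-southeast-2 ca-central-1 eu-central-1 eu-north-1 eu-west-1 eu-west-2
+eu-west-3 sa-east-1 us-east-1 us-east-2 us-west-1 us-west-2"
+
+# Emit one provider block and one module block per region
+for region in ${REGIONS}; do
+    cat <<EOF
+provider "aws" {
+  alias   = "${region}"
+  profile = var.environment
+  region  = "${region}"
+  default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+module "${region}" {
+  providers = { aws = aws.${region} }
+  source    = "../modules/defaults"
+}
+
+EOF
+done
+```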
+
+## Conclusion
+
+Terraform is absolutely quirky at times, but it is not at fault here: the AWS provider and its magical default resources are.
diff --git a/content/blog/aws/secrets.md b/content/blog/aws/secrets.md
new file mode 100644
index 0000000..476d235
--- /dev/null
+++ b/content/blog/aws/secrets.md
@@ -0,0 +1,136 @@
+---
+title: Managing AWS secrets
+description: with the CLI and with terraform/opentofu
+date: 2024-08-13
+tags:
+- aws
+- opentofu
+- terraform
+---
+
+## Introduction
+
+Managing secrets in AWS is not an everyday task, so I never naturally remember the specifics when I need them, especially the `--name` and `--secret-id` CLI inconsistency. I found I was lacking some simple notes that would save me from searching the web in the future; here they are.
+
+## CLI
+
+### Creating secrets
+
+From a simple string:
+
+``` shell
+aws --profile common secretsmanager create-secret \
+ --name test-string \
+ --secret-string 'test'
+```
+
+From a text file:
+
+``` shell
+aws --profile common secretsmanager create-secret \
+ --name test-text \
+ --secret-string "$(cat ~/Downloads/adyxax.2024-07-31.private-key.pem)"
+```
+
+For binary files, we `base64` encode the data:
+
+``` shell
+aws --profile common secretsmanager create-secret \
+ --name test-binary \
+ --secret-binary "$(cat ~/Downloads/some-blob|base64)"
+```
+
+### Updating secrets
+
+Beware that all the other aws secretsmanager commands use the `--secret-id` flag instead of the `--name` we needed when creating the secret.
+
+Update a secret string with:
+
+``` shell
+aws --profile common secretsmanager update-secret \
+ --secret-id test-string \
+ --secret-string 'test'
+```
+
+### Reading secrets
+
+Listing:
+
+``` shell
+aws --profile common secretsmanager list-secrets | jq -r '[.SecretList[].Name]'
+```
+
+Getting a secret value:
+
+``` shell
+aws --profile common secretsmanager get-secret-value --secret-id test-string
+```
+
+### Deleting secrets
+
+``` shell
+aws --profile common secretsmanager delete-secret --secret-id test-string
+```
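+
+Note that by default, deletion only schedules the secret for destruction after a recovery window (30 days). To delete a secret immediately, there is a force flag:
+
+``` shell
+aws --profile common secretsmanager delete-secret \
+    --secret-id test-string \
+    --force-delete-without-recovery
+```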
+
+## Terraform
+
+### Resource
+
+Secret string:
+
+``` hcl
+resource "random_password" "main" {
+ length = 64
+ special = false
+ lifecycle {
+ ignore_changes = [special]
+ }
+}
+
+resource "aws_secretsmanager_secret" "main" {
+ name = "grafana-admin-password"
+}
+
+resource "aws_secretsmanager_secret_version" "main" {
+ secret_id = aws_secretsmanager_secret.main.id
+ secret_string = random_password.main.result
+}
+```
+
+Secret binary:
+
+``` hcl
+resource "random_bytes" "main" {
+ length = 32
+}
+
+resource "aws_secretsmanager_secret" "main" {
+ name = "data-encryption-key"
+}
+
+resource "aws_secretsmanager_secret_version" "main" {
+ secret_id = aws_secretsmanager_secret.main.id
+ secret_binary = random_bytes.main.base64
+}
+```
+
+### Datasource
+
+``` hcl
+data "aws_secretsmanager_secret_version" "main" {
+ secret_id = "test"
+}
+```
+
+Using the datasource differs depending on whether it contains a `secret_string` or a `secret_binary`. In most cases you will know your secret data and therefore know which one to use. If for some reason you do not, this might be one of the rare legitimate use cases for the [try function](https://developer.hashicorp.com/terraform/language/functions/try):
+
+``` hcl
+try(
+ data.aws_secretsmanager_secret_version.main.secret_binary,
+ data.aws_secretsmanager_secret_version.main.secret_string,
+)
+```
+
+## Conclusion
+
+Once upon a time I wrote many small and short articles like this one, but for some reason stopped. I will try to take up this habit again.
diff --git a/content/blog/cloudflare/importing-terraform.md b/content/blog/cloudflare/importing-terraform.md
new file mode 100644
index 0000000..7fc5dfd
--- /dev/null
+++ b/content/blog/cloudflare/importing-terraform.md
@@ -0,0 +1,59 @@
+---
+title: Importing cloudflare DNS records in terraform/opentofu
+description: a way to get the records IDs
+date: 2024-07-16
+tags:
+- cloudflare
+- opentofu
+- terraform
+---
+
+## Introduction
+
+Managing cloudflare DNS records using terraform/opentofu is easy enough, but importing existing records into your automation is not straightforward.
+
+## The problem
+
+Contrary to AWS, GCP and (I think) all other providers, a `cloudflare_record` terraform resource only specifies one potential value of a DNS record. Because of that, you cannot import the resource using the record's name, since a single name can have multiple values: you need a cloudflare record ID for that.
+
+Sadly these IDs are elusive and I did not find a way to get those from the webui dashboard. As best as I can tell, you have to query cloudflare's API to get this information.
+
+## Querying the API
+
+Most examples around the Internet use the old way of authenticating with an email and an API key. The modern way is with an API token! An interesting fact is that, while not clearly documented, you can use it as a Bearer token. Here is the little script I wrote for this purpose:
+
+``` shell
+#!/usr/bin/env bash
+set -euo pipefail
+
+if [ "$#" -ne 3 ]; then
+ echo "usage: $(basename "$0") <zone-name> <record-type> <record-name>"
+ exit 1
+else
+ ZONE_NAME="$1"
+ RECORD_TYPE="$2"
+ RECORD_NAME="$3"
+fi
+
+if [ -z "${CLOUDFLARE_API_TOKEN:-}" ]; then
+ echo "Please export a CLOUDFLARE_API_TOKEN environment variable prior to running this script" >&2
+ exit 1
+fi
+
+BASE_URL="https://api.cloudflare.com"
+
+get () {
+ REQUEST="$1"
+ curl -s -X GET "${BASE_URL}${REQUEST}" \
+ -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" \
+ -H "Content-Type: application/json" | jq -r '.result[] | .id'
+}
+
+ZONE_ID=$(get "/client/v4/zones?name=${ZONE_NAME}")
+
+get "/client/v4/zones/${ZONE_ID}/dns_records?name=${RECORD_NAME}&type=${RECORD_TYPE}"
+```
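+
+Invocation looks like this (hypothetical script name and record):
+
+``` shell
+export CLOUDFLARE_API_TOKEN='...'
+./cloudflare-record-id.sh adyxax.org A factorio.adyxax.org
+```
+
+It prints one record ID per line, ready to be fed to `tofu import`.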
+
+## Conclusion
+
+It works perfectly: with this script I managed to run my `tofu import cloudflare_record.factorio XXXX/YYYY` command and get on with my work.
diff --git a/content/blog/debian/ovh-rescue.md b/content/blog/debian/ovh-rescue.md
new file mode 100644
index 0000000..0fefd4d
--- /dev/null
+++ b/content/blog/debian/ovh-rescue.md
@@ -0,0 +1,116 @@
+---
+title: 'Fixing an encrypted Debian system boot'
+description: 'From booting in UEFI mode to legacy BIOS mode'
+date: '2024-09-19'
+tags:
+- Debian
+---
+
+## Introduction
+
+Some time ago, I reinstalled one of my OVH vps instances. I used a virtual machine image of a Debian Linux that I initially prepared for a GCP host a few months ago. It was set up to boot with UEFI, and I discovered that OVH does not offer UEFI booting (at least on its small VPS offering).
+
+It is a problem because this is a system with an encrypted root partition. In order to boot from an encrypted partition in BIOS mode, grub needs some extra space that it does not need in UEFI mode.
+
+I could have rebuilt an image from scratch, or I could hop onto an OVH rescue image and fix the system in place. I took the latter approach in order to refresh my rescue skills.
+
+## Mounting the partitions from the rescue image
+
+This system has an encrypted block device holding an LVM set of volumes. Since the rescue image does not have the necessary tools, I installed them with:
+``` shell
+apt update -qq
+apt install -y cryptsetup lvm2
+```
+
+I refreshed my knowledge of the layout with:
+``` shell
+blkid
+fdisk -l /dev/sdb
+```
+
+Opening the encrypted block device is done with:
+``` shell
+cryptsetup luksOpen /dev/sdb3 sda3_crypt
+```
+
+Note that I am opening an sdb device because we are in OVH rescue mode, but it was known as sda during the installation. I need to use the same mapper name, otherwise grub will mess up when I regenerate its configuration and the system will not reboot properly.
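+
+If in doubt, once the root volume is mounted below, the expected mapper name can be double-checked in the installed system's crypttab; illustrative output, with the UUID elided:
+``` shell
+cat /mnt/etc/crypttab
+# sda3_crypt UUID=... none luks
+```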
+
+The LVM subsystem now needs to be activated with:
+``` shell
+vgchange -ay vg
+```
+
+Now to mount the partitions and chroot into our system:
+
+``` shell
+mount /dev/vg/root /mnt
+cd /mnt
+mount -R /dev dev
+mount -R /proc proc
+mount -R /sys sys
+chroot ./
+mount /boot
+```
+
+## Replacing the EFI partition with a BIOS boot partition
+
+My system had an EFI partition in /dev/sdb1: this is not suitable for booting a grub2 system to an encrypted volume directly from BIOS. I replaced it with a BIOS boot partition with:
+``` shell
+fdisk /dev/sdb
+Command (m for help): d
+Partition number (1-3, default 3): 1
+Partition 1 has been deleted.
+
+Command (m for help): n
+Partition number (1,4-128, default 1): 1
+First sector (34-41943006, default 2048):
+Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-1050623, default 1050623):
+
+Created a new partition 1 of type 'Linux filesystem' and of size 512 MiB.
+
+Command (m for help): t
+Partition number (1-3, default 3): 1
+Partition type or alias (type L to list all): 4
+w
+```
+
+Reinstalling grub was a matter of:
+``` shell
+apt install grub-pc
+update-grub
+grub-install /dev/sdb
+```
+
+I am not sure whether it was necessary or not, but I rebuilt the initramfs in case the set of modules needed by the kernel might be different:
+``` shell
+update-initramfs -u
+```
+
+## Cleanup
+
+Close the chroot session with either `C-d` or the `exit` command. Then unmount all partitions with:
+``` shell
+cd /
+umount -R -l /mnt
+```
+
+Deactivate the LVM subsystem with:
+``` shell
+vgchange -an
+```
+
+Close the luks volume with:
+``` shell
+cryptsetup luksClose sda3_crypt
+```
+
+Sync all data to disks just in case:
+``` shell
+sync
+```
+
+Then reboot in normal mode from the OVH management webui.
+
+## Conclusion
+
+This was a fun repair operation!
diff --git a/content/blog/miscellaneous/generate-github-access-token-for-github-app.md b/content/blog/miscellaneous/generate-github-access-token-for-github-app.md
new file mode 100644
index 0000000..c08b92f
--- /dev/null
+++ b/content/blog/miscellaneous/generate-github-access-token-for-github-app.md
@@ -0,0 +1,67 @@
+---
+title: Generating a github access token for a github app in bash
+description: A useful script
+date: 2024-08-24
+tags:
+- bash
+- github
+---
+
+## Introduction
+
+Last week I had to find a way to generate a github access token for a github app.
+
+## The problem
+
+Github apps are the newest and recommended way to provide programmatic access to things that need to interact with github. You get some credentials that allow you to authenticate, then generate some JWT, which you can use to generate an access key... Lovely!
+
+When developing an "app", all this complexity mostly makes sense, but when all you want is to run some script it really gets in the way. From my research, most people in this situation give up on github apps and either create a robot account, or bite the bullet and create personal access tokens. The people who resist and try to do the right thing mostly end up with some nodejs and quite a few dependencies.
+
+I needed something simpler.
+
+## The script
+
+I took a lot of inspiration from [this script](https://github.com/Nastaliss/get-github-app-pat/blob/main/generate_github_access_token.sh), cleaned it up and ended up with:
+
+``` shell
+#!/usr/bin/env bash
+# This script generates a github access token. It requires the following
+# environment variables:
+# - GITHUB_APP_ID
+# - GITHUB_APP_INSTALLATION_ID
+# - GITHUB_APP_PRIVATE_KEY
+set -euo pipefail
+
+b64enc() { openssl enc -base64 -A | tr '+/' '-_' | tr -d '='; }
+NOW=$(date +%s)
+
+HEADER=$(printf '{
+ "alg": "RS256",
+ "exp": %d,
+ "iat": %d,
+ "iss": "adyxax",
+ "kid": "0001",
+ "typ": "JWT"
+}' "$((NOW+10))" "${NOW}" | jq -r -c .)
+
+PAYLOAD=$(printf '{
+ "exp": %s,
+ "iat": %s,
+ "iss": %s
+}' "$((NOW + 10 * 59))" "$((NOW - 10))" "${GITHUB_APP_ID}" | jq -r -c .)
+
+SIGNED_CONTENT=$(printf '%s' "${HEADER}" | b64enc).$(printf '%s' "${PAYLOAD}" | b64enc)
+SIG=$(printf '%s' "${SIGNED_CONTENT}" | \
+ openssl dgst -binary -sha256 -sign <(printf "%s" "${GITHUB_APP_PRIVATE_KEY}") | b64enc)
+JWT=$(printf '%s.%s' "${SIGNED_CONTENT}" "${SIG}")
+
+curl -s --location --request POST \
+ "https://api.github.com/app/installations/${GITHUB_APP_INSTALLATION_ID}/access_tokens" \
+ --header "Authorization: Bearer $JWT" \
+ --header 'Accept: application/vnd.github+json' \
+ --header 'X-GitHub-Api-Version: 2022-11-28' | jq -r '.token'
+```
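+
+Invocation looks like this (hypothetical identifiers and script name):
+
+``` shell
+export GITHUB_APP_ID='123456'
+export GITHUB_APP_INSTALLATION_ID='12345678'
+export GITHUB_APP_PRIVATE_KEY="$(cat adyxax.private-key.pem)"
+./github-token.sh
+```
+
+The resulting installation access token is valid for one hour.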
+
+## Conclusion
+
+It works, is simple and only requires bash, jq and openssl.
diff --git a/content/blog/terraform/caa.md b/content/blog/terraform/caa.md
index 2f3f9ad..defcd6a 100644
--- a/content/blog/terraform/caa.md
+++ b/content/blog/terraform/caa.md
@@ -7,15 +7,15 @@ tags:
- terraform
---
-# Introduction
+## Introduction
Certification Authority Authorization (CAA) records are a type of DNS record that allows the owner of a domain to restrict which Certificate Authorities (CA) can issue a certificate for the domain. This is a protection mechanism that is easy to set up and that has absolutely no drawbacks.
One good reason to use CAA records in our modern world of servers running in the cloud is that when you decommission or change a server you very often lose access to its IP address and get a new one. If you mess up cleaning the old IP address from your DNS records and have no CAA records, someone who grabs it could then start issuing certificates for your domain.
-# CAA records
+## CAA records
-## Basics
+### Basics
CAA records can be queried with your favorite DNS lookup utility (`dig`, `drill`, `nslookup`, etc). A basic example looks like this:
```
@@ -26,7 +26,7 @@ $ dig +short CAA adyxax.org
In this example, letsencrypt is authorized to issue both standard and wildcard certificates for the adyxax.org domain.
-## Getting notified of wrongful attempts
+### Getting notified of wrongful attempts
There are several bits of syntax in the RFC that can be of interest, especially if you want to be notified when someone tries to issue a certificate from an unauthorized CA:
@@ -37,7 +37,7 @@ $ dig +short CAA adyxax.org
0 issuewild "letsencrypt.org"
```
-## Securing a domain even further
+### Securing a domain even further
There are other extensions that allow domain owners to restrict even more things like which certificate validation method can be used. Just keep in mind that these extensions will vary from CA to CA and you will need to read the documentation of your CA of choice. A letsencrypt locked down certificate issuance to a specific account ID with a specific validation method looks like this:
@@ -49,15 +49,15 @@ $ dig +short CAA adyxax.org
With this configuration, I can be pretty sure only I will be able to generate a (wildcard, other types are not authorized) certificate for my domain.
-## Caveat
+### Caveat
Note that some DNS providers that offer hosting services will sometimes provision invisible CAA records on your behalf and it might not be obvious this is happening. For example if your domain is hosted on Cloudflare and you use their `pages` service, they will add CAA records to issue their certificates. You will be able to see these records using your lookup tool, but not if you look at your Cloudflare dashboard.
-# Opentofu code
+## Opentofu code
The following code examples will first feature a standard version (suitable for AWS, GCP and other providers), then one for Cloudflare. Cloudflare records are built differently from those of other providers I know of, because the Cloudflare terraform provider does some validation by itself while others simply rely on their APIs. Another important difference is that most terraform record resources take a list of values as input, while Cloudflare forces you to create one resource per value you need for a record. Yes, this will clutter your terraform states!
-## Basic
+### Basic
Here is a simple definition for multiple zones managed the same way on AWS:
```hcl
@@ -133,7 +133,7 @@ resource "cloudflare_record" "caa" {
}
```
-## Advanced
+### Advanced
Here is a more advanced definition that handles zones that have different needs than others, as well as CAs that have multiple signing domains like AWS does:
```hcl
@@ -236,6 +236,6 @@ resource "cloudflare_record" "caa" {
}
```
-# Conclusion
+## Conclusion
I hope I showed you that CAA records are both useful and accessible. Please start protecting your domains with CAA records now!
diff --git a/content/blog/terraform/email-dns-unused-zone.md b/content/blog/terraform/email-dns-unused-zone.md
new file mode 100644
index 0000000..cc8dc77
--- /dev/null
+++ b/content/blog/terraform/email-dns-unused-zone.md
@@ -0,0 +1,104 @@
+---
+title: Email DNS records for zones that do not send emails
+description: Automated with terraform/opentofu
+date: 2024-09-03
+tags:
+- cloudflare
+- DNS
+- opentofu
+- terraform
+---
+
+## Introduction
+
+There are multiple DNS records one needs to configure in order to setup and securely use a domain to send or receive emails: MX, DKIM, DMARC and SPF.
+
+An often overlooked fact is that you also need to configure some of these records even if you do not intend to use a domain to send emails. If you do not, scammers will spoof your domain to send fraudulent emails and your domain's reputation will suffer.
+
+## DNS email records you need
+
+### SPF
+
+The most important (and only required) record you need is a TXT record on the apex of your domain that advertises the fact that no server can send emails from your domain:
+```
+"v=spf1 -all"
+```
+
+### MX
+
+If you do not intend to ever send emails, you certainly do not intend to receive emails either. Therefore you should consider removing all MX records from your zone. Oftentimes your registrar will provision some pointing to a free email infrastructure that they provide along with your domain. If you would rather keep an explicit record, see the null MX sketch at the end of this article.
+
+### DKIM
+
+You do not need DKIM records if you are not sending emails.
+
+### DMARC
+
+While not strictly necessary, I strongly recommend setting a DMARC record that instructs the world to explicitly reject all emails not matching the SPF policy:
+
+```
+"v=DMARC1;p=reject;sp=reject;pct=100"
+```
+
+## Terraform / OpenTofu code
+
+### Zones
+
+I use a map of simple objects to specify email profiles for my DNS zones:
+``` hcl
+locals {
+ zones = {
+ "adyxax.eu" = { emails = "adyxax" }
+ "adyxax.org" = { emails = "adyxax" }
+ "anne-so-et-julien.fr" = { emails = "no" }
+ }
+}
+
+data "cloudflare_zone" "main" {
+ for_each = local.zones
+
+ name = each.key
+}
+```
+
+### SPF
+
+Then I map each profile to spf records:
+``` hcl
+locals {
+ spf = {
+ "adyxax" = "v=spf1 mx -all"
+ "no" = "v=spf1 -all"
+ }
+}
+
+resource "cloudflare_record" "spf" {
+ for_each = local.zones
+
+ name = "@"
+ type = "TXT"
+ value = local.spf[each.value.emails]
+ zone_id = data.cloudflare_zone.main[each.key].id
+}
+```
+
+### DMARC
+
+The same mapping system we had for spf could be used here too, but I chose to keep things simple and within the scope of this article. My real setup has some clever tricks to centralize dmarc notifications to a single domain, which will be the subject of another post:
+
+``` hcl
+resource "cloudflare_record" "dmarc" {
+ for_each = { for name, info in local.zones :
+ name => info if info.emails == "no"
+ }
+
+ name = "@"
+ type = "TXT"
+ value = "v=DMARC1;p=reject;sp=reject;pct=100"
+ zone_id = data.cloudflare_zone.main[each.key].id
+}
+```
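+
+### MX
+
+As mentioned above, the simplest course of action is to remove all MX records. If you want to be explicit instead, a null MX record ([RFC 7505](https://www.rfc-editor.org/rfc/rfc7505)) advertises that the domain accepts no mail. A sketch in the same style as the records above:
+
+``` hcl
+resource "cloudflare_record" "mx" {
+  for_each = { for name, info in local.zones :
+    name => info if info.emails == "no"
+  }
+
+  name     = "@"
+  priority = 0
+  type     = "MX"
+  value    = "."
+  zone_id  = data.cloudflare_zone.main[each.key].id
+}
+```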
+
+## Conclusion
+
+Please keep your email DNS records tight and secure!
diff --git a/content/books/misc/making-it-so.md b/content/books/misc/making-it-so.md
new file mode 100644
index 0000000..33d20e3
--- /dev/null
+++ b/content/books/misc/making-it-so.md
@@ -0,0 +1,7 @@
+---
+title: "Making It So: A Memoir"
+description: Patrick Stewart
+date: 2024-06-17
+---
+
+This book was my first ever audiobook! I could not resist a memoir written and read by an actor I really liked from Star Trek. The story of his life is really worth a read (or a listen); I thoroughly enjoyed it and recommend it.
diff --git a/content/books/stormlight_archive/the-way-of-kings-audiobook.md b/content/books/stormlight_archive/the-way-of-kings-audiobook.md
new file mode 100644
index 0000000..1b8e5e9
--- /dev/null
+++ b/content/books/stormlight_archive/the-way-of-kings-audiobook.md
@@ -0,0 +1,9 @@
+---
+title: "The Way of Kings"
+date: 2024-07-08
+description: Brandon Sanderson
+---
+
+I just finished listening to the [GraphicAudio](https://www.graphicaudiointernational.net/the-stormlight-archive-1-download-series-set.html) adaptation of [The Way of Kings]({{< ref "the-way-of-kings" >}}). I must say it was a fantastic experience that I highly recommend. These audiobooks are quite expensive, but they are really on another level of production, with many actors giving life to the characters and the action.
+
+It was a joy to go through this book again three and a half years later. After reading all the Cosmere books, starting anew and looking for details I missed on my first read was really engaging, in particular thanks to this new format.
diff --git a/content/books/stormlight_archive/words-of-radiance-audiobook.md b/content/books/stormlight_archive/words-of-radiance-audiobook.md
new file mode 100644
index 0000000..03fa6dd
--- /dev/null
+++ b/content/books/stormlight_archive/words-of-radiance-audiobook.md
@@ -0,0 +1,9 @@
+---
+title: "Words of Radiance"
+date: 2024-08-14
+description: Brandon Sanderson
+---
+
+I just finished listening to the [GraphicAudio](https://www.graphicaudiointernational.net/the-stormlight-archive-2-download-series-set.html) adaptation of [Words of Radiance]({{< ref "words-of-radiance" >}}). Just like for [The Way of Kings]({{< ref "the-way-of-kings-audiobook" >}}), I must say it was a fantastic experience that I highly recommend. The level of production is just as good, and they kept the same actors! I was afraid the voices might not be consistent from one book to the next, but they were!
+
+It was a joy to go through this book again three and a half years later. I caught so many details and references that I missed on my first read; I really recommend revisiting the book in this new format.
diff --git a/search/go.mod b/search/go.mod
index 1828e5a..9389e8c 100644
--- a/search/go.mod
+++ b/search/go.mod
@@ -1,6 +1,6 @@
module git.adyxax.org/adyxax/www/search
-go 1.22.2
+go 1.23.1
require github.com/stretchr/testify v1.9.0