-rw-r--r--  content/blog/ansible/nginx-ansible-role.md       | 336
-rw-r--r--  content/blog/ansible/podman-ansible-role.md      | 307
-rw-r--r--  content/blog/ansible/postgresql-ansible-role.md  |   2
-rw-r--r--  content/blog/ansible/privatebin.md               | 228
-rw-r--r--  content/blog/kubernetes/dev-shm.md               |  11
-rw-r--r--  search/go.mod                                    |   2
6 files changed, 878 insertions(+), 8 deletions(-)
diff --git a/content/blog/ansible/nginx-ansible-role.md b/content/blog/ansible/nginx-ansible-role.md
new file mode 100644
index 0000000..0c465a9
--- /dev/null
+++ b/content/blog/ansible/nginx-ansible-role.md
@@ -0,0 +1,336 @@
+---
+title: 'Nginx ansible role'
+description: 'The ansible role I use to manage my nginx web servers'
+date: '2024-10-28'
+tags:
+- ansible
+- nginx
+---
+
+## Introduction
+
+Before succumbing to NixOS, I had been using an ansible role to manage my nginx web servers. Now that I need it again, I have refined it a bit: here is the result.
+
+## The role
+
+### Vars
+
+The role has OS-specific vars in files named after the operating system. For example, in `vars/Debian.yaml` I have:
+
+``` yaml
+---
+nginx:
+ etc_dir: '/etc/nginx'
+ pid_file: '/run/nginx.pid'
+ www_user: 'www-data'
+```
+
+While in `vars/FreeBSD.yaml` I have:
+
+``` yaml
+---
+nginx:
+ etc_dir: '/usr/local/etc/nginx'
+ pid_file: '/var/run/nginx.pid'
+ www_user: 'www'
+```
+
+### Tasks
+
+The main tasks file sets up nginx and the global configuration common to all virtual hosts:
+
+``` yaml
+---
+- include_vars: '{{ ansible_distribution }}.yaml'
+
+- name: 'Install nginx'
+ package:
+ name:
+ - 'nginx'
+
+- name: 'Make nginx vhost directory'
+ file:
+ path: '{{ nginx.etc_dir }}/vhost.d'
+ mode: '0755'
+ owner: 'root'
+ state: 'directory'
+
+- name: 'Deploy nginx configuration files'
+ copy:
+ src: '{{ item }}'
+ dest: '{{ nginx.etc_dir }}/{{ item }}'
+ notify: 'reload nginx'
+ loop:
+ - 'headers_base.conf'
+ - 'headers_secure.conf'
+ - 'headers_static.conf'
+ - 'headers_unsafe_inline_csp.conf'
+
+- name: 'Deploy nginx configuration template'
+ template:
+ src: 'nginx.conf'
+ dest: '{{ nginx.etc_dir }}/'
+ notify: 'reload nginx'
+
+- name: 'Deploy nginx certificates'
+ copy:
+ src: '{{ item }}'
+ dest: '{{ nginx.etc_dir }}/'
+ notify: 'reload nginx'
+ loop:
+ - 'adyxax.org.fullchain'
+ - 'adyxax.org.key'
+ - 'dh4096.pem'
+
+- name: 'Start nginx and activate it on boot'
+ service:
+ name: 'nginx'
+ enabled: true
+ state: 'started'
+```
+
+I have a `vhost.yaml` tasks file which currently just deploys a template and reloads nginx:
+
+``` yaml
+- name: 'Deploy {{ vhost.name }} vhost {{ vhost.path }}'
+ template:
+ src: '{{ vhost.path }}'
+ dest: '{{ nginx.etc_dir }}/vhost.d/{{ vhost.name }}.conf'
+ notify: 'reload nginx'
+```
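+
+For reference, a vhost template fed to this task is typically a plain server block. Here is a minimal sketch (with a hypothetical `example.adyxax.org` name and backend port) that reuses the headers and certificates deployed by the main tasks file:
+
+``` nginx
+server {
+    listen 443 ssl;
+    listen [::]:443 ssl;
+    server_name example.adyxax.org;
+
+    include headers_secure.conf;
+    location / {
+        proxy_pass http://127.0.0.1:8080;
+    }
+    ssl_certificate adyxax.org.fullchain;
+    ssl_certificate_key adyxax.org.key;
+}
+```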
+
+### Handlers
+
+There is a single `main.yaml` handler:
+
+``` yaml
+---
+- name: 'reload nginx'
+ service:
+ name: 'nginx'
+ state: 'reloaded'
+```
+
+### Files
+
+I deploy four configuration files in this role. These are all variations on the same theme: since nginx only inherits `add_header` directives from an enclosing block when the current block defines none, each virtual host would otherwise have to repeat the full set of headers, and these include files avoid that duplication.
+
+`headers_base.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+add_header X-Frame-Options deny;
+add_header X-XSS-Protection "1; mode=block";
+add_header X-Content-Type-Options nosniff;
+add_header Referrer-Policy strict-origin;
+add_header Cache-Control no-transform;
+add_header Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=()";
+# 6 months HSTS pinning
+add_header Strict-Transport-Security max-age=16000000;
+```
+
+`headers_secure.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+include headers_base.conf;
+add_header Content-Security-Policy "script-src 'self'";
+```
+
+`headers_static.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+include headers_secure.conf;
+# Infinite caching
+add_header Cache-Control "public, max-age=31536000, immutable";
+```
+
+`headers_unsafe_inline_csp.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+include headers_base.conf;
+add_header Content-Security-Policy "script-src 'self' 'unsafe-inline'";
+```
+
+### Templates
+
+I have a single template for `nginx.conf`:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+user {{ nginx.www_user }};
+worker_processes auto;
+pid {{ nginx.pid_file }};
+error_log /var/log/nginx/error.log;
+
+events {
+ worker_connections 1024;
+}
+
+http {
+ include mime.types;
+ types_hash_max_size 4096;
+ sendfile on;
+ tcp_nopush on;
+ tcp_nodelay on;
+ keepalive_timeout 65;
+
+ ssl_protocols TLSv1.2 TLSv1.3;
+ ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
+ ssl_prefer_server_ciphers on;
+
+ gzip on;
+ gzip_static on;
+ gzip_vary on;
+ gzip_comp_level 5;
+ gzip_min_length 256;
+ gzip_proxied expired no-cache no-store private auth;
+ gzip_types application/atom+xml application/geo+json application/javascript application/json application/ld+json application/manifest+json application/rdf+xml application/vnd.ms-fontobject application/wasm application/x-rss+xml application/x-web-app-manifest+json application/xhtml+xml application/xliff+xml application/xml font/collection font/otf font/ttf image/bmp image/svg+xml image/vnd.microsoft.icon text/cache-manifest text/calendar text/css text/csv text/javascript text/markdown text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/xml;
+
+ proxy_redirect off;
+ proxy_connect_timeout 60s;
+ proxy_send_timeout 60s;
+ proxy_read_timeout 60s;
+ proxy_http_version 1.1;
+ proxy_set_header "Connection" "";
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+ proxy_set_header X-Forwarded-Host $host;
+ proxy_set_header X-Forwarded-Server $host;
+
+ map $http_upgrade $connection_upgrade {
+ default upgrade;
+ '' close;
+ }
+
+ client_max_body_size 40M;
+ server_tokens off;
+ default_type application/octet-stream;
+ access_log /var/log/nginx/access.log;
+
+ fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
+ fastcgi_param QUERY_STRING $query_string;
+ fastcgi_param REQUEST_METHOD $request_method;
+ fastcgi_param CONTENT_TYPE $content_type;
+ fastcgi_param CONTENT_LENGTH $content_length;
+
+ fastcgi_param SCRIPT_NAME $fastcgi_script_name;
+ fastcgi_param REQUEST_URI $request_uri;
+ fastcgi_param DOCUMENT_URI $document_uri;
+ fastcgi_param DOCUMENT_ROOT $document_root;
+ fastcgi_param SERVER_PROTOCOL $server_protocol;
+ fastcgi_param REQUEST_SCHEME $scheme;
+ fastcgi_param HTTPS $https if_not_empty;
+
+ fastcgi_param GATEWAY_INTERFACE CGI/1.1;
+ fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
+
+ fastcgi_param REMOTE_ADDR $remote_addr;
+ fastcgi_param REMOTE_PORT $remote_port;
+ fastcgi_param REMOTE_USER $remote_user;
+ fastcgi_param SERVER_ADDR $server_addr;
+ fastcgi_param SERVER_PORT $server_port;
+ fastcgi_param SERVER_NAME $server_name;
+
+ # PHP only, required if PHP was built with --enable-force-cgi-redirect
+ fastcgi_param REDIRECT_STATUS 200;
+
+ uwsgi_param QUERY_STRING $query_string;
+ uwsgi_param REQUEST_METHOD $request_method;
+ uwsgi_param CONTENT_TYPE $content_type;
+ uwsgi_param CONTENT_LENGTH $content_length;
+
+ uwsgi_param REQUEST_URI $request_uri;
+ uwsgi_param PATH_INFO $document_uri;
+ uwsgi_param DOCUMENT_ROOT $document_root;
+ uwsgi_param SERVER_PROTOCOL $server_protocol;
+ uwsgi_param REQUEST_SCHEME $scheme;
+ uwsgi_param HTTPS $https if_not_empty;
+
+ uwsgi_param REMOTE_ADDR $remote_addr;
+ uwsgi_param REMOTE_PORT $remote_port;
+ uwsgi_param SERVER_PORT $server_port;
+ uwsgi_param SERVER_NAME $server_name;
+
+ ssl_dhparam dh4096.pem;
+ ssl_session_cache shared:SSL:2m;
+ ssl_session_timeout 1h;
+ ssl_session_tickets off;
+
+ server {
+ listen 80 default_server;
+ listen [::]:80 default_server;
+ server_name _;
+ access_log off;
+ server_name_in_redirect off;
+ return 444;
+ }
+
+ server {
+ listen 443 ssl;
+ listen [::]:443 ssl;
+ server_name _;
+ access_log off;
+ server_name_in_redirect off;
+ return 444;
+ ssl_certificate adyxax.org.fullchain;
+ ssl_certificate_key adyxax.org.key;
+ }
+
+ include vhost.d/*.conf;
+}
+```
+
+## Usage example
+
+I do not call the role directly from a playbook; I prefer running the setup from an application's role that relies on nginx, using a `meta/main.yaml` containing something like:
+
+``` yaml
+---
+dependencies:
+ - role: 'borg'
+ - role: 'nginx'
+ - role: 'postgresql'
+```
+
+Then from a tasks file:
+
+``` yaml
+- include_role:
+ name: 'nginx'
+ tasks_from: 'vhost'
+ vars:
+ vhost:
+ name: 'www'
+ path: 'roles/www.adyxax.org/files/nginx-vhost.conf'
+```
+
+I did not find an elegant way to pass a file path local to one role to another. Because of that, I just specify the full vhost file path, complete with the `roles/` prefix.
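+
+For illustration, here is the kind of invocation I would have liked to write, with a hypothetical role-relative path. It does not work because `role_path` is templated lazily inside the included role, where it points at the nginx role instead of the calling one:
+
+``` yaml
+- include_role:
+    name: 'nginx'
+    tasks_from: 'vhost'
+  vars:
+    vhost:
+      name: 'www'
+      # role_path resolves to the nginx role here, not the calling role
+      path: '{{ role_path }}/files/nginx-vhost.conf'
+```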
+
+## Conclusion
+
+If you have an elegant idea for passing a local file path from one role to another, do not hesitate to ping me!
diff --git a/content/blog/ansible/podman-ansible-role.md b/content/blog/ansible/podman-ansible-role.md
new file mode 100644
index 0000000..37cdabf
--- /dev/null
+++ b/content/blog/ansible/podman-ansible-role.md
@@ -0,0 +1,307 @@
+---
+title: 'Podman ansible role'
+description: 'The ansible role I use to manage my podman containers'
+date: '2024-11-08'
+tags:
+- ansible
+- podman
+---
+
+## Introduction
+
+Before succumbing to NixOS, I was running all my containers on k3s. This time I am migrating things to podman and trying to achieve a lighter setup. This article presents the ansible role I wrote to manage podman containers.
+
+## The role
+
+### Tasks
+
+The main tasks file sets up podman and the required network configuration:
+
+``` yaml
+---
+- name: 'Run OS specific tasks for the podman role'
+ include_tasks: '{{ ansible_distribution }}.yaml'
+
+- name: 'Make podman scripts directory'
+ file:
+ path: '/etc/podman'
+ mode: '0700'
+ owner: 'root'
+ state: 'directory'
+
+- name: 'Deploy podman configuration files'
+ copy:
+ src: 'cni-podman0'
+ dest: '/etc/network/interfaces.d/'
+ owner: 'root'
+ mode: '444'
+```
+
+My OS-specific tasks file `Debian.yaml` looks like this:
+
+``` yaml
+---
+- name: 'Install podman dependencies'
+ ansible.builtin.apt:
+ name:
+ - 'buildah'
+ - 'podman'
+ - 'rootlesskit'
+ - 'slirp4netns'
+
+- name: 'Deploy podman configuration files'
+ copy:
+ src: 'podman-bridge.json'
+ dest: '/etc/cni/net.d/87-podman-bridge.conflist'
+ owner: 'root'
+ mode: '444'
+```
+
+The entry point of this role is the `container.yaml` tasks file:
+
+``` yaml
+---
+# Inputs:
+# container:
+# cmd: optional(list(string))
+# env_vars: list(env_var)
+# image: string
+# name: string
+# publishs: list(publish)
+# volumes: list(volume)
+# With:
+# env_var:
+# name: string
+# value: string
+# publish:
+# container_port: string
+# host_port: string
+# ip: string
+# volume:
+# dest: string
+# src: string
+
+- name: 'Deploy podman systemd service for {{ container.name }}'
+ template:
+ src: 'container.service'
+ dest: '/etc/systemd/system/podman-{{ container.name }}.service'
+ owner: 'root'
+ mode: '0444'
+ notify: 'systemctl daemon-reload'
+
+- name: 'Deploy podman scripts for {{ container.name }}'
+ template:
+ src: 'container-{{ item }}.sh'
+ dest: '/etc/podman/{{ container.name }}-{{ item }}.sh'
+ owner: 'root'
+ mode: '0500'
+ register: 'deploy_podman_scripts'
+ loop:
+ - 'start'
+ - 'stop'
+
+- name: 'Restart podman container {{ container.name }}'
+ shell:
+ cmd: "systemctl restart podman-{{ container.name }}"
+ when: 'deploy_podman_scripts.changed'
+
+- name: 'Start podman container {{ container.name }} and activate it on boot'
+ service:
+ name: 'podman-{{ container.name }}'
+ enabled: true
+ state: 'started'
+```
+
+### Handlers
+
+There is a single `main.yaml` handler:
+
+``` yaml
+---
+- name: 'systemctl daemon-reload'
+ shell:
+ cmd: 'systemctl daemon-reload'
+```
+
+### Files
+
+Here is the `cni-podman0` interfaces file I deploy on Debian hosts. It is required for the bridge to be up on boot so that other services can bind ports on it. Without it, the bridge would only come up when the first container starts, which is too late in the boot process. Note that the `pre-up` and `post-down` commands rely on `brctl`, which comes with the `bridge-utils` package.
+
+``` text
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+auto cni-podman0
+iface cni-podman0 inet static
+address 10.88.0.1/16
+pre-up brctl addbr cni-podman0
+post-down brctl delbr cni-podman0
+```
+
+Here is the JSON CNI bridge configuration file I use, customized to add IPv6 support:
+
+``` json
+{
+ "cniVersion": "0.4.0",
+ "name": "podman",
+ "plugins": [
+ {
+ "type": "bridge",
+ "bridge": "cni-podman0",
+ "isGateway": true,
+ "ipMasq": true,
+ "hairpinMode": true,
+ "ipam": {
+ "type": "host-local",
+ "routes": [
+ {
+ "dst": "0.0.0.0/0"
+ }, {
+ "dst": "::/0"
+ }
+ ],
+ "ranges": [
+ [{
+ "subnet": "10.88.0.0/16",
+ "gateway": "10.88.0.1"
+ }], [{
+ "subnet": "fd42::/48",
+ "gateway": "fd42::1"
+ }]
+ ]
+ }
+ }, {
+ "type": "portmap",
+ "capabilities": {
+ "portMappings": true
+ }
+ }, {
+ "type": "firewall"
+ }, {
+ "type": "tuning"
+ }
+ ]
+}
+```
+
+### Templates
+
+Here is the jinja templated start bash script:
+
+``` shell
+#!/usr/bin/env bash
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+set -euo pipefail
+
+podman rm -f {{ container.name }} || true
+rm -f /run/podman-{{ container.name }}.ctr-id
+
+exec podman run \
+ --rm \
+ --name={{ container.name }} \
+ --log-driver=journald \
+ --cidfile=/run/podman-{{ container.name }}.ctr-id \
+ --cgroups=no-conmon \
+ --sdnotify=conmon \
+ -d \
+{% for env_var in container.env_vars | default([]) %}
+ -e {{ env_var.name }}={{ env_var.value }} \
+{% endfor %}
+{% for publish in container.publishs | default([]) %}
+ -p {{ publish.ip }}:{{ publish.host_port }}:{{ publish.container_port }} \
+{% endfor %}
+{% for volume in container.volumes | default([]) %}
+ -v {{ volume.src }}:{{ volume.dest }} \
+{% endfor %}
+ {{ container.image }} {% for cmd in container.cmd | default([]) %}{{ cmd }} {% endfor %}
+```
+
+Here is the jinja templated stop bash script:
+
+``` shell
+#!/usr/bin/env bash
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+set -euo pipefail
+
+if [[ ! "$SERVICE_RESULT" = success ]]; then
+ podman stop --ignore --cidfile=/run/podman-{{ container.name }}.ctr-id
+fi
+
+podman rm -f --ignore --cidfile=/run/podman-{{ container.name }}.ctr-id
+```
+
+Here is the jinja templated systemd service unit:
+
+``` ini
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+[Unit]
+After=network-online.target
+Description=Podman container {{ container.name }}
+
+[Service]
+ExecStart=/etc/podman/{{ container.name }}-start.sh
+ExecStop=/etc/podman/{{ container.name }}-stop.sh
+NotifyAccess=all
+Restart=always
+TimeoutStartSec=0
+TimeoutStopSec=120
+Type=notify
+
+[Install]
+WantedBy=multi-user.target
+```
+
+## Usage example
+
+I do not call the role directly from a playbook; I prefer running the setup from an application's role that relies on podman, using a `meta/main.yaml` containing something like:
+
+``` yaml
+---
+dependencies:
+ - role: 'borg'
+ - role: 'nginx'
+ - role: 'podman'
+```
+
+Then from a tasks file:
+
+``` yaml
+- include_role:
+ name: 'podman'
+ tasks_from: 'container'
+ vars:
+ container:
+ cmd: ['--config-path', '/srv/cfg/conf.php']
+ name: 'privatebin'
+ env_vars:
+ - name: 'PHP_TZ'
+ value: 'Europe/Paris'
+ - name: 'TZ'
+ value: 'Europe/Paris'
+ image: 'docker.io/privatebin/nginx-fpm-alpine:1.7.4'
+ publishs:
+ - container_port: '8080'
+ host_port: '8082'
+ ip: '127.0.0.1'
+ volumes:
+ - dest: '/srv/cfg/conf.php:ro'
+ src: '/etc/privatebin.conf.php'
+ - dest: '/srv/data'
+ src: '/srv/privatebin'
+```
+
+## Conclusion
+
+I enjoy this design; it works really well. I am missing a task for deprovisioning a container, but I have not needed it yet.
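+
+If I ever need it, a `decommission.yaml` tasks file would probably look something like this (an untested sketch reusing the role's own paths and handler):
+
+``` yaml
+- name: 'Stop and disable podman container {{ container.name }}'
+  service:
+    name: 'podman-{{ container.name }}'
+    enabled: false
+    state: 'stopped'
+
+- name: 'Remove podman unit and scripts for {{ container.name }}'
+  file:
+    path: '{{ item }}'
+    state: 'absent'
+  loop:
+    - '/etc/systemd/system/podman-{{ container.name }}.service'
+    - '/etc/podman/{{ container.name }}-start.sh'
+    - '/etc/podman/{{ container.name }}-stop.sh'
+  notify: 'systemctl daemon-reload'
+```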
diff --git a/content/blog/ansible/postgresql-ansible-role.md b/content/blog/ansible/postgresql-ansible-role.md
index 848e206..02614c0 100644
--- a/content/blog/ansible/postgresql-ansible-role.md
+++ b/content/blog/ansible/postgresql-ansible-role.md
@@ -224,7 +224,7 @@ I do not call the role from a playbook, I prefer running the setup from an appli
``` yaml
---
dependencies:
- - role: 'borg
+ - role: 'borg'
- role: 'postgresql'
```
diff --git a/content/blog/ansible/privatebin.md b/content/blog/ansible/privatebin.md
new file mode 100644
index 0000000..abbf527
--- /dev/null
+++ b/content/blog/ansible/privatebin.md
@@ -0,0 +1,228 @@
+---
+title: 'Migrating privatebin from NixOS to Debian'
+description: 'How I deploy privatebin with ansible'
+date: '2024-11-17'
+tags:
+- ansible
+- privatebin
+---
+
+## Introduction
+
+I am migrating several services from a NixOS server (myth.adyxax.org) to a Debian server (lore.adyxax.org). Here is how I performed the operation for my self-hosted [privatebin](https://privatebin.info/) served from paste.adyxax.org.
+
+## Ansible role
+
+### Meta
+
+The `meta/main.yaml` contains the role dependencies:
+
+``` yaml
+---
+dependencies:
+ - role: 'borg'
+ - role: 'nginx'
+ - role: 'podman'
+```
+
+### Tasks
+
+The `tasks/main.yaml` file only creates a data directory and drops a configuration file. All the heavy lifting is then done by calling other roles:
+
+``` yaml
+---
+- name: 'Make privatebin data directory'
+ file:
+ path: '/srv/privatebin'
+ owner: '65534'
+ group: '65534'
+ mode: '0750'
+ state: 'directory'
+
+- name: 'Deploy privatebin configuration file'
+ copy:
+ src: 'privatebin.conf.php'
+ dest: '/etc/'
+ owner: 'root'
+ mode: '0444'
+ notify: 'restart privatebin'
+
+- include_role:
+ name: 'podman'
+ tasks_from: 'container'
+ vars:
+ container:
+ cmd: ['--config-path', '/srv/cfg/conf.php']
+ name: 'privatebin'
+ env_vars:
+ - name: 'PHP_TZ'
+ value: 'Europe/Paris'
+ - name: 'TZ'
+ value: 'Europe/Paris'
+ image: '{{ versions.privatebin.image }}:{{ versions.privatebin.tag }}'
+ publishs:
+ - container_port: '8080'
+ host_port: '8082'
+ ip: '127.0.0.1'
+ volumes:
+ - dest: '/srv/cfg/conf.php:ro'
+ src: '/etc/privatebin.conf.php'
+ - dest: '/srv/data'
+ src: '/srv/privatebin'
+
+- include_role:
+ name: 'nginx'
+ tasks_from: 'vhost'
+ vars:
+ vhost:
+ name: 'privatebin'
+ path: 'roles/paste.adyxax.org/files/nginx-vhost.conf'
+
+- include_role:
+ name: 'borg'
+ tasks_from: 'client'
+ vars:
+ client:
+ jobs:
+ - name: 'data'
+ paths:
+ - '/srv/privatebin'
+ name: 'privatebin'
+ server: '{{ paste_adyxax_org.borg }}'
+```
+
+### Handlers
+
+There is a single handler:
+
+``` yaml
+---
+- name: 'restart privatebin'
+ service:
+ name: 'podman-privatebin'
+ state: 'restarted'
+```
+
+### Files
+
+First there is my privatebin configuration, fairly simple:
+
+``` php
+;###############################################################################
+;# \_o< WARNING : This file is being managed by ansible! >o_/ #
+;# ~~~~ ~~~~ #
+;###############################################################################
+
+[main]
+discussion = true
+opendiscussion = false
+password = true
+fileupload = true
+burnafterreadingselected = false
+defaultformatter = "plaintext"
+sizelimit = 10000000
+template = "bootstrap"
+notice = "Note: This is a personal sharing service: Data may be deleted anytime. Don't share illegal, unethical or morally reprehensible content."
+languageselection = true
+zerobincompatibility = false
+[expire]
+default = "1week"
+[expire_options]
+5min = 300
+10min = 600
+1hour = 3600
+1day = 86400
+1week = 604800
+1month = 2592000
+1year = 31536000
+[formatter_options]
+plaintext = "Plain Text"
+syntaxhighlighting = "Source Code"
+markdown = "Markdown"
+[traffic]
+limit = 10
+header = "X_FORWARDED_FOR"
+dir = PATH "data"
+[purge]
+limit = 300
+batchsize = 10
+dir = PATH "data"
+[model]
+class = Filesystem
+[model_options]
+dir = PATH "data"
+```
+
+Then the nginx vhost file, fairly straightforward too:
+
+``` nginx
+###############################################################################
+# \_o< WARNING : This file is being managed by ansible! >o_/ #
+# ~~~~ ~~~~ #
+###############################################################################
+
+server {
+ listen 80;
+ listen [::]:80;
+ server_name paste.adyxax.org;
+ location / {
+ return 308 https://$server_name$request_uri;
+ }
+}
+
+server {
+ listen 443 ssl;
+ listen [::]:443 ssl;
+ server_name paste.adyxax.org;
+
+ location / {
+ proxy_pass http://127.0.0.1:8082;
+ }
+ ssl_certificate adyxax.org.fullchain;
+ ssl_certificate_key adyxax.org.key;
+}
+```
+
+## Migration process
+
+The first step is to deploy this new configuration to the server:
+
+``` shell
+make run limit=lore.adyxax.org tags=paste.adyxax.org
+```
+
+After that I log in and manually migrate the privatebin data folder. On the old server I make a backup with:
+
+``` shell
+systemctl stop podman-privatebin
+tar czf /tmp/privatebin.tar.gz /srv/privatebin/
+```
+
+I retrieve this backup on my laptop and send it to the new server with:
+
+``` shell
+scp root@myth.adyxax.org:/tmp/privatebin.tar.gz .
+scp privatebin.tar.gz root@lore.adyxax.org:
+```
+
+On the new server, I restore the backup with:
+
+``` shell
+systemctl stop podman-privatebin
+tar -xzf privatebin.tar.gz -C /
+chown -R 65534:65534 /srv/privatebin
+chmod -R u=rwX /srv/privatebin
+systemctl start podman-privatebin
+```
+
+I then test the new server by pointing the record at lore in my `/etc/hosts` file. Since everything works well, I roll back my change to `/etc/hosts` and update the DNS record using OpenTofu. I then clean up by running this on my laptop:
+
+``` shell
+rm privatebin.tar.gz
+ssh root@myth.adyxax.org 'rm /tmp/privatebin.tar.gz'
+ssh root@lore.adyxax.org 'rm privatebin.tar.gz'
+```
+
+## Conclusion
+
+I did all this in early October; my backlog of blog articles is only growing!
diff --git a/content/blog/kubernetes/dev-shm.md b/content/blog/kubernetes/dev-shm.md
index 9369052..9587261 100644
--- a/content/blog/kubernetes/dev-shm.md
+++ b/content/blog/kubernetes/dev-shm.md
@@ -21,14 +21,13 @@ spec:
spec:
container:
volume_mount:
- mount_path = "/dev/shm"
- name = "dev-shm"
- read_only = false
+ mount_path: "/dev/shm"
+ name: "dev-shm"
volume:
empty_dir:
- medium = "Memory"
- size_limit = "1Gi"
- name = "dev-shm"
+ medium: "Memory"
+ size_limit: "1Gi"
+ name: "dev-shm"
```
## Conclusion
diff --git a/search/go.mod b/search/go.mod
index 1e943c1..a0c4721 100644
--- a/search/go.mod
+++ b/search/go.mod
@@ -1,6 +1,6 @@
module git.adyxax.org/adyxax/www/search
-go 1.23.2
+go 1.23.3
require github.com/stretchr/testify v1.9.0