authorJulien Dessaux2021-03-11 18:53:14 +0100
committerJulien Dessaux2021-03-11 18:53:14 +0100
commit60d3abc6ecdc21b4ab921d34a55b4af48690f55a (patch)
tree1ee530dd7367d743fb619f341100e41df22e1985 /content/blog
parentUpdated docsy theme (diff)
Rewrote the whole website to get rid of a heavy theme
Diffstat (limited to 'content/blog')
-rw-r--r--content/blog/_index.md6
-rw-r--r--content/blog/ansible/_index.md5
-rw-r--r--content/blog/ansible/ansible-vault-example.md36
-rw-r--r--content/blog/ansible/custom-fact.md89
-rw-r--r--content/blog/ansible/dump-all-vars.md38
-rw-r--r--content/blog/cfengine/_index.md5
-rw-r--r--content/blog/cfengine/leveraging-yaml.md153
-rw-r--r--content/blog/commands/_index.md5
-rw-r--r--content/blog/commands/asterisk-call-you.md11
-rw-r--r--content/blog/commands/asterisk-list-active-calls.md14
-rw-r--r--content/blog/commands/busybox-web-server.md13
-rw-r--r--content/blog/commands/capture-desktop-video.md13
-rw-r--r--content/blog/commands/clean-conntrack-states.md17
-rw-r--r--content/blog/commands/date.md14
-rw-r--r--content/blog/commands/dmidecode.md20
-rw-r--r--content/blog/commands/find-hardlinks.md12
-rw-r--r--content/blog/commands/find-inodes-used.md12
-rw-r--r--content/blog/commands/git-import-commits.md13
-rw-r--r--content/blog/commands/git-rewrite-commit-history.md13
-rw-r--r--content/blog/commands/ipmi.md19
-rw-r--r--content/blog/commands/mdadm.md42
-rw-r--r--content/blog/commands/megacli.md11
-rw-r--r--content/blog/commands/omreport.md20
-rw-r--r--content/blog/commands/qemu-nbd.md17
-rw-r--r--content/blog/commands/qemu.md31
-rw-r--r--content/blog/commands/rrdtool.md21
-rw-r--r--content/blog/debian/_index.md5
-rw-r--r--content/blog/debian/error-during-signature-verification.md15
-rw-r--r--content/blog/debian/force-package-removal.md14
-rw-r--r--content/blog/debian/no-public-key-error.md12
-rw-r--r--content/blog/docker/_index.md5
-rw-r--r--content/blog/docker/cleaning.md12
-rw-r--r--content/blog/docker/docker-compose-bridge.md31
-rw-r--r--content/blog/docker/migrate-data-volume.md15
-rw-r--r--content/blog/docker/shell-usage-in-dockerfile.md16
-rw-r--r--content/blog/freebsd/_index.md5
-rw-r--r--content/blog/freebsd/activate-the-serial-console.md11
-rw-r--r--content/blog/freebsd/change-the-ip-address-of-a-running-jail.md13
-rw-r--r--content/blog/freebsd/clean-install-does-not-boot.md14
-rw-r--r--content/blog/gentoo/_index.md5
-rw-r--r--content/blog/gentoo/get-zoom-to-work.md24
-rw-r--r--content/blog/gentoo/steam.md13
-rw-r--r--content/blog/kubernetes/_index.md5
-rw-r--r--content/blog/kubernetes/get_key_and_certificae.md10
-rw-r--r--content/blog/kubernetes/pg_dump_restore.md24
-rw-r--r--content/blog/miscellaneous/_index.md5
-rw-r--r--content/blog/miscellaneous/bacula-bareos.md38
-rw-r--r--content/blog/miscellaneous/bash-tcp-client.md15
-rw-r--r--content/blog/miscellaneous/boot-from-initramfs.md16
-rw-r--r--content/blog/miscellaneous/building-rpms.md29
-rw-r--r--content/blog/miscellaneous/clean-old-centos-kernels.md11
-rw-r--r--content/blog/miscellaneous/debug-disk-usage-postgresql.md14
-rw-r--r--content/blog/miscellaneous/etc-update-alpine.md38
-rw-r--r--content/blog/miscellaneous/fstab.md9
-rw-r--r--content/blog/miscellaneous/i3dropdown.md32
-rw-r--r--content/blog/miscellaneous/libreoffice.md9
-rw-r--r--content/blog/miscellaneous/link-deleted-inode.md10
-rw-r--r--content/blog/miscellaneous/make.md10
-rw-r--r--content/blog/miscellaneous/mencoder.md21
-rw-r--r--content/blog/miscellaneous/mssql-centos-7.md29
-rw-r--r--content/blog/miscellaneous/my-postgresql-role-cannot-login.md12
-rw-r--r--content/blog/miscellaneous/nginx-ldap.md25
-rw-r--r--content/blog/miscellaneous/osm-overlay-example.md19
-rw-r--r--content/blog/miscellaneous/pleroma.md117
-rw-r--r--content/blog/miscellaneous/postgresql-read-only.md17
-rw-r--r--content/blog/miscellaneous/postgresql-reassign.md18
-rw-r--r--content/blog/miscellaneous/pulseaudio.md11
-rw-r--r--content/blog/miscellaneous/purge-postfix-queue-based-content.md13
-rw-r--r--content/blog/miscellaneous/qmail.md21
-rw-r--r--content/blog/miscellaneous/rocketchat.md18
-rw-r--r--content/blog/miscellaneous/screen-cannot-open-terminal.md17
-rw-r--r--content/blog/miscellaneous/seti-at-home.md18
-rw-r--r--content/blog/miscellaneous/sqlite-pretty-print.md16
-rw-r--r--content/blog/miscellaneous/switching-to-hugo.md58
-rw-r--r--content/blog/netapp/_index.md5
-rw-r--r--content/blog/netapp/investigate-memory-errors.md12
-rw-r--r--content/blog/travels/_index.md5
-rw-r--r--content/blog/travels/new-zealand.md7
78 files changed, 1594 insertions, 0 deletions
diff --git a/content/blog/_index.md b/content/blog/_index.md
new file mode 100644
index 0000000..098f984
--- /dev/null
+++ b/content/blog/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Blog"
+menu:
+ main:
+ weight: 2
+---
diff --git a/content/blog/ansible/_index.md b/content/blog/ansible/_index.md
new file mode 100644
index 0000000..3730fd7
--- /dev/null
+++ b/content/blog/ansible/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Ansible"
+linkTitle: "Ansible"
+weight: 30
+---
diff --git a/content/blog/ansible/ansible-vault-example.md b/content/blog/ansible/ansible-vault-example.md
new file mode 100644
index 0000000..fb6ef45
--- /dev/null
+++ b/content/blog/ansible/ansible-vault-example.md
@@ -0,0 +1,36 @@
+---
+title: "Ansible vault example"
+linkTitle: "Ansible vault example"
+date: 2018-02-21
+description: >
+ Ansible vault example
+---
+
+Here is how to edit a vault protected file :
+{{< highlight sh >}}
+ansible-vault edit hostvars/blah.yml
+{{< / highlight >}}
+
+Here is how to put a multiline entry like a private key in vault (for a simple value, just don't use a `|`):
+
+{{< highlight yaml >}}
+ssl_key : |
+ ----- BEGIN PRIVATE KEY -----
+ blahblahblah
+ blahblahblah
+ ----- END PRIVATE KEY -----
+{{< /highlight >}}
+
+And here is how to use it in a task :
+{{< highlight yaml >}}
+- copy:
+ path: /etc/ssl/private.key
+ mode: 0400
+ content: '{{ ssl_key }}'
+{{< / highlight >}}
+
+To run a playbook, you will need to pass the `--ask-vault-pass` argument or to export an `ANSIBLE_VAULT_PASSWORD_FILE=/home/julien/.vault_pass.txt` variable (the file needs to contain a single line with your vault password here).
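+
+For example, a minimal invocation could look like this (assuming a playbook named `site.yml`, a hypothetical name) :
+{{< highlight sh >}}
+# prompt for the vault password interactively
+ansible-playbook --ask-vault-pass site.yml
+# or point ansible at a password file instead
+ANSIBLE_VAULT_PASSWORD_FILE=/home/julien/.vault_pass.txt ansible-playbook site.yml
+{{< /highlight >}}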
+
+## Resources
+
+ * how to break long lines in ansible : https://watson-wilson.ca/blog/2018/07/11/ansible-tips/
diff --git a/content/blog/ansible/custom-fact.md b/content/blog/ansible/custom-fact.md
new file mode 100644
index 0000000..21e3300
--- /dev/null
+++ b/content/blog/ansible/custom-fact.md
@@ -0,0 +1,89 @@
+---
+title: "Ansible custom facts"
+linkTitle: "Ansible custom facts"
+date: 2018-09-25
+description: >
+ How to write custom facts with ansible
+---
+
+Custom facts are actually quite easy to implement despite the lack of documentation about it.
+
+## How they work
+
+On any Ansible controlled host — that is, the remote machine that is being controlled and not the machine on which the playbook is run — you just need to create a directory at
+`/etc/ansible/facts.d`. Inside this directory, you can place one or more `*.fact` files. These are files that return JSON data, which will then be included in the raft of facts that
+Ansible gathers.
+
+The facts will be available to ansible at `hostvars.host.ansible_local.<fact_name>`.
+
+## A simple example
+
+Here is the simplest example of a fact, let's suppose we make it `/etc/ansible/facts.d/mysql.fact` :
+{{< highlight sh >}}
+#!/bin/sh
+set -eu
+
+echo '{"password": "xxxxxx"}'
+{{< /highlight >}}
+
+This will give you the fact `hostvars.host.ansible_local.mysql.password` for this machine.
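+
+As a sanity check, you can display the local facts of a host with an ad-hoc call to the setup module (assuming an inventory host named `myhost`, a hypothetical name) :
+{{< highlight sh >}}
+ansible myhost -m setup -a 'filter=ansible_local'
+{{< /highlight >}}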
+
+## A more complex example
+
+A more interesting example is something I use with small webapps. In the container that hosts the frontend I use a small ansible role to generate a mysql password on its first run, and
+provision a database with a user that has access to it on a mysql server. This fact ensures that subsequent runs stay idempotent. Here is how it works.
+
+First the fact from before, only slightly modified :
+{{< highlight sh >}}
+#!/bin/sh
+set -eu
+
+echo '{"password": "{{mysql_password}}"}'
+{{< /highlight >}}
+
+This fact is deployed with the following tasks :
+{{< highlight yaml >}}
+- name: Generate a password for mysql database connections if there is none
+ set_fact: mysql_password="{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}"
+ when: (ansible_local.mysql_client|default({})).password is undefined
+
+- name: Deploy mysql client ansible fact to handle the password
+ template:
+ src: ../templates/mysql_client.fact
+ dest: /etc/ansible/facts.d/
+ owner: root
+ mode: 0500
+ when: (ansible_local.mysql_client|default({})).password is undefined
+
+- name: reload ansible_local
+ setup: filter=ansible_local
+ when: (ansible_local.mysql_client|default({})).password is undefined
+
+- name: Ensures mysql database exists
+ mysql_db:
+ name: '{{ansible_hostname}}'
+ state: present
+ delegate_to: "{{mysql_server}}"
+
+- name: Ensures mysql user exists
+ mysql_user:
+ name: '{{ansible_hostname}}'
+ host: '{{ansible_hostname}}'
+ priv: '{{ansible_hostname}}.*:ALL'
+ password: '{{ansible_local.mysql_client.password}}'
+ state: present
+ delegate_to: '{{mysql_server}}'
+{{< /highlight >}}
+
+## Caveat : a fact you deploy is not immediately available
+
+Note that installing a fact does not make it available before the next fact gathering run on the host. This can be problematic, especially if you rely on fact caching to speed up ansible. Here
+is how to make ansible reload facts using the setup task (if you paid attention, you already saw me use it above).
+{{< highlight yaml >}}
+- name: reload ansible_local
+ setup: filter=ansible_local
+{{< /highlight >}}
+
+## References
+
+- https://medium.com/@jezhalford/ansible-custom-facts-1e1d1bf65db8
diff --git a/content/blog/ansible/dump-all-vars.md b/content/blog/ansible/dump-all-vars.md
new file mode 100644
index 0000000..d5991a3
--- /dev/null
+++ b/content/blog/ansible/dump-all-vars.md
@@ -0,0 +1,38 @@
+---
+title: "Dump all ansible variables"
+linkTitle: "Dump all ansible variables"
+date: 2019-10-15
+description: >
+ How to dump all variables used by ansible
+---
+
+Here is the task to use in order to achieve that :
+
+{{< highlight yaml >}}
+- name: Dump all vars
+ action: template src=dumpall.j2 dest=ansible.all
+{{< /highlight >}}
+
+And here is the template to use with it :
+
+{{< highlight jinja >}}
+Module Variables ("vars"):
+--------------------------------
+{{ vars | to_nice_json }}
+
+Environment Variables ("environment"):
+--------------------------------
+{{ environment | to_nice_json }}
+
+GROUP NAMES Variables ("group_names"):
+--------------------------------
+{{ group_names | to_nice_json }}
+
+GROUPS Variables ("groups"):
+--------------------------------
+{{ groups | to_nice_json }}
+
+HOST Variables ("hostvars"):
+--------------------------------
+{{ hostvars | to_nice_json }}
+{{< /highlight >}}
diff --git a/content/blog/cfengine/_index.md b/content/blog/cfengine/_index.md
new file mode 100644
index 0000000..8b5885c
--- /dev/null
+++ b/content/blog/cfengine/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Cfengine"
+linkTitle: "Cfengine"
+weight: 40
+---
diff --git a/content/blog/cfengine/leveraging-yaml.md b/content/blog/cfengine/leveraging-yaml.md
new file mode 100644
index 0000000..c1132b2
--- /dev/null
+++ b/content/blog/cfengine/leveraging-yaml.md
@@ -0,0 +1,153 @@
+---
+title: "Leveraging yaml with cfengine"
+linkTitle: "Leveraging yaml with cfengine"
+date: 2018-09-25
+description: >
+ How to leverage yaml inventory files with cfengine
+---
+
+CFEngine has core support for JSON and YAML. You can use this support to read, access, and merge JSON and YAML files and use these to keep policy files internal and simple. You
+access the data using the usual cfengine standard primitives.
+
+The use case below lacks a bit of error control and argument validation; it will fail miserably if the YAML file is invalid.
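+
+A quick sanity check can be done outside of cfengine before a run; this is a minimal sketch assuming python3 with PyYAML is available on the host :
+{{< highlight sh >}}
+python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' cmdb/hosts/andromeda.yaml && echo OK
+{{< /highlight >}}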
+
+## Example yaml
+
+In `cmdb/hosts/andromeda.yaml` we describe some properties of a host named andromeda:
+
+{{< highlight yaml >}}
+domain: adyxax.org
+host_interface: dummy0
+host_ip: "10.1.0.255"
+
+tunnels:
+ collab:
+ port: 1195
+ ip: "10.1.0.15"
+ peer: "10.1.0.14"
+ remote_host: collab.example.net
+ remote_port: 1199
+ legend:
+ port: 1194
+ ip: "10.1.0.3"
+ peer: "10.1.0.2"
+ remote_host: legend.adyxax.org
+ remote_port: 1195
+{{< /highlight >}}
+
+## Reading the yaml
+
+I am bundling the values in a common bundle, accessible globally. This is one of the first bundles processed in the order my policy files are loaded. This is just an extract, you can load multiple files and merge them to distribute common
+settings :
+{{< highlight yaml >}}
+bundle common g
+{
+ vars:
+ has_host_data::
+ "host_data" data => readyaml("$(sys.inputdir)/cmdb/hosts/$(sys.host).yaml", 100k);
+ classes:
+ any::
+ "has_host_data" expression => fileexists("$(sys.inputdir)/cmdb/hosts/$(sys.host).yaml");
+}
+{{< /highlight >}}
+
+## Using the data
+
+### Cfengine agent bundle
+
+We access the data using the global g.host_data variable, here is a complete example :
+{{< highlight yaml >}}
+bundle agent openvpn
+{
+ vars:
+ any::
+ "tunnels" slist => getindices("g.host_data[tunnels]");
+ files:
+ any::
+ "/etc/openvpn/common.key"
+ create => "true",
+ edit_defaults => empty,
+ perms => system_owned("440"),
+ copy_from => local_dcp("$(sys.inputdir)/templates/openvpn/common.key.cftpl"),
+ classes => if_repaired("openvpn_common_key_repaired");
+ methods:
+ any::
+ "any" usebundle => install_package("$(this.bundle)", "openvpn");
+ "any" usebundle => openvpn_tunnel("$(tunnels)");
+ services:
+ linux::
+ "openvpn@$(tunnels)"
+ service_policy => "start",
+ classes => if_repaired("tunnel_$(tunnels)_service_repaired");
+ commands:
+ any::
+ "/usr/sbin/service openvpn@$(tunnels) restart" classes => if_repaired("tunnel_$(tunnels)_service_repaired"), ifvarclass => "openvpn_common_key_repaired";
+ reports:
+ any::
+ "$(this.bundle): common.key repaired" ifvarclass => "openvpn_common_key_repaired";
+ "$(this.bundle): $(tunnels) service repaired" ifvarclass => "tunnel_$(tunnels)_service_repaired";
+}
+
+bundle agent openvpn_tunnel(tunnel)
+{
+ classes:
+ any::
+ "has_remote" and => { isvariable("g.host_data[tunnels][$(tunnel)][remote_host]"), isvariable("g.host_data[tunnels][$(tunnel)][remote_port]") };
+ files:
+ any::
+ "/etc/openvpn/$(tunnel).conf"
+ create => "true",
+ edit_defaults => empty,
+ perms => system_owned("440"),
+ edit_template => "$(sys.inputdir)/templates/openvpn/tunnel.conf.cftpl",
+ template_method => "cfengine",
+ classes => if_repaired("openvpn_$(tunnel)_conf_repaired");
+ commands:
+ any::
+ "/usr/sbin/service openvpn@$(tunnel) restart" classes => if_repaired("tunnel_$(tunnel)_service_repaired"), ifvarclass => "openvpn_$(tunnel)_conf_repaired";
+ reports:
+ any::
+ "$(this.bundle): $(tunnel).conf repaired" ifvarclass => "openvpn_$(tunnel)_conf_repaired";
+ "$(this.bundle): $(tunnel) service repaired" ifvarclass => "tunnel_$(tunnel)_service_repaired";
+}
+{{< /highlight >}}
+
+### Template file
+
+Templates can reference the g.host_data too, like in the following :
+{{< highlight cfg >}}
+[%CFEngine BEGIN %]
+proto udp
+port $(g.host_data[tunnels][$(openvpn_tunnel.tunnel)][port])
+dev-type tun
+dev tun_$(openvpn_tunnel.tunnel)
+comp-lzo
+script-security 2
+
+ping 10
+ping-restart 20
+ping-timer-rem
+persist-tun
+persist-key
+
+cipher AES-128-CBC
+
+secret /etc/openvpn/common.key
+ifconfig $(g.host_data[tunnels][$(openvpn_tunnel.tunnel)][ip]) $(g.host_data[tunnels][$(openvpn_tunnel.tunnel)][peer])
+
+user nobody
+[%CFEngine centos:: %]
+group nobody
+[%CFEngine ubuntu:: %]
+group nogroup
+
+[%CFEngine has_remote:: %]
+remote $(g.host_data[tunnels][$(openvpn_tunnel.tunnel)][remote_host]) $(g.host_data[tunnels][$(openvpn_tunnel.tunnel)][remote_port])
+
+[%CFEngine END %]
+{{< /highlight >}}
+
+## References
+- https://docs.cfengine.com/docs/master/examples-tutorials-json-yaml-support-in-cfengine.html
+- https://docs.cfengine.com/docs/3.10/reference-functions-readyaml.html
+- https://docs.cfengine.com/docs/3.10/reference-functions-mergedata.html
diff --git a/content/blog/commands/_index.md b/content/blog/commands/_index.md
new file mode 100644
index 0000000..c061e46
--- /dev/null
+++ b/content/blog/commands/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Commands"
+linkTitle: "Commands"
+weight: 40
+---
diff --git a/content/blog/commands/asterisk-call-you.md b/content/blog/commands/asterisk-call-you.md
new file mode 100644
index 0000000..7dd65f3
--- /dev/null
+++ b/content/blog/commands/asterisk-call-you.md
@@ -0,0 +1,11 @@
+---
+title: "List active calls on asterisk"
+linkTitle: "List active calls on asterisk"
+date: 2018-09-25
+description: >
+ How to show active calls on an asterisk system
+---
+
+{{< highlight sh >}}
+watch -d -n1 'asterisk -rx "core show channels"'
+{{< /highlight >}}
diff --git a/content/blog/commands/asterisk-list-active-calls.md b/content/blog/commands/asterisk-list-active-calls.md
new file mode 100644
index 0000000..73c712e
--- /dev/null
+++ b/content/blog/commands/asterisk-list-active-calls.md
@@ -0,0 +1,14 @@
+---
+title: "How to have asterisk call you into a meeting"
+linkTitle: "How to have asterisk call you into a meeting"
+date: 2018-09-25
+description: >
+ How to have asterisk itself call you into a meeting
+---
+
+At alterway we sometimes have DTMF problems that prevent my mobile from joining a conference room. Here is something I use to have asterisk call me
+and place me inside the room :
+
+{{< highlight sh >}}
+channel originate SIP/numlog/06XXXXXXXX application MeetMe 85224,M,secret
+{{< /highlight >}}
diff --git a/content/blog/commands/busybox-web-server.md b/content/blog/commands/busybox-web-server.md
new file mode 100644
index 0000000..37f9ac6
--- /dev/null
+++ b/content/blog/commands/busybox-web-server.md
@@ -0,0 +1,13 @@
+---
+title: "Busybox web server"
+linkTitle: "Busybox web server"
+date: 2019-04-16
+description: >
+ Busybox web server
+---
+
+If you have been using things like `python -m SimpleHTTPServer`, here is something even simpler and more lightweight to use :
+
+{{< highlight sh >}}
+busybox httpd -vfp 80
+{{< /highlight >}}
diff --git a/content/blog/commands/capture-desktop-video.md b/content/blog/commands/capture-desktop-video.md
new file mode 100644
index 0000000..f56572a
--- /dev/null
+++ b/content/blog/commands/capture-desktop-video.md
@@ -0,0 +1,13 @@
+---
+title: "Capture a video of your desktop"
+linkTitle: "Capture a video of your desktop"
+date: 2011-11-20
+description: >
+ Capture a video of your desktop
+---
+
+You can capture a video of your linux desktop with ffmpeg :
+
+{{< highlight sh >}}
+ffmpeg -f x11grab -s xga -r 25 -i :0.0 -sameq /tmp/out.mpg
+{{< /highlight >}}
diff --git a/content/blog/commands/clean-conntrack-states.md b/content/blog/commands/clean-conntrack-states.md
new file mode 100644
index 0000000..8a78930
--- /dev/null
+++ b/content/blog/commands/clean-conntrack-states.md
@@ -0,0 +1,17 @@
+---
+title: "Clean conntrack states"
+linkTitle: "Clean conntrack states"
+date: 2018-03-02
+description: >
+ Clean conntrack states
+---
+
+Here is an example of how to clean conntrack states that match a specific query on a linux firewall :
+
+{{< highlight sh >}}
+conntrack -L conntrack -p tcp --orig-dport 65372 | \
+while read _ _ _ _ src dst sport dport _; do
+    conntrack -D conntrack --proto tcp --orig-src ${src#*=} --orig-dst ${dst#*=} \
+        --sport ${sport#*=} --dport ${dport#*=}
+done
+{{< /highlight >}}
diff --git a/content/blog/commands/date.md b/content/blog/commands/date.md
new file mode 100644
index 0000000..e0b2bcc
--- /dev/null
+++ b/content/blog/commands/date.md
@@ -0,0 +1,14 @@
+---
+title: "Convert unix timestamp to readable date"
+linkTitle: "Convert unix timestamp to readable date"
+date: 2011-01-06
+description: >
+ Convert unix timestamp to readable date
+---
+
+As I somehow have a hard time remembering this simple date flag since I rarely need it, I decided to write it down here :
+
+{{< highlight sh >}}
+$ date -d @1294319676
+Thu Jan 6 13:14:36 GMT 2011
+{{< /highlight >}}
diff --git a/content/blog/commands/dmidecode.md b/content/blog/commands/dmidecode.md
new file mode 100644
index 0000000..c7bcc1f
--- /dev/null
+++ b/content/blog/commands/dmidecode.md
@@ -0,0 +1,20 @@
+---
+title: "DMIdecode"
+linkTitle: "DMIdecode"
+date: 2011-02-16
+description: >
+ DMIdecode
+---
+
+Use dmidecode to obtain hardware information.
+
+## Most useful commands
+
+- System information: `dmidecode -t1`
+- Chassis information: `dmidecode -t3`
+- CPU information: `dmidecode -t4`
+- RAM information: `dmidecode -t17`
+
+## Sources
+
+- `man 8 dmidecode`
diff --git a/content/blog/commands/find-hardlinks.md b/content/blog/commands/find-hardlinks.md
new file mode 100644
index 0000000..dd1b424
--- /dev/null
+++ b/content/blog/commands/find-hardlinks.md
@@ -0,0 +1,12 @@
+---
+title: "Find hardlinks to a same file"
+linkTitle: "Find hardlinks to a same file"
+date: 2018-03-02
+description: >
+ Find hardlinks to a same file
+---
+
+{{< highlight sh >}}
+find . -samefile /path/to/file
+{{< /highlight >}}
+
diff --git a/content/blog/commands/find-inodes-used.md b/content/blog/commands/find-inodes-used.md
new file mode 100644
index 0000000..d9965a4
--- /dev/null
+++ b/content/blog/commands/find-inodes-used.md
@@ -0,0 +1,12 @@
+---
+title: "Find where inodes are used"
+linkTitle: "Find where inodes are used"
+date: 2018-04-25
+description: >
+ Find where inodes are used
+---
+
+{{< highlight sh >}}
+find . -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
+{{< /highlight >}}
+
diff --git a/content/blog/commands/git-import-commits.md b/content/blog/commands/git-import-commits.md
new file mode 100644
index 0000000..5ec2bc1
--- /dev/null
+++ b/content/blog/commands/git-import-commits.md
@@ -0,0 +1,13 @@
+---
+title: "Import commits from one git repo to another"
+linkTitle: "Import commits from one git repo to another"
+date: 2018-09-25
+description: >
+ Import commits from one git repo to another
+---
+
+This imports commits from a repo in the `../masterfiles` folder and applies them to the repository inside the current folder :
+{{< highlight sh >}}
+(cd ../masterfiles/; git format-patch --stdout origin/master) | git am
+{{< /highlight >}}
+
diff --git a/content/blog/commands/git-rewrite-commit-history.md b/content/blog/commands/git-rewrite-commit-history.md
new file mode 100644
index 0000000..6d241ed
--- /dev/null
+++ b/content/blog/commands/git-rewrite-commit-history.md
@@ -0,0 +1,13 @@
+---
+title: "Rewrite a git commit history"
+linkTitle: "Rewrite a git commit history"
+date: 2018-03-05
+description: >
+ Rewrite a git commit history
+---
+
+Here is how to rewrite a git commit history, for example to remove a file :
+{{< highlight sh >}}
+git filter-branch --index-filter "git rm --cached --ignore-unmatch ${file}" --prune-empty --tag-name-filter cat -- --all
+{{< /highlight >}}
+
diff --git a/content/blog/commands/ipmi.md b/content/blog/commands/ipmi.md
new file mode 100644
index 0000000..93ca26d
--- /dev/null
+++ b/content/blog/commands/ipmi.md
@@ -0,0 +1,19 @@
+---
+title: "ipmitool"
+linkTitle: "ipmitool"
+date: 2018-03-05
+description: >
+ ipmitool
+---
+
+- launch ipmi shell : `ipmitool -H XX.XX.XX.XX -C3 -I lanplus -U <ipmi_user> shell`
+- launch ipmi remote text console : `ipmitool -H XX.XX.XX.XX -C3 -I lanplus -U <ipmi_user> sol activate`
+- Show local ipmi lan configuration : `ipmitool lan print`
+- Update local ipmi lan configuration :
+{{< highlight sh >}}
+ipmitool lan set 1 ipsrc static
+ipmitool lan set 1 ipaddr 10.31.149.39
+ipmitool lan set 1 netmask 255.255.255.0
+ipmitool mc reset cold
+{{< /highlight >}}
+
diff --git a/content/blog/commands/mdadm.md b/content/blog/commands/mdadm.md
new file mode 100644
index 0000000..1dbc3f8
--- /dev/null
+++ b/content/blog/commands/mdadm.md
@@ -0,0 +1,42 @@
+---
+title: "mdadm"
+linkTitle: "mdadm"
+date: 2011-11-15
+description: >
+ mdadm
+---
+
+## Watch the array status
+
+{{< highlight sh >}}
+watch -d -n10 mdadm --detail /dev/md127
+{{< /highlight >}}
+
+## Recovery from livecd
+
+{{< highlight sh >}}
+mdadm --examine --scan >> /etc/mdadm.conf
+mdadm --assemble --scan /dev/md/root
+mount /dev/md127 /mnt # or vgscan...
+{{< /highlight >}}
+
+If auto detection does not work, you can still assemble an array manually :
+{{< highlight sh >}}
+mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
+{{< /highlight >}}
+
+## Resync an array
+
+First, rigorously check the output of `cat /proc/mdstat`, then :
+{{< highlight sh >}}
+mdadm --manage --re-add /dev/md0 /dev/sdb1
+{{< /highlight >}}
+
+## Destroy an array
+
+{{< highlight sh >}}
+mdadm --stop /dev/md0
+mdadm --zero-superblock /dev/sda
+mdadm --zero-superblock /dev/sdb
+{{< /highlight >}}
+
diff --git a/content/blog/commands/megacli.md b/content/blog/commands/megacli.md
new file mode 100644
index 0000000..8eb32a8
--- /dev/null
+++ b/content/blog/commands/megacli.md
@@ -0,0 +1,11 @@
+---
+title: "MegaCLI"
+linkTitle: "MegaCLI"
+date: 2018-03-05
+description: >
+ MegaCLI for dell hardware investigations
+---
+
+- `megacli -LDInfo -LALL -aALL|grep state`
+- `MegaCli -PDlist -a0|less`
+
diff --git a/content/blog/commands/omreport.md b/content/blog/commands/omreport.md
new file mode 100644
index 0000000..b3d0ffd
--- /dev/null
+++ b/content/blog/commands/omreport.md
@@ -0,0 +1,20 @@
+---
+title: "omreport"
+linkTitle: "omreport"
+date: 2018-03-05
+description: >
+ omreport
+---
+
+## Your raid status at a glance
+
+- `omreport storage pdisk controller=0 vdisk=0|grep -E '^ID|State|Capacity|Part Number'|grep -B1 -A2 Failed`
+
+## Other commands
+
+{{< highlight sh >}}
+omreport storage vdisk
+omreport storage pdisk controller=0 vdisk=0
+omreport storage pdisk controller=0 pdisk=0:0:4
+{{< /highlight >}}
+
diff --git a/content/blog/commands/qemu-nbd.md b/content/blog/commands/qemu-nbd.md
new file mode 100644
index 0000000..ea09658
--- /dev/null
+++ b/content/blog/commands/qemu-nbd.md
@@ -0,0 +1,17 @@
+---
+title: "qemu-nbd"
+linkTitle: "qemu-nbd"
+date: 2019-07-01
+description: >
+ qemu-nbd
+---
+
+{{< highlight sh >}}
+modprobe nbd max_part=8
+qemu-nbd -c /dev/nbd0 image.img
+mount /dev/nbd0p1 /mnt # or vgscan && vgchange -ay
+[...]
+umount /mnt
+qemu-nbd -d /dev/nbd0
+{{< /highlight >}}
+
diff --git a/content/blog/commands/qemu.md b/content/blog/commands/qemu.md
new file mode 100644
index 0000000..b3beb2c
--- /dev/null
+++ b/content/blog/commands/qemu.md
@@ -0,0 +1,31 @@
+---
+title: "Qemu"
+linkTitle: "Qemu"
+date: 2019-06-10
+description: >
+ Qemu
+---
+
+## Quickly launch a qemu vm with local qcow as hard drive
+
+In this example I am using the docker0 bridge because I do not want to have to modify my shorewall config, but any proper bridge would do :
+{{< highlight sh >}}
+ip tuntap add tap0 mode tap
+brctl addif docker0 tap0
+qemu-img create -f qcow2 obsd.qcow2 10G
+qemu-system-x86_64 -curses -drive file=install65.fs,format=raw -drive file=obsd.qcow2 -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0
+qemu-system-x86_64 -curses -drive file=obsd.qcow2 -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0
+{{< /highlight >}}
+
+The first qemu command runs the installer, the second one just runs the vm.
+
+## Launch a qemu vm with your local hard drive
+
+My use case for this is to install openbsd on a server from a hosting provider that doesn't provide an openbsd installer :
+{{< highlight sh >}}
+qemu-system-x86_64 -curses -drive file=miniroot65.fs -drive file=/dev/sda -net nic -net user
+{{< /highlight >}}
+
+## Resources
+
+- https://github.com/dodoritfort/OpenBSD/wiki/Installer-OpenBSD-sur-votre-serveur-Kimsufi
diff --git a/content/blog/commands/rrdtool.md b/content/blog/commands/rrdtool.md
new file mode 100644
index 0000000..33f54dc
--- /dev/null
+++ b/content/blog/commands/rrdtool.md
@@ -0,0 +1,21 @@
+---
+title: "rrdtool"
+linkTitle: "rrdtool"
+date: 2018-09-25
+description: >
+ rrdtool
+---
+
+## Graph manually
+
+{{< highlight sh >}}
+for i in `ls`; do
+ rrdtool graph $i.png -w 1024 -h 768 -a PNG --slope-mode --font DEFAULT:7: \
+ --start -3days --end now DEF:in=$i:netin:MAX DEF:out=$i:netout:MAX \
+ LINE1:in#0000FF:"in" LINE1:out#00FF00:"out"
+done
+{{< /highlight >}}
+
+## References
+
+- https://calomel.org/rrdtool.html
diff --git a/content/blog/debian/_index.md b/content/blog/debian/_index.md
new file mode 100644
index 0000000..4a0403f
--- /dev/null
+++ b/content/blog/debian/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Debian"
+linkTitle: "Debian"
+weight: 40
+---
diff --git a/content/blog/debian/error-during-signature-verification.md b/content/blog/debian/error-during-signature-verification.md
new file mode 100644
index 0000000..87d7e3a
--- /dev/null
+++ b/content/blog/debian/error-during-signature-verification.md
@@ -0,0 +1,15 @@
+---
+title: "Error occurred during the signature verification"
+linkTitle: "Error occurred during the signature verification"
+date: 2015-02-27
+description: >
+ Error occurred during the signature verification
+---
+
+Here is how to fix the apt-get “Error occurred during the signature verification” :
+{{< highlight sh >}}
+cd /var/lib/apt
+mv lists lists.old
+mkdir -p lists/partial
+aptitude update
+{{< /highlight >}}
diff --git a/content/blog/debian/force-package-removal.md b/content/blog/debian/force-package-removal.md
new file mode 100644
index 0000000..c1a4862
--- /dev/null
+++ b/content/blog/debian/force-package-removal.md
@@ -0,0 +1,14 @@
+---
+title: "Force package removal"
+linkTitle: "Force package removal"
+date: 2015-01-27
+description: >
+ Force package removal
+---
+
+Here is how to force package removal when post-uninstall script fails :
+{{< highlight sh >}}
+dpkg --purge --force-all <package>
+{{< /highlight >}}
+
+There is another option if you need to be smarter or if it is a pre-uninstall script that fails. Look at `/var/lib/dpkg/info/<package>.*inst`, locate the line that fails, comment it out and try to purge again. Repeat until success!
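+
+A hypothetical session for a package named `foo` whose prerm script fails could look like this :
+{{< highlight sh >}}
+vi /var/lib/dpkg/info/foo.prerm   # comment out the failing line
+dpkg --purge foo                  # then try to purge again
+{{< /highlight >}}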
diff --git a/content/blog/debian/no-public-key-error.md b/content/blog/debian/no-public-key-error.md
new file mode 100644
index 0000000..15e9a01
--- /dev/null
+++ b/content/blog/debian/no-public-key-error.md
@@ -0,0 +1,12 @@
+---
+title: "Fix the no public key available error"
+linkTitle: "Fix the no public key available error"
+date: 2016-01-27
+description: >
+ Fix the no public key available error
+---
+
+Here is how to fix the no public key available error :
+{{< highlight sh >}}
+apt-key adv --keyserver keyserver.ubuntu.com --recv-keys KEYID
+{{< /highlight >}}
diff --git a/content/blog/docker/_index.md b/content/blog/docker/_index.md
new file mode 100644
index 0000000..18c3c33
--- /dev/null
+++ b/content/blog/docker/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Docker"
+linkTitle: "Docker"
+weight: 40
+---
diff --git a/content/blog/docker/cleaning.md b/content/blog/docker/cleaning.md
new file mode 100644
index 0000000..f36bbd7
--- /dev/null
+++ b/content/blog/docker/cleaning.md
@@ -0,0 +1,12 @@
+---
+title: "Cleaning a docker host"
+linkTitle: "Cleaning a docker host"
+date: 2018-01-29
+description: >
+ How to retrieve storage space by cleaning a docker host
+---
+
+Be careful, this will delete any stopped container and remove any locally unused images and tags :
+{{< highlight sh >}}
+docker system prune -f -a
+{{< /highlight >}}
diff --git a/content/blog/docker/docker-compose-bridge.md b/content/blog/docker/docker-compose-bridge.md
new file mode 100644
index 0000000..16a823d
--- /dev/null
+++ b/content/blog/docker/docker-compose-bridge.md
@@ -0,0 +1,31 @@
+---
+title: "Docker compose predictable bridge"
+linkTitle: "Docker compose predictable bridge"
+date: 2018-09-25
+description: >
+ How to use a predefined bridge with docker compose
+---
+
+By default, docker-compose will create a network with a randomly named bridge. If you are like me using a strict firewall on all your machines, this just cannot work.
+
+You need to put your services in `network_mode: "bridge"` and add a custom `network` entry like :
+
+{{< highlight yaml >}}
+version: '3.0'
+
+services:
+ sshportal:
+ image: moul/sshportal
+ environment:
+ - SSHPORTAL_DEFAULT_ADMIN_INVITE_TOKEN=integration
+ command: server --debug
+ depends_on:
+ - testserver
+ ports:
+ - 2222
+ network_mode: "bridge"
+networks:
+ default:
+ external:
+ name: bridge
+{{< /highlight >}}
diff --git a/content/blog/docker/migrate-data-volume.md b/content/blog/docker/migrate-data-volume.md
new file mode 100644
index 0000000..4f54394
--- /dev/null
+++ b/content/blog/docker/migrate-data-volume.md
@@ -0,0 +1,15 @@
+---
+title: "Migrate a data volume"
+linkTitle: "Migrate a data volume"
+date: 2018-01-30
+description: >
+ How to migrate a data volume
+---
+
+Here is how to migrate a data volume between two of your hosts. An rsync of the proper `/var/lib/docker/volumes` subfolder would work just as well, but here is a fun way to do it with docker and pipes :
+{{< highlight sh >}}
+export VOLUME=tiddlywiki
+export DEST=10.1.0.242
+docker run --rm -v $VOLUME:/from alpine ash -c "cd /from ; tar -cpf - . " \
+| ssh $DEST "docker run --rm -i -v $VOLUME:/to alpine ash -c 'cd /to ; tar -xpf - ' "
+{{< /highlight >}}
diff --git a/content/blog/docker/shell-usage-in-dockerfile.md b/content/blog/docker/shell-usage-in-dockerfile.md
new file mode 100644
index 0000000..868fe21
--- /dev/null
+++ b/content/blog/docker/shell-usage-in-dockerfile.md
@@ -0,0 +1,16 @@
+---
+title: "Shell usage in dockerfile"
+linkTitle: "Shell usage in dockerfile"
+date: 2019-02-04
+description: >
+ How to use a proper shell in a dockerfile
+---
+
+The default shell is `["/bin/sh", "-c"]`, which doesn't catch failures in the middle of a pipe when chaining commands. To catch errors when using pipes, use this :
+
+{{< highlight dockerfile >}}
+SHELL ["/bin/bash", "-eux", "-o", "pipefail", "-c"]
+{{< /highlight >}}
+
+## References
+- https://bearstech.com/societe/blog/securiser-et-optimiser-notre-liste-des-bonnes-pratiques-liees-aux-dockerfiles/
diff --git a/content/blog/freebsd/_index.md b/content/blog/freebsd/_index.md
new file mode 100644
index 0000000..b93f302
--- /dev/null
+++ b/content/blog/freebsd/_index.md
@@ -0,0 +1,5 @@
+---
+title: "FreeBSD"
+linkTitle: "FreeBSD"
+weight: 40
+---
diff --git a/content/blog/freebsd/activate-the-serial-console.md b/content/blog/freebsd/activate-the-serial-console.md
new file mode 100644
index 0000000..210bacc
--- /dev/null
+++ b/content/blog/freebsd/activate-the-serial-console.md
@@ -0,0 +1,11 @@
+---
+title: "Activate the serial console"
+linkTitle: "Activate the serial console"
+date: 2018-01-03
+description: >
+ How to activate the serial console
+---
+
+Here is how to activate the serial console on a FreeBSD server :
+- Append `console="comconsole"` to `/boot/loader.conf`
+- Append or update existing line with `ttyd0` in `/etc/ttys` to : `ttyd0 "/usr/libexec/getty std.9600" vt100 on secure`
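+
+A minimal sketch of those two edits, assuming neither is configured yet :
+{{< highlight sh >}}
+echo 'console="comconsole"' >> /boot/loader.conf
+# then make sure the ttyd0 line in /etc/ttys reads :
+# ttyd0 "/usr/libexec/getty std.9600" vt100 on secure
+{{< /highlight >}}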
diff --git a/content/blog/freebsd/change-the-ip-address-of-a-running-jail.md b/content/blog/freebsd/change-the-ip-address-of-a-running-jail.md
new file mode 100644
index 0000000..db1f6fe
--- /dev/null
+++ b/content/blog/freebsd/change-the-ip-address-of-a-running-jail.md
@@ -0,0 +1,13 @@
+---
+title: "Change the ip address of a running jail"
+linkTitle: "Change the ip address of a running jail"
+date: 2018-09-25
+description: >
+ How to change the ip address of a running jail
+---
+
+Here is how to change the ip address of a running jail :
+
+{{< highlight sh >}}
+jail -m ip4.addr="192.168.1.87,192.168.1.88" jid=1
+{{< /highlight >}}
diff --git a/content/blog/freebsd/clean-install-does-not-boot.md b/content/blog/freebsd/clean-install-does-not-boot.md
new file mode 100644
index 0000000..1252a28
--- /dev/null
+++ b/content/blog/freebsd/clean-install-does-not-boot.md
@@ -0,0 +1,14 @@
+---
+title: "Clean install does not boot"
+linkTitle: "Clean install does not boot"
+date: 2018-01-02
+description: >
+ How to fix a clean install that refuses to boot
+---
+
+I installed a fresh FreeBSD server today, and to my surprise it refused to boot. I had to do the following from my liveUSB :
+
+{{< highlight sh >}}
+gpart set -a active /dev/ada0
+gpart set -a bootme -i 1 /dev/ada0
+{{< /highlight >}}
diff --git a/content/blog/gentoo/_index.md b/content/blog/gentoo/_index.md
new file mode 100644
index 0000000..1eee11b
--- /dev/null
+++ b/content/blog/gentoo/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Gentoo"
+linkTitle: "Gentoo"
+weight: 40
+---
diff --git a/content/blog/gentoo/get-zoom-to-work.md b/content/blog/gentoo/get-zoom-to-work.md
new file mode 100644
index 0000000..6e4697e
--- /dev/null
+++ b/content/blog/gentoo/get-zoom-to-work.md
@@ -0,0 +1,24 @@
+---
+title: "Get zoom to work"
+linkTitle: "Get zoom to work"
+date: 2018-01-02
+description: >
+ How to get the zoom video conferencing tool to work on gentoo
+---
+
+The zoom video conferencing tool works on gentoo, but since it is not integrated in a desktop environment on my machine (I am running an i3 window manager) I cannot authenticate on the google corporate domain where I work. Here is how to work
+around that.
+
+## Running the client
+
+{{< highlight sh >}}
+./ZoomLauncher
+{{< /highlight >}}
+
+## Working around the "zoommtg address not understood" error
+
+When you try to authenticate you will have your web browser pop up with a link it cannot interpret. You need to get the `zoommtg://.*` thing and run it in another ZoomLauncher (do not close the zoom process that spawned this authentication link
+or the authentication will fail) :
+{{< highlight sh >}}
+./ZoomLauncher 'zoommtg://zoom.us/google?code=XXXXXXXX'
+{{< /highlight >}}
diff --git a/content/blog/gentoo/steam.md b/content/blog/gentoo/steam.md
new file mode 100644
index 0000000..b952205
--- /dev/null
+++ b/content/blog/gentoo/steam.md
@@ -0,0 +1,13 @@
+---
+title: "Steam"
+linkTitle: "Steam"
+date: 2019-02-16
+description: >
+ How to make steam work seamlessly on gentoo with a chroot
+---
+
+I am not using a multilib profile on gentoo (I use amd64 only everywhere), so when the time came to install steam I had to get a little creative. Overall I believe this is the perfect
+way to install and use steam as it contains it cleanly without limiting its functionality. In particular sound works, as does the hardware acceleration in games. I tried to
+achieve that with containers but didn't quite make it work as well as this chroot setup.
+
+[Here is the link to the full article describing how I achieved that.]({{< relref "/docs/gentoo/steam.md" >}})
diff --git a/content/blog/kubernetes/_index.md b/content/blog/kubernetes/_index.md
new file mode 100644
index 0000000..3545b68
--- /dev/null
+++ b/content/blog/kubernetes/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Kubernetes"
+linkTitle: "Kubernetes"
+weight: 40
+---
diff --git a/content/blog/kubernetes/get_key_and_certificae.md b/content/blog/kubernetes/get_key_and_certificae.md
new file mode 100644
index 0000000..c66cac8
--- /dev/null
+++ b/content/blog/kubernetes/get_key_and_certificae.md
@@ -0,0 +1,10 @@
+---
+title: "Get tls certificate and key from a kubernetes secret"
+date: 2020-08-06
+---
+
+My use case is to deploy a wildcard certificate that was previously handled by an acme.sh on my legacy lxd containers. Since moving parts of my services to kubernetes I have been using cert-manager to issue letsencrypt certificates. Since I am not done yet I looked into a way of getting a certificate out of kubernetes. Assuming we are working with a secret named `wild.adyxax.org-cert` and our namespace is named `legacy` :
+{{< highlight sh >}}
+kubectl -n legacy get secret wild.adyxax.org-cert -o json -o=jsonpath="{.data.tls\.crt}" | base64 -d > fullchain.cer
+kubectl -n legacy get secret wild.adyxax.org-cert -o json -o=jsonpath="{.data.tls\.key}" | base64 -d > adyxax.org.key
+{{< /highlight >}}
diff --git a/content/blog/kubernetes/pg_dump_restore.md b/content/blog/kubernetes/pg_dump_restore.md
new file mode 100644
index 0000000..9aafb63
--- /dev/null
+++ b/content/blog/kubernetes/pg_dump_restore.md
@@ -0,0 +1,24 @@
+---
+title: "Dump and restore a postgresql database on kubernetes"
+linkTitle: "Dump and restore a postgresql database"
+date: 2020-06-25
+---
+
+## Dumping
+Assuming we are working with a postgresql statefulset, our namespace is named `miniflux` and our master pod is named `db-postgresql-0`, trying to
+dump a database named `miniflux`:
+{{< highlight sh >}}
+export POSTGRES_PASSWORD=$(kubectl get secret --namespace miniflux db-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
+kubectl run db-postgresql-client --rm --tty -i --restart='Never' --namespace miniflux --image docker.io/bitnami/postgresql:11.8.0-debian-10-r19 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- pg_dump --host db-postgresql -U postgres -d miniflux > miniflux.sql-2020062501
+{{< /highlight >}}
+
+## Restoring
+
+Assuming we are working with a postgresql statefulset, our namespace is named `miniflux` and our master pod is named `db-postgresql-0`, trying to
+restore a database named `miniflux`:
+{{< highlight sh >}}
+kubectl -n miniflux cp miniflux.sql-2020062501 db-postgresql-0:/tmp/miniflux.sql
+kubectl -n miniflux exec -ti db-postgresql-0 -- psql -U postgres -d miniflux
+miniflux=# \i /tmp/miniflux.sql
+kubectl -n miniflux exec -ti db-postgresql-0 -- rm /tmp/miniflux.sql
+{{< /highlight >}}
diff --git a/content/blog/miscellaneous/_index.md b/content/blog/miscellaneous/_index.md
new file mode 100644
index 0000000..806622d
--- /dev/null
+++ b/content/blog/miscellaneous/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Miscellaneous"
+linkTitle: "Miscellaneous"
+weight: 40
+---
diff --git a/content/blog/miscellaneous/bacula-bareos.md b/content/blog/miscellaneous/bacula-bareos.md
new file mode 100644
index 0000000..a5fd0be
--- /dev/null
+++ b/content/blog/miscellaneous/bacula-bareos.md
@@ -0,0 +1,38 @@
+---
+title: "Some bacula/bareos commands"
+linkTitle: "Some bacula/bareos commands"
+date: 2018-01-10
+description: >
+ Some bacula/bareos commands
+---
+
+Bacula is a backup software and bareos is a fork of it. Here are some tips and solutions to specific problems.
+
+## Adjust an existing volume for pool configuration changes
+
+In bconsole, run the following commands and follow the prompts :
+{{< highlight sh >}}
+update pool from resource
+update all volumes in pool
+{{< /highlight >}}
+
+## Using bextract
+
+On the sd you need to have a valid device name with the path to your tape, then run :
+{{< highlight sh >}}
+bextract -V <volume names separated by |> <device-name> <directory-to-store-files>
+{{< /highlight >}}
+
+## Integer out of range sql error
+
+If you get an sql error `integer out of range` for an insert query in the catalog, check the id sequence for the table which had the error. For
+example with the basefiles table :
+{{< highlight sql >}}
+select nextval('basefiles_baseid_seq');
+{{< /highlight >}}
+
+You can then fix it with :
+{{< highlight sql >}}
+alter table BaseFiles alter column baseid set data type bigint;
+{{< /highlight >}}
diff --git a/content/blog/miscellaneous/bash-tcp-client.md b/content/blog/miscellaneous/bash-tcp-client.md
new file mode 100644
index 0000000..f10f22b
--- /dev/null
+++ b/content/blog/miscellaneous/bash-tcp-client.md
@@ -0,0 +1,15 @@
+---
+title: "Bash tcp client"
+linkTitle: "Bash tcp client"
+date: 2018-03-21
+description: >
+ Bash tcp client
+---
+
+There are some fun toys in bash. I would not rely on them for a production script, but here is one such thing :
+
+{{< highlight sh >}}
+exec 5<>/dev/tcp/10.1.0.254/8080
+echo -e "GET / HTTP/1.0\n" >&5
+cat <&5
+{{< /highlight >}}
diff --git a/content/blog/miscellaneous/boot-from-initramfs.md b/content/blog/miscellaneous/boot-from-initramfs.md
new file mode 100644
index 0000000..3d5c55e
--- /dev/null
+++ b/content/blog/miscellaneous/boot-from-initramfs.md
@@ -0,0 +1,16 @@
+---
+title: "Boot from initramfs shell"
+linkTitle: "Boot from initramfs shell"
+date: 2014-01-24
+description: >
+ Boot from initramfs shell
+---
+
+I had to finish booting from an initramfs shell, here is how I used `switch_root` to do so :
+
+{{< highlight sh >}}
+lvm vgscan
+lvm vgchange -ay vg
+mount -t ext4 /dev/mapper/vg-root /root
+exec switch_root -c /dev/console /root /sbin/init
+{{< /highlight >}}
diff --git a/content/blog/miscellaneous/building-rpms.md b/content/blog/miscellaneous/building-rpms.md
new file mode 100644
index 0000000..99667eb
--- /dev/null
+++ b/content/blog/miscellaneous/building-rpms.md
@@ -0,0 +1,29 @@
+---
+title: "Building rpm packages"
+linkTitle: "Building rpm packages"
+date: 2016-02-22
+description: >
+ Building rpm packages
+---
+
+Here is how to build an rpm package locally. Tested at the time on a centos 7.
+
+## Setup your environment
+
+First of all, you have to use a non-root account.
+
+ - Create the necessary directories : `mkdir -p ~/rpmbuild/{BUILD,RPMS,S{OURCE,PEC,RPM}S}`
+ - Tell rpmbuild where to build by adding the following in your `.rpmmacros` file : `echo -e "%_topdir\t$HOME/rpmbuild" >> ~/.rpmmacros`
+
+## Building package
+
+There are several ways to build a rpm, depending on what kind of stuff you have to deal with.
+
+### Building from a tar.gz archive containing a .spec file
+
+Run the following on your .tar.gz archive : `rpmbuild -tb memcached-1.4.0.tar.gz`. When the build process ends, you will find your package in a directory like `$HOME/rpmbuild/RPMS/x86_64/`, depending on your architecture.
+
+### Building from a spec file
+
+ - `rpmbuild -v -bb ./contrib/redhat/collectd.spec`
+ - If you are missing some dependencies : `rpmbuild -v -bb ./contrib/redhat/collectd.spec 2>&1 |awk '/is needed/ {print $1;}'|xargs yum install -y`
diff --git a/content/blog/miscellaneous/clean-old-centos-kernels.md b/content/blog/miscellaneous/clean-old-centos-kernels.md
new file mode 100644
index 0000000..eb49269
--- /dev/null
+++ b/content/blog/miscellaneous/clean-old-centos-kernels.md
@@ -0,0 +1,11 @@
+---
+title: "Clean old centos kernels"
+linkTitle: "Clean old centos kernels"
+date: 2016-02-03
+description: >
+ Clean old centos kernels
+---
+
+There is a setting in `/etc/yum.conf` that does exactly that : `installonly_limit=`. The value of this setting is the number of kernel versions that are kept when a new kernel is installed by yum. If the number of installed kernels becomes greater than this, the oldest one gets removed at the same time a new one is installed.
+
+This cleaning can also be done manually with a command that belongs to the yum-utils package : `package-cleanup --oldkernels --count=2`
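+
+For example, to keep only two kernels (a hypothetical value) :
+{{< highlight sh >}}
+# in /etc/yum.conf
+installonly_limit=2
+{{< /highlight >}}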
diff --git a/content/blog/miscellaneous/debug-disk-usage-postgresql.md b/content/blog/miscellaneous/debug-disk-usage-postgresql.md
new file mode 100644
index 0000000..827d69f
--- /dev/null
+++ b/content/blog/miscellaneous/debug-disk-usage-postgresql.md
@@ -0,0 +1,14 @@
+---
+title: "Investigate postgresql disk usage"
+linkTitle: "Investigate postgresql disk usage"
+date: 2015-11-24
+description: >
+ Investigate postgresql disk usage
+---
+
+## How to debug disk occupation in postgresql
+
+- get a database oid number from `ncdu` in `/var/lib/postgresql`
+- reconcile oid number and db name with : `select oid,datname from pg_database where oid=18595;`
+- Then in database : `select table_name,pg_relation_size(quote_ident(table_name)) from information_schema.tables where table_schema = 'public' order by 2;`
+
diff --git a/content/blog/miscellaneous/etc-update-alpine.md b/content/blog/miscellaneous/etc-update-alpine.md
new file mode 100644
index 0000000..dbc0824
--- /dev/null
+++ b/content/blog/miscellaneous/etc-update-alpine.md
@@ -0,0 +1,38 @@
+---
+title: "etc-update script for alpine linux"
+linkTitle: "etc-update script for alpine linux"
+date: 2019-04-02
+description: >
+ etc-update script for alpine linux
+---
+
+Alpine linux doesn't seem to have a tool to merge pending configuration changes, so I wrote one :
+{{< highlight sh >}}
+#!/bin/sh
+set -eu
+
+for new_file in $(find /etc -iname '*.apk-new'); do
+ current_file=${new_file%.apk-new}
+ echo "===== New config file version for $current_file ====="
+ diff ${current_file} ${new_file} || true
+ while true; do
+ echo "===== (r)eplace file with update? (d)iscard update? (m)erge files? (i)gnore ====="
+ PS2="r/d/m/i? "
+ read choice
+ case ${choice} in
+ r)
+ mv ${new_file} ${current_file}
+ break;;
+ d)
+ rm -f ${new_file}
+ break;;
+ m)
+ vimdiff ${new_file} ${current_file}
+ break;;
+ i)
+ break;;
+ esac
+ done
+done
+{{< /highlight >}}
+
diff --git a/content/blog/miscellaneous/fstab.md b/content/blog/miscellaneous/fstab.md
new file mode 100644
index 0000000..3b7cded
--- /dev/null
+++ b/content/blog/miscellaneous/fstab.md
@@ -0,0 +1,9 @@
+---
+title: "Use spaces in fstab"
+linkTitle: "Use spaces in fstab"
+date: 2011-09-29
+description: >
+ How to use spaces in a folder name in fstab
+---
+
+Here is how to use spaces in a folder name in fstab : you put `\040` where you want a space.
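+
+For example, a hypothetical fstab line for a mount point containing a space (`/mnt/usb key`) would look like :
+{{< highlight sh >}}
+/dev/sdb1  /mnt/usb\040key  ext4  defaults  0  2
+{{< /highlight >}}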
diff --git a/content/blog/miscellaneous/i3dropdown.md b/content/blog/miscellaneous/i3dropdown.md
new file mode 100644
index 0000000..52262ec
--- /dev/null
+++ b/content/blog/miscellaneous/i3dropdown.md
@@ -0,0 +1,32 @@
+---
+title: "i3dropdown"
+linkTitle: "i3dropdown"
+date: 2020-01-23
+description: >
+ i3dropdown
+---
+
+i3dropdown is a tool to make any X application drop down from the top of the screen, in the famous quake console style back in the day.
+
+## Compilation
+
+First of all, you have to get i3dropdown and compile it. It does not have any dependencies so it is really easy :
+{{< highlight sh >}}
+git clone https://gitlab.com/exrok/i3dropdown
+cd i3dropdown
+make
+cp build/i3dropdown ~/bin/
+{{< /highlight >}}
+
+## i3 configuration
+
+Here is a working example of the pavucontrol app, a volume mixer I use :
+{{< highlight conf >}}
+exec --no-startup-id i3 --get-socketpath > /tmp/i3wm-socket-path
+for_window [instance="^pavucontrol"] floating enable
+bindsym Mod4+shift+p exec /home/julien/bin/i3dropdown -W 90 -H 50 pavucontrol pavucontrol-qt
+{{< /highlight >}}
+
+To work properly, i3dropdown needs to have the path to the i3 socket. Because the command to get the socketpath from i3 is a little slow, it is best to cache it somewhere. By default
+i3dropdown recognises `/tmp/i3wm-socket-path`. Then each window managed by i3dropdown needs to be floating. The last line binds a key to invoke or mask the app.
+
diff --git a/content/blog/miscellaneous/libreoffice.md b/content/blog/miscellaneous/libreoffice.md
new file mode 100644
index 0000000..29b8541
--- /dev/null
+++ b/content/blog/miscellaneous/libreoffice.md
@@ -0,0 +1,9 @@
+---
+title: "Removing libreoffice write protection"
+linkTitle: "Removing libreoffice write protection"
+date: 2018-03-05
+description: >
+ Removing libreoffice write protection
+---
+
+You can choose to ignore write-protection by setting `Tools > Options > libreOffice Writer > Formatting Aids > Protected Areas > Ignore protection`.
diff --git a/content/blog/miscellaneous/link-deleted-inode.md b/content/blog/miscellaneous/link-deleted-inode.md
new file mode 100644
index 0000000..45f0417
--- /dev/null
+++ b/content/blog/miscellaneous/link-deleted-inode.md
@@ -0,0 +1,10 @@
+---
+title: "Link to a deleted inode"
+linkTitle: "Link to a deleted inode"
+date: 2018-03-05
+description: >
+ Link to a deleted inode
+---
+
+Get the inode number from `lsof`, then run `debugfs -w /dev/mapper/vg-home -R 'link <16008> /some/path'` where 16008 is the inode number (the < > are important, they tell debugfs you manipulate an inode). The path is relative to the root of the block device you are restoring onto.
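+
+A hypothetical recovery session could look like this (ideally with the filesystem unmounted or mounted read-only before writing to it with debugfs) :
+{{< highlight sh >}}
+lsof +L1 | grep deleted   # find the inode number of the still open deleted file
+debugfs -w /dev/mapper/vg-home -R 'link <16008> /some/path'
+{{< /highlight >}}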
+
diff --git a/content/blog/miscellaneous/make.md b/content/blog/miscellaneous/make.md
new file mode 100644
index 0000000..0795127
--- /dev/null
+++ b/content/blog/miscellaneous/make.md
@@ -0,0 +1,10 @@
+---
+title: "Understanding make"
+linkTitle: "Understanding make"
+date: 2018-01-30
+description: >
+ Understanding make
+---
+
+http://gromnitsky.users.sourceforge.net/articles/notes-for-new-make-users/
+
diff --git a/content/blog/miscellaneous/mencoder.md b/content/blog/miscellaneous/mencoder.md
new file mode 100644
index 0000000..4bb8fd0
--- /dev/null
+++ b/content/blog/miscellaneous/mencoder.md
@@ -0,0 +1,21 @@
+---
+title: "Aggregate images into a video with mencoder"
+linkTitle: "Aggregate images into a video with mencoder"
+date: 2018-04-30
+description: >
+ Aggregate images into a video with mencoder
+---
+
+## Aggregate png images into a video
+{{< highlight sh >}}
+mencoder mf://*.png -mf w=1400:h=700:fps=1:type=png -ovc lavc -lavcopts vcodec=mpeg4:mbd=2:trell -oac copy -o output.avi
+{{< /highlight >}}
+
+You should use the following to specify a list of files instead of `*.png`:
+{{< highlight sh >}}
+mf://@list.txt
+{{< /highlight >}}
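+
+The list file is simply one image filename per line; for example it could be generated with :
+{{< highlight sh >}}
+ls -1 *.png | sort > list.txt
+{{< /highlight >}}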
+
+## References
+
+- http://www.mplayerhq.hu/DOCS/HTML/en/menc-feat-enc-images.html
diff --git a/content/blog/miscellaneous/mssql-centos-7.md b/content/blog/miscellaneous/mssql-centos-7.md
new file mode 100644
index 0000000..019f442
--- /dev/null
+++ b/content/blog/miscellaneous/mssql-centos-7.md
@@ -0,0 +1,29 @@
+---
+title: "Installing mssql on centos 7"
+linkTitle: "Installing mssql on centos 7"
+date: 2019-07-09
+description: >
+ Installing mssql on centos 7
+---
+
+{{< highlight sh >}}
+vi /etc/sysconfig/network-scripts/ifcfg-eth0
+vi /etc/resolv.conf
+curl -o /etc/yum.repos.d/mssql-server.repo https://packages.microsoft.com/config/rhel/7/mssql-server-2017.repo
+curl -o /etc/yum.repos.d/mssql-prod.repo https://packages.microsoft.com/config/rhel/7/prod.repo
+yum update
+yum install -y mssql-server mssql-tools
+yum install -y sudo
+localectl set-locale LANG=en_US.utf8
+echo "export LANG=en_US.UTF-8" >> /etc/profile.d/locale.sh
+echo "export LANGUAGE=en_US.UTF-8" >> /etc/profile.d/locale.sh
+yum install -y openssh-server
+systemctl enable sshd
+systemctl start sshd
+passwd
+/opt/mssql/bin/mssql-conf setup
+rm -f /etc/localtime
+ln -s /usr/share/zoneinfo/Europe/Paris /etc/localtime
+/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -p
+{{< /highlight >}}
+
diff --git a/content/blog/miscellaneous/my-postgresql-role-cannot-login.md b/content/blog/miscellaneous/my-postgresql-role-cannot-login.md
new file mode 100644
index 0000000..2b958bf
--- /dev/null
+++ b/content/blog/miscellaneous/my-postgresql-role-cannot-login.md
@@ -0,0 +1,12 @@
+---
+title: "Cannot login role into postgresql"
+linkTitle: "Cannot login role into postgresql"
+date: 2015-11-24
+description: >
+ Cannot login role into postgresql
+---
+
+{{< highlight sql >}}
+ALTER ROLE "user" LOGIN;
+{{< /highlight >}}
+
diff --git a/content/blog/miscellaneous/nginx-ldap.md b/content/blog/miscellaneous/nginx-ldap.md
new file mode 100644
index 0000000..b480943
--- /dev/null
+++ b/content/blog/miscellaneous/nginx-ldap.md
@@ -0,0 +1,25 @@
+---
+title: "LDAP auth with nginx"
+linkTitle: "LDAP auth with nginx"
+date: 2018-03-05
+description: >
+ LDAP auth with nginx
+---
+
+{{< highlight nginx >}}
+ldap_server ldap {
+ auth_ldap_cache_enabled on;
+ auth_ldap_cache_expiration_time 10000;
+ auth_ldap_cache_size 1000;
+
+ url "ldaps://ldapslave.adyxax.org/ou=Users,dc=adyxax,dc=org?uid?sub?(objectClass=posixAccount)";
+ binddn "cn=admin,dc=adyxax,dc=org";
+ binddn_passwd secret;
+ group_attribute memberUid;
+ group_attribute_is_dn off;
+ satisfy any;
+ require valid_user;
+ #require group "cn=admins,ou=groups,dc=adyxax,dc=org";
+}
+{{< /highlight >}}
+
diff --git a/content/blog/miscellaneous/osm-overlay-example.md b/content/blog/miscellaneous/osm-overlay-example.md
new file mode 100644
index 0000000..2787a6e
--- /dev/null
+++ b/content/blog/miscellaneous/osm-overlay-example.md
@@ -0,0 +1,19 @@
+---
+title: "OpenStreetMap overlay example"
+linkTitle: "OpenStreetMap overlay example"
+date: 2020-05-19
+description: >
+ An example of how to query things visually on OpenStreetMap
+---
+
+http://overpass-turbo.eu/
+{{< highlight html >}}
+<osm-script>
+ <query type="node">
+ <has-kv k="amenity" v="recycling"/>
+ <bbox-query {{bbox}}/>
+ </query>
+ <!-- print results -->
+ <print mode="body"/>
+</osm-script>
+{{< /highlight >}}
diff --git a/content/blog/miscellaneous/pleroma.md b/content/blog/miscellaneous/pleroma.md
new file mode 100644
index 0000000..91c10f8
--- /dev/null
+++ b/content/blog/miscellaneous/pleroma.md
@@ -0,0 +1,117 @@
+---
+title: "Pleroma installation notes"
+linkTitle: "Pleroma installation notes"
+date: 2018-11-16
+description: >
+ Pleroma installation notes
+---
+
+This article is about my installation of pleroma in a standard alpine linux lxd container.
+
+## Installation notes
+{{< highlight sh >}}
+apk add elixir nginx postgresql postgresql-contrib git sudo erlang-ssl erlang-xmerl erlang-parsetools erlang-runtime-tools make gcc build-base vim vimdiff htop curl
+/etc/init.d/postgresql start
+rc-update add postgresql default
+cd /srv
+git clone https://git.pleroma.social/pleroma/pleroma
+cd pleroma/
+mix deps.get
+mix generate_config
+cp config/generated_config.exs config/prod.secret.exs
+cat config/setup_db.psql
+{{< /highlight >}}
+
+At this stage you are supposed to execute the commands from setup_db.psql against your postgres instance. Instead of the chmoding and wrapper steps detailed in the official documentation, I execute them manually from a psql shell :
+{{< highlight sh >}}
+su - postgres
+psql
+CREATE USER pleroma WITH ENCRYPTED PASSWORD 'XXXXXXXXXXXXXXXXXXX';
+CREATE DATABASE pleroma_dev OWNER pleroma;
+\c pleroma_dev;
+CREATE EXTENSION IF NOT EXISTS citext;
+CREATE EXTENSION IF NOT EXISTS pg_trgm;
+{{< /highlight >}}
+
+Now back to pleroma :
+{{< highlight sh >}}
+MIX_ENV=prod mix ecto.migrate
+MIX_ENV=prod mix phx.server
+{{< /highlight >}}
+
+If this last command runs without error your pleroma will be available and you can test it with :
+{{< highlight sh >}}
+curl http://localhost:4000/api/v1/instance
+{{< /highlight >}}
+
+If this works, you can shut it down with two C-c and we can configure nginx. This article doesn't really cover my setup since nginx doesn't run in this container, and my letsencrypt wildcard certificates are fetched somewhere else unrelated, so to keep it simple I only paste the vhost part of the configuration :
+{{< highlight nginx >}}
+### in nginx.conf inside the container ###
+# {{{ pleroma
+proxy_cache_path /tmp/pleroma-media-cache levels=1:2 keys_zone=pleroma_media_cache:10m max_size=500m inactive=200m use_temp_path=off;
+ssl_session_cache shared:ssl_session_cache:10m;
+server {
+ listen 80;
+ listen [::]:80;
+ server_name social.adyxax.org;
+ return 301 https://$server_name$request_uri;
+}
+server {
+ listen 443 ssl;
+ listen [::]:443 ssl;
+ server_name social.adyxax.org;
+ root /usr/share/nginx/html;
+
+ include /etc/nginx/vhost.d/social.conf;
+ ssl_certificate /etc/nginx/fullchain;
+ ssl_certificate_key /etc/nginx/privkey;
+}
+# }}}
+
+### in a vhost.d/social.conf ###
+location / {
+ proxy_set_header Host $http_host;
+ proxy_set_header X-Forwarded-Host $http_host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_pass http://172.16.1.8:4000/;
+
+ add_header 'Access-Control-Allow-Origin' '*';
+ proxy_http_version 1.1;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection "upgrade";
+
+ allow all;
+}
+
+location /proxy {
+ proxy_cache pleroma_media_cache;
+ proxy_cache_lock on;
+ proxy_pass http://172.16.1.8:4000$request_uri;
+}
+
+client_max_body_size 20M;
+{{< /highlight >}}
+
+Now let's add the phx.server to boot. I run pleroma as the pleroma user to limit the permissions of the server software. The official documentation has all files belong to the user running the server; I prefer that only the uploads directory does. Since I don't run nginx from this container I also edit that dependency out of the init script :
+{{< highlight sh >}}
+adduser -s /sbin/nologin -D -h /srv/pleroma pleroma
+cp -a /root/.hex/ /srv/pleroma/.
+cp -a /root/.mix /srv/pleroma/.
+chown -R pleroma:pleroma /srv/pleroma/uploads
+cp installation/init.d/pleroma /etc/init.d
+sed -i /etc/init.d/pleroma -e '/^directory=/s/=.*/=\/srv\/pleroma/'
+sed -i /etc/init.d/pleroma -e '/^command_user=/s/=.*/=nobody:nobody/'
+sed -i /etc/init.d/pleroma -e 's/nginx //'
+rc-update add pleroma default
+rc-service pleroma start
+{{< /highlight >}}
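+
+If the instance doesn't come up, checking the service status with openrc is a good first step :
+{{< highlight sh >}}
+rc-service pleroma status
+{{< /highlight >}}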
+
+You should be good to go and able to access your instance from any web browser. After creating your account in a web browser, come back to the cli and set yourself as moderator :
+{{< highlight sh >}}
+mix set_moderator adyxax
+{{< /highlight >}}
+
+## References
+
+- https://git.pleroma.social/pleroma/pleroma
diff --git a/content/blog/miscellaneous/postgresql-read-only.md b/content/blog/miscellaneous/postgresql-read-only.md
new file mode 100644
index 0000000..c064e97
--- /dev/null
+++ b/content/blog/miscellaneous/postgresql-read-only.md
@@ -0,0 +1,17 @@
+---
+title: "Grant postgresql read only access"
+linkTitle: "Grant postgresql read only access"
+date: 2015-11-24
+description: >
+ Grant postgresql read only access
+---
+
+{{< highlight sql >}}
+GRANT CONNECT ON DATABASE "db" TO "user";
+\c db
+GRANT USAGE ON SCHEMA public TO "user";
+GRANT SELECT ON ALL TABLES IN SCHEMA public TO "user";
+ALTER DEFAULT PRIVILEGES IN SCHEMA public
+ GRANT SELECT ON TABLES TO "user";
+{{< /highlight >}}
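+
+`ALTER DEFAULT PRIVILEGES` only covers objects created later on by the role that ran it, while the `GRANT ... ON ALL TABLES` covers the existing ones. If the application also reads sequences, a similar grant works :
+{{< highlight sql >}}
+GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO "user";
+{{< /highlight >}}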
+
diff --git a/content/blog/miscellaneous/postgresql-reassign.md b/content/blog/miscellaneous/postgresql-reassign.md
new file mode 100644
index 0000000..7779c40
--- /dev/null
+++ b/content/blog/miscellaneous/postgresql-reassign.md
@@ -0,0 +1,18 @@
+---
+title: "Change owner on a postgresql database and all tables"
+linkTitle: "Change owner on a postgresql database and all tables"
+date: 2012-04-20
+description: >
+ Change owner on a postgresql database and all tables
+---
+
+{{< highlight sql >}}
+ALTER DATABASE name OWNER TO new_owner;
+{{< /highlight >}}
+
+Then from a shell, change the owner of every table, sequence and view of the public schema :
+{{< highlight sh >}}
+for tbl in `psql -qAt -c "select tablename from pg_tables where schemaname = 'public';" YOUR_DB` ; do psql -c "alter table $tbl owner to NEW_OWNER" YOUR_DB ; done
+for tbl in `psql -qAt -c "select sequence_name from information_schema.sequences where sequence_schema = 'public';" YOUR_DB` ; do psql -c "alter table $tbl owner to NEW_OWNER" YOUR_DB ; done
+for tbl in `psql -qAt -c "select table_name from information_schema.views where table_schema = 'public';" YOUR_DB` ; do psql -c "alter table $tbl owner to NEW_OWNER" YOUR_DB ; done
+{{< /highlight >}}
+
+Alternatively, a single statement transfers everything owned by one role to another in the current database :
+{{< highlight sql >}}
+REASSIGN OWNED BY "support" TO "test-support";
+{{< /highlight >}}
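+
+To check the result, this lists the tables of the public schema together with their owner :
+{{< highlight sql >}}
+SELECT tablename, tableowner FROM pg_tables WHERE schemaname = 'public';
+{{< /highlight >}}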
diff --git a/content/blog/miscellaneous/pulseaudio.md b/content/blog/miscellaneous/pulseaudio.md
new file mode 100644
index 0000000..c656275
--- /dev/null
+++ b/content/blog/miscellaneous/pulseaudio.md
@@ -0,0 +1,11 @@
+---
+title: "Pulseaudio"
+linkTitle: "Pulseaudio"
+date: 2018-09-25
+description: >
+ Pulseaudio
+---
+
+- List outputs : `pacmd list-sinks | grep -e 'name:' -e 'index'`
+- Select a new one : `pacmd set-default-sink alsa_output.usb-C-Media_Electronics_Inc._USB_PnP_Sound_Device-00.analog-stereo`
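+
+Changing the default sink usually only affects new streams. To move an already playing stream, something like this should work (the stream index comes from `pacmd list-sink-inputs`, the index and sink name here are just examples) :
+{{< highlight sh >}}
+pacmd move-sink-input 0 alsa_output.usb-C-Media_Electronics_Inc._USB_PnP_Sound_Device-00.analog-stereo
+{{< /highlight >}}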
+
diff --git a/content/blog/miscellaneous/purge-postfix-queue-based-content.md b/content/blog/miscellaneous/purge-postfix-queue-based-content.md
new file mode 100644
index 0000000..2db52ac
--- /dev/null
+++ b/content/blog/miscellaneous/purge-postfix-queue-based-content.md
@@ -0,0 +1,13 @@
+---
+title: "Purge postfix queue based on email contents"
+linkTitle: "Purge postfix queue based on email contents"
+date: 2009-04-27
+description: >
+ Purge postfix queue based on email contents
+---
+
+This finds every message in the deferred queue whose content matches the pattern (replace XXX with the string you are looking for), extracts the queue ids from the file names and feeds them to postsuper for deletion :
+{{< highlight sh >}}
+find /var/spool/postfix/deferred/ -type f -exec grep -li 'XXX' '{}' \; | xargs -n1 basename | xargs -n1 postsuper -d
+{{< /highlight >}}
+
diff --git a/content/blog/miscellaneous/qmail.md b/content/blog/miscellaneous/qmail.md
new file mode 100644
index 0000000..6a28f1a
--- /dev/null
+++ b/content/blog/miscellaneous/qmail.md
@@ -0,0 +1,21 @@
+---
+title: "Qmail"
+linkTitle: "Qmail"
+date: 2018-03-05
+description: >
+ Qmail
+---
+
+## Commands
+
+- Get statistics : `qmail-qstat`
+- list queued mails : `qmail-qread`
+- Read an email in the queue (NNNN is the #id from qmail-qread) : `find /var/qmail/queue -name NNNN| xargs cat | less`
+- Change queue lifetime for qmail in seconds (example here for 15 days) : `echo 1296000 > /var/qmail/control/queuelifetime`
+
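+To force an immediate run of the queue, you can send an ALRM signal to the qmail-send process (a sketch, assuming a standard install) :
+{{< highlight sh >}}
+kill -ALRM $(pgrep qmail-send)
+{{< /highlight >}}
+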
+## References
+
+- http://www.lifewithqmail.org/lwq.html
+- http://www.fileformat.info/tip/linux/qmailnow.htm
+- https://www.hivelocity.net/kb/how-to-change-queue-lifetime-for-qmail/
+
diff --git a/content/blog/miscellaneous/rocketchat.md b/content/blog/miscellaneous/rocketchat.md
new file mode 100644
index 0000000..072658d
--- /dev/null
+++ b/content/blog/miscellaneous/rocketchat.md
@@ -0,0 +1,18 @@
+---
+title: "RocketChat"
+linkTitle: "RocketChat"
+date: 2019-08-06
+description: >
+ RocketChat
+---
+
+A simple docker based install :
+{{< highlight sh >}}
+# start mongodb with a replica set, which rocket.chat needs for oplog tailing
+docker run --name db -d mongo --smallfiles --replSet hurricane
+
+# initiate the replica set from a mongo shell
+docker exec -ti db mongo
+> rs.initiate()
+
+# start rocket.chat linked to the mongodb container
+docker run -p 3000:3000 --name rocketchat --env ROOT_URL=http://hurricane --env MONGO_OPLOG_URL=mongodb://db:27017/local?replSet=hurricane --link db -d rocket.chat
+{{< /highlight >}}
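+
+The first start takes a little while; following the logs is a simple way to see when the instance is ready (the container name matches the run command above) :
+{{< highlight sh >}}
+docker logs -f rocketchat
+{{< /highlight >}}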
+
diff --git a/content/blog/miscellaneous/screen-cannot-open-terminal.md b/content/blog/miscellaneous/screen-cannot-open-terminal.md
new file mode 100644
index 0000000..7622191
--- /dev/null
+++ b/content/blog/miscellaneous/screen-cannot-open-terminal.md
@@ -0,0 +1,17 @@
+---
+title: "Screen cannot open terminal error"
+linkTitle: "Screen cannot open terminal error"
+date: 2018-07-03
+description: >
+ Screen cannot open terminal error
+---
+
+If you encounter :
+{{< highlight sh >}}
+Cannot open your terminal '/dev/pts/0' - please check.
+{{< /highlight >}}
+
+This means the shell is not running as the user who logged in and owns the pseudo-terminal (typically after a su or sudo). You can make screen happy by running :
+{{< highlight sh >}}
+script /dev/null
+{{< /highlight >}}
diff --git a/content/blog/miscellaneous/seti-at-home.md b/content/blog/miscellaneous/seti-at-home.md
new file mode 100644
index 0000000..a8d1cf8
--- /dev/null
+++ b/content/blog/miscellaneous/seti-at-home.md
@@ -0,0 +1,18 @@
+---
+title: "Seti@Home"
+linkTitle: "Seti@Home"
+date: 2018-03-05
+description: >
+ Seti@Home
+---
+
+{{< highlight sh >}}
+apt install boinc
+echo "graou" > /var/lib/boinc-client/gui_rpc_auth.cfg
+systemctl restart boinc-client
+boinccmd --host localhost --passwd graou --get_messages 0
+boinccmd --host localhost --passwd graou --get_state|less
+boinccmd --host localhost --passwd graou --lookup_account http://setiathome.berkeley.edu <EMAIL> XXXXXX
+boinccmd --host localhost --passwd graou --project_attach http://setiathome.berkeley.edu <ACCOUNT_KEY>
+{{< /highlight >}}
+
diff --git a/content/blog/miscellaneous/sqlite-pretty-print.md b/content/blog/miscellaneous/sqlite-pretty-print.md
new file mode 100644
index 0000000..08bcec6
--- /dev/null
+++ b/content/blog/miscellaneous/sqlite-pretty-print.md
@@ -0,0 +1,16 @@
+---
+title: "Sqlite pretty print"
+linkTitle: "Sqlite pretty print"
+date: 2019-06-19
+description: >
+ Sqlite pretty print
+---
+
+- In ~/.sqliterc :
+{{< highlight sh >}}
+.mode column
+.headers on
+.separator ROW "\n"
+.nullvalue NULL
+{{< /highlight >}}
+
diff --git a/content/blog/miscellaneous/switching-to-hugo.md b/content/blog/miscellaneous/switching-to-hugo.md
new file mode 100644
index 0000000..739b36d
--- /dev/null
+++ b/content/blog/miscellaneous/switching-to-hugo.md
@@ -0,0 +1,58 @@
+---
+title: "Switching to Hugo"
+linkTitle: "Switching to Hugo"
+date: 2019-12-19
+description: >
+ I switched my personal wiki from dokuwiki to Hugo
+---
+
+This is the website you are currently reading. It is a static website built using hugo. This article details how I installed hugo, how I initialised this website and how I manage it. I often refer to it as wiki.adyxax.org because for a long time my main website was a single dokuwiki (and a pmwiki before that), but with hugo it has become more than that. It is now a mix of wiki, blog and showcase of my work and interests.
+
+## Installing hugo
+
+{{< highlight sh >}}
+go get github.com/gohugoio/hugo
+{{< / highlight >}}
+
+You probably won't encounter this issue, but this command failed at the time I installed hugo because the master branch of one of the dependencies was
+tainted. I fixed it by checking out a stable tag for this project and continuing the hugo installation from there:
+{{< highlight sh >}}
+cd go/src/github.com/tdewolff/minify/
+tig --all
+git checkout v2.6.1
+go get github.com/gohugoio/hugo
+{{< / highlight >}}
+
+This did not build me the extended version of hugo that I need for the [docsy](https://github.com/google/docsy) theme I chose, so I had to get it by doing :
+{{< highlight sh >}}
+cd ~/go/src/github.com/gohugoio/hugo/
+go get --tags extended
+go install --tags extended
+{{< / highlight >}}
+
+## Bootstrapping this site
+
+{{< highlight sh >}}
+hugo new site www
+cd www
+git init
+git submodule add https://github.com/google/docsy themes/docsy
+{{< / highlight >}}
+
+The docsy theme requires two nodejs programs to run :
+{{< highlight sh >}}
+npm install -D --save autoprefixer
+npm install -D --save postcss-cli
+{{< / highlight >}}
+
+## hugo commands
+
+To spin up the live server that automatically rebuilds the website while writing articles :
+{{< highlight sh >}}
+hugo server --bind 0.0.0.0 --minify --disableFastRender
+{{< / highlight >}}
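+
+To scaffold a new article (the path here is just an example) :
+{{< highlight sh >}}
+hugo new blog/miscellaneous/example-article.md
+{{< / highlight >}}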
+
+To publish the website in the `public` folder :
+{{< highlight sh >}}
+hugo --minify
+{{< / highlight >}}
diff --git a/content/blog/netapp/_index.md b/content/blog/netapp/_index.md
new file mode 100644
index 0000000..55bab1b
--- /dev/null
+++ b/content/blog/netapp/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Netapp"
+linkTitle: "Netapp"
+weight: 30
+---
diff --git a/content/blog/netapp/investigate-memory-errors.md b/content/blog/netapp/investigate-memory-errors.md
new file mode 100644
index 0000000..0e6d665
--- /dev/null
+++ b/content/blog/netapp/investigate-memory-errors.md
@@ -0,0 +1,12 @@
+---
+title: "Investigate memory errors"
+linkTitle: "Investigate memory errors"
+date: 2018-07-06
+description: >
+ How to investigate memory errors on a data ONTAP system
+---
+
+{{< highlight sh >}}
+set adv
+system node show-memory-errors -node <cluster_node>
+{{< / highlight >}}
diff --git a/content/blog/travels/_index.md b/content/blog/travels/_index.md
new file mode 100644
index 0000000..4ffe08a
--- /dev/null
+++ b/content/blog/travels/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Travels"
+linkTitle: "Travels"
+weight: 20
+---
diff --git a/content/blog/travels/new-zealand.md b/content/blog/travels/new-zealand.md
new file mode 100644
index 0000000..ae71661
--- /dev/null
+++ b/content/blog/travels/new-zealand.md
@@ -0,0 +1,7 @@
+---
+title: "I am back from New Zealand"
+linkTitle: "Back from New Zealand"
+date: 2019-12-08
+description: >
+ I am back from New Zealand, after three and a half weeks over there.
+---