author     Julien Dessaux  2023-04-23 22:33:49 +0200
committer  Julien Dessaux  2023-04-23 22:34:10 +0200
commit     ea435049b3a3f5057b3a894040df3cf4f3256d9e (patch)
tree       9046430870fa050e6568fcfbe409f8a8d295d0b3 /content
parent     Document the second gotosocial backup job (diff)
download   www-ea435049b3a3f5057b3a894040df3cf4f3256d9e.tar.gz
           www-ea435049b3a3f5057b3a894040df3cf4f3256d9e.tar.bz2
           www-ea435049b3a3f5057b3a894040df3cf4f3256d9e.zip
Refactored syntax highlighting shortcodes into markdown
Diffstat (limited to 'content')
-rw-r--r--  content/blog/OpenBSD/relayd-httpd-example.md | 8
-rw-r--r--  content/blog/OpenBSD/softraid_monitoring.md | 20
-rw-r--r--  content/blog/OpenBSD/wireguard-firewall.md | 8
-rw-r--r--  content/blog/ansible/ansible-vault-example.md | 12
-rw-r--r--  content/blog/ansible/custom-fact.md | 16
-rw-r--r--  content/blog/ansible/dump-all-vars.md | 8
-rw-r--r--  content/blog/cfengine/leveraging-yaml.md | 18
-rw-r--r--  content/blog/commands/asterisk-call-you.md | 4
-rw-r--r--  content/blog/commands/asterisk-list-active-calls.md | 4
-rw-r--r--  content/blog/commands/busybox-web-server.md | 4
-rw-r--r--  content/blog/commands/capture-desktop-video.md | 4
-rw-r--r--  content/blog/commands/clean-conntrack-states.md | 4
-rw-r--r--  content/blog/commands/date.md | 4
-rw-r--r--  content/blog/commands/find-hardlinks.md | 4
-rw-r--r--  content/blog/commands/find-inodes-used.md | 4
-rw-r--r--  content/blog/commands/git-import-commits.md | 4
-rw-r--r--  content/blog/commands/git-rewrite-commit-history.md | 4
-rw-r--r--  content/blog/commands/ipmi.md | 4
-rw-r--r--  content/blog/commands/mdadm.md | 20
-rw-r--r--  content/blog/commands/omreport.md | 4
-rw-r--r--  content/blog/commands/qemu-nbd.md | 4
-rw-r--r--  content/blog/commands/qemu.md | 8
-rw-r--r--  content/blog/commands/rrdtool.md | 4
-rw-r--r--  content/blog/debian/error-during-signature-verification.md | 4
-rw-r--r--  content/blog/debian/force-package-removal.md | 4
-rw-r--r--  content/blog/debian/no-public-key-error.md | 4
-rw-r--r--  content/blog/docker/cleaning.md | 4
-rw-r--r--  content/blog/docker/docker-compose-bridge.md | 4
-rw-r--r--  content/blog/docker/migrate-data-volume.md | 4
-rw-r--r--  content/blog/docker/shell-usage-in-dockerfile.md | 4
-rw-r--r--  content/blog/freebsd/change-the-ip-address-of-a-running-jail.md | 4
-rw-r--r--  content/blog/freebsd/clean-install-does-not-boot.md | 4
-rw-r--r--  content/blog/gentoo/get-zoom-to-work.md | 8
-rw-r--r--  content/blog/hugo/adding-custom-shortcode-age.md | 12
-rw-r--r--  content/blog/hugo/switching-to-hugo.md | 28
-rw-r--r--  content/blog/kubernetes/get_key_and_certificae.md | 4
-rw-r--r--  content/blog/kubernetes/pg_dump_restore.md | 8
-rw-r--r--  content/blog/kubernetes/single-node-cluster-taint.md | 8
-rw-r--r--  content/blog/miscellaneous/bacula-bareos.md | 16
-rw-r--r--  content/blog/miscellaneous/bash-tcp-client.md | 4
-rw-r--r--  content/blog/miscellaneous/boot-from-initramfs.md | 4
-rw-r--r--  content/blog/miscellaneous/etc-update-alpine.md | 4
-rw-r--r--  content/blog/miscellaneous/i3dropdown.md | 8
-rw-r--r--  content/blog/miscellaneous/link-deleted-inode.md | 4
-rw-r--r--  content/blog/miscellaneous/mencoder.md | 8
-rw-r--r--  content/blog/miscellaneous/mirroring-to-github.md | 4
-rw-r--r--  content/blog/miscellaneous/mssql-centos-7.md | 4
-rw-r--r--  content/blog/miscellaneous/my-postgresql-role-cannot-login.md | 4
-rw-r--r--  content/blog/miscellaneous/nginx-ldap.md | 4
-rw-r--r--  content/blog/miscellaneous/nginx-rewrite-break-last.md | 16
-rw-r--r--  content/blog/miscellaneous/osm-overlay-example.md | 4
-rw-r--r--  content/blog/miscellaneous/pleroma.md | 28
-rw-r--r--  content/blog/miscellaneous/postgresql-read-only.md | 4
-rw-r--r--  content/blog/miscellaneous/postgresql-reassign.md | 8
-rw-r--r--  content/blog/miscellaneous/purge-postfix-queue-based-content.md | 4
-rw-r--r--  content/blog/miscellaneous/reusing-ssh-connections.md | 4
-rw-r--r--  content/blog/miscellaneous/rocketchat.md | 4
-rw-r--r--  content/blog/miscellaneous/screen-cannot-open-terminal.md | 8
-rw-r--r--  content/blog/miscellaneous/seti-at-home.md | 4
-rw-r--r--  content/blog/miscellaneous/sqlite-pretty-print.md | 4
-rw-r--r--  content/blog/miscellaneous/tc.md | 4
-rw-r--r--  content/blog/netapp/investigate-memory-errors.md | 4
-rw-r--r--  content/docs/adyxax.org/nethack.md | 24
-rw-r--r--  content/docs/gentoo/installation.md | 72
-rw-r--r--  content/docs/gentoo/kernel_upgrades.md | 12
-rw-r--r--  content/docs/gentoo/lxd.md | 8
-rw-r--r--  content/docs/gentoo/steam.md | 8
-rw-r--r--  content/docs/gentoo/upgrades.md | 12
-rw-r--r--  content/docs/openbsd/install_from_linux.md | 8
-rw-r--r--  content/docs/openbsd/pf.md | 4
-rw-r--r--  content/docs/openbsd/smtpd.md | 8
71 files changed, 297 insertions(+), 297 deletions(-)
diff --git a/content/blog/OpenBSD/relayd-httpd-example.md b/content/blog/OpenBSD/relayd-httpd-example.md
index 6d5b6ab..832285b 100644
--- a/content/blog/OpenBSD/relayd-httpd-example.md
+++ b/content/blog/OpenBSD/relayd-httpd-example.md
@@ -14,7 +14,7 @@ The goal was to have a relayd configuration that would serve urls like `https://
## The httpd configuration
-{{< highlight txt >}}
+```nginx
prefork 5
server "example.com" {
@@ -35,11 +35,11 @@ server "example.com" {
root "/htdocs/www/public/"
}
}
-{{< /highlight >}}
+```
## The relayd configuration
-{{< highlight txt >}}
+```cfg
log state changes
log connection errors
prefork 5
@@ -93,4 +93,4 @@ relay "wwwsecure6" {
forward to <httpd> port 8080
forward to <synapse> port 8008
}
-{{< /highlight >}}
+```
diff --git a/content/blog/OpenBSD/softraid_monitoring.md b/content/blog/OpenBSD/softraid_monitoring.md
index 77adfc3..8df879e 100644
--- a/content/blog/OpenBSD/softraid_monitoring.md
+++ b/content/blog/OpenBSD/softraid_monitoring.md
@@ -13,32 +13,32 @@ I have reinstalled my nas recently from gentoo to OpenBSD and was amazed once ag
## Softraid monitoring
I had a hard time figuring out how to properly monitor the state of the array without relying on parsing the output of `bioctl` but at last here it is in all its elegance :
-{{< highlight sh >}}
+```sh
root@nas:~# sysctl hw.sensors.softraid0
hw.sensors.softraid0.drive0=online (sd4), OK
-{{< /highlight >}}
+```
I manually failed one drive (with `bioctl -O /dev/sd2a sd4`) then rebuilt it (with `bioctl -R /dev/sd2a sd4`)... then failed two drives in order to have examples of all possible outputs. Here they are if you are interested :
-{{< highlight sh >}}
+```sh
root@nas:~# sysctl hw.sensors.softraid0
hw.sensors.softraid0.drive0=degraded (sd4), WARNING
-{{< /highlight >}}
+```
-{{< highlight sh >}}
+```sh
root@nas:~# sysctl hw.sensors.softraid0
hw.sensors.softraid0.drive0=rebuilding (sd4), WARNING
-{{< /highlight >}}
+```
-{{< highlight sh >}}
+```sh
root@nas:~# sysctl -a |grep -i softraid
hw.sensors.softraid0.drive0=failed (sd4), CRITICAL
-{{< /highlight >}}
+```
## Nagios check
I am still using nagios on my personal infrastructure, here is the check I wrote if you are interested :
-{{< highlight perl >}}
+```perl
#!/usr/bin/env perl
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
@@ -71,4 +71,4 @@ if (`uname` eq "OpenBSD\n") {
print $output{status};
exit $output{code};
-{{< /highlight >}}
+```
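For a lighter-weight alternative to the perl check, the same logic can be sketched in plain sh. This is a hypothetical sketch, not the check from the post; it assumes the `sysctl hw.sensors.softraid0` output format shown above and maps it to nagios exit codes:

```sh
#!/bin/sh
# Map one softraid sensor line to a nagios exit code.
# Assumed input format: hw.sensors.softraid0.drive0=online (sd4), OK
check_softraid_line() {
	case "$1" in
		*", OK") echo 0 ;;       # online
		*", WARNING") echo 1 ;;  # degraded or rebuilding
		*", CRITICAL") echo 2 ;; # failed
		*) echo 3 ;;             # unknown sensor state
	esac
}

check_softraid_line "hw.sensors.softraid0.drive0=online (sd4), OK"
```

On the nas it would be driven with `check_softraid_line "$(sysctl hw.sensors.softraid0.drive0)"`.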
diff --git a/content/blog/OpenBSD/wireguard-firewall.md b/content/blog/OpenBSD/wireguard-firewall.md
index 7a2e0b2..b7b381d 100644
--- a/content/blog/OpenBSD/wireguard-firewall.md
+++ b/content/blog/OpenBSD/wireguard-firewall.md
@@ -13,7 +13,7 @@ tags:
Now that we covered wireguard configurations and routing, let's consider your firewall configuration in several scenarios. This first article will focus on OpenBSD.
## Template for this article
-```
+```cfg
table <myself> const { self }
table <private> const { 10/8, 172.16/12, 192.168/16, fd00::/8 fe80::/10 }
table <internet> const { 0.0.0.0/0, !10/8, !172.16/12, !192.168/16, ::/0, fe80::/10, !fd00::/8 }
@@ -48,7 +48,7 @@ With our template, you can already use your wireguard vpn as a client without an
## Reachable client
To make your client reachable over wireguard, add the following:
-```
+```cfg
pass in on wg0 from <private> to <myself>
```
@@ -59,7 +59,7 @@ In this example I use the `<private>` pf table that I find both very convenient
## Server
A server's configuration just needs to accept wireguard connections in addition to the previous rule:
-```
+```cfg
pass in on egress proto udp from <internet> to <myself> port 342
pass in on wg0 from <private> to <myself>
```
@@ -67,7 +67,7 @@ pass in on wg0 from <private> to <myself>
## Hub
As seen in the previous routing article, a hub is a server that can route traffic to another one over wireguard:
-```
+```cfg
pass in on egress proto udp from <internet> to <myself> port 342
pass in on wg0 from <private> to <private>
```
diff --git a/content/blog/ansible/ansible-vault-example.md b/content/blog/ansible/ansible-vault-example.md
index ac68feb..cd8567a 100644
--- a/content/blog/ansible/ansible-vault-example.md
+++ b/content/blog/ansible/ansible-vault-example.md
@@ -9,31 +9,31 @@ tags:
## Editing a protected file
Here is how to edit a vault protected file :
-{{< highlight sh >}}
+```sh
ansible-vault edit hostvars/blah.yml
-{{< / highlight >}}
+```
## Using a vault entry in a task or a jinja template
It is as simple as using any variable :
-{{< highlight yaml >}}
+```yaml
- copy:
path: /etc/ssl/private.key
mode: 0400
content: '{{ ssl_key }}'
-{{< / highlight >}}
+```
## How to specify multi-line entries
This is actually a yaml question, not a vault one, but since I ask myself this frequently in this context, here is how to put a multi-line entry like a private key in the vault (for a simple value, just don't use a `|`):
-{{< highlight yaml >}}
+```yaml
ssl_key : |
----- BEGIN PRIVATE KEY -----
blahblahblah
blahblahblah
----- END PRIVATE KEY -----
-{{< /highlight >}}
+```
## How to run playbooks when vault values are needed
diff --git a/content/blog/ansible/custom-fact.md b/content/blog/ansible/custom-fact.md
index 10ab6bc..48a5a2e 100644
--- a/content/blog/ansible/custom-fact.md
+++ b/content/blog/ansible/custom-fact.md
@@ -21,12 +21,12 @@ The facts will be available to ansible at `hostvars.host.ansible_local.<fact_nam
## A simple example
Here is the simplest example of a fact, let's suppose we make it `/etc/ansible/facts.d/mysql.fact` :
-{{< highlight sh >}}
+```sh
#!/bin/sh
set -eu
echo '{"password": "xxxxxx"}'
-{{< /highlight >}}
+```
This will give you the fact `hostvars.host.ansible_local.mysql.password` for this machine.
@@ -36,15 +36,15 @@ A more interesting example is something I use with small webapps. In the contain
provision a database with a user that has access to it on a mysql server. This fact ensures that on subsequent runs we will stay idempotent.
First the fact from before, only slightly modified :
-{{< highlight sh >}}
+```sh
#!/bin/sh
set -eu
echo '{"password": "{{mysql_password}}"}'
-{{< /highlight >}}
+```
This fact is deployed with the following tasks :
-{{< highlight yaml >}}
+```yaml
- name: Generate a password for mysql database connections if there is none
set_fact: mysql_password="{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}"
when: (ansible_local.mysql_client|default({})).password is undefined
@@ -75,16 +75,16 @@ This fact is deployed with the following tasks :
password: '{{ansible_local.mysql_client.password}}'
state: present
delegate_to: '{{mysql_server}}'
-{{< /highlight >}}
+```
## Caveat : a fact you deploy is not immediately available
Note that installing a fact does not make it exist before the next inventory run on the host. This can be problematic, especially if you rely on fact caching to speed up ansible. Here
is how to make ansible reload facts using the setup task (if you paid attention, you already saw me use it above).
-{{< highlight yaml >}}
+```yaml
- name: reload ansible_local
setup: filter=ansible_local
-{{< /highlight >}}
+```
## References
diff --git a/content/blog/ansible/dump-all-vars.md b/content/blog/ansible/dump-all-vars.md
index e1dea05..61914c1 100644
--- a/content/blog/ansible/dump-all-vars.md
+++ b/content/blog/ansible/dump-all-vars.md
@@ -10,16 +10,16 @@ tags:
Here is the task to use in order to achieve that :
-{{< highlight yaml >}}
+```yaml
- name: Dump all vars
action: template src=dumpall.j2 dest=ansible.all
-{{< /highlight >}}
+```
## Associated template
And here is the template to use with it :
-{{< highlight jinja >}}
+```jinja
Module Variables ("vars"):
--------------------------------
{{ vars | to_nice_json }}
@@ -39,7 +39,7 @@ GROUPS Variables ("groups"):
HOST Variables ("hostvars"):
--------------------------------
{{ hostvars | to_nice_json }}
-{{< /highlight >}}
+```
## Output
diff --git a/content/blog/cfengine/leveraging-yaml.md b/content/blog/cfengine/leveraging-yaml.md
index e773325..494a41c 100644
--- a/content/blog/cfengine/leveraging-yaml.md
+++ b/content/blog/cfengine/leveraging-yaml.md
The use case below lacks a bit of error control with argument validation; it will be the subject of a future update.
In `cmdb/hosts/andromeda.yaml` we describe some properties of a host named andromeda:
-{{< highlight yaml >}}
+```yaml
domain: adyxax.org
host_interface: dummy0
host_ip: "10.1.0.255"
@@ -35,13 +35,13 @@ tunnels:
peer: "10.1.0.2"
remote_host: legend.adyxax.org
remote_port: 1195
-{{< /highlight >}}
+```
## Reading the yaml
I am bundling the values in a common bundle, accessible globally. This is one of the first bundles processed in the order my policy files are loaded. This is just an extract; you can load multiple files and merge them to distribute common
settings :
-{{< highlight yaml >}}
+```yaml
bundle common g
{
vars:
@@ -51,14 +51,14 @@ bundle common g
any::
"has_host_data" expression => fileexists("$(sys.inputdir)/cmdb/hosts/$(sys.host).yaml");
}
-{{< /highlight >}}
+```
## Using the data
### Cfengine agent bundle
We access the data using the global g.host_data variable, here is a complete example :
-{{< highlight yaml >}}
+```yaml
bundle agent openvpn
{
vars:
@@ -91,7 +91,7 @@ bundle agent openvpn
"$(this.bundle): common.key repaired" ifvarclass => "openvpn_common_key_repaired";
"$(this.bundle): $(tunnels) service repaired" ifvarclass => "tunnel_$(tunnels)_service_repaired";
}
-
+
bundle agent openvpn_tunnel(tunnel)
{
classes:
@@ -117,12 +117,12 @@ bundle agent openvpn_tunnel(tunnel)
"$(this.bundle): $(tunnel).conf repaired" ifvarclass => "openvpn_$(tunnel)_conf_repaired";
"$(this.bundle): $(tunnel) service repaired" ifvarclass => "tunnel_$(tunnel)_service_repaired";
}
-{{< /highlight >}}
+```
### Template file
Templates can reference the g.host_data too, like in the following :
-{{< highlight cfg >}}
+```cfg
[%CFEngine BEGIN %]
proto udp
port $(g.host_data[tunnels][$(openvpn_tunnel.tunnel)][port])
@@ -152,7 +152,7 @@ group nogroup
remote $(g.host_data[tunnels][$(openvpn_tunnel.tunnel)][remote_host]) $(g.host_data[tunnels][$(openvpn_tunnel.tunnel)][remote_port])
[%CFEngine END %]
-{{< /highlight >}}
+```
## References
- https://docs.cfengine.com/docs/master/examples-tutorials-json-yaml-support-in-cfengine.html
diff --git a/content/blog/commands/asterisk-call-you.md b/content/blog/commands/asterisk-call-you.md
index 75d642b..ce62556 100644
--- a/content/blog/commands/asterisk-call-you.md
+++ b/content/blog/commands/asterisk-call-you.md
@@ -8,6 +8,6 @@ tags:
## Using the cli
-{{< highlight yaml >}}
+```sh
watch -d -n1 'asterisk -rx "core show channels"'
-{{< /highlight >}}
+```
diff --git a/content/blog/commands/asterisk-list-active-calls.md b/content/blog/commands/asterisk-list-active-calls.md
index 285d330..e9723e7 100644
--- a/content/blog/commands/asterisk-list-active-calls.md
+++ b/content/blog/commands/asterisk-list-active-calls.md
@@ -11,6 +11,6 @@ tags:
At alterway we sometimes have DTMF problems that prevent my mobile from joining a conference room. Here is something I use to have asterisk call me
and place me inside the room :
-{{< highlight yaml >}}
+```
channel originate SIP/numlog/06XXXXXXXX application MeetMe 85224,M,secret
-{{< /highlight >}}
+```
diff --git a/content/blog/commands/busybox-web-server.md b/content/blog/commands/busybox-web-server.md
index 60cc1be..14470fa 100644
--- a/content/blog/commands/busybox-web-server.md
+++ b/content/blog/commands/busybox-web-server.md
@@ -11,6 +11,6 @@ tags:
If you have been using things like `python -m SimpleHTTPServer` to serve static files in a pinch, here is something even simpler and more lightweight to use :
-{{< highlight sh >}}
+```sh
busybox httpd -vfp 80
-{{< /highlight >}}
+```
diff --git a/content/blog/commands/capture-desktop-video.md b/content/blog/commands/capture-desktop-video.md
index 3bc0c38..8318c48 100644
--- a/content/blog/commands/capture-desktop-video.md
+++ b/content/blog/commands/capture-desktop-video.md
@@ -10,6 +10,6 @@ tags:
You can capture a video of your linux desktop very easily with ffmpeg :
-{{< highlight sh >}}
+```sh
ffmpeg -f x11grab -s xga -r 25 -i :0.0 -sameq /tmp/out.mpg
-{{< /highlight >}}
+```
diff --git a/content/blog/commands/clean-conntrack-states.md b/content/blog/commands/clean-conntrack-states.md
index eee4da9..3621dfe 100644
--- a/content/blog/commands/clean-conntrack-states.md
+++ b/content/blog/commands/clean-conntrack-states.md
@@ -10,10 +10,10 @@ tags:
Firewalling on linux is messy, here is an example of how to clean conntrack states that match a specific query on a linux firewall :
-{{< highlight sh >}}
+```sh
conntrack -L conntrack -p tcp --orig-dport 65372 | \
while read _ _ _ _ src dst sport dport _; do
conntrack -D conntrack --proto tcp --orig-src ${src#*=} --orig-dst ${dst#*=} \
--sport ${sport#*=} --dport ${dport#*=}
done
-{{< /highlight >}}
+```
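The `${var#*=}` expansions in the loop above strip the shortest prefix matching `*=`, turning `conntrack -L` fields like `src=10.0.0.1` into bare values. A quick standalone illustration:

```sh
#!/bin/sh
# Each conntrack -L field looks like key=value; ${1#*=} removes the
# shortest leading match of '*=', leaving only the value.
strip_key() {
	echo "${1#*=}"
}

strip_key "src=10.0.0.1"   # prints 10.0.0.1
strip_key "dport=65372"    # prints 65372
```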
diff --git a/content/blog/commands/date.md b/content/blog/commands/date.md
index 1472940..9612124 100644
--- a/content/blog/commands/date.md
+++ b/content/blog/commands/date.md
@@ -10,7 +10,7 @@ tags:
I somehow have a hard time remembering these simple date flags (probably because I rarely get to practice them), so I decided to write them down here :
-{{< highlight sh >}}
+```sh
$ date -d @1294319676
Thu Jan 6 13:14:36 GMT 2011
-{{< /highlight >}}
+```
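The reverse conversion works too, and adding `-u` pins everything to UTC so the output does not depend on the local timezone. A couple of examples using GNU date syntax:

```sh
#!/bin/sh
# epoch -> human readable, in UTC
date -u -d @1294319676
# human readable -> epoch
date -u -d '2011-01-06 13:14:36' +%s
# current time as an epoch timestamp
date +%s
```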
diff --git a/content/blog/commands/find-hardlinks.md b/content/blog/commands/find-hardlinks.md
index d418cc3..e8ebbea 100644
--- a/content/blog/commands/find-hardlinks.md
+++ b/content/blog/commands/find-hardlinks.md
@@ -10,6 +10,6 @@ tags:
## The command
-{{< highlight sh >}}
+```sh
find . -samefile /path/to/file
-{{< /highlight >}}
+```
diff --git a/content/blog/commands/find-inodes-used.md b/content/blog/commands/find-inodes-used.md
index 4936c70..4efad9d 100644
--- a/content/blog/commands/find-inodes-used.md
+++ b/content/blog/commands/find-inodes-used.md
@@ -10,6 +10,6 @@ tags:
## The command
-{{< highlight sh >}}
+```sh
find . -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
-{{< /highlight >}}
+```
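As a sanity check of what this pipeline reports, here is a disposable tree (built in a temp directory purely for illustration) where one directory deliberately holds more entries; the directory hogging the most inodes ends up last:

```sh
#!/bin/sh
set -eu
# Build a small tree: a/ holds 3 files, b/ holds 1.
tmp=$(mktemp -d)
mkdir "$tmp/a" "$tmp/b"
touch "$tmp/a/1" "$tmp/a/2" "$tmp/a/3" "$tmp/b/1"
# Count entries per parent directory, least used first (GNU find).
(cd "$tmp" && find . -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n)
rm -rf "$tmp"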
diff --git a/content/blog/commands/git-import-commits.md b/content/blog/commands/git-import-commits.md
index 0286282..bb92b26 100644
--- a/content/blog/commands/git-import-commits.md
+++ b/content/blog/commands/git-import-commits.md
@@ -9,6 +9,6 @@ tags:
## The trick
In an ideal world there should never be a need to do this, but here is how to do it properly if you ever walk into this bizarre problem. This command imports commits from a repo in the `../masterfiles` folder and applies them to the repository inside the current folder :
-{{< highlight sh >}}
+```sh
(cd ../masterfiles/; git format-patch --stdout origin/master) | git am
-{{< /highlight >}}
+```
diff --git a/content/blog/commands/git-rewrite-commit-history.md b/content/blog/commands/git-rewrite-commit-history.md
index 8378a9c..4176c82 100644
--- a/content/blog/commands/git-rewrite-commit-history.md
+++ b/content/blog/commands/git-rewrite-commit-history.md
@@ -9,6 +9,6 @@ tags:
## git filter-branch
Here is how to rewrite a git commit history, for example to remove a file :
-{{< highlight sh >}}
+```sh
git filter-branch --index-filter "git rm --cached --ignore-unmatch ${file}" --prune-empty --tag-name-filter cat -- --all
-{{< /highlight >}}
+```
diff --git a/content/blog/commands/ipmi.md b/content/blog/commands/ipmi.md
index 4e00be1..a45879d 100644
--- a/content/blog/commands/ipmi.md
+++ b/content/blog/commands/ipmi.md
@@ -11,9 +11,9 @@ tags:
- launch ipmi remote text console : `ipmitool -H XX.XX.XX.XX -C3 -I lanplus -U <ipmi_user> sol activate`
- Show local ipmi lan configuration : `ipmitool lan print`
- Update local ipmi lan configuration :
-{{< highlight sh >}}
+```sh
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 10.31.149.39
ipmitool lan set 1 netmask 255.255.255.0
mc reset cold
-{{< /highlight >}}
+```
diff --git a/content/blog/commands/mdadm.md b/content/blog/commands/mdadm.md
index da15041..a2825f5 100644
--- a/content/blog/commands/mdadm.md
+++ b/content/blog/commands/mdadm.md
@@ -9,34 +9,34 @@ tags:
## Watch the array status
-{{< highlight sh >}}
+```sh
watch -d -n10 mdadm --detail /dev/md127
-{{< /highlight >}}
+```
## Recovery from livecd
-{{< highlight sh >}}
+```sh
mdadm --examine --scan >> /etc/mdadm.conf
mdadm --assemble --scan /dev/md/root
mount /dev/md127 /mnt # or vgscan...
-{{< /highlight >}}
+```
If auto detection does not work, you can still assemble an array manually :
-{{< highlight sh >}}
+```sh
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
-{{< /highlight >}}
+```
## Resync an array
First rigorously check the output of `cat /proc/mdstat`
-{{< highlight sh >}}
+```sh
mdadm --manage --re-add /dev/md0 /dev/sdb1
-{{< /highlight >}}
+```
## Destroy an array
-{{< highlight sh >}}
+```sh
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda
mdadm --zero-superblock /dev/sdb
-{{< /highlight >}}
+```
diff --git a/content/blog/commands/omreport.md b/content/blog/commands/omreport.md
index a5d90e5..de46c8a 100644
--- a/content/blog/commands/omreport.md
+++ b/content/blog/commands/omreport.md
@@ -12,8 +12,8 @@ tags:
## Other commands
-{{< highlight sh >}}
+```sh
omreport storage vdisk
omreport storage pdisk controller=0 vdisk=0
omreport storage pdisk controller=0 pdisk=0:0:4
-{{< /highlight >}}
+```
diff --git a/content/blog/commands/qemu-nbd.md b/content/blog/commands/qemu-nbd.md
index 0402876..a9a5ceb 100644
--- a/content/blog/commands/qemu-nbd.md
+++ b/content/blog/commands/qemu-nbd.md
@@ -9,11 +9,11 @@ tags:
## Usage example
-{{< highlight sh >}}
+```sh
modprobe nbd max_part=8
qemu-nbd -c /dev/nbd0 image.img
mount /dev/nbd0p1 /mnt # or vgscan && vgchange -ay
[...]
umount /mnt
qemu-nbd -d /dev/nbd0
-{{< /highlight >}}
+```
diff --git a/content/blog/commands/qemu.md b/content/blog/commands/qemu.md
index 294c9a9..b4301a8 100644
--- a/content/blog/commands/qemu.md
+++ b/content/blog/commands/qemu.md
@@ -10,23 +10,23 @@ tags:
## Quickly launch a qemu vm with local qcow as hard drive
In this example I am using the docker0 bridge because I do not want to have to modify my shorewall config, but any proper bridge would do :
-{{< highlight sh >}}
+```sh
ip tuntap add tap0 mode tap
brctl addif docker0 tap0
qemu-img create -f qcow2 obsd.qcow2 10G
qemu-system-x86_64 -curses -drive file=install65.fs,format=raw -drive file=obsd.qcow2 \
-net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0
qemu-system-x86_64 -curses -drive file=obsd.qcow2 -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0
-{{< /highlight >}}
+```
The first qemu command runs the installer, the second one just runs the vm.
## Launch a qemu vm with your local hard drive
My use case for this is to install openbsd on a server from a hosting provider that doesn't provide an openbsd installer :
-{{< highlight sh >}}
+```sh
qemu-system-x86_64 -curses -drive file=miniroot65.fs -drive file=/dev/sda -net nic -net user
-{{< /highlight >}}
+```
## Resources
diff --git a/content/blog/commands/rrdtool.md b/content/blog/commands/rrdtool.md
index bca039a..dfeb6ca 100644
--- a/content/blog/commands/rrdtool.md
+++ b/content/blog/commands/rrdtool.md
@@ -8,13 +8,13 @@ tags:
## Graph manually
-{{< highlight sh >}}
+```sh
for i in `ls`; do
rrdtool graph $i.png -w 1024 -h 768 -a PNG --slope-mode --font DEFAULT:7: \
--start -3days --end now DEF:in=$i:netin:MAX DEF:out=$i:netout:MAX \
LINE1:in#0000FF:"in" LINE1:out#00FF00:"out"
done
-{{< /highlight >}}
+```
## References
diff --git a/content/blog/debian/error-during-signature-verification.md b/content/blog/debian/error-during-signature-verification.md
index 117fcf9..7e4dbaf 100644
--- a/content/blog/debian/error-during-signature-verification.md
+++ b/content/blog/debian/error-during-signature-verification.md
@@ -9,9 +9,9 @@ tags:
## How to fix
Here is how to fix the apt-get “Error occurred during the signature verification” :
-{{< highlight sh >}}
+```sh
cd /var/lib/apt
mv lists lists.old
mkdir -p lists/partial
aptitude update
-{{< /highlight >}}
+```
diff --git a/content/blog/debian/force-package-removal.md b/content/blog/debian/force-package-removal.md
index 75a5d12..33920fe 100644
--- a/content/blog/debian/force-package-removal.md
+++ b/content/blog/debian/force-package-removal.md
@@ -9,8 +9,8 @@ tags:
## How to force the removal of a package
Here is how to force package removal when post-uninstall script fails :
-{{< highlight sh >}}
+```sh
dpkg --purge --force-all <package>
-{{< /highlight >}}
+```
There is another option if you need to be smarter or if it is a pre-uninstall script that fails. Look at `/var/lib/dpkg/info/<package>.*inst`, locate the line that fails, comment it out and try to purge again. Repeat until success!
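That comment-out step can itself be scripted. This is a hypothetical sketch operating on a stand-in maintainer script (the actual failing line and package name vary, so both are made up here); `&` in the sed replacement re-inserts the matched line:

```sh
#!/bin/sh
set -eu
# Stand-in for /var/lib/dpkg/info/<package>.postrm with a failing line.
cat > demo.postrm <<'EOF'
#!/bin/sh
deluser --system mypackage-user
exit 0
EOF
# Comment out the failing line in place (GNU sed).
sed -i 's/^deluser .*/#&/' demo.postrm
cat demo.postrm
```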
diff --git a/content/blog/debian/no-public-key-error.md b/content/blog/debian/no-public-key-error.md
index 1e5720b..9eccd74 100644
--- a/content/blog/debian/no-public-key-error.md
+++ b/content/blog/debian/no-public-key-error.md
@@ -9,6 +9,6 @@ tags:
## How to fix
Here is how to fix the no public key available error :
-{{< highlight sh >}}
+```sh
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys KEYID
-{{< /highlight >}}
+```
diff --git a/content/blog/docker/cleaning.md b/content/blog/docker/cleaning.md
index 7326f94..f5a8e99 100644
--- a/content/blog/docker/cleaning.md
+++ b/content/blog/docker/cleaning.md
@@ -9,6 +9,6 @@ tags:
## The command
Be careful that this will delete any stopped container and remove any locally unused images, volumes and tags :
-{{< highlight sh >}}
+```sh
docker system prune -f -a
-{{< /highlight >}}
+```
diff --git a/content/blog/docker/docker-compose-bridge.md b/content/blog/docker/docker-compose-bridge.md
index 8dffe1f..416b8d0 100644
--- a/content/blog/docker/docker-compose-bridge.md
+++ b/content/blog/docker/docker-compose-bridge.md
@@ -14,7 +14,7 @@ By default, docker-compose will create a network with a randomly named bridge. I
For example if your bridge is named docbr1, you need to put your services in `network_mode: "bridge"` and add a custom `network` entry like :
-{{< highlight yaml >}}
+```yaml
version: '3.0'
services:
@@ -32,4 +32,4 @@ networks:
default:
external:
name: docbr1
-{{< /highlight >}}
+```
diff --git a/content/blog/docker/migrate-data-volume.md b/content/blog/docker/migrate-data-volume.md
index 9a87f57..5e05e72 100644
--- a/content/blog/docker/migrate-data-volume.md
+++ b/content/blog/docker/migrate-data-volume.md
@@ -9,9 +9,9 @@ tags:
## The command
Here is how to migrate a data volume between two of your hosts. An rsync of the proper `/var/lib/docker/volumes` subfolder would work just as well, but here is a fun way to do it with docker and pipes :
-{{< highlight sh >}}
+```sh
export VOLUME=tiddlywiki
export DEST=10.1.0.242
docker run --rm -v $VOLUME:/from alpine ash -c "cd /from ; tar -cpf - . " \
    | ssh $DEST "docker run --rm -i -v $VOLUME:/to alpine ash -c 'cd /to ; tar -xpf - ' "
-{{< /highlight >}}
+```
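The same tar-over-a-pipe trick works locally, which makes it easy to test; a minimal sketch copying one directory into another while preserving permissions:

```sh
#!/bin/sh
set -eu
src=$(mktemp -d)
dst=$(mktemp -d)
echo hello > "$src/file"
chmod 640 "$src/file"
# Pack from $src on one side of the pipe, unpack into $dst on the other,
# exactly like the docker/ssh version but without the remote hop.
(cd "$src" && tar -cpf - .) | (cd "$dst" && tar -xpf -)
cat "$dst/file"
```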
diff --git a/content/blog/docker/shell-usage-in-dockerfile.md b/content/blog/docker/shell-usage-in-dockerfile.md
index 21e81fc..25fc22b 100644
--- a/content/blog/docker/shell-usage-in-dockerfile.md
+++ b/content/blog/docker/shell-usage-in-dockerfile.md
@@ -14,9 +14,9 @@ The default shell is `["/bin/sh", "-c"]`, which doesn't handle pipe fail
To process errors when using pipes use this :
-{{< highlight sh >}}
+```sh
SHELL ["/bin/bash", "-eux", "-o", "pipefail", "-c"]
-{{< /highlight >}}
+```
## References
diff --git a/content/blog/freebsd/change-the-ip-address-of-a-running-jail.md b/content/blog/freebsd/change-the-ip-address-of-a-running-jail.md
index 815d352..c35116e 100644
--- a/content/blog/freebsd/change-the-ip-address-of-a-running-jail.md
+++ b/content/blog/freebsd/change-the-ip-address-of-a-running-jail.md
@@ -11,6 +11,6 @@ tags:
Here is how to change the ip address of a running jail :
-{{< highlight sh >}}
+```sh
jail -m ip4.addr="192.168.1.87,192.168.1.88" jid=1
-{{< /highlight >}}
+```
diff --git a/content/blog/freebsd/clean-install-does-not-boot.md b/content/blog/freebsd/clean-install-does-not-boot.md
index d5603f7..b473cde 100644
--- a/content/blog/freebsd/clean-install-does-not-boot.md
+++ b/content/blog/freebsd/clean-install-does-not-boot.md
@@ -10,7 +10,7 @@ tags:
I installed a fresh FreeBSD server today, and to my surprise it refused to boot. I had to do the following from my liveUSB :
-{{< highlight yaml >}}
+```sh
gpart set -a active /dev/ada0
gpart set -a bootme -i 1 /dev/ada0
-{{< /highlight >}}
+```
diff --git a/content/blog/gentoo/get-zoom-to-work.md b/content/blog/gentoo/get-zoom-to-work.md
index c275ece..d47ca54 100644
--- a/content/blog/gentoo/get-zoom-to-work.md
+++ b/content/blog/gentoo/get-zoom-to-work.md
@@ -12,13 +12,13 @@ The zoom video conferencing tool works on gentoo, but since it is not integrated
## Running the client
-{{< highlight yaml >}}
+```sh
./ZoomLauncher
-{{< /highlight >}}
+```
## Working around the "zoommtg address not understood" error
When you try to authenticate you will have your web browser pop up with a link it cannot interpret. You need to get the `zoommtg://.*` thing and run it in another ZoomLauncher (do not close the zoom process that spawned this authentication link or the authentication will fail) :
-{{< highlight yaml >}}
+```sh
./ZoomLauncher 'zoommtg://zoom.us/google?code=XXXXXXXX'
-{{< /highlight >}}
+```
diff --git a/content/blog/hugo/adding-custom-shortcode-age.md b/content/blog/hugo/adding-custom-shortcode-age.md
index 72fb9bd..432d820 100644
--- a/content/blog/hugo/adding-custom-shortcode-age.md
+++ b/content/blog/hugo/adding-custom-shortcode-age.md
@@ -14,9 +14,9 @@ On the [about-me]({{< ref "about-me" >}}) page I had hardcoded my age. I wanted
Adding a custom markdown shortcode in hugo is as simple as creating a `layouts/shortcodes/` directory. Each html file created inside will define a shortcode from its filename. In my example I want to calculate my age, so I named the shortcode `age.html` and added the following simple template code :
-{{< highlight html >}}
+```html
{{ div (sub now.Unix 493473600 ) 31556926 }}
-{{< / highlight >}}
+```
The first number is the timestamp of my birthday, the second represents how many seconds there are in a year.
@@ -24,14 +24,14 @@ The first number is the timestamp of my birthday, the second represents how many
With this `layouts/shortcodes/age.html` file I can just add the following in a page to add my age :
-{{< highlight html >}}
+```html
{{< print "{{% age %}}" >}}
-{{< / highlight >}}
+```
And if you are wondering how I am able to display a shortcode's code inside this page without having it render, it is because I defined another shortcode that does exactly that, like this :
-{{< highlight html >}}
+```html
{{< print "{{ index .Params 0 }}" >}}
-{{< / highlight >}}
+```
You can find these examples [here](https://git.adyxax.org/adyxax/www/tree/layouts/shortcodes)! Hugo really is a powerful static website generator, it is amazing.
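The arithmetic hugo performs in `age.html` can be reproduced in shell, which is a handy way to double-check the constants (493473600 is the birthday timestamp, 31556926 is roughly the number of seconds in a year):

```sh
#!/bin/sh
# Same formula as the hugo template: (now - birthday) / seconds_per_year
birthday=493473600
seconds_per_year=31556926
age=$(( ($(date +%s) - birthday) / seconds_per_year ))
echo "$age"
```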
diff --git a/content/blog/hugo/switching-to-hugo.md b/content/blog/hugo/switching-to-hugo.md
index dc2841f..834f36e 100644
--- a/content/blog/hugo/switching-to-hugo.md
+++ b/content/blog/hugo/switching-to-hugo.md
@@ -12,49 +12,49 @@ This is the website you are currently reading. It is a static website built usin
## Installing hugo
-{{< highlight sh >}}
+```sh
go get github.com/gohugoio/hugo
-{{< / highlight >}}
+```
You probably won't encounter this issue, but this command failed at the time I installed hugo because the master branch in one of the dependencies was
tainted. I fixed it by using a stable tag for this project and continued installing hugo from there:
-{{< highlight sh >}}
+```sh
cd go/src/github.com/tdewolff/minify/
tig --all
git checkout v2.6.1
go get github.com/gohugoio/hugo
-{{< / highlight >}}
+```
This did not build me the extended version of hugo that I need for the [docsy](https://github.com/google/docsy) theme I chose, so I had to get it by doing :
-{{< highlight sh >}}
+```sh
cd ~/go/src/github.com/gohugoio/hugo/
go get --tags extended
go install --tags extended
-{{< / highlight >}}
+```
## Bootstrapping this site
-{{< highlight sh >}}
+```sh
hugo new site www
cd www
git init
git submodule add https://github.com/google/docsy themes/docsy
-{{< / highlight >}}
+```
The docsy theme requires two nodejs programs to run :
-{{< highlight sh >}}
+```sh
npm install -D --save autoprefixer
npm install -D --save postcss-cli
-{{< / highlight >}}
+```
## hugo commands
To spin up the live server that automatically rebuilds the website while writing articles :
-{{< highlight sh >}}
+```sh
hugo server --bind 0.0.0.0 --minify --disableFastRender
-{{< / highlight >}}
+```
To publish the website in the `public` folder :
-{{< highlight sh >}}
+```sh
hugo --minify
-{{< / highlight >}}
+```
diff --git a/content/blog/kubernetes/get_key_and_certificae.md b/content/blog/kubernetes/get_key_and_certificae.md
index 30b60e5..29a7789 100644
--- a/content/blog/kubernetes/get_key_and_certificae.md
+++ b/content/blog/kubernetes/get_key_and_certificae.md
@@ -14,7 +14,7 @@ My use case is to deploy a wildcard certificate that was previously handled by a
## The solution
Assuming we are working with a secret named `wild.adyxax.org-cert` and our namespace is named `legacy` :
-{{< highlight sh >}}
+```sh
kubectl -n legacy get secret wild.adyxax.org-cert -o json -o=jsonpath="{.data.tls\.crt}" | base64 -d > fullchain.cer
kubectl -n legacy get secret wild.adyxax.org-cert -o json -o=jsonpath="{.data.tls\.key}" | base64 -d > adyxax.org.key
-{{< /highlight >}}
+```
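The `base64 -d` step works because kubernetes stores every secret value base64-encoded; the decoding can be sketched without a cluster (the sample string here is made up) :

```sh
# kubernetes hands back secret values encoded exactly like this
encoded=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
printf '%s' "$encoded" | base64 -d
```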
diff --git a/content/blog/kubernetes/pg_dump_restore.md b/content/blog/kubernetes/pg_dump_restore.md
index 0251728..0fa09ac 100644
--- a/content/blog/kubernetes/pg_dump_restore.md
+++ b/content/blog/kubernetes/pg_dump_restore.md
@@ -11,21 +11,21 @@ tags:
## Dumping
Assuming we are working with a postgresql statefulset, our namespace is named `miniflux` and our master pod is named `db-postgresql-0`, trying to
dump a database named `miniflux`:
-{{< highlight sh >}}
+```sh
export POSTGRES_PASSWORD=$(kubectl get secret --namespace miniflux db-postgresql \
-o jsonpath="{.data.postgresql-password}" | base64 --decode)
kubectl run db-postgresql-client --rm --tty -i --restart='Never' --namespace miniflux \
--image docker.io/bitnami/postgresql:11.8.0-debian-10-r19 --env="PGPASSWORD=$POSTGRES_PASSWORD" \
--command -- pg_dump --host db-postgresql -U postgres -d miniflux > miniflux.sql-2020062501
-{{< /highlight >}}
+```
## Restoring
Assuming we are working with a postgresql statefulset, our namespace is named `miniflux` and our master pod is named `db-postgresql-0`, trying to
restore a database named `miniflux`:
-{{< highlight sh >}}
+```sh
kubectl -n miniflux cp miniflux.sql-2020062501 db-postgresql-0:/tmp/miniflux.sql
kubectl -n miniflux exec -ti db-postgresql-0 -- psql -U postgres -d miniflux
miniflux=# \i /tmp/miniflux.sql
kubectl -n miniflux exec -ti db-postgresql-0 -- rm /tmp/miniflux.sql
-{{< /highlight >}}
+```
diff --git a/content/blog/kubernetes/single-node-cluster-taint.md b/content/blog/kubernetes/single-node-cluster-taint.md
index 5b80598..bd7ddb2 100644
--- a/content/blog/kubernetes/single-node-cluster-taint.md
+++ b/content/blog/kubernetes/single-node-cluster-taint.md
@@ -10,11 +10,11 @@ tags:
## The solution
On a single node cluster, control plane nodes are tainted so that the cluster never schedules pods on them. To change that run :
-{{< highlight sh >}}
+```sh
kubectl taint nodes --all node-role.kubernetes.io/master-
-{{< /highlight >}}
+```
Getting dns working in your pods :
-{{< highlight sh >}}
+```sh
# add --cluster-dns=10.96.0.10 to /etc/conf.d/kubelet
-{{< /highlight >}}
+```
diff --git a/content/blog/miscellaneous/bacula-bareos.md b/content/blog/miscellaneous/bacula-bareos.md
index 19111c3..6fdf648 100644
--- a/content/blog/miscellaneous/bacula-bareos.md
+++ b/content/blog/miscellaneous/bacula-bareos.md
@@ -13,28 +13,28 @@ Bacula is a backup software, bareos is a fork of it. Here are some tips and solu
## Adjust an existing volume for pool configuration changes
In bconsole, run the following commands and follow the prompts :
-{{< highlight sh >}}
+```sh
update pool from resource
update all volumes in pool
-{{< /highlight >}}
+```
## Using bextract
On the storage daemon (sd) you need to have a valid device name with the path to your tape, then run :
-{{< highlight sh >}}
+```sh
bextract -V <volume names separated by |> <device-name> <directory-to-store-files>
-{{< /highlight >}}
+```
## Integer out of range sql error
If you get an sql error `integer out of range` for an insert query in the catalog, check the id sequence for the table which had the error. For
example with the basefiles table :
-{{< highlight sql >}}
+```sql
select nextval('basefiles_baseid_seq');
-{{< /highlight >}}
+```
You can then fix it with :
-{{< highlight sql >}}
+```sql
alter table BaseFiles alter column baseid set data type bigint;
-{{< /highlight >}}
+```
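For context, the threshold behind that `integer out of range` error is the maximum value of a postgresql integer; any 64-bit shell can show the boundary being crossed :

```sh
# a postgresql integer is a signed 32 bit value, so it caps at 2147483647;
# once the sequence passes that value, inserts into an int column fail
max_int=2147483647
echo $(( max_int + 1 ))
```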
diff --git a/content/blog/miscellaneous/bash-tcp-client.md b/content/blog/miscellaneous/bash-tcp-client.md
index 2f31d14..e3246ef 100644
--- a/content/blog/miscellaneous/bash-tcp-client.md
+++ b/content/blog/miscellaneous/bash-tcp-client.md
@@ -10,8 +10,8 @@ tags:
There are some fun toys in bash. I would not rely on it for a production script, but here is one such thing :
-{{< highlight sh >}}
+```sh
exec 5<>/dev/tcp/10.1.0.254/8080
echo -e "GET / HTTP/1.0\n" >&5
cat <&5
-{{< /highlight >}}
+```
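The same trick can be wrapped in a small bash function; host and port are placeholders, and this still relies on bash's /dev/tcp virtual files so it will not work in a plain sh :

```sh
#!/usr/bin/env bash
# minimal HTTP GET over bash's /dev/tcp; returns non-zero if the connection
# cannot be opened
http_get() {
    local host=$1 port=$2
    exec 5<>"/dev/tcp/${host}/${port}" || return 1
    printf 'GET / HTTP/1.0\r\nHost: %s\r\n\r\n' "$host" >&5
    cat <&5
    exec 5>&-
}
```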
diff --git a/content/blog/miscellaneous/boot-from-initramfs.md b/content/blog/miscellaneous/boot-from-initramfs.md
index df740b6..759219f 100644
--- a/content/blog/miscellaneous/boot-from-initramfs.md
+++ b/content/blog/miscellaneous/boot-from-initramfs.md
@@ -14,9 +14,9 @@ Sometimes, your linux machine can get stuck while booting and drop you into an i
All initramfs are potentially different, but almost always feature busybox and common mechanisms. Recently I had to finish booting from an initramfs shell, here is how I used `switch_root` to do so :
-{{< highlight sh >}}
+```sh
lvm vgscan
lvm vgchange -ay vg
mount -t ext4 /dev/mapper/vg-root /root
exec switch_root -c /dev/console /root /sbin/init
-{{< /highlight >}}
+```
diff --git a/content/blog/miscellaneous/etc-update-alpine.md b/content/blog/miscellaneous/etc-update-alpine.md
index 20461d9..86fdcae 100644
--- a/content/blog/miscellaneous/etc-update-alpine.md
+++ b/content/blog/miscellaneous/etc-update-alpine.md
@@ -10,7 +10,7 @@ tags:
## The script
Alpine linux doesn't seem to have a tool to merge pending configuration changes, so I wrote one :
-{{< highlight sh >}}
+```sh
#!/bin/sh
set -eu
@@ -37,4 +37,4 @@ for new_file in $(find /etc -iname '*.apk-new'); do
esac
done
done
-{{< /highlight >}}
+```
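The heart of the script is the loop over `*.apk-new` files; here is the same idea exercised in a throwaway directory instead of /etc, dropping any pending file identical to the live one :

```sh
# build a fake /etc with one pending file identical to the live copy
dir=$(mktemp -d)
echo 'hostname="demo"' > "$dir/hostname"
cp "$dir/hostname" "$dir/hostname.apk-new"
for new_file in $(find "$dir" -iname '*.apk-new'); do
    # identical to the installed file: the pending copy can just be removed
    if cmp -s "$new_file" "${new_file%.apk-new}"; then
        rm "$new_file"
    fi
done
```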
diff --git a/content/blog/miscellaneous/i3dropdown.md b/content/blog/miscellaneous/i3dropdown.md
index fa10db4..31c0a52 100644
--- a/content/blog/miscellaneous/i3dropdown.md
+++ b/content/blog/miscellaneous/i3dropdown.md
@@ -14,21 +14,21 @@ i3dropdown is a tool to make any X application drop down from the top of the scr
## Compilation
First of all, you have to get i3dropdown and compile it. It does not have any dependencies so it is really easy :
-{{< highlight sh >}}
+```sh
git clone https://gitlab.com/exrok/i3dropdown
cd i3dropdown
make
cp build/i3dropdown ~/bin/
-{{< /highlight >}}
+```
## i3 configuration
Here is a working example of the pavucontrol app, a volume mixer I use :
-{{< highlight conf >}}
+```cfg
exec --no-startup-id i3 --get-socketpath > /tmp/i3wm-socket-path
for_window [instance="^pavucontrol"] floating enable
bindsym Mod4+shift+p exec /home/julien/bin/i3dropdown -W 90 -H 50 pavucontrol pavucontrol-qt
-{{< /highlight >}}
+```
To work properly, i3dropdown needs to have the path to the i3 socket. Because the command to get the socketpath from i3 is a little slow, it is best to cache it somewhere. By default
i3dropdown recognises `/tmp/i3wm-socket-path`. Then each window managed by i3dropdown needs to be floating. The last line binds a key to invoke or mask the app.
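The caching advice boils down to "run the slow command once, read a file afterwards"; here is a generic sketch of that pattern, with echo standing in for the slow `i3 --get-socketpath` call so the demo runs anywhere :

```sh
# a fresh unused path stands in for /tmp/i3wm-socket-path
cache=$(mktemp -u)
# only pay the cost of the slow command when the cache file is missing or empty
[ -s "$cache" ] || echo '/run/user/1000/i3/ipc-socket.demo' > "$cache"
cat "$cache"
```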
diff --git a/content/blog/miscellaneous/link-deleted-inode.md b/content/blog/miscellaneous/link-deleted-inode.md
index c16ea78..171986f 100644
--- a/content/blog/miscellaneous/link-deleted-inode.md
+++ b/content/blog/miscellaneous/link-deleted-inode.md
@@ -15,8 +15,8 @@ Sometimes a file gets deleted by mistake, but thankfully it is still opened by s
Get the inode number from `lsof` (or from `fstat` if you are on a modern system), then run something like the following :
-{{< highlight sh >}}
+```sh
debugfs -w /dev/mapper/vg-home -R 'link <16008> /some/path'
-{{< /highlight >}}
+```
In this example 16008 is the inode number you want to link to (the < > are important, they tell debugfs you are manipulating an inode). Beware that **the path is relative to the root of the block device** you are restoring onto.
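If you are unsure which number to put between the < >, inode numbers are visible with stat; a quick demonstration on a throwaway file (GNU stat syntax) :

```sh
f=$(mktemp)
# %i prints the inode number, the same value debugfs expects between < >
inode=$(stat -c %i "$f")
echo "$inode"
```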
diff --git a/content/blog/miscellaneous/mencoder.md b/content/blog/miscellaneous/mencoder.md
index 4eeb5a9..7487e69 100644
--- a/content/blog/miscellaneous/mencoder.md
+++ b/content/blog/miscellaneous/mencoder.md
@@ -9,14 +9,14 @@ tags:
## Aggregate png images into a video
Example command :
-{{< highlight sh >}}
+```sh
mencoder mf://*.png -mf w=1400:h=700:fps=1:type=png -ovc lavc -lavcopts vcodec=mpeg4:mbd=2:trell -oac copy -o output.avi
-{{< /highlight >}}
+```
You should use the following to specify a list of files instead of `*.png`:
-{{< highlight sh >}}
+```sh
mf://@list.txt
-{{< /highlight >}}
+```
## References
diff --git a/content/blog/miscellaneous/mirroring-to-github.md b/content/blog/miscellaneous/mirroring-to-github.md
index ab42914..78615d0 100644
--- a/content/blog/miscellaneous/mirroring-to-github.md
+++ b/content/blog/miscellaneous/mirroring-to-github.md
@@ -16,13 +16,13 @@ It turns out it is quite simple. First you will need to generate a [github acces
Then you create a git hook with a script that looks like the following :
-{{< highlight sh >}}
+```sh
#!/usr/bin/env bash
set -eu
git push --mirror --quiet https://adyxax:TOKEN@github.com/adyxax/www.git &> /dev/null
echo 'github updated'
-{{< /highlight >}}
+```
Just put your token there, adjust your username and the repository path, and it will work. I am using this in `post-receive` hooks on my git server on several repositories without any issue.
diff --git a/content/blog/miscellaneous/mssql-centos-7.md b/content/blog/miscellaneous/mssql-centos-7.md
index 8ba44e6..cf87a87 100644
--- a/content/blog/miscellaneous/mssql-centos-7.md
+++ b/content/blog/miscellaneous/mssql-centos-7.md
@@ -15,7 +15,7 @@ I had to do this in order to help a friend, I do not think I would ever willingl
## Procedure
Here is how to set up mssql on a fresh centos 7 :
-{{< highlight sh >}}
+```sh
vi /etc/sysconfig/network-scripts/ifcfg-eth0
vi /etc/resolv.conf
curl -o /etc/yum.repos.d/mssql-server.repo https://packages.microsoft.com/config/rhel/7/mssql-server-2017.repo
@@ -34,4 +34,4 @@ passwd
rm -f /etc/localtime
ln -s /usr/share/zoneinfo/Europe/Paris /etc/localtime
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -p
-{{< /highlight >}}
+```
diff --git a/content/blog/miscellaneous/my-postgresql-role-cannot-login.md b/content/blog/miscellaneous/my-postgresql-role-cannot-login.md
index d85f3bf..0b4460e 100644
--- a/content/blog/miscellaneous/my-postgresql-role-cannot-login.md
+++ b/content/blog/miscellaneous/my-postgresql-role-cannot-login.md
@@ -13,6 +13,6 @@ Login is a permission on postgresql, that sometimes is not obvious it can cause
## The solution
Simply log in as postgres or another administrator account and run :
-{{< highlight sh >}}
+```sql
ALTER ROLE "user" LOGIN;
-{{< /highlight >}}
+```
diff --git a/content/blog/miscellaneous/nginx-ldap.md b/content/blog/miscellaneous/nginx-ldap.md
index deea4a4..932a87a 100644
--- a/content/blog/miscellaneous/nginx-ldap.md
+++ b/content/blog/miscellaneous/nginx-ldap.md
@@ -8,7 +8,7 @@ tags:
## How to
-{{< highlight nginx >}}
+```nginx
ldap_server ldap {
auth_ldap_cache_enabled on;
auth_ldap_cache_expiration_time 10000;
@@ -23,4 +23,4 @@ ldap_server ldap {
require valid_user;
#require group "cn=admins,ou=groups,dc=adyxax,dc=org";
}
-{{< /highlight >}}
+```
diff --git a/content/blog/miscellaneous/nginx-rewrite-break-last.md b/content/blog/miscellaneous/nginx-rewrite-break-last.md
index 7cb854e..6cc435e 100644
--- a/content/blog/miscellaneous/nginx-rewrite-break-last.md
+++ b/content/blog/miscellaneous/nginx-rewrite-break-last.md
@@ -13,7 +13,7 @@ Today I was called in escalation to debug why a set of rewrites was suddenly mis
## Outside a location block
When used outside a location block, these keywords stop the rewrite rules evaluation; the resulting url is then matched against the location blocks. Consider the following example :
-{{< highlight nginx >}}
+```nginx
server {
[...]
location / {
@@ -28,11 +28,11 @@ server {
rewrite ([^/]+\.txt)$ /texts/$1 last;
rewrite ([^/]+\.cfg)$ /configs/$1 break;
}
-{{< /highlight >}}
+```
If you run several curls you can see the behaviour illustrated :
-{{< highlight sh >}}
+```sh
curl http://localhost/test
root # we hit the root handler without any redirect matching
@@ -41,14 +41,14 @@ texts # we hit the rewrite to /texts/test.txt, which is then reevaluated and hi
curl http://localhost/test.cfg
configs # we hit the rewrite to /configs/test.cfg, which is then reevaluated and hits the configs location
-{{< /highlight >}}
+```
## Inside a location block
When used inside a location block, a rewrite rule flagged with last will eventually trigger a location change (the request is reevaluated based on the new url), but this does not happen when break is used.
Consider the following example :
-{{< highlight nginx >}}
+```nginx
server {
[...]
location / {
@@ -63,11 +63,11 @@ server {
return 200 'configs';
}
}
-{{< /highlight >}}
+```
If you run several curls you can see the behaviour illustrated :
-{{< highlight sh >}}
+```sh
curl http://localhost/test
root # we hit the root handler without any redirect matching
@@ -76,7 +76,7 @@ texts # we hit the rewrite to /texts/test.txt, which is then reevaluated and hi
curl http://localhost/test.cfg
404 NOT FOUND # or maybe a file if you had a test.cfg file in your root directory!
-{{< /highlight >}}
+```
Can you see what happened for the last test? The break statement in a location stops all evaluation and does not reevaluate the resulting path against any location. Nginx therefore tries to serve a file from the root directory specified for the server. That is the reason we get neither `root` nor `configs` as output.
diff --git a/content/blog/miscellaneous/osm-overlay-example.md b/content/blog/miscellaneous/osm-overlay-example.md
index de31d95..bff86b5 100644
--- a/content/blog/miscellaneous/osm-overlay-example.md
+++ b/content/blog/miscellaneous/osm-overlay-example.md
@@ -13,7 +13,7 @@ OpenStreetMap is a great resource and there is a lot more information stored the
## The solution
Go to http://overpass-turbo.eu/ and enter a filter script similar to the following :
-{{< highlight html >}}
+```html
<osm-script>
<query type="node">
<has-kv k="amenity" v="recycling"/>
@@ -22,6 +22,6 @@ Go to http://overpass-turbo.eu/ and enter a filter script similar to the followi
<!-- print results -->
<print mode="body"/>
</osm-script>
-{{< /highlight >}}
+```
This example will highlight the recycling points near a target location. From there you can build almost any filter you can think of!
diff --git a/content/blog/miscellaneous/pleroma.md b/content/blog/miscellaneous/pleroma.md
index 725541a..15f7298 100644
--- a/content/blog/miscellaneous/pleroma.md
+++ b/content/blog/miscellaneous/pleroma.md
@@ -12,7 +12,7 @@ This article is about my installation of pleroma in a standard alpine linux lxd
## Installation notes
-{{< highlight sh >}}
+```sh
apk add elixir nginx postgresql postgresql-contrib git sudo erlang-ssl erlang-xmerl erlang-parsetools \
erlang-runtime-tools make gcc build-base vim vimdiff htop curl
/etc/init.d/postgresql start
@@ -24,10 +24,10 @@ mix deps.get
mix generate_config
cp config/generated_config.exs config/prod.secret.exs
cat config/setup_db.psql
-{{< /highlight >}}
+```
At this stage you are supposed to execute these setup_db commands in your postgres. Instead of the chmoding and other steps detailed in the official documentation, I execute them manually from a psql shell :
-{{< highlight sh >}}
+```sh
su - postgres
psql
CREATE USER pleroma WITH ENCRYPTED PASSWORD 'XXXXXXXXXXXXXXXXXXX';
@@ -35,21 +35,21 @@ CREATE DATABASE pleroma_dev OWNER pleroma;
\c pleroma_dev;
CREATE EXTENSION IF NOT EXISTS citext;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
-{{< /highlight >}}
+```
Now back to pleroma :
-{{< highlight sh >}}
+```sh
MIX_ENV=prod mix ecto.migrate
MIX_ENV=prod mix phx.server
-{{< /highlight >}}
+```
If this last command runs without error your pleroma will be available and you can test it with :
-{{< highlight sh >}}
+```sh
curl http://localhost:4000/api/v1/instance
-{{< /highlight >}}
+```
If this works, you can shut it down with two C-c and we can configure nginx. This article doesn't really cover my setup since my nginx doesn't run there, and I am using letsencrypt wildcard certificates fetched somewhere else unrelated, so to simplify I only paste the vhost part of the configuration :
-{{< highlight sh >}}
+```nginx
### in nginx.conf inside the container ###
# {{{ pleroma
proxy_cache_path /tmp/pleroma-media-cache levels=1:2 keys_zone=pleroma_media_cache:10m max_size=500m
@@ -96,10 +96,10 @@ location /proxy {
}
client_max_body_size 20M;
-{{< /highlight >}}
+```
Now add the phx.server on boot. I run pleroma as the pleroma user to completely limit the permissions of the server software. The official documentation has all files belong to the user running the server; I prefer that only the uploads directory does. Since I don't run nginx from this container I also edit this out :
-{{< highlight sh >}}
+```sh
adduser -s /sbin/nologin -D -h /srv/pleroma pleroma
cp -a /root/.hex/ /srv/pleroma/.
cp -a /root/.mix /srv/pleroma/.
@@ -110,12 +110,12 @@ sed -i /etc/init.d/pleroma -e '/^command_user=/s/=.*/=nobody:nobody/'
sed -i /etc/init.d/pleroma -e 's/nginx //'
rc-update add pleroma default
rc-update add pleroma start
-{{< /highlight >}}
+```
You should be good to go and able to access your instance from any web browser. After creating your account in a web browser, come back to the cli and set yourself as moderator :
-{{< highlight sh >}}
+```sh
mix set_moderator adyxax
-{{< /highlight >}}
+```
## References
diff --git a/content/blog/miscellaneous/postgresql-read-only.md b/content/blog/miscellaneous/postgresql-read-only.md
index 48ef392..1449da3 100644
--- a/content/blog/miscellaneous/postgresql-read-only.md
+++ b/content/blog/miscellaneous/postgresql-read-only.md
@@ -9,10 +9,10 @@ tags:
## The solution
Here is the bare minimum a user need in order to have complete read only access on a postgresql database :
-{{< highlight sh >}}
+```sql
GRANT CONNECT ON DATABASE "db" TO "user";
\c db
GRANT USAGE ON SCHEMA public TO "user";
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "user";
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO "user";
-{{< /highlight >}}
+```
diff --git a/content/blog/miscellaneous/postgresql-reassign.md b/content/blog/miscellaneous/postgresql-reassign.md
index 75644aa..999b2af 100644
--- a/content/blog/miscellaneous/postgresql-reassign.md
+++ b/content/blog/miscellaneous/postgresql-reassign.md
@@ -9,13 +9,13 @@ tags:
## The solution
Here is the sequence of commands that will change the owner of all objects in a database from a user named "support" to another named "test-support" :
-{{< highlight sh >}}
+```sh
psql -c 'ALTER DATABASE YOUR_DB OWNER TO NEW_OWNER'
for tbl in `psql -qAt -c "select tablename from pg_tables where schemaname = 'public';" YOUR_DB` ; do psql -c "alter table $tbl owner to NEW_OWNER" YOUR_DB ; done
for tbl in `psql -qAt -c "select sequence_name from information_schema.sequences where sequence_schema = 'public';" YOUR_DB` ; do psql -c "alter table $tbl owner to NEW_OWNER" YOUR_DB ; done
for tbl in `psql -qAt -c "select table_name from information_schema.views where table_schema = 'public';" YOUR_DB` ; do psql -c "alter table $tbl owner to NEW_OWNER" YOUR_DB ; done
-{{< /highlight >}}
+```
-{{< highlight sh >}}
+```sql
reassign owned by "support" to "test-support";
-{{< /highlight >}}
+```
diff --git a/content/blog/miscellaneous/purge-postfix-queue-based-content.md b/content/blog/miscellaneous/purge-postfix-queue-based-content.md
index d131af2..3800b07 100644
--- a/content/blog/miscellaneous/purge-postfix-queue-based-content.md
+++ b/content/blog/miscellaneous/purge-postfix-queue-based-content.md
@@ -13,6 +13,6 @@ Sometimes a lot of spam can accumulate in a postfix queue.
## The solution
Here is a command that can search through queued emails for a certain character string (here XXX as an example) and delete the ones that contain it :
-{{< highlight sh >}}
+```sh
find /var/spool/postfix/deferred/ -type f -exec grep -li 'XXX' '{}' \; | xargs -n1 basename | xargs -n1 postsuper -d
-{{< /highlight >}}
+```
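Before pointing that at the real spool, the find | grep | basename part of the pipeline can be dry-run on a scratch directory (XXX is the same placeholder pattern) :

```sh
spool=$(mktemp -d)
echo 'buy XXX now' > "$spool/AAA111"
echo 'legitimate mail' > "$spool/BBB222"
# -l prints matching file names, -i ignores case, basename strips the path;
# only the queue ids of the matching mails remain
find "$spool" -type f -exec grep -li 'XXX' '{}' \; | xargs -n1 basename
```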
diff --git a/content/blog/miscellaneous/reusing-ssh-connections.md b/content/blog/miscellaneous/reusing-ssh-connections.md
index 496f456..e7d949a 100644
--- a/content/blog/miscellaneous/reusing-ssh-connections.md
+++ b/content/blog/miscellaneous/reusing-ssh-connections.md
@@ -13,7 +13,7 @@ It is possible to share multiple sessions over a single connection. One of the a
## How to
You need a directory to store the sockets for the opened sessions, I use the `~/.ssh/tmp` directory for it. Whatever you choose, make sure it exists by running `mkdir` now. Then add these two lines at the start of your `~/.ssh/config` :
-{{< highlight sh >}}
+```cfg
ControlMaster auto
ControlPath ~/.ssh/tmp/%h_%p_%r
-{{< /highlight >}}
+```
diff --git a/content/blog/miscellaneous/rocketchat.md b/content/blog/miscellaneous/rocketchat.md
index d0cc370..8cf0dbc 100644
--- a/content/blog/miscellaneous/rocketchat.md
+++ b/content/blog/miscellaneous/rocketchat.md
@@ -14,11 +14,11 @@ I needed to test some scripts that interact with a rocketchat instance at work.
## The commands
Docker simple install :
-{{< highlight sh >}}
+```sh
docker run --name db -d mongo --smallfiles --replSet hurricane
docker exec -ti db mongo
> rs.initiate()
docker run -p 3000:3000 --name rocketchat --env ROOT_URL=http://hurricane --env MONGO_OPLOG_URL=mongodb://db:27017/local?replSet=hurricane --link db -d rocket.chat
-{{< /highlight >}}
+```
diff --git a/content/blog/miscellaneous/screen-cannot-open-terminal.md b/content/blog/miscellaneous/screen-cannot-open-terminal.md
index 0e2de99..f687b66 100644
--- a/content/blog/miscellaneous/screen-cannot-open-terminal.md
+++ b/content/blog/miscellaneous/screen-cannot-open-terminal.md
@@ -11,15 +11,15 @@ tags:
## The problem
At my current workplace there are die-hard screen fanatics who refuse to upgrade to tmux. Sometimes I get the following error :
-{{< highlight sh >}}
+```sh
Cannot open your terminal '/dev/pts/0' - please check.
-{{< /highlight >}}
+```
## The solution
This error means that you did not open the shell with the user you logged in with. You can make screen happy by running :
-{{< highlight sh >}}
+```sh
script /dev/null
-{{< /highlight >}}
+```
In this new environment your screen commands will work normally.
diff --git a/content/blog/miscellaneous/seti-at-home.md b/content/blog/miscellaneous/seti-at-home.md
index 681b2c8..bc8fa8b 100644
--- a/content/blog/miscellaneous/seti-at-home.md
+++ b/content/blog/miscellaneous/seti-at-home.md
@@ -13,7 +13,7 @@ Me and some friends were feeling nostalgics of running Seti@Home as a screensave
## The commands
-{{< highlight sh >}}
+```sh
apt install boinc
echo "graou" > /var/lib/boinc-client/gui_rpc_auth.cfg
systemctl restart boinc-client
@@ -21,4 +21,4 @@ boinccmd --host localhost --passwd graou --get_messages 0
boinccmd --host localhost --passwd graou --get_state|less
boinccmd --host localhost --passwd graou --lookup_account http://setiathome.berkeley.edu <EMAIL> XXXXXX
boinccmd --host localhost --passwd graou --project_attach http://setiathome.berkeley.edu <ACCOUNT_KEY>
-{{< /highlight >}}
+```
diff --git a/content/blog/miscellaneous/sqlite-pretty-print.md b/content/blog/miscellaneous/sqlite-pretty-print.md
index 4a4112e..1289824 100644
--- a/content/blog/miscellaneous/sqlite-pretty-print.md
+++ b/content/blog/miscellaneous/sqlite-pretty-print.md
@@ -8,9 +8,9 @@ tags:
## The solution
In `~/.sqliterc` add the following :
-{{< highlight sh >}}
+```cfg
.mode column
.headers on
.separator ROW "\n"
.nullvalue NULL
-{{< /highlight >}}
+```
diff --git a/content/blog/miscellaneous/tc.md b/content/blog/miscellaneous/tc.md
index 58268a6..1aef7e8 100644
--- a/content/blog/miscellaneous/tc.md
+++ b/content/blog/miscellaneous/tc.md
@@ -8,14 +8,14 @@ tags:
## How to
-{{< highlight sh >}}
+```sh
tc qdisc show dev eth0
tc qdisc add dev eth0 root netem delay 200ms
tc qdisc show dev eth0
tc qdisc delete dev eth0 root netem delay 200ms
tc qdisc show dev eth0
-{{< /highlight >}}
+```
## References
diff --git a/content/blog/netapp/investigate-memory-errors.md b/content/blog/netapp/investigate-memory-errors.md
index 8ad96b2..2b107c6 100644
--- a/content/blog/netapp/investigate-memory-errors.md
+++ b/content/blog/netapp/investigate-memory-errors.md
@@ -8,7 +8,7 @@ tags:
## The commands
-{{< highlight sh >}}
+```sh
set adv
system node show-memory-errors -node <cluster_node>
-{{< / highlight >}}
+```
diff --git a/content/docs/adyxax.org/nethack.md b/content/docs/adyxax.org/nethack.md
index 095f0ca..777ed40 100644
--- a/content/docs/adyxax.org/nethack.md
+++ b/content/docs/adyxax.org/nethack.md
@@ -11,46 +11,46 @@ I am hosting a private nethack game server accessible via ssh for anyone who wil
TODO
-{{< highlight sh >}}
+```sh
groupadd -r games
useradd -r -g games nethack
git clone
-{{< /highlight >}}
+```
## nethack
TODO
-{{< highlight sh >}}
-{{< /highlight >}}
+```sh
+```
## scores script
TODO
-{{< highlight sh >}}
-{{< /highlight >}}
+```sh
+```
## copying shared libraries
-{{< highlight sh >}}
+```sh
cd /opt/nethack
for i in `ls bin`; do for l in `ldd bin/$i | tail -n +1 | cut -d'>' -f2 | awk '{print $1}'`; do if [ -f $l ]; then echo $l; cp $l lib64/; fi; done; done
for l in `ldd dgamelaunch | tail -n +1 | cut -d'>' -f2 | awk '{print $1}'`; do if [ -f $l ]; then echo $l; cp $l lib64/; fi; done
for l in `ldd nethack-3.7.0-r1/games/nethack | tail -n +1 | cut -d'>' -f2 | awk '{print $1}'`; do if [ -f $l ]; then echo $l; cp $l lib64/; fi; done
-{{< /highlight >}}
+```
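The three copy loops above only differ in the binary they inspect; they could be factored into a single function, sketched here along the same lines (untested against the actual chroot layout) :

```sh
# copy every shared library a dynamic binary depends on into lib64/
copy_libs() {
    for l in $(ldd "$1" | cut -d'>' -f2 | awk '{print $1}'); do
        if [ -f "$l" ]; then
            echo "$l"
            cp "$l" lib64/
        fi
    done
}
```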
## making device nodes
TODO! For now I mount all of /dev in the chroot :
-{{< highlight sh >}}
+```sh
#mknod -m 666 dev/ptmx c 5 2
mount -R /dev /opt/nethack/dev
-{{< /highlight >}}
+```
## debugging
-{{< highlight sh >}}
+```sh
gdb chroot
run --userspec=nethack:games /opt/nethack/ /dgamelaunch
-{{< /highlight >}}
+```
diff --git a/content/docs/gentoo/installation.md b/content/docs/gentoo/installation.md
index b500252..0416a40 100644
--- a/content/docs/gentoo/installation.md
+++ b/content/docs/gentoo/installation.md
@@ -16,10 +16,10 @@ You can get a bootable iso or liveusb from https://www.gentoo.org/downloads/. I
Once you boot on the installation media, you can start sshd and set a temporary password, then proceed with the installation more comfortably from another machine :
-{{< highlight sh >}}
+```sh
/etc/init.d/sshd start
passwd
-{{< /highlight >}}
+```
Don't forget to either run `dhcpcd` or manually set an ip and gateway to the machine.
@@ -27,7 +27,7 @@ Don't forget to either run `dhcpcd` or manually set an ip and gateway to the mac
There are several options depending on whether you need soft raid, full disk encryption or a simple root device with no additional complications. It will also differ if you are using a virtual machine or a physical one.
-{{< highlight sh >}}
+```sh
tmux
blkdiscard /dev/nvme0n1
sgdisk -n1:0:+2M -t1:EF02 /dev/nvme0n1
@@ -37,7 +37,7 @@ mkfs.fat -F 32 -n efi-boot /dev/nvme0n1p2
mkfs.xfs /dev/nvme0n1p3
mount /dev/sda3 /mnt/gentoo
cd /mnt/gentoo
-{{< /highlight >}}
+```
Make sure you do not repeat the mistake I too often make by mounting something to /mnt while using the liveusb/livecd. You will lose your shell if you do this and will need to reboot!
@@ -46,109 +46,109 @@ Make sure you do not repeat the mistake I too often make by mounting something t
Get the stage 3 installation file from https://www.gentoo.org/downloads/. I personally use the non-multilib one from the advanced choices, since I no longer use any 32bits software except steam, and I use steam from a multilib chroot.
Put the archive on the server in /mnt/gentoo (you can simply wget it from there), then extract it :
-{{< highlight sh >}}
+```sh
tar xpf stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner
mount /dev/nvme0n1p2 boot
mount -R /proc proc
mount -R /sys sys
mount -R /dev dev
chroot .
-{{< /highlight >}}
+```
## Initial configuration
We prepare the local language of the system :
-{{< highlight sh >}}
+```sh
echo 'LANG="en_US.utf8"' > /etc/env.d/02locale
echo 'en_US.UTF-8 UTF-8' >> /etc/locale.gen
locale-gen
env-update && source /etc/profile
echo 'nameserver 1.1.1.1' > /etc/resolv.conf
-{{< /highlight >}}
+```
We set a loop device to hold the portage tree. It will be formatted with optimisation for the many small files that compose it :
-{{< highlight sh >}}
+```sh
mkdir -p /srv/gentoo-distfiles
truncate -s 10G /portage.img
mke2fs -b 1024 -i 2048 -m 0 -O "dir_index" -F /portage.img
tune2fs -c 0 -i 0 /portage.img
mkdir /usr/portage
mount -o loop,noatime,nodev /portage.img /usr/portage/
-{{< /highlight >}}
+```
We set default compilation options and flags. If you are not me and cannot rsync this location, you can browse it from https://packages.adyxax.org/x86-64/etc/portage/ :
-{{< highlight sh >}}
+```sh
rsync -a --delete packages.adyxax.org:/srv/gentoo-builder/x86-64/etc/portage/ /etc/portage/
sed -i /etc/portage/make.conf -e s/buildpkg/getbinpkg/
echo 'PORTAGE_BINHOST="https://packages.adyxax.org/x86-64/packages/"' >> /etc/portage/make.conf
-{{< /highlight >}}
+```
We sync the portage tree :
-{{< highlight sh >}}
+```sh
emerge --sync
-{{< /highlight >}}
+```
## Set hostname and timezone
-{{< highlight sh >}}
+```sh
export HOSTNAME=XXXXX
sed -i /etc/conf.d/hostname -e /hostname=/s/=.*/=\"${HOSTNAME}\"/
echo "Europe/Paris" > /etc/timezone
emerge --config sys-libs/timezone-data
-{{< /highlight >}}
+```
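That sed invocation is worth understanding since it rewrites everything after the `=`; here it is exercised on a temporary copy instead of `/etc/conf.d/hostname` :

```sh
HOSTNAME=testhost
f=$(mktemp)
echo 'hostname="localhost"' > "$f"
# on the line containing hostname=, replace everything after = with "testhost"
sed -i -e "/hostname=/s/=.*/=\"${HOSTNAME}\"/" "$f"
cat "$f"
```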
## Check cpu flags and compatibility
TODO
-{{< highlight sh >}}
+```sh
emerge cpuid2cpuflags -1q
cpuid2cpuflags
gcc -### -march=native /usr/include/stdlib.h
-{{< /highlight >}}
+```
## Rebuild the system
-{{< highlight sh >}}
+```sh
emerge --quiet -e @world
emerge --quiet dosfstools app-admin/logrotate app-admin/syslog-ng app-portage/gentoolkit \
dev-vcs/git bird openvpn htop net-analyzer/tcpdump net-misc/bridge-utils \
sys-apps/i2c-tools sys-apps/pciutils sys-apps/usbutils sys-boot/grub sys-fs/ncdu \
sys-process/lsof net-vpn/wireguard-tools
emerge --unmerge nano -q
-{{< /highlight >}}
+```
## Grab a working kernel
Next we need to grab a working kernel from our build server along with its modules. If you don't have one already, you have some work to do!
Check the necessary hardware support with :
-{{< highlight sh >}}
+```sh
i2cdetect -l
lspci -nnk
lsusb
-{{< /highlight >}}
+```
TODO specific page with details on how to build required modules like the nas for example.
-{{< highlight sh >}}
+```sh
emerge gentoo-sources genkernel -q
...
-{{< /highlight >}}
+```
## Final configuration steps
### fstab
-{{< highlight sh >}}
+```cfg
# /etc/fstab: static file system information.
#
#<fs> <mountpoint> <type> <opts> <dump/pass>
/dev/vda3 / ext4 noatime,discard 0 1
/dev/vda2 /boot vfat noatime 1 2
/portage.img /usr/portage ext2 noatime,nodev,loop 0 0
-{{< /highlight >}}
+```
### networking
-{{< highlight sh >}}
+```sh
echo 'hostname="phoenix"' > /etc/conf.d/hostname
echo 'dns_domain_lo="adyxax.org"
config_eth0="192.168.1.3 netmask 255.255.255.0"
@@ -156,7 +156,7 @@ routes_eth0="default via 192.168.1.1"' > /etc/conf.d/net
cd /etc/init.d
ln -s net.lo net.eth0
rc-update add net.eth0 boot
-{{< /highlight >}}
+```
### Grub
@@ -170,28 +170,28 @@ grub-mkconfig -o /boot/grub/grub.cfg
### /etc/hosts
-{{< highlight sh >}}
+```sh
scp root@collab-jde.nexen.net:/etc/hosts /etc/
-{{< /highlight >}}
+```
### root account access
-{{< highlight sh >}}
+```sh
mkdir -p /root/.ssh
echo ' ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILOJV391WFRYgCVA2plFB8W8sF9LfbzXZOrxqaOrrwco hurricane' > /root/.ssh/authorized_keys
passwd
-{{< /highlight >}}
+```
### Add necessary daemons on boot
-{{< highlight sh >}}
+```sh
rc-update add syslog-ng default
rc-update add cronie default
rc-update add sshd default
-{{< /highlight >}}
+```
## TODO
-{{< highlight sh >}}
+```sh
net-firewall/shorewall
...
rc-update add shorewall default
@@ -216,7 +216,7 @@ rc-update add docker default
app-emulation/lxd
rc-update add lxd default
-{{< /highlight >}}
+```
## References
diff --git a/content/docs/gentoo/kernel_upgrades.md b/content/docs/gentoo/kernel_upgrades.md
index b6f0adc..b438454 100644
--- a/content/docs/gentoo/kernel_upgrades.md
+++ b/content/docs/gentoo/kernel_upgrades.md
@@ -9,18 +9,18 @@ tags:
## Introduction
Now that I am mostly running OpenBSD servers, I just use genkernel to build my custom configuration on each node with:
-{{< highlight sh >}}
+```sh
eselect kernel list
eselect kernel set 1
genkernel all --kernel-config=/proc/config.gz --menuconfig
nvim --diff /proc/config.gz /usr/src/linux/.config
-{{< / highlight >}}
+```
Below you will find how I did things previously, centralising the build of all kernels on a collab-jde machine and distributing them all afterwards. Local nodes would only rebuild their local modules and get on with their lives.
## Building on collab-jde
-{{< highlight sh >}}
+```sh
PREV_VERSION=4.14.78-gentoo
eselect kernel list
eselect kernel set 1
@@ -34,11 +34,11 @@ for ARCHI in `ls /srv/gentoo-builder/kernels/`; do
INSTALL_MOD_PATH=/srv/gentoo-builder/kernels/${ARCHI}/ make modules_install
INSTALL_PATH=/srv/gentoo-builder/kernels/${ARCHI}/ make install
done
-{{< / highlight >}}
+```
## Deploying on each node
-{{< highlight sh >}}
+```sh
export VERSION=5.4.28-gentoo-x86_64
wget http://packages.adyxax.org/kernels/x86_64/System.map-${VERSION} -O /boot/System.map-${VERSION}
wget http://packages.adyxax.org/kernels/x86_64/config-${VERSION} -O /boot/config-${VERSION}
@@ -53,4 +53,4 @@ make modules_prepare
emerge @module-rebuild
genkernel --install initramfs --ssh-host-keys=create-from-host
grub-mkconfig -o /boot/grub/grub.cfg
-{{< / highlight >}}
+```
diff --git a/content/docs/gentoo/lxd.md b/content/docs/gentoo/lxd.md
index 0e2dfdd..60d199a 100644
--- a/content/docs/gentoo/lxd.md
+++ b/content/docs/gentoo/lxd.md
@@ -12,18 +12,18 @@ I have used LXD for many years successfully, I was never satisfied with the dock
## Installation
-{{< highlight sh >}}
+```sh
touch /etc{/subuid,/subgid}
usermod --add-subuids 1000000-1065535 root
usermod --add-subgids 1000000-1065535 root
emerge -q app-emulation/lxd
/etc/init.d/lxd start
rc-update add lxd default
-{{< /highlight >}}
+```
## Initial configuration
-{{< highlight sh >}}
+```sh
myth /etc/init.d # lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
@@ -43,4 +43,4 @@ Trust password for new clients:
Again:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
-{{< /highlight >}}
+```
diff --git a/content/docs/gentoo/steam.md b/content/docs/gentoo/steam.md
index 26a2a2f..906a62f 100644
--- a/content/docs/gentoo/steam.md
+++ b/content/docs/gentoo/steam.md
@@ -17,7 +17,7 @@ achieve that with containers but didn't quite made it work as well as this chroo
Note that there is no way to provide a "most recent stage 3" installation link. You will have to browse http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3-amd64/
and adjust the download URL manually below:
-{{< highlight sh >}}
+```sh
mkdir /usr/local/steam
cd /usr/local/steam
wget http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3-amd64/stage3-amd64-20190122T214501Z.tar.xz
@@ -74,13 +74,13 @@ wget -P /etc/portage/repos.conf/ https://raw.githubusercontent.com/anyc/steam-ov
emaint sync --repo steam-overlay
emerge games-util/steam-launcher -q
useradd -m -G audio,video steam
-{{< /highlight >}}
+```
## Launch script
Note that we use `su` and not `su -` since we need to preserve the environment. If you don't, you won't get any sound in game. The pulseaudio socket is shared through the mount of
/run inside the chroot:
-{{< highlight sh >}}
+```sh
su
cd /usr/local/steam
mount -R /dev dev
@@ -93,4 +93,4 @@ chroot .
env-update && source /etc/profile
su steam
steam
-{{< /highlight >}}
+```
diff --git a/content/docs/gentoo/upgrades.md b/content/docs/gentoo/upgrades.md
index 83f3c56..4984cd7 100644
--- a/content/docs/gentoo/upgrades.md
+++ b/content/docs/gentoo/upgrades.md
@@ -9,24 +9,24 @@ tags:
## Introduction
Here is my go-to set of commands when I upgrade a Gentoo box:
-{{< highlight sh >}}
+```sh
emerge-webrsync
eselect news read
-{{< /highlight >}}
+```
The news items have to be reviewed carefully, and if I cannot act on one immediately I copy-paste the relevant bits to my todo list.
## The upgrade process
I run the upgrade process in steps, the first one asking you to validate the upgrade path. You will also be prompted to confirm before cleaning:
-{{< highlight sh >}}
+```sh
emerge -qAavutDN world --verbose-conflicts --keep-going --with-bdeps=y && emerge --depclean -a && revdep-rebuild -i -- -q --keep-going; eclean --deep distfiles && eclean --deep packages && date
-{{< /highlight >}}
+```
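For readability, the one-liner above can be written as separate steps. This is an equivalent sketch, assuming the same tools are available (revdep-rebuild and eclean come from app-portage/gentoolkit):

```sh
# upgrade world, pulling in build deps and new USE flags, keep going on failures
emerge -qAavutDN world --verbose-conflicts --keep-going --with-bdeps=y
# remove packages that nothing depends on any more
emerge --depclean -a
# rebuild packages linked against libraries that were removed
revdep-rebuild -i -- -q --keep-going
# clean old source tarballs and binary packages, then show when we finished
eclean --deep distfiles
eclean --deep packages
date
```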
After all this completes, it is time to evaluate configuration changes:
-{{< highlight sh >}}
+```sh
etc-update
-{{< /highlight >}}
+```
If a new kernel has been emerged, have a look at [the specific process for that]({{< ref "kernel_upgrades" >}}).
diff --git a/content/docs/openbsd/install_from_linux.md b/content/docs/openbsd/install_from_linux.md
index 4cfe54c..853ce21 100644
--- a/content/docs/openbsd/install_from_linux.md
+++ b/content/docs/openbsd/install_from_linux.md
@@ -12,12 +12,12 @@ This article explains a simple method to install OpenBSD when all you have is a
## How to
First log in as root on the Linux system you want to reinstall as OpenBSD, then fetch the installer:
-{{< highlight sh >}}
+```sh
curl https://cdn.openbsd.org/pub/OpenBSD/6.8/amd64/bsd.rd -o /bsd.rd
-{{< /highlight >}}
+```
Then edit the loader configuration, in this example grub2:
-{{< highlight sh >}}
+```sh
echo '
menuentry "OpenBSD" {
set root=(hd0,msdos1)
@@ -25,6 +25,6 @@ menuentry "OpenBSD" {
}' >> /etc/grub.d/40_custom
echo 'GRUB_TIMEOUT=60' >> /etc/default/grub
grub2-mkconfig > /boot/grub2/grub.cfg
-{{< /highlight >}}
+```
If you reboot now and connect to your remote console, you should be able to boot the OpenBSD installer.
diff --git a/content/docs/openbsd/pf.md b/content/docs/openbsd/pf.md
index 50d7b9e..a4e8c39 100644
--- a/content/docs/openbsd/pf.md
+++ b/content/docs/openbsd/pf.md
@@ -10,7 +10,7 @@ tags:
The list of open ports is obviously refined depending on usage, and not all servers listen for wireguard... It is just a template:
-{{< highlight conf >}}
+```cfg
vpns="{ wg0 }"
table <myself> const { self }
@@ -39,4 +39,4 @@ pass in on $vpns from <private> to <myself>
block return in on ! lo0 proto tcp to port 6000:6010
# Port build user does not need network
block return out log proto {tcp udp} user _pbuild
-{{< /highlight >}}
+```
diff --git a/content/docs/openbsd/smtpd.md b/content/docs/openbsd/smtpd.md
index 6db62ec..e1452ab 100644
--- a/content/docs/openbsd/smtpd.md
+++ b/content/docs/openbsd/smtpd.md
@@ -9,7 +9,7 @@ tags:
Here is my template for a simple smtp relay. The host names in the outbound action obviously need to be customized, and in my setup `yen`, the relay destination, is only reachable via wireguard. If you are not in such a setup, smtps with authentication has to be configured:
-{{< highlight conf >}}
+```cfg
table aliases file:/etc/mail/aliases
listen on socket
@@ -20,13 +20,13 @@ action "outbound" relay host "smtp://yen" mail-from "root+phoenix@adyxax.org"
match from local for local action "local_mail"
match from local for any action "outbound"
-{{< /highlight >}}
+```
## Primary mx
Here is my primary mx configuration as a sample :
-{{< highlight conf >}}
+```cfg
pki adyxax.org cert "/etc/ssl/yen.adyxax.org.crt"
pki adyxax.org key "/etc/ssl/private/yen.adyxax.org.key"
@@ -59,7 +59,7 @@ match from local for local action "local_mail"
match from any auth for any action "outbound"
match from mail-from "root+phoenix@adyxax.org" for any action "outbound" # if you need to relay emails from another machine to the internet like I do
-{{< /highlight >}}
+```
## Secondary mx