Long overdue first commit with content

This commit is contained in:
Julien Dessaux 2020-04-28 17:29:52 +02:00
parent f63ce5bdd8
commit 6cc9d8c72a
92 changed files with 2031 additions and 97 deletions

README Normal file
View file

@ -0,0 +1,3 @@
(cd ~/git/www; time hugo --minify)
hugo server --bind 0.0.0.0 --minify --disableFastRender

View file

@ -53,18 +53,19 @@ anchor = "smart"
 [languages]
 [languages.en]
-title = "Yet Another SysAdmin Wiki"
+title = "Yet Another SysAdmin Wiki/Blog"
-description = "The wiki of yet another sysadmin"
+description = "The personal space of yet another sysadmin"
 languageName ="English"
 # Weight used for sorting.
 weight = 1
-[languages.fr]
-title = "Encore un wiki d'AdminSys"
-description = "Docsy er operativsystem for skyen"
-languageName ="French"
-contentDir = "content/fr"
-#time_format_default = "02.01.2006"
-#time_format_blog = "02.01.2006"
+#[languages.fr]
+#title = "Un wiki/blog d'AdminSys"
+#description = "Le petit bout d'internet d'un adminsys"
+#languageName ="Français"
+#contentDir = "content/fr"
+##time_format_default = "02.01.2006"
+##time_format_blog = "02.01.2006"
[markup] [markup]
[markup.goldmark] [markup.goldmark]
@ -99,7 +100,7 @@ version_menu = "Releases"
 algolia_docsearch = false
 # Enable Lunr.js offline search
-offlineSearch = false
+offlineSearch = true
 # User interface configuration
 [params.ui]

View file

@ -1,13 +1,16 @@
 +++
-title = "Goldydocs"
-linkTitle = "Goldydocs"
+title = "Yet Another SysAdmin Wiki"
+linkTitle = "Yet Another SysAdmin Wiki"
 +++
-{{< blocks/cover title="Welcome to Yet Another SysAdmin Wiki!" image_anchor="top" height="full" color="orange">}}
+{{< blocks/cover title="Welcome to Yet Another SysAdmin Wiki/Blog!" image_anchor="top" height="full" color="primary">}}
-You can see this wiki as an aggregation of various information (but almost always SysAdmin related) and stuff I gathered around the Internet. When I have to work on something that needed some research, I put there a sum up of what I have done, all along with personal thoughts.
-Well I hope you feel welcome here. I accept all good wills that might be motivated to add some material here. Do not hesitate to leave a message at adyxax -AT- adyxax.org, asking for a translation or whatever ;-)
+Hello, my name is Julien Dessaux.
+There will be documentation articles and maybe some blog posts if the documentation article is not suitable.
+This wiki/blog is an aggregation of various things (almost always SysAdmin related) I have been working on. It is a personal space that I try to fill up with my experience and knowledge of computer system and network administration. You can learn more about me [on this page]({{< relref "/docs/about-me/_index.md" >}})
+I hope you feel welcome here, do not hesitate to leave a message at julien -DOT- dessaux -AT- adyxax -DOT- org. You can ask for a translation, some more details on a topic covered here, or just say hi or whatever ;-)
+Have a good time!
 {{< /blocks/cover >}}

View file

@ -1,13 +1,7 @@
 ---
-title: "Docsy Blog"
+title: "Yet Another SysAdmin Blog"
 linkTitle: "Blog"
 menu:
   main:
     weight: 30
 ---
-This is the **blog** section. It has two categories: News and Travels.
-Files in these directories will be listed in reverse chronological order.

View file

@ -0,0 +1,5 @@
---
title: "Ansible"
linkTitle: "Ansible"
weight: 30
---

View file

@ -0,0 +1,36 @@
---
title: "Ansible vault example"
linkTitle: "Ansible vault example"
date: 2018-02-21
description: >
Ansible vault example
---
Here is how to edit a vault protected file :
{{< highlight sh >}}
ansible-vault edit hostvars/blah.yml
{{< / highlight >}}
Here is how to put a multiline entry like a private key in vault (for a simple value, just don't use a `|`):
{{< highlight yaml >}}
ssl_key: |
  ----- BEGIN PRIVATE KEY -----
  blahblahblah
  blahblahblah
  ----- END PRIVATE KEY -----
{{< /highlight >}}
And here is how to use it in a task :
{{< highlight yaml >}}
- copy:
    dest: /etc/ssl/private.key
    mode: 0400
    content: '{{ ssl_key }}'
{{< / highlight >}}
To run a playbook, you will need to pass the `--ask-vault-pass` argument or to export an `ANSIBLE_VAULT_PASSWORD_FILE=/home/julien/.vault_pass.txt` variable (the file needs to contain a single line with your vault password here).
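As a sketch of the password file approach (the path and the password itself are placeholders, and the final `ansible-playbook` call is left commented out) :

```sh
# Store the vault password in a file only you can read, then point
# ansible at it through the environment.
printf '%s\n' 'my-vault-password' > "$HOME/.vault_pass.txt"
chmod 600 "$HOME/.vault_pass.txt"
export ANSIBLE_VAULT_PASSWORD_FILE="$HOME/.vault_pass.txt"
# ansible-playbook site.yml   # would now run without prompting for the password
```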
## Resources
* how to break long lines in ansible : https://watson-wilson.ca/blog/2018/07/11/ansible-tips/

View file

@ -0,0 +1,89 @@
---
title: "Ansible custom facts"
linkTitle: "Ansible custom facts"
date: 2018-09-25
description: >
How to write custom facts with ansible
---
Custom facts are actually quite easy to implement despite the lack of documentation about them.
## How they work
On any Ansible controlled host — that is, the remote machine that is being controlled and not the machine on which the playbook is run — you just need to create a directory at
`/etc/ansible/facts.d`. Inside this directory, you can place one or more `*.fact` files. These are files that return JSON data, which will then be included in the raft of facts that
Ansible gathers.
The facts will be available to ansible at `hostvars.host.ansible_local.<fact_name>`.
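For instance, assuming a `mysql.fact` file that returns a `password` key (like the one in the example below), a hypothetical task could read it back like this :

```yaml
- name: Show the custom fact gathered from facts.d/mysql.fact
  debug:
    msg: "{{ ansible_local.mysql.password }}"
```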
## A simple example
Here is the simplest example of a fact, let's suppose we make it `/etc/ansible/facts.d/mysql.fact` :
{{< highlight sh >}}
#!/bin/sh
set -eu
echo '{"password": "xxxxxx"}'
{{< /highlight >}}
This will give you the fact `hostvars.host.ansible_local.mysql.password` for this machine.
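To see the mechanism in action without ansible, you can mimic what fact gathering does: execute the file and parse the JSON it prints. This is a standalone sketch using `/tmp` instead of `/etc/ansible/facts.d` :

```sh
# Recreate the example fact file from above, in /tmp for the demo.
mkdir -p /tmp/facts.d
cat > /tmp/facts.d/mysql.fact <<'EOF'
#!/bin/sh
set -eu
echo '{"password": "xxxxxx"}'
EOF
chmod +x /tmp/facts.d/mysql.fact
# Ansible runs the executable and decodes the JSON it prints:
/tmp/facts.d/mysql.fact | python3 -c 'import json,sys; print(json.load(sys.stdin)["password"])'
# prints: xxxxxx
```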
## A more complex example
A more interesting example is something I use with small webapps. In the container that hosts the frontend, I use a small ansible role that generates a mysql password on its first run and provisions a database, along with a user that has access to it, on a mysql server. This fact ensures that subsequent runs stay idempotent. Here is how it works.
First the fact from before, only slightly modified :
{{< highlight sh >}}
#!/bin/sh
set -eu
echo '{"password": "{{mysql_password}}"}'
{{< /highlight >}}
This fact is deployed with the following tasks :
{{< highlight yaml >}}
- name: Generate a password for mysql database connections if there is none
  set_fact: mysql_password="{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}"
  when: (ansible_local.mysql_client|default({})).password is undefined

- name: Deploy mysql client ansible fact to handle the password
  template:
    src: ../templates/mysql_client.fact
    dest: /etc/ansible/facts.d/
    owner: root
    mode: 0500
  when: (ansible_local.mysql_client|default({})).password is undefined

- name: reload ansible_local
  setup: filter=ansible_local
  when: (ansible_local.mysql_client|default({})).password is undefined

- name: Ensures mysql database exists
  mysql_db:
    name: '{{ansible_hostname}}'
    state: present
  delegate_to: "{{mysql_server}}"

- name: Ensures mysql user exists
  mysql_user:
    name: '{{ansible_hostname}}'
    host: '{{ansible_hostname}}'
    priv: '{{ansible_hostname}}.*:ALL'
    password: '{{ansible_local.mysql_client.password}}'
    state: present
  delegate_to: '{{mysql_server}}'
{{< /highlight >}}
## Caveat : a fact you deploy is not immediately available
Note that installing a fact does not make it exist before the next inventory run on the host. This can be problematic, especially if you rely on fact caching to speed up ansible. Here is how to make ansible reload facts using the setup task (if you paid attention, you already saw me use it above).
{{< highlight yaml >}}
- name: reload ansible_local
  setup: filter=ansible_local
{{< /highlight >}}
## References
- https://medium.com/@jezhalford/ansible-custom-facts-1e1d1bf65db8

View file

@ -0,0 +1,38 @@
---
title: "Dump all ansible variables"
linkTitle: "Dump all ansible variables"
date: 2019-10-15
description: >
How to dump all variables used by ansible
---
Here is the task to use in order to achieve that :
{{< highlight yaml >}}
- name: Dump all vars
  action: template src=dumpall.j2 dest=ansible.all
{{< /highlight >}}
And here is the template to use with it :
{{< highlight jinja >}}
Module Variables ("vars"):
--------------------------------
{{ vars | to_nice_json }}
Environment Variables ("environment"):
--------------------------------
{{ environment | to_nice_json }}
GROUP NAMES Variables ("group_names"):
--------------------------------
{{ group_names | to_nice_json }}
GROUPS Variables ("groups"):
--------------------------------
{{ groups | to_nice_json }}
HOST Variables ("hostvars"):
--------------------------------
{{ hostvars | to_nice_json }}
{{< /highlight >}}

View file

@ -0,0 +1,5 @@
---
title: "Cfengine"
linkTitle: "Cfengine"
weight: 40
---

View file

@ -0,0 +1,153 @@
---
title: "Leveraging yaml with cfengine"
linkTitle: "Leveraging yaml with cfengine"
date: 2018-09-25
description: >
How to leverage yaml inventory files with cfengine
---
CFEngine has core support for JSON and YAML. You can use this support to read, access, and merge JSON and YAML files and use these to keep policy files internal and simple. You
access the data using the usual cfengine standard primitives.
The use case below lacks error handling and argument validation; it will fail miserably if the YAML file is invalid.
## Example yaml
In `cmdb/hosts/andromeda.yaml` we describe some properties of a host named andromeda:
{{< highlight yaml >}}
domain: adyxax.org
host_interface: dummy0
host_ip: "10.1.0.255"
tunnels:
  collab:
    port: 1195
    ip: "10.1.0.15"
    peer: "10.1.0.14"
    remote_host: collab.example.net
    remote_port: 1199
  legend:
    port: 1194
    ip: "10.1.0.3"
    peer: "10.1.0.2"
    remote_host: legend.adyxax.org
    remote_port: 1195
{{< /highlight >}}
## Reading the yaml
I am bundling the values in a common bundle, accessible globally. This is one of the first bundles processed in the order my policy files are loaded. This is just an extract, you can load multiple files and merge them to distribute common
settings :
{{< highlight yaml >}}
bundle common g
{
  vars:
    has_host_data::
      "host_data" data => readyaml("$(sys.inputdir)/cmdb/hosts/$(sys.host).yaml", 100k);

  classes:
    any::
      "has_host_data" expression => fileexists("$(sys.inputdir)/cmdb/hosts/$(sys.host).yaml");
}
{{< /highlight >}}
## Using the data
### Cfengine agent bundle
We access the data using the global g.host_data variable, here is a complete example :
{{< highlight yaml >}}
bundle agent openvpn
{
  vars:
    any::
      "tunnels" slist => getindices("g.host_data[tunnels]");

  files:
    any::
      "/etc/openvpn/common.key"
        create => "true",
        edit_defaults => empty,
        perms => system_owned("440"),
        copy_from => local_dcp("$(sys.inputdir)/templates/openvpn/common.key.cftpl"),
        classes => if_repaired("openvpn_common_key_repaired");

  methods:
    any::
      "any" usebundle => install_package("$(this.bundle)", "openvpn");
      "any" usebundle => openvpn_tunnel("$(tunnels)");

  services:
    linux::
      "openvpn@$(tunnels)"
        service_policy => "start",
        classes => if_repaired("tunnel_$(tunnels)_service_repaired");

  commands:
    any::
      "/usr/sbin/service openvpn@$(tunnels) restart"
        classes => if_repaired("tunnel_$(tunnels)_service_repaired"),
        ifvarclass => "openvpn_common_key_repaired";

  reports:
    any::
      "$(this.bundle): common.key repaired" ifvarclass => "openvpn_common_key_repaired";
      "$(this.bundle): $(tunnels) service repaired" ifvarclass => "tunnel_$(tunnels)_service_repaired";
}

bundle agent openvpn_tunnel(tunnel)
{
  classes:
    any::
      "has_remote" and => { isvariable("g.host_data[tunnels][$(tunnel)][remote_host]"), isvariable("g.host_data[tunnels][$(tunnel)][remote_port]") };

  files:
    any::
      "/etc/openvpn/$(tunnel).conf"
        create => "true",
        edit_defaults => empty,
        perms => system_owned("440"),
        edit_template => "$(sys.inputdir)/templates/openvpn/tunnel.conf.cftpl",
        template_method => "cfengine",
        classes => if_repaired("openvpn_$(tunnel)_conf_repaired");

  commands:
    any::
      "/usr/sbin/service openvpn@$(tunnel) restart"
        classes => if_repaired("tunnel_$(tunnel)_service_repaired"),
        ifvarclass => "openvpn_$(tunnel)_conf_repaired";

  reports:
    any::
      "$(this.bundle): $(tunnel).conf repaired" ifvarclass => "openvpn_$(tunnel)_conf_repaired";
      "$(this.bundle): $(tunnel) service repaired" ifvarclass => "tunnel_$(tunnel)_service_repaired";
}
{{< /highlight >}}
### Template file
Templates can reference the g.host_data too, like in the following :
{{< highlight cfg >}}
[%CFEngine BEGIN %]
proto udp
port $(g.host_data[tunnels][$(openvpn_tunnel.tunnel)][port])
dev-type tun
dev tun_$(openvpn_tunnel.tunnel)
comp-lzo
script-security 2
ping 10
ping-restart 20
ping-timer-rem
persist-tun
persist-key
cipher AES-128-CBC
secret /etc/openvpn/common.key
ifconfig $(g.host_data[tunnels][$(openvpn_tunnel.tunnel)][ip]) $(g.host_data[tunnels][$(openvpn_tunnel.tunnel)][peer])
user nobody
[%CFEngine centos:: %]
group nobody
[%CFEngine ubuntu:: %]
group nogroup
[%CFEngine has_remote:: %]
remote $(g.host_data[tunnels][$(openvpn_tunnel.tunnel)][remote_host]) $(g.host_data[tunnels][$(openvpn_tunnel.tunnel)][remote_port])
[%CFEngine END %]
{{< /highlight >}}
## References
- https://docs.cfengine.com/docs/master/examples-tutorials-json-yaml-support-in-cfengine.html
- https://docs.cfengine.com/docs/3.10/reference-functions-readyaml.html
- https://docs.cfengine.com/docs/3.10/reference-functions-mergedata.html

View file

@ -0,0 +1,5 @@
---
title: "Commands"
linkTitle: "Commands"
weight: 40
---

View file

@ -0,0 +1,11 @@
---
title: "List active calls on asterisk"
linkTitle: "List active calls on asterisk"
date: 2018-09-25
description: >
How to show active calls on an asterisk system
---
{{< highlight sh >}}
watch -d -n1 'asterisk -rx "core show channels"'
{{< /highlight >}}

View file

@ -0,0 +1,14 @@
---
title: "How to have asterisk call you into a meeting"
linkTitle: "How to have asterisk call you into a meeting"
date: 2018-09-25
description: >
How to have asterisk call you itself into a meeting
---
At alterway we sometimes have DTMF problems that prevent my mobile from joining a conference room. Here is something I use to have asterisk call me
and place me inside the room :
{{< highlight sh >}}
channel originate SIP/numlog/06XXXXXXXX application MeetMe 85224,M,secret
{{< /highlight >}}

View file

@ -0,0 +1,13 @@
---
title: "Busybox web server"
linkTitle: "Busybox web server"
date: 2019-04-16
description: >
Busybox web server
---
If you have been using things like `python -m SimpleHTTPServer`, here is something even simpler and more lightweight to use :
{{< highlight sh >}}
busybox httpd -vfp 80
{{< /highlight >}}

View file

@ -0,0 +1,13 @@
---
title: "Capture a video of your desktop"
linkTitle: "Capture a video of your desktop"
date: 2011-11-20
description: >
Capture a video of your desktop
---
You can capture a video of your linux desktop with ffmpeg :
{{< highlight sh >}}
ffmpeg -f x11grab -s xga -r 25 -i :0.0 -sameq /tmp/out.mpg
{{< /highlight >}}

View file

@ -0,0 +1,17 @@
---
title: "Clean conntrack states"
linkTitle: "Clean conntrack states"
date: 2018-03-02
description: >
Clean conntrack states
---
Here is an example of how to clean conntrack states that match a specific query on a linux firewall :
{{< highlight sh >}}
conntrack -L conntrack -p tcp orig-dport 65372 | \
    while read _ _ _ _ src dst sport dport _; do
        conntrack -D conntrack proto tcp orig-src ${src#*=} orig-dst ${dst#*=} \
            sport ${sport#*=} dport ${dport#*=}
    done
{{< /highlight >}}
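The `${variable#*=}` expansions in the loop strip everything up to and including the first `=`, turning conntrack fields into bare values :

```sh
# Sample fields as conntrack -L prints them:
src="src=10.1.0.1"
sport="sport=65372"
# ${var#*=} removes the shortest prefix matching '*=':
echo "${src#*=} ${sport#*=}"   # prints: 10.1.0.1 65372
```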

View file

@ -0,0 +1,14 @@
---
title: "Convert unix timestamp to readable date"
linkTitle: "Convert unix timestamp to readable date"
date: 2011-01-06
description: >
Convert unix timestamp to readable date
---
As I somehow have a hard time remembering this simple date flag since I rarely need it, I decided to write it down here :
{{< highlight sh >}}
$ date -d @1294319676
Thu Jan 6 13:14:36 GMT 2011
{{< /highlight >}}
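The reverse conversion works too, with GNU date's `+%s` output format (same timestamp as above, in UTC) :

```sh
date -u -d '2011-01-06 13:14:36' +%s   # prints: 1294319676
```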

View file

@ -0,0 +1,20 @@
---
title: "DMIdecode"
linkTitle: "DMIdecode"
date: 2011-02-16
description: >
DMIdecode
---
Use dmidecode to obtain hardware information.
## Most useful commands
- System information: `dmidecode -t1`
- Chassis information: `dmidecode -t3`
- CPU information: `dmidecode -t4`
- RAM information: `dmidecode -t17`
## Sources
- `man 8 dmidecode`

View file

@ -0,0 +1,12 @@
---
title: "Find hardlinks to a same file"
linkTitle: "Find hardlinks to a same file"
date: 2018-03-02
description: >
Find hardlinks to a same file
---
{{< highlight sh >}}
find . -samefile /path/to/file
{{< /highlight >}}

View file

@ -0,0 +1,12 @@
---
title: "Find where inodes are used"
linkTitle: "Find where inodes are used"
date: 2018-04-25
description: >
Find where inodes are used
---
{{< highlight sh >}}
find . -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
{{< /highlight >}}

View file

@ -0,0 +1,13 @@
---
title: "Import commits from one git repo to another"
linkTitle: "Import commits from one git repo to another"
date: 2018-09-25
description: >
Import commits from one git repo to another
---
This imports commits from a repo in the `../masterfiles` folder and applies them to the repository inside the current folder :
{{< highlight sh >}}
(cd ../masterfiles/; git format-patch --stdout origin/master) | git am
{{< /highlight >}}

View file

@ -0,0 +1,13 @@
---
title: "Rewrite a git commit history"
linkTitle: "Rewrite a git commit history"
date: 2018-03-05
description: >
Rewrite a git commit history
---
Here is how to rewrite a git commit history, for example to remove a file :
{{< highlight sh >}}
git filter-branch --index-filter "git rm --cached --ignore-unmatch ${file}" --prune-empty --tag-name-filter cat -- --all
{{< /highlight >}}

View file

@ -0,0 +1,19 @@
---
title: "ipmitool"
linkTitle: "ipmitool"
date: 2018-03-05
description: >
ipmitool
---
- launch ipmi shell : `ipmitool -H XX.XX.XX.XX -C3 -I lanplus -U <ipmi_user> shell`
- launch ipmi remote text console : `ipmitool -H XX.XX.XX.XX -C3 -I lanplus -U <ipmi_user> sol activate`
- Show local ipmi lan configuration : `ipmitool lan print`
- Update local ipmi lan configuration :
{{< highlight sh >}}
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 10.31.149.39
ipmitool lan set 1 netmask 255.255.255.0
ipmitool mc reset cold
{{< /highlight >}}

View file

@ -0,0 +1,42 @@
---
title: "mdadm"
linkTitle: "mdadm"
date: 2011-11-15
description: >
mdadm
---
## Watch the array status
{{< highlight sh >}}
watch -d -n10 mdadm --detail /dev/md127
{{< /highlight >}}
## Recovery from livecd
{{< highlight sh >}}
mdadm --examine --scan >> /etc/mdadm.conf
mdadm --assemble --scan /dev/md/root
mount /dev/md127 /mnt # or vgscan...
{{< /highlight >}}
If auto detection does not work, you can still assemble an array manually :
{{< highlight sh >}}
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
{{< /highlight >}}
## Resync an array
First rigorously check the output of `cat /proc/mdstat`
{{< highlight sh >}}
mdadm --manage /dev/md0 --re-add /dev/sdb1
{{< /highlight >}}
## Destroy an array
{{< highlight sh >}}
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda
mdadm --zero-superblock /dev/sdb
{{< /highlight >}}

View file

@ -0,0 +1,11 @@
---
title: "MegaCLI"
linkTitle: "MegaCLI"
date: 2018-03-05
description: >
MegaCLI for dell hardware investigations
---
- `megacli -LDInfo -LALL -aALL|grep state`
- `MegaCli -PDlist -a0|less`

View file

@ -0,0 +1,20 @@
---
title: "omreport"
linkTitle: "omreport"
date: 2018-03-05
description: >
omreport
---
## Your raid status at a glance
- `omreport storage pdisk controller=0 vdisk=0|grep -E '^ID|State|Capacity|Part Number'|grep -B1 -A2 Failed`
## Other commands
{{< highlight sh >}}
omreport storage vdisk
omreport storage pdisk controller=0 vdisk=0
omreport storage pdisk controller=0 pdisk=0:0:4
{{< /highlight >}}

View file

@ -0,0 +1,17 @@
---
title: "qemu-nbd"
linkTitle: "qemu-nbd"
date: 2019-07-01
description: >
qemu-nbd
---
{{< highlight sh >}}
modprobe nbd max_part=8
qemu-nbd -c /dev/nbd0 image.img
mount /dev/nbd0p1 /mnt # or vgscan && vgchange -ay
[...]
umount /mnt
qemu-nbd -d /dev/nbd0
{{< /highlight >}}

View file

@ -0,0 +1,31 @@
---
title: "Qemu"
linkTitle: "Qemu"
date: 2019-06-10
description: >
Qemu
---
## Quickly launch a qemu vm with local qcow as hard drive
In this example I am using the docker0 bridge because I do not want to have to modify my shorewall config, but any proper bridge would do :
{{< highlight sh >}}
ip tuntap add tap0 mode tap
brctl addif docker0 tap0
qemu-img create -f qcow2 obsd.qcow2 10G
qemu-system-x86_64 -curses -drive file=install65.fs,format=raw -drive file=obsd.qcow2 -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0
qemu-system-x86_64 -curses -drive file=obsd.qcow2 -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0
{{< /highlight >}}
The first qemu command runs the installer, the second one just runs the vm.
## Launch a qemu vm with your local hard drive
My use case for this is to install openbsd on a server from a hosting provider that doesn't provide an openbsd installer :
{{< highlight sh >}}
qemu-system-x86_64 -curses -drive file=miniroot65.fs -drive file=/dev/sda -net nic -net user
{{< /highlight >}}
## Resources
- https://github.com/dodoritfort/OpenBSD/wiki/Installer-OpenBSD-sur-votre-serveur-Kimsufi

View file

@ -0,0 +1,21 @@
---
title: "rrdtool"
linkTitle: "rrdtool"
date: 2018-09-25
description: >
rrdtool
---
## Graph manually
{{< highlight sh >}}
for i in `ls`; do
rrdtool graph $i.png -w 1024 -h 768 -a PNG --slope-mode --font DEFAULT:7: \
--start -3days --end now DEF:in=$i:netin:MAX DEF:out=$i:netout:MAX \
LINE1:in#0000FF:"in" LINE1:out#00FF00:"out"
done
{{< /highlight >}}
## References
- https://calomel.org/rrdtool.html

View file

@ -0,0 +1,5 @@
---
title: "Debian"
linkTitle: "Debian"
weight: 40
---

View file

@ -0,0 +1,15 @@
---
title: "Error occurred during the signature verification"
linkTitle: "Error occurred during the signature verification"
date: 2015-02-27
description: >
Error occurred during the signature verification
---
Here is how to fix the apt-get "Error occurred during the signature verification" :
{{< highlight sh >}}
cd /var/lib/apt
mv lists lists.old
mkdir -p lists/partial
aptitude update
{{< /highlight >}}

View file

@ -0,0 +1,14 @@
---
title: "Force package removal"
linkTitle: "Force package removal"
date: 2015-01-27
description: >
Force package removal
---
Here is how to force package removal when post-uninstall script fails :
{{< highlight sh >}}
dpkg --purge --force-all <package>
{{< /highlight >}}
There is another option if you need to be smarter, or if it is a pre-uninstall script that fails. Look at `/var/lib/dpkg/info/<package>.*rm` (the `prerm`/`postrm` maintainer scripts), locate the line that fails, comment it out and try to purge again. Repeat until success!
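As a sketch of the comment-out trick on a throwaway copy (the real scripts live under `/var/lib/dpkg/info/`; the failing `systemctl` line here is made up) :

```sh
# A fake postrm script with a line that would fail on this system:
cat > /tmp/example.postrm <<'EOF'
#!/bin/sh
systemctl stop example.service
exit 0
EOF
# Comment out the failing line; dpkg --purge would then be retried:
sed -i 's/^systemctl /#&/' /tmp/example.postrm
grep -n '^#systemctl' /tmp/example.postrm   # prints: 2:#systemctl stop example.service
```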

View file

@ -0,0 +1,12 @@
---
title: "Fix the no public key available error"
linkTitle: "Fix the no public key available error"
date: 2016-01-27
description: >
Fix the no public key available error
---
Here is how to fix the no public key available error :
{{< highlight sh >}}
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys KEYID
{{< /highlight >}}

View file

@ -0,0 +1,5 @@
---
title: "Docker"
linkTitle: "Docker"
weight: 40
---

View file

@ -0,0 +1,12 @@
---
title: "Cleaning a docker host"
linkTitle: "Cleaning a docker host"
date: 2018-01-29
description: >
How to retrieve storage space by cleaning a docker host
---
Be careful: this will delete any stopped container and remove all locally unused images and tags :
{{< highlight sh >}}
docker system prune -f -a
{{< /highlight >}}

View file

@ -0,0 +1,31 @@
---
title: "Docker compose predictable bridge"
linkTitle: "Docker compose predictable bridge"
date: 2018-09-25
description: >
How to use a predefined bridge with docker compose
---
By default, docker-compose will create a network with a randomly named bridge. If you are like me using a strict firewall on all your machines, this just cannot work.
You need to put your services in `network_mode: "bridge"` and add a custom `networks` entry like :
{{< highlight yaml >}}
version: '3.0'
services:
  sshportal:
    image: moul/sshportal
    environment:
      - SSHPORTAL_DEFAULT_ADMIN_INVITE_TOKEN=integration
    command: server --debug
    depends_on:
      - testserver
    ports:
      - 2222
    network_mode: "bridge"
networks:
  default:
    external:
      name: bridge
{{< /highlight >}}

View file

@ -0,0 +1,15 @@
---
title: "Migrate a data volume"
linkTitle: "Migrate a data volume"
date: 2018-01-30
description: >
How to migrate a data volume
---
Here is how to migrate a data volume between two of your hosts. A rsync of the proper `/var/lib/docker/volumes` subfolder would work just as well, but is here a fun way to do it with docker and pipes :
{{< highlight sh >}}
export VOLUME=tiddlywiki
export DEST=10.1.0.242
docker run --rm -v $VOLUME:/from alpine ash -c "cd /from ; tar -cpf - ." \
  | ssh $DEST "docker run --rm -i -v $VOLUME:/to alpine ash -c 'cd /to ; tar -xpf -'"
{{< /highlight >}}

View file

@ -0,0 +1,16 @@
---
title: "Shell usage in dockerfile"
linkTitle: "Shell usage in dockerfile"
date: 2019-02-04
description: >
How to use a proper shell in a dockerfile
---
The default shell is `["/bin/sh", "-c"]`, which doesn't handle pipe failures when chaining commands. To catch errors when using pipes, use this :
{{< highlight dockerfile >}}
SHELL ["/bin/bash", "-eux", "-o", "pipefail", "-c"]
{{< /highlight >}}
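To see what this changes : without `pipefail`, a pipeline's exit status is that of its last command, which hides failures earlier in the pipe :

```sh
# Plain sh: the failure of 'false' is masked by 'true'.
sh -c 'false | true'; echo "without pipefail: $?"       # prints: without pipefail: 0
# With pipefail, the failure propagates to the pipeline status.
bash -o pipefail -c 'false | true'; echo "with pipefail: $?"   # prints: with pipefail: 1
```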
## References
- https://bearstech.com/societe/blog/securiser-et-optimiser-notre-liste-des-bonnes-pratiques-liees-aux-dockerfiles/

View file

@ -0,0 +1,5 @@
---
title: "FreeBSD"
linkTitle: "FreeBSD"
weight: 40
---

View file

@ -0,0 +1,11 @@
---
title: "Activate the serial console"
linkTitle: "Activate the serial console"
date: 2018-01-03
description: >
How to activate the serial console
---
Here is how to activate the serial console on a FreeBSD server :
- Append `console="comconsole"` to `/boot/loader.conf`
- Append or update the existing `ttyd0` line in `/etc/ttys` to : `ttyd0 "/usr/libexec/getty std.9600" vt100 on secure`

View file

@ -0,0 +1,13 @@
---
title: "Change the ip address of a running jail"
linkTitle: "Change the ip address of a running jail"
date: 2018-09-25
description: >
How to change the ip address of a running jail
---
Here is how to change the ip address of a running jail :
{{< highlight sh >}}
jail -m ip4.addr="192.168.1.87,192.168.1.88" jid=1
{{< /highlight >}}

View file

@ -0,0 +1,14 @@
---
title: "Clean install does not boot"
linkTitle: "Clean install does not boot"
date: 2018-01-02
description: >
How to fix a clean install that refuses to boot
---
I installed a fresh FreeBSD server today, and to my surprise it refused to boot. I had to do the following from my liveUSB :
{{< highlight sh >}}
gpart set -a active /dev/ada0
gpart set -a bootme -i 1 /dev/ada0
{{< /highlight >}}

View file

@ -0,0 +1,5 @@
---
title: "Gentoo"
linkTitle: "Gentoo"
weight: 40
---

View file

@ -0,0 +1,24 @@
---
title: "Get zoom to work"
linkTitle: "Get zoom to work"
date: 2018-01-02
description: >
How to get the zoom video conferencing tool to work on gentoo
---
The zoom video conferencing tool works on gentoo, but since it is not integrated in a desktop environment on my machine (I am running the i3 window manager) I cannot authenticate on the google corporate domain where I work. Here is how to work around that.
## Running the client
{{< highlight sh >}}
./ZoomLauncher
{{< /highlight >}}
## Working around the "zoommtg address not understood" error
When you try to authenticate, your web browser will pop up with a link it cannot interpret. You need to get the `zoommtg://.*` thing and run it in another ZoomLauncher (do not close the zoom process that spawned this authentication link or the authentication will fail) :
{{< highlight sh >}}
./ZoomLauncher 'zoommtg://zoom.us/google?code=XXXXXXXX'
{{< /highlight >}}

content/en/blog/gentoo/steam.md Executable file
View file

@ -0,0 +1,13 @@
---
title: "Steam"
linkTitle: "Steam"
date: 2019-02-16
description: >
How to make steam work seamlessly on gentoo with a chroot
---
I am not using a multilib profile on gentoo (I use amd64 only everywhere), so when the time came to install steam I had to get a little creative. Overall I believe this is the perfect way to install and use steam, as it contains it cleanly while not limiting any functionality. In particular sound works, as does hardware acceleration in games. I tried to achieve the same with containers but didn't quite make it work as well as this chroot setup.
[Here is the link to the full article describing how I achieved that.]({{< relref "/docs/gentoo/steam.md" >}})

View file

@ -0,0 +1,5 @@
---
title: "Miscellaneous"
linkTitle: "Miscellaneous"
weight: 40
---

View file

@ -0,0 +1,38 @@
---
title: "Some bacula/bareos commands"
linkTitle: "Some bacula/bareos commands"
date: 2018-01-10
description: >
Some bacula/bareos commands
---
Bacula is a backup software, bareos is a fork of it. Here are some tips and solutions to specific problems.
## Adjust an existing volume for pool configuration changes
In bconsole, run the following commands and follow the prompts :
{{< highlight sh >}}
update pool from resource
update all volumes in pool
{{< /highlight >}}
## Using bextract
On the storage daemon (sd) you need to have a valid device name with the path to your tape, then run :
{{< highlight sh >}}
bextract -V <volume names separated by |> <device-name> <directory-to-store-files>
{{< /highlight >}}
## Integer out of range sql error
If you get an sql error `integer out of range` for an insert query in the catalog, check the id sequence for the table which had the error. For
example with the basefiles table :
{{< highlight sql >}}
select nextval('basefiles_baseid_seq');
{{< /highlight >}}
You can then fix it with :
{{< highlight sql >}}
alter table BaseFiles alter column baseid set data type bigint;
{{< /highlight >}}

View file

@ -0,0 +1,15 @@
---
title: "Bash tcp client"
linkTitle: "Bash tcp client"
date: 2018-03-21
description: >
Bash tcp client
---
There are some fun toys in bash. I would not rely on them for a production script, but here is one such thing :
{{< highlight sh >}}
exec 5<>/dev/tcp/10.1.0.254/8080
echo -e "GET / HTTP/1.0\n" >&5
cat <&5
{{< /highlight >}}

View file

@ -0,0 +1,16 @@
---
title: "Boot from initramfs shell"
linkTitle: "Boot from initramfs shell"
date: 2014-01-24
description: >
Boot from initramfs shell
---
I had to finish booting from an initramfs shell, here is how I used `switch_root` to do so :
{{< highlight sh >}}
lvm vgscan
lvm vgchange -ay vg
mount -t ext4 /dev/mapper/vg-root /root
exec switch_root -c /dev/console /root /sbin/init
{{< /highlight >}}

View file

@ -0,0 +1,29 @@
---
title: "Building rpm packages"
linkTitle: "Building rpm packages"
date: 2016-02-22
description: >
Building rpm packages
---
Here is how to build an rpm package locally. Tested at the time on CentOS 7.
## Setup your environment
First of all, you have to use a non-root account.
- Create the necessary directories : `mkdir -p ~/rpmbuild/{BUILD,RPMS,S{OURCE,PEC,RPM}S}`
- Tell rpmbuild where to build by adding the following to your `~/.rpmmacros` file : `echo -e "%_topdir\t$HOME/rpmbuild" >> ~/.rpmmacros`
## Building package
There are several ways to build a rpm, depending on what kind of stuff you have to deal with.
### Building from a tar.gz archive containing a .spec file
Run the following on your .tar.gz archive : `rpmbuild -tb memcached-1.4.0.tar.gz`. When the build ends, you will find your package in a directory like `$HOME/rpmbuild/RPMS/x86_64/`, depending on your architecture.
### Building from a spec file
- `rpmbuild -v -bb ./contrib/redhat/collectd.spec`
- If you are missing some dependencies : `rpmbuild -v -bb ./contrib/redhat/collectd.spec 2>&1 |awk '/is needed/ {print $1;}'|xargs yum install -y`

View file

@ -0,0 +1,11 @@
---
title: "Clean old centos kernels"
linkTitle: "Clean old centos kernels"
date: 2016-02-03
description: >
Clean old centos kernels
---
There is a setting in `/etc/yum.conf` that does exactly that : `installonly_limit=`. The value of this setting is the number of older kernels that are kept when a new kernel is installed by yum. If the number of installed kernels becomes greater than this, the oldest one gets removed at the same time a new one is installed.
This cleaning can also be done manually with a command that belongs to the yum-utils package : `package-cleanup oldkernels count=2`
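For example, a value of 3 keeps at most three kernels installed at any time; the setting lives in the `[main]` section :
{{< highlight conf >}}
# /etc/yum.conf
[main]
installonly_limit=3
{{< /highlight >}}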

---
title: "Investigate postgresql disk usage"
linkTitle: "Investigate postgresql disk usage"
date: 2015-11-24
description: >
Investigate postgresql disk usage
---
## How to debug disk occupation in postgresql
- get a database oid number from `ncdu` in `/var/lib/postgresql`
- reconcile oid number and db name with : `select oid,datname from pg_database where oid=18595;`
- Then in database : `select table_name,pg_relation_size(quote_ident(table_name)) from information_schema.tables where table_schema = 'public' order by 2;`
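The sizes are easier to read through `pg_size_pretty`; for example, to list the ten biggest tables of the current database :
{{< highlight sh >}}
SELECT table_name,
       pg_size_pretty(pg_relation_size(quote_ident(table_name))) AS size
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY pg_relation_size(quote_ident(table_name)) DESC
LIMIT 10;
{{< /highlight >}}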

---
title: "etc-update script for alpine linux"
linkTitle: "etc-update script for alpine linux"
date: 2019-04-02
description: >
etc-update script for alpine linux
---
Alpine linux doesn't seem to have a tool to merge pending configuration changes, so I wrote one :
{{< highlight sh >}}
#!/bin/sh
set -eu
for new_file in $(find /etc -iname '*.apk-new'); do
current_file=${new_file%.apk-new}
echo "===== New config file version for $current_file ====="
diff ${current_file} ${new_file} || true
while true; do
echo "===== (r)eplace file with update? (d)iscard update? (m)erge files? (i)gnore ====="
printf "r/d/m/i? "
read choice
case ${choice} in
r)
mv ${new_file} ${current_file}
break;;
d)
rm -f ${new_file}
break;;
m)
vimdiff ${new_file} ${current_file}
break;;
i)
break;;
esac
done
done
{{< /highlight >}}

---
title: "Use spaces in fstab"
linkTitle: "Use spaces in fstab"
date: 2011-09-29
description: >
How to use spaces in a folder name in fstab
---
Here is how to use spaces in a folder name in fstab : you put `\040` where you want a space.
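For example, with a hypothetical mount point containing a space :
{{< highlight sh >}}
# mounts /dev/sdb1 on "/mnt/my disk"
/dev/sdb1  /mnt/my\040disk  ext4  noatime  0 2
{{< /highlight >}}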

---
title: "i3dropdown"
linkTitle: "i3dropdown"
date: 2020-01-23
description: >
i3dropdown
---
i3dropdown is a tool to make any X application drop down from the top of the screen, in the famous Quake console style.
## Compilation
First of all, you have to get i3dropdown and compile it. It does not have any dependencies so it is really easy :
{{< highlight sh >}}
git clone https://gitlab.com/exrok/i3dropdown
cd i3dropdown
make
cp build/i3dropdown ~/bin/
{{< /highlight >}}
## i3 configuration
Here is a working example of the pavucontrol app, a volume mixer I use :
{{< highlight conf >}}
exec --no-startup-id i3 --get-socketpath > /tmp/i3wm-socket-path
for_window [instance="^pavucontrol"] floating enable
bindsym Mod4+shift+p exec /home/julien/bin/i3dropdown -W 90 -H 50 pavucontrol pavucontrol-qt
{{< /highlight >}}
To work properly, i3dropdown needs to have the path to the i3 socket. Because the command to get the socketpath from i3 is a little slow, it is best to cache it somewhere. By default
i3dropdown recognises `/tmp/i3wm-socket-path`. Then each window managed by i3dropdown needs to be floating. The last line binds a key to invoke or mask the app.

---
title: "Removing libreoffice write protection"
linkTitle: "Removing libreoffice write protection"
date: 2018-03-05
description: >
Removing libreoffice write protection
---
You can choose to ignore write-protection by setting `Tools > Options > libreOffice Writer > Formatting Aids > Protected Areas > Ignore protection`.

---
title: "Link to a deleted inode"
linkTitle: "Link to a deleted inode"
date: 2018-03-05
description: >
Link to a deleted inode
---
Get the inode number from `lsof`, then run `debugfs -w /dev/mapper/vg-home -R 'link <16008> /some/path'` where 16008 is the inode number (the < > are important, they tell debugfs you manipulate an inode).
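When you do not know which file to recover, `lsof +L1` is a quick way to find candidates : it lists open files whose link count is below one (deleted but still held open by a process), with the inode number in the NODE column :
{{< highlight sh >}}
lsof +L1
{{< /highlight >}}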

---
title: "Understanding make"
linkTitle: "Understanding make"
date: 2018-01-30
description: >
Understanding make
---
http://gromnitsky.users.sourceforge.net/articles/notes-for-new-make-users/
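Since that article is a long read, here is a minimal hypothetical Makefile illustrating the basics it covers : targets, prerequisites and the automatic variables `$@`, `$^` and `$<` :
{{< highlight make >}}
# link prog from its objects; $@ is the target, $^ all prerequisites
prog: main.o
	cc -o $@ $^

# compile a .o from the matching .c; $< is the first prerequisite
main.o: main.c
	cc -c -o $@ $<
{{< /highlight >}}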

---
title: "Aggregate images into a video with mencoder"
linkTitle: "Aggregate images into a video with mencoder"
date: 2018-04-30
description: >
Aggregate images into a video with mencoder
---
## Aggregate png images into a video
{{< highlight sh >}}
mencoder mf://*.png -mf w=1400:h=700:fps=1:type=png -ovc lavc -lavcopts vcodec=mpeg4:mbd=2:trell -oac copy -o output.avi
{{< /highlight >}}
You can use the following syntax to specify a list of files instead of `*.png` :
{{< highlight sh >}}
mf://@list.txt
{{< /highlight >}}
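One way to generate such a list in natural numeric order is GNU `sort -V`, which puts frame_2 before frame_10 where a plain lexicographic sort would not. The frame names below are hypothetical :

```shell
printf '%s\n' frame_10.png frame_2.png frame_1.png | sort -V > list.txt
cat list.txt
```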
## References
- http://www.mplayerhq.hu/DOCS/HTML/en/menc-feat-enc-images.html

---
title: "Installing mssql on centos 7"
linkTitle: "Installing mssql on centos 7"
date: 2019-07-09
description: >
Installing mssql on centos 7
---
{{< highlight sh >}}
vi /etc/sysconfig/network-scripts/ifcfg-eth0
vi /etc/resolv.conf
curl -o /etc/yum.repos.d/mssql-server.repo https://packages.microsoft.com/config/rhel/7/mssql-server-2017.repo
curl -o /etc/yum.repos.d/mssql-prod.repo https://packages.microsoft.com/config/rhel/7/prod.repo
yum update
yum install -y mssql-server mssql-tools
yum install -y sudo
localectl set-locale LANG=en_US.utf8
echo "export LANG=en_US.UTF-8" >> /etc/profile.d/locale.sh
echo "export LANGUAGE=en_US.UTF-8" >> /etc/profile.d/locale.sh
yum install -y openssh-server
systemctl enable sshd
systemctl start sshd
passwd
/opt/mssql/bin/mssql-conf setup
rm -f /etc/localtime
ln -s /usr/share/zoneinfo/Europe/Paris /etc/localtime
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -p
{{< /highlight >}}

---
title: "Cannot login role into postgresql"
linkTitle: "Cannot login role into postgresql"
date: 2015-11-24
description: >
Cannot login role into postgresql
---
{{< highlight sh >}}
ALTER ROLE "user" LOGIN;
{{< /highlight >}}

---
title: "LDAP auth with nginx"
linkTitle: "LDAP auth with nginx"
date: 2018-03-05
description: >
LDAP auth with nginx
---
{{< highlight sh >}}
ldap_server ldap {
auth_ldap_cache_enabled on;
auth_ldap_cache_expiration_time 10000;
auth_ldap_cache_size 1000;
url "ldaps://ldapslave.adyxax.org/ou=Users,dc=adyxax,dc=org?uid?sub?(objectClass=posixAccount)";
binddn "cn=admin,dc=adyxax,dc=org";
binddn_passwd secret;
group_attribute memberUid;
group_attribute_is_dn off;
satisfy any;
require valid_user;
#require group "cn=admins,ou=groups,dc=adyxax,dc=org";
}
{{< /highlight >}}

---
title: "Pleroma installation notes"
linkTitle: "Pleroma installation notes"
date: 2018-11-16
description: >
Pleroma installation notes
---
This article is about my installation of pleroma in a standard alpine linux lxd container.
## Installation notes
{{< highlight sh >}}
apk add elixir nginx postgresql postgresql-contrib git sudo erlang-ssl erlang-xmerl erlang-parsetools erlang-runtime-tools make gcc build-base vim vimdiff htop curl
/etc/init.d/postgresql start
rc-update add postgresql default
cd /srv
git clone https://git.pleroma.social/pleroma/pleroma
cd pleroma/
mix deps.get
mix generate_config
cp config/generated_config.exs config/prod.secret.exs
cat config/setup_db.psql
{{< /highlight >}}
At this stage you are supposed to execute these setup_db commands in your postgres. Instead of the chmod gymnastics detailed in the official documentation, I execute them manually from a psql shell :
{{< highlight sh >}}
su - postgres
psql
CREATE USER pleroma WITH ENCRYPTED PASSWORD 'XXXXXXXXXXXXXXXXXXX';
CREATE DATABASE pleroma_dev OWNER pleroma;
\c pleroma_dev;
CREATE EXTENSION IF NOT EXISTS citext;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
{{< /highlight >}}
Now back to pleroma :
{{< highlight sh >}}
MIX_ENV=prod mix ecto.migrate
MIX_ENV=prod mix phx.server
{{< /highlight >}}
If this last command runs without error your pleroma will be available and you can test it with :
{{< highlight sh >}}
curl http://localhost:4000/api/v1/instance
{{< /highlight >}}
If this works, you can shut it down with two C-c and we can configure nginx. This article doesn't really cover my setup since my nginx doesn't run there, and I am using letsencrypt wildcard certificates fetched somewhere else unrelated, so to simplify I only paste the vhost part of the configuration :
{{< highlight sh >}}
### in nginx.conf inside the container ###
# {{{ pleroma
proxy_cache_path /tmp/pleroma-media-cache levels=1:2 keys_zone=pleroma_media_cache:10m max_size=500m inactive=200m use_temp_path=off;
ssl_session_cache shared:ssl_session_cache:10m;
server {
listen 80;
listen [::]:80;
server_name social.adyxax.org;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name social.adyxax.org;
root /usr/share/nginx/html;
include /etc/nginx/vhost.d/social.conf;
ssl_certificate /etc/nginx/fullchain;
ssl_certificate_key /etc/nginx/privkey;
}
# }}}
### in a vhost.d/social.conf ###
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://172.16.1.8:4000/;
add_header 'Access-Control-Allow-Origin' '*';
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
allow all;
}
location /proxy {
proxy_cache pleroma_media_cache;
proxy_cache_lock on;
proxy_pass http://172.16.1.8:4000$request_uri;
}
client_max_body_size 20M;
{{< /highlight >}}
Now add the phx.server on boot. I run pleroma as the pleroma user to limit the permissions of the server software. The official documentation has all files belonging to the user running the server; I prefer that only the uploads directory does. Since I don't run nginx from this container I also edit this out :
{{< highlight sh >}}
adduser -s /sbin/nologin -D -h /srv/pleroma pleroma
cp -a /root/.hex/ /srv/pleroma/.
cp -a /root/.mix /srv/pleroma/.
chown -R pleroma:pleroma /srv/pleroma/uploads
cp installation/init.d/pleroma /etc/init.d
sed -i /etc/init.d/pleroma -e '/^directory=/s/=.*/=\/srv\/pleroma/'
sed -i /etc/init.d/pleroma -e '/^command_user=/s/=.*/=nobody:nobody/'
sed -i /etc/init.d/pleroma -e 's/nginx //'
rc-update add pleroma default
rc-service pleroma start
{{< /highlight >}}
You should be good to go and access your instance from any web browser. After creating your account in a web browser come back to the cli and set yourself as moderator :
{{< highlight sh >}}
mix set_moderator adyxax
{{< /highlight >}}
## References
- https://git.pleroma.social/pleroma/pleroma

---
title: "Grant postgresql read only access"
linkTitle: "Grant postgresql read only access"
date: 2015-11-24
description: >
Grant postgresql read only access
---
{{< highlight sh >}}
GRANT CONNECT ON DATABASE "db" TO "user";
\c db
GRANT USAGE ON SCHEMA public TO "user";
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "user";
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT ON TABLES TO "user";
{{< /highlight >}}

---
title: "Change owner on a postgresql database and all tables"
linkTitle: "Change owner on a postgresql database and all tables"
date: 2012-04-20
description: >
Change owner on a postgresql database and all tables
---
{{< highlight sh >}}
ALTER DATABASE name OWNER TO new_owner;
for tbl in `psql -qAt -c "select tablename from pg_tables where schemaname = 'public';" YOUR_DB` ; do psql -c "alter table $tbl owner to NEW_OWNER" YOUR_DB ; done
for tbl in `psql -qAt -c "select sequence_name from information_schema.sequences where sequence_schema = 'public';" YOUR_DB` ; do psql -c "alter table $tbl owner to NEW_OWNER" YOUR_DB ; done
for tbl in `psql -qAt -c "select table_name from information_schema.views where table_schema = 'public';" YOUR_DB` ; do psql -c "alter table $tbl owner to NEW_OWNER" YOUR_DB ; done
{{< /highlight >}}
{{< highlight sh >}}
reassign owned by "support" to "test-support";
{{< /highlight >}}

---
title: "Pulseaudio"
linkTitle: "Pulseaudio"
date: 2018-09-25
description: >
Pulseaudio
---
- List outputs : `pacmd list-sinks | grep -e 'name:' -e 'index'`
- Select a new one : `pacmd set-default-sink alsa_output.usb-C-Media_Electronics_Inc._USB_PnP_Sound_Device-00.analog-stereo`
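Changing the default sink does not affect streams that are already playing; those can be moved individually. In this sketch the sink input index (0 here, an assumption) comes from the first command, and the sink name from the listing above :
{{< highlight sh >}}
pacmd list-sink-inputs | grep -e 'index:'
pacmd move-sink-input 0 alsa_output.usb-C-Media_Electronics_Inc._USB_PnP_Sound_Device-00.analog-stereo
{{< /highlight >}}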

---
title: "Purge postfix queue based on email contents"
linkTitle: "Purge postfix queue based on email contents"
date: 2009-04-27
description: >
Purge postfix queue based on email contents
---
{{< highlight sh >}}
find /var/spool/postfix/deferred/ -type f -exec grep -li 'XXX' '{}' \; | xargs -n1 basename | xargs -n1 postsuper -d
{{< /highlight >}}

---
title: "Qmail"
linkTitle: "Qmail"
date: 2018-03-05
description: >
Qmail
---
## Commands
- Get statistics : `qmail-qstat`
- list queued mails : `qmail-qread`
- Read an email in the queue (NNNN is the #id from qmail-qread) : `find /var/qmail/queue -name NNNN| xargs cat | less`
- Change queue lifetime for qmail in seconds (example here for 15 days) : `echo 1296000 > /var/qmail/control/queuelifetime`
## References
- http://www.lifewithqmail.org/lwq.html
- http://www.fileformat.info/tip/linux/qmailnow.htm
- https://www.hivelocity.net/kb/how-to-change-queue-lifetime-for-qmail/

---
title: "RocketChat"
linkTitle: "RocketChat"
date: 2019-08-06
description: >
RocketChat
---
Docker simple install :
{{< highlight sh >}}
docker run --name db -d mongo --smallfiles --replSet hurricane
docker exec -ti db mongo
> rs.initiate()
docker run -p 3000:3000 --name rocketchat --env ROOT_URL=http://hurricane --env MONGO_OPLOG_URL=mongodb://db:27017/local?replSet=hurricane --link db -d rocket.chat
{{< /highlight >}}

View file

@ -0,0 +1,17 @@
---
title: "Screen cannot open terminal error"
linkTitle: "Screen cannot open terminal error"
date: 2018-07-03
description: >
Screen cannot open terminal error
---
If you encounter :
{{< highlight sh >}}
Cannot open your terminal '/dev/pts/0' - please check.
{{< /highlight >}}
Then you did not open the shell with the user you logged in with. You can make screen happy by running :
{{< highlight sh >}}
script /dev/null
{{< /highlight >}}

---
title: "Seti@Home"
linkTitle: "Seti@Home"
date: 2018-03-05
description: >
Seti@Home
---
{{< highlight sh >}}
apt install boinc
echo "graou" > /var/lib/boinc-client/gui_rpc_auth.cfg
systemctl restart boinc-client
boinccmd --host localhost --passwd graou --get_messages 0
boinccmd --host localhost --passwd graou --get_state|less
boinccmd --host localhost --passwd graou --lookup_account http://setiathome.berkeley.edu <EMAIL> XXXXXX
boinccmd --host localhost --passwd graou --project_attach http://setiathome.berkeley.edu <ACCOUNT_KEY>
{{< /highlight >}}

---
title: "Sqlite pretty print"
linkTitle: "Sqlite pretty print"
date: 2019-06-19
description: >
Sqlite pretty print
---
- In ~/.sqliterc :
{{< highlight sh >}}
.mode column
.headers on
.separator ROW "\n"
.nullvalue NULL
{{< /highlight >}}

---
title: "Switching to Hugo"
linkTitle: "Switching to Hugo"

---
title: "Netapp"
linkTitle: "Netapp"
weight: 30
---

---
title: "Investigate memory errors"
linkTitle: "Investigate memory errors"
date: 2018-07-06
description: >
How to investigate memory errors on a data ONTAP system
---
{{< highlight sh >}}
set adv
system node show-memory-errors -node <cluster_node>
{{< / highlight >}}

---
title: "Travels"
linkTitle: "Travels"
weight: 20
---

---
title: "I am back from New Zealand"
linkTitle: "Back from New Zealand"

---
title: "Yet Another SysAdmin Wiki"
linkTitle: "Wiki"
weight: 20
menu:
  main:
    weight: 20
---
This is the wiki section of this website. When articles are not just self-contained blog posts, I organise the information in the sections below :

---
title: "About me"
linkTitle: "About me"
weight: 1
description: >
  Information about the author of this website
---
## Who am I?
Hello, and thanks for asking! My name is Julien Dessaux, and I am a 34 years old
## Online presence
You won't find me on social networking websites. I have a Linkedin account that I don't use and that's it. I tried to make social networking work when I installed a pleroma instance
for my own use but I ended up trashing it. I just don't get this aspect of modern society. I hang out with my friends and we catch up : we talk about our lives, what happened to us. We share photos and
stories while having a drink... and that's it!
## Professional Career
I'm currently employed as a System and Network Architect at an awesome company named AlterWay, 3 years and counting. Before that I worked for 7 years at another awesome company named Intersec where I led the IT team.
### Intersec
When I joined Intersec in September 2009 as the first full-time system administrator we were just about 15 people. When I left in 2016 it had grown to more than 160 people with
branch offices in three countries, and I am glad I was along for the ride. I have been the head of IT for about four years, participating in Intersec's growth by scaling the
infrastructure, deploying new services (remote access, self-hosted email, backups, monitoring, etc.), and recruiting my teammates. I left Intersec looking for new challenges and
for a new life away from the capital. Paris is one of the best cities on earth, but I needed a change and left for Lyon.
### AlterWay
I joined Alterway in October 2016 for a more technical role and a bit of a career shift towards networking. It has been a great experience.
## How to get in touch

title: "adyxax.org"
linkTitle: "adyxax.org"
weight: 1
description: >
  adyxax.org is my personal computer infrastructure. This section details how I built it and why, and how I maintain it.
---
## What is adyxax.org?
adyxax.org is very much like a small personal cloud of servers hosted here and there. I am using my experience as a
sysadmin to make it all work and provide various services that are useful to me and people close to me. As a good sysadmin, I am trying to be lazy and build the most self-maintainable
solution, with as little maintenance overhead as possible.
It relies on mostly gentoo (and some optional openbsd) servers interconnected with point to point openvpn links. Services run inside lxd containers, and communications between all those services work
thanks to dynamic routing with bird and ospf along those openvpn links.
## Why write about it?
It is a rather unusual infrastructure that I am proud of, and writing about it helps me to reflect on what I built. Gentoo, OpenBSD and LXD is not the most popular combination of
technologies but I leveraged it to build something simple, flexible and I believe somewhat elegant and beautiful.

---
title: "Services"
linkTitle: "Services"
weight: 1
description: >
Here are the services provided by adyxax.org
---

---
title: "checkmk"
linkTitle: "checkmk"
weight: 1
description: >
checkmk
---
TODO
## Updating
- Download latest raw edition package from http://mathias-kettner.com/check_mk_download_version.php?HTML=yes&version=1.2.8p15&edition=cre and install it.
- `omd backup adyxax adyxax.bak`
- `omd update adyxax`
- If all went well, apt purge the previous check_mk version to free space.

---
title: "nethack"
linkTitle: "nethack"
weight: 1
description: >
nethack
---
## dgamelaunch
TODO
{{< highlight sh >}}
groupadd -r games
useradd -r -g games nethack
git clone
{{< /highlight >}}
## nethack
TODO
{{< highlight sh >}}
{{< /highlight >}}
## scores script
TODO
{{< highlight sh >}}
{{< /highlight >}}
## copying shared libraries
{{< highlight sh >}}
cd /opt/nethack
for i in `ls bin`; do for l in `ldd bin/$i | tail -n +1 | cut -d'>' -f2 | awk '{print $1}'`; do if [ -f $l ]; then echo $l; cp $l lib64/; fi; done; done
for l in `ldd dgamelaunch | tail -n +1 | cut -d'>' -f2 | awk '{print $1}'`; do if [ -f $l ]; then echo $l; cp $l lib64/; fi; done
for l in `ldd nethack-3.7.0-r1/games/nethack | tail -n +1 | cut -d'>' -f2 | awk '{print $1}'`; do if [ -f $l ]; then echo $l; cp $l lib64/; fi; done
{{< /highlight >}}
## making device nodes
TODO! For now I mount all of /dev in the chroot :
{{< highlight sh >}}
#mknod -m 666 dev/ptmx c 5 2
mount -R /dev /opt/nethack/dev
{{< /highlight >}}
## debugging
{{< highlight sh >}}
gdb chroot
run --userspec=nethack:games /opt/nethack/ /dgamelaunch
{{< /highlight >}}

---
title: "www"
linkTitle: "www"
weight: 1
description: >
adyxax.org main entry website. www.adyxax.org, wiki.adyxax.org and blog.adyxax.org all point here.
---
This is the website you are currently reading. It is a static website built using [hugo](https://github.com/gohugoio/hugo). This article details how I
installed hugo, how I initialised this website and how I manage it. I often refer to it as wiki.adyxax.org because I hosted a single dokuwiki for a long
time as my main website (and a pmwiki before that), but with hugo it has become more than that. It is now a mix of wiki, blog and showcase of my work and interests.
## Installing hugo
{{< highlight sh >}}
go get github.com/gohugoio/hugo
{{< / highlight >}}
You probably won't encounter this issue but this command failed at the time I installed hugo because the master branch in one of the dependencies was
tainted. I fixed it by using a stable tag for this project and continued installing hugo from there:
{{< highlight sh >}}
cd go/src/github.com/tdewolff/minify/
tig --all
git checkout v2.6.1
go get github.com/gohugoio/hugo
{{< / highlight >}}
This did not build me the extended version of hugo that I need for the [docsy](https://github.com/google/docsy) theme I chose, so I had to get it by doing :
{{< highlight sh >}}
cd ~/go/src/github.com/gohugoio/hugo/
go get --tags extended
go install --tags extended
{{< / highlight >}}
## Bootstraping this site
{{< highlight sh >}}
hugo new site www
cd www
git init
git submodule add https://github.com/google/docsy themes/docsy
{{< / highlight >}}
The docsy theme requires two nodejs programs to run :
{{< highlight sh >}}
npm install -D --save autoprefixer
npm install -D --save postcss-cli
{{< / highlight >}}
## hugo commands
To spin up the live server for automatic rebuilding the website when writing articles :
{{< highlight sh >}}
hugo server --bind 0.0.0.0 --minify --disableFastRender
{{< / highlight >}}
To publish the website in the `public` folder :
{{< highlight sh >}}
hugo --minify
{{< / highlight >}}

---
title: "Gentoo"
linkTitle: "Gentoo"
weight: 1
description: >
Gentoo related articles
---

---
title: "Installation"
linkTitle: "installation"
weight: 1
description: >
Installation of a gentoo system
---
## Installation media
You can get a bootable iso or liveusb from https://www.gentoo.org/downloads/. I recommend the minimal one. To create a bootable usb drive just use `dd` to copy the image onto it. Then boot on this brand new installation media.
Once you boot on the installation media, you can start sshd and set a temporary password, then proceed with the installation more comfortably from another machine :
{{< highlight sh >}}
/etc/init.d/sshd start
passwd
{{< /highlight >}}
## Partitioning
There are several options depending on whether you need soft raid, full disk encryption or a simple root device with no additional complications. It will also differ if you are using a virtual machine or a physical one.
{{< highlight sh >}}
fdisk /dev/sda
g
n
1
2048
+2M
t
1
4
n
2
6144
+512M
t
2
1
n
3
1054720
w
mkfs.ext4 /dev/sda3
mkfs.fat -F 32 -n efi-boot /dev/sda2
mount /dev/sda3 /mnt/gentoo
{{< /highlight >}}
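The same layout can be scripted non-interactively with `sfdisk`. A sketch, assuming the same /dev/sda and sizes; the GUIDs are the standard BIOS boot, EFI System and Linux filesystem partition types :
{{< highlight sh >}}
sfdisk /dev/sda <<'EOF'
label: gpt
size=2MiB, type=21686148-6449-6E6F-744E-656564454649
size=512MiB, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B
type=0FC63DAF-8483-4772-8E79-3D69D8477DE4
EOF
{{< /highlight >}}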
## Get the stage3 and chroot into it
Get the stage 3 installation file from https://www.gentoo.org/downloads/. I personally use the non-multilib one from the advanced choices, since I am no longer using any 32-bit software except steam, and I use steam from a multilib chroot.
Put the archive on the server in /mnt/gentoo (you can simply wget it from there), then extract it :
{{< highlight sh >}}
cd /mnt/gentoo
tar xpf stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner
mount /dev/sda2 boot
mount -t proc none proc
mount -t sysfs none sys
mount -o rbind /dev dev
cp /etc/resolv.conf etc/
chroot .
{{< /highlight >}}
## Initial configuration
We prepare the local language of the system :
{{< highlight sh >}}
env-update && source /etc/profile
echo 'LANG="en_US.utf8"' > /etc/env.d/02locale
sed '/#en_US.UTF-8/s/#//' -i /etc/locale.gen
locale-gen
source /etc/profile
{{< /highlight >}}
We set up a loop device to hold the portage tree. It is formatted with optimisations for the many small files that compose it :
{{< highlight sh >}}
mkdir -p /srv/gentoo-distfiles
truncate -s 10G /portage.img
mke2fs -b 1024 -i 2048 -m 0 -O "dir_index" -F /portage.img
tune2fs -c 0 -i 0 /portage.img
mkdir /usr/portage
mount -o loop,noatime,nodev /portage.img /usr/portage/
{{< /highlight >}}
We set default compilation options and flags. If you are not me and cannot rsync this location, you can browse it from https://packages.adyxax.org/x86-64/etc/portage/ :
{{< highlight sh >}}
rsync -a --delete packages.adyxax.org:/srv/gentoo-builder/x86-64/etc/portage/ /etc/portage/
sed -i /etc/portage/make.conf -e s/buildpkg/getbinpkg/
echo 'PORTAGE_BINHOST="https://packages.adyxax.org/x86-64/packages/"' >> /etc/portage/make.conf
{{< /highlight >}}
We now fetch the portage tree :
{{< highlight sh >}}
emerge --sync
{{< /highlight >}}
## Set hostname and timezone
{{< highlight sh >}}
export HOSTNAME=XXXXX
sed -i /etc/conf.d/hostname -e /hostname=/s/=.*/=\"${HOSTNAME}\"/
echo "Europe/Paris" > /etc/timezone
emerge --config sys-libs/timezone-data
{{< /highlight >}}
## Check cpu flags and compatibility
TODO
{{< highlight sh >}}
emerge cpuid2cpuflags -1q
cpuid2cpuflags
gcc -### -march=native /usr/include/stdlib.h
{{< /highlight >}}
## Rebuild the system
{{< highlight sh >}}
emerge --quiet -e @world
emerge --quiet dosfstools app-admin/logrotate app-admin/syslog-ng app-portage/gentoolkit dev-vcs/git bird openvpn htop net-analyzer/tcpdump net-misc/bridge-utils sys-apps/i2c-tools sys-apps/pciutils sys-apps/usbutils sys-boot/grub sys-fs/ncdu sys-process/lsof
{{< /highlight >}}
## Grab a working kernel
Next we need to grab a working kernel from our build server along with its modules. If you don't have one already, you have some work to do!
Check the necessary hardware support with :
{{< highlight sh >}}
i2cdetect -l
lspci -nnk
lsusb
{{< /highlight >}}
TODO specific page with details on how to build required modules like the nas for example.
{{< highlight sh >}}
emerge gentoo-sources genkernel -q
...
{{< /highlight >}}
## Final configuration steps
### fstab
{{< highlight sh >}}
# /etc/fstab: static file system information.
#
#<fs> <mountpoint> <type> <opts> <dump/pass>
/dev/vda3 / ext4 noatime 0 1
/dev/vda2 /boot vfat noatime 1 2
/portage.img /usr/portage ext2 noatime,nodev,loop 0 0
{{< /highlight >}}
### networking
{{< highlight sh >}}
echo 'hostname="phoenix"' > /etc/conf.d/hostname
echo 'dns_domain_lo="adyxax.org"
config_eth0="192.168.1.3 netmask 255.255.255.0"
routes_eth0="default via 192.168.1.1"' > /etc/conf.d/net
cd /etc/init.d
ln -s net.lo net.eth0
rc-update add net.eth0 boot
{{< /highlight >}}
### Grub
TODO especially the conf in /etc/default/grub when using an encrypted /
{{< highlight sh >}}
{{< /highlight >}}
### /etc/hosts
{{< highlight sh >}}
scp root@collab-jde.nexen.net:/etc/hosts /etc/
{{< /highlight >}}
### root account access
{{< highlight sh >}}
mkdir -p /root/.ssh
echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDN1ha6PFKgxF3MSWUlDaruVVpj3UzoiN4IJEvDrCnDbIW8xu+TclbeGJSRXXBbqRKeUfhX0GDA7cvSUIAz2U7AGK7wq5tbzJKagVYtxcSHBSi6dZR9KGb3eoshnrCeFzem1jWXG02PZJGvjB+ml3QhUguyAqm9q0n/NL6zzKhGoKiELO+tQghGIY8jafRv4rE4yyXZnwuCu8JI9P8ldGhKgOPeOdKIVTIVezUmKILWgAF+Hg7O72rQqUua9sdoK1mEYme/wgu0bQbvN26owGgBAgS3uc2nngLD01TZToG/wC1wH9A3KxT6+3akjRlPfLOY0BuK4OBGEGm6e0KZrIMhUr8fHQ8nmTmBqw7puI0gIXYB2EjhpsQ7TijYVqLYXbyxaXYyqisgY0QRWC7Te5Io6TSgorfXzi7zrcQGgWByHkhxTylf36LYSKWEheIQIRqytOdGqeXagFMz2ptLFKk4dA61LS5fPXIJucdghvnmLPml8cO9/9VHQ7gq7DxQu7sIwt/W13yTTUyI9DSHwxeHUwECzxAb5pOVL6pRjTMH8q1/eAMl35TFSh6s5tGvvHGz9+gMlE9A2Pv8CyXDBmXV6srrwxTSlglnmgdq6c9w3VtBKu572/z0cS6vqZMgEno4rIiwyhqNWdjbMXYw/U0q/w5XC9zCcSuluxvaY14qqQ== adyxax
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDMdBAFjENiPMTtq90GT3+NZ68nfGxQiRExaYYnLzm1ecmulCvsuA4AOpeLY6f+FWe+ludiw7nhrXzssDdsKBy0QL+XQyvjjjW4X+k9MYhP1gAWXEOGJnjJ/1ovEsMt++6fLyNKLUTA46kErbEehDs22r+rIiEKatrn0BNrJcRI94H44oEL1/ImzVam0cSBL0tPiaJxe60sBs7M76zfyFtVdMGkeuBpS7ee+FLA58fsS3/sEZmkas8MT0QdvZz1y/66MknXYbIaqDSOUACXGF4yVKpogLRRJ1SgNo1Ujo/U3VOR1O4CiQczsZOcbSdjgl0x3fJb7BaIxrZy9iW2I7G/L/chfTvRws+x1s1y5FNZOOiXMCdZjhgLaRwb6p5gMsMVn9sJbhDjmejcAkBKQDkzbvxxhfVkH225FoVXA9YF0msWLyOEyZQYbA8autLDJsAOT5RDfw/G82DQBufAPEBR/bPby0Hl5kjqW75bpSVxDvzmKwt3EpITg9iuYEhvYZ/Zq5qC1UJ54ZfOvaf0PsTUzFePty6ve/JzfxCV1XgFQ+B8l4NSz11loDfNXSUngf7lL4qu5X4aN6WmLFO1YbyFlfpvt3K1CekJmWVeE5mV9EFTUJ4ParVWRGiA4W+zaCOsHgRkcGkp4eYGyWW8gOR/lVxYU2IFl9mbMrC9bkdRbQ== hurricane' > /root/.ssh/authorized_keys
passwd
{{< /highlight >}}
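sshd refuses keys when these files are too permissive, so it is worth tightening them. This is a standard OpenSSH requirement rather than part of the original notes, demonstrated on a scratch directory — on the real host apply it to `/root/.ssh`:

```sh
# Tighten permissions as sshd expects: 700 on .ssh, 600 on authorized_keys.
SSHDIR=$(mktemp -d)/.ssh
mkdir -p "${SSHDIR}"
touch "${SSHDIR}/authorized_keys"
chmod 700 "${SSHDIR}"
chmod 600 "${SSHDIR}/authorized_keys"
stat -c '%a %n' "${SSHDIR}" "${SSHDIR}/authorized_keys"
```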
### Add necessary daemons on boot
{{< highlight sh >}}
rc-update add syslog-ng default
rc-update add cronie default
rc-update add sshd default
{{< /highlight >}}
## TODO
{{< highlight sh >}}
net-firewall/shorewall
...
rc-update add shorewall default
sed '/PRODUCTS/s/=.*/="shorewall"/' -i /etc/conf.d/shorewall-init
rc-update add shorewall-init boot
net-analyzer/fail2ban
echo '[sshd]
enabled = true
filter = sshd
ignoreip = 127.0.0.1/8 10.1.0.0/24 37.187.103.36 137.74.173.247 90.85.207.113
bantime = 3600
banaction = shorewall
logpath = /var/log/messages
maxretry = 3' > /etc/fail2ban/jail.d/sshd.conf
rc-update add fail2ban default
app-emulation/docker
/etc/docker/daemon.json
{ "iptables": false }
rc-update add docker default
app-emulation/lxd
rc-update add lxd default
{{< /highlight >}}
## References
- http://blog.siphos.be/2013/04/gentoo-protip-using-buildpkgonly/
- https://wiki.gentoo.org/wiki/Genkernel
- https://wiki.gentoo.org/wiki/Kernel/Configuration
- https://wiki.gentoo.org/wiki/Kernel
- https://forums.gentoo.org/viewtopic-t-1076024-start-0.html
- https://wiki.gentoo.org/wiki/Binary_package_guide#Setting_up_a_binary_package_host

---
title: "Gentoo Kernel Upgrades"
linkTitle: "Kernel Upgrades"
weight: 1
description: >
Gentoo kernel upgrades on adyxax.org
---
# Gentoo kernel upgrades
## Building on collab-jde
{{< highlight sh >}}
PREV_VERSION=4.14.78-gentoo
eselect kernel list
eselect kernel set 1
cd /usr/src/linux
for ARCHI in $(ls /srv/gentoo-builder/kernels/); do
make mrproper
cp /srv/gentoo-builder/kernels/${ARCHI}/config-${PREV_VERSION} .config
echo "~~~~~~~~~~ $ARCHI ~~~~~~~~~~"
make oldconfig
make -j5
INSTALL_MOD_PATH=/srv/gentoo-builder/kernels/${ARCHI}/ make modules_install
INSTALL_PATH=/srv/gentoo-builder/kernels/${ARCHI}/ make install
done
{{< /highlight >}}
## Deploying on each node
{{< highlight sh >}}
export VERSION=5.4.28-gentoo-x86_64
wget http://packages.adyxax.org/kernels/x86_64/System.map-${VERSION} -O /boot/System.map-${VERSION}
wget http://packages.adyxax.org/kernels/x86_64/config-${VERSION} -O /boot/config-${VERSION}
wget http://packages.adyxax.org/kernels/x86_64/vmlinuz-${VERSION} -O /boot/vmlinuz-${VERSION}
rsync -a --delete collab-jde.nexen.net:/srv/gentoo-builder/kernels/x86_64/lib/modules/${VERSION} /lib/modules/
eselect kernel set 1
cd /usr/src/linux
cp /boot/config-${VERSION} .config
cp /boot/System.map-${VERSION} System.map
(cd usr ; make gen_init_cpio)
make modules_prepare
emerge @module-rebuild
genkernel --install initramfs
grub-mkconfig -o /boot/grub/grub.cfg
{{< /highlight >}}
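After a reboot it is worth confirming the node actually runs the deployed kernel. A trivial check (here `VERSION` is seeded from `uname -r` so the sketch is self-contained; on a real node it is the `VERSION` exported above):

```sh
# Compare the running kernel release with the deployed one.
VERSION="$(uname -r)"   # on a real node: the VERSION exported above
if [ "$(uname -r)" = "${VERSION}" ]; then
    echo "running ${VERSION}"
else
    echo "reboot needed: still on $(uname -r)" >&2
    exit 1
fi
```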

---
title: "LXD"
linkTitle: "LXD"
weight: 1
description: >
How to setup a LXD server
---
{{< highlight sh >}}
touch /etc{/subuid,/subgid}
usermod --add-subuids 1000000-1065535 root
usermod --add-subgids 1000000-1065535 root
emerge -q app-emulation/lxd
/etc/init.d/lxd start
rc-update add lxd default
{{< /highlight >}}
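After the `usermod` calls, `/etc/subuid` and `/etc/subgid` should each contain a line mapping root to the 1000000-1065535 range. A quick sketch of the expected `name:start:count` format, written to a temporary file here:

```sh
# usermod stores ranges as name:start:count; 1000000 + 65536 - 1 = 1065535.
SUBUID=$(mktemp)
echo 'root:1000000:65536' > "${SUBUID}"
awk -F: '$1 == "root" { print $2 + $3 - 1 }' "${SUBUID}"   # → 1065535
```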
{{< highlight sh >}}
myth /etc/init.d # lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: lxdbr0
Would you like LXD to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]: 10.1.0.247
Port to bind LXD to [default=8443]:
Trust password for new clients:
Again:
Invalid input, try again.
Trust password for new clients:
Again:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
{{< /highlight >}}
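The same answers can be replayed non-interactively with `lxd init --preseed`. A sketch matching the choices above — the `dir` storage driver and the trust password are assumptions, adjust to taste:

```yaml
config:
  core.https_address: 10.1.0.247:8443
  core.trust_password: CHANGE_ME   # assumption: set your own
storage_pools:
- name: default
  driver: dir                      # assumption: pick your backend
profiles:
- name: default
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
```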

---
title: "Steam"
linkTitle: "Steam"
weight: 1
description: >
How to make steam work seamlessly on gentoo with a chroot
---
I am not using a multilib profile on Gentoo (I use amd64-only everywhere), so when the time came to install Steam I had to get a little creative. Overall I believe this is the perfect way to install and use Steam, as it is cleanly self-contained without limiting functionality. In particular sound works, as does hardware acceleration in games. I tried to achieve the same thing with containers but never quite made it work as well as this chroot setup.
## Installation notes
Note that there is no permanent link to the most recent stage 3 archive. You will have to browse http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3-amd64/
and adjust the download URL below accordingly:
{{< highlight sh >}}
mkdir /usr/local/steam
cd /usr/local/steam
wget http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3-amd64/stage3-amd64-20190122T214501Z.tar.xz
tar -xvpf stage3*
rm stage3*
cp -L /etc/resolv.conf etc
mkdir usr/portage
mkdir -p srv/gentoo-distfiles
mount -R /dev dev
mount -R /sys sys
mount -t proc proc proc
mount -R /usr/portage usr/portage
mount -R /usr/src usr/src
mount -R /srv/gentoo-distfiles/ srv/gentoo-distfiles/
mount -R /run run
cp /etc/portage/make.conf etc/portage/
sed -e '/LLVM_TARGETS/d' -e '/getbinpkg/d' -i etc/portage/make.conf
rm -rf etc/portage/package.use
cp /etc/portage/package.use etc/portage/
cp /etc/portage/package.accept_keywords etc/portage/
chroot .
env-update && source /etc/profile
wget -P /etc/portage/repos.conf/ https://raw.githubusercontent.com/anyc/steam-overlay/master/steam-overlay.conf
emaint sync --repo steam-overlay
emerge dev-vcs/git -q
emerge --ask games-util/steam-launcher
useradd -m -G audio,video steam
{{< /highlight >}}
## Launch script
Note that we use `su` and not `su -`, since we need to preserve the environment: without it you won't get any sound in game. The pulseaudio socket is shared through the mount of
`/run` inside the chroot:
{{< highlight sh >}}
su
cd /usr/local/steam
mount -R /dev dev
mount -R /sys sys
mount -t proc proc proc
mount -R /usr/portage usr/portage
mount -R /usr/src usr/src
mount -R /run run
chroot .
env-update && source /etc/profile
su steam
steam
{{< /highlight >}}
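The steps above can be collected into a single wrapper script. A sketch that only generates and syntax-checks it here — run the generated script as root on the real machine (it skips the `env-update` step, which matters only for package work inside the chroot):

```sh
# Write the chroot launch sequence to a file and validate it with sh -n.
LAUNCHER=$(mktemp)
cat > "${LAUNCHER}" <<'EOF'
#!/bin/sh
set -e
cd /usr/local/steam
for fs in dev sys usr/portage usr/src run; do
    mountpoint -q "$fs" || mount -R "/$fs" "$fs"
done
mountpoint -q proc || mount -t proc proc proc
exec chroot . su steam -c steam
EOF
sh -n "${LAUNCHER}" && echo "syntax OK"
```

`mountpoint -q` makes the script idempotent, so re-running it after a reboot does not stack bind mounts on top of each other.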
