 assets/base.css                                                           |   2 ++
 content/blog/aws/defaults.md                                              | 254 ++++
 content/blog/aws/secrets.md                                               | 136 +++
 content/blog/cloudflare/importing-terraform.md                            |  59 ++
 content/blog/miscellaneous/generate-github-access-token-for-github-app.md |  67 ++
 content/blog/terraform/email-dns-unused-zone.md                           | 104 +++
 content/books/stormlight_archive/words-of-radiance-audiobook.md           |   9 +
 search/go.mod                                                             |   2 +-
 8 files changed, 632 insertions(+), 1 deletion(-)
diff --git a/assets/base.css b/assets/base.css
index 94cfb9c..24774b6 100644
--- a/assets/base.css
+++ b/assets/base.css
@@ -147,7 +147,9 @@ body header nav ul li a,
body header nav ul li a:visited,
a:hover {
color: var(--red);
+ text-wrap: balance;
}
h2, h3, h4, h5, h6 {
color: var(--green);
+ text-wrap: balance;
}
diff --git a/content/blog/aws/defaults.md b/content/blog/aws/defaults.md
new file mode 100644
index 0000000..9fdbfa3
--- /dev/null
+++ b/content/blog/aws/defaults.md
@@ -0,0 +1,254 @@
+---
+title: Securing AWS default VPCs
+description: With terraform/opentofu
+date: 2024-09-10
+tags:
+- aws
+- opentofu
+- terraform
+---
+
+## Introduction
+
+AWS offers some network conveniences in the form of a default VPC, a default security group (allowing access to the internet) and a default routing table. These exist in every AWS region your accounts have access to, even if you never plan to deploy anything there. And yes, most AWS regions cannot be disabled entirely; only the most recent ones can be.
+
+I feel the need to clean up these resources in order to prevent any misuse. Most people do not understand networking, and some could inadvertently spawn instances with public IP addresses. By making the default VPC inoperative, these people have to come to someone more knowledgeable before they do anything foolish.
+
+## Module
+
+The special default variants of the following AWS terraform resources are quirky: defining them does not create anything, but instead automatically imports the built-in AWS resources and then edits their attributes to match your configuration. Furthermore, destroying these resources only removes them from your state.
+
+``` hcl
+resource "aws_default_vpc" "default" {
+ tags = { Name = "default" }
+}
+
+resource "aws_default_security_group" "default" {
+ ingress = []
+ egress = []
+ tags = { Name = "default" }
+ vpc_id = aws_default_vpc.default.id
+}
+
+resource "aws_default_route_table" "default" {
+ default_route_table_id = aws_default_vpc.default.default_route_table_id
+ route = []
+ tags = { Name = "default - empty" }
+}
+```
+
+The key here (and the initial motivation for this article) is the `ingress = []` expression syntax (likewise for `egress` and `route`): while these attributes are normally block attributes, you can also assign them with `= []` in order to enforce that the resource has no ingress, egress or route rules. Defining the resources without any rule blocks would just leave these attributes untouched.
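+
+To make the distinction concrete, here is a minimal sketch (the resource names are illustrative):
+
+``` hcl
+# Variant 1: no ingress attribute at all. Existing ingress rules on the
+# default security group are adopted and left untouched.
+resource "aws_default_security_group" "adopted" {
+  vpc_id = aws_default_vpc.default.id
+}
+
+# Variant 2: `ingress = []` enforces an empty rule set. Any rule added
+# by hand will be removed again on the next apply. Only one of these
+# two variants would exist in a real configuration.
+resource "aws_default_security_group" "enforced" {
+  ingress = []
+  egress  = []
+  vpc_id  = aws_default_vpc.default.id
+}
+```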
+
+## Iterating through all the default regions
+
+As I said, most AWS regions cannot be disabled entirely; only the most recent ones can be. It is currently not possible to instantiate terraform providers on the fly, though thankfully this is coming in a future OpenTofu release! In the meantime, we need to resort to these kinds of horrors:
+
+``` hcl
+provider "aws" {
+ alias = "ap-northeast-1"
+ profile = var.environment
+ region = "ap-northeast-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-northeast-2"
+ profile = var.environment
+ region = "ap-northeast-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-northeast-3"
+ profile = var.environment
+ region = "ap-northeast-3"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-south-1"
+ profile = var.environment
+ region = "ap-south-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-southeast-1"
+ profile = var.environment
+ region = "ap-southeast-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ap-southeast-2"
+ profile = var.environment
+ region = "ap-southeast-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "ca-central-1"
+ profile = var.environment
+ region = "ca-central-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-central-1"
+ profile = var.environment
+ region = "eu-central-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-north-1"
+ profile = var.environment
+ region = "eu-north-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-west-1"
+ profile = var.environment
+ region = "eu-west-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-west-2"
+ profile = var.environment
+ region = "eu-west-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "eu-west-3"
+ profile = var.environment
+ region = "eu-west-3"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "sa-east-1"
+ profile = var.environment
+ region = "sa-east-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "us-east-1"
+ profile = var.environment
+ region = "us-east-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "us-east-2"
+ profile = var.environment
+ region = "us-east-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "us-west-1"
+ profile = var.environment
+ region = "us-west-1"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+provider "aws" {
+ alias = "us-west-2"
+ profile = var.environment
+ region = "us-west-2"
+ default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+module "ap-northeast-1" {
+ providers = { aws = aws.ap-northeast-1 }
+ source = "../modules/defaults"
+}
+
+module "ap-northeast-2" {
+ providers = { aws = aws.ap-northeast-2 }
+ source = "../modules/defaults"
+}
+
+module "ap-northeast-3" {
+ providers = { aws = aws.ap-northeast-3 }
+ source = "../modules/defaults"
+}
+
+module "ap-south-1" {
+ providers = { aws = aws.ap-south-1 }
+ source = "../modules/defaults"
+}
+
+module "ap-southeast-1" {
+ providers = { aws = aws.ap-southeast-1 }
+ source = "../modules/defaults"
+}
+
+module "ap-southeast-2" {
+ providers = { aws = aws.ap-southeast-2 }
+ source = "../modules/defaults"
+}
+
+module "ca-central-1" {
+ providers = { aws = aws.ca-central-1 }
+ source = "../modules/defaults"
+}
+
+module "eu-central-1" {
+ providers = { aws = aws.eu-central-1 }
+ source = "../modules/defaults"
+}
+
+module "eu-north-1" {
+ providers = { aws = aws.eu-north-1 }
+ source = "../modules/defaults"
+}
+
+module "eu-west-1" {
+ providers = { aws = aws.eu-west-1 }
+ source = "../modules/defaults"
+}
+
+module "eu-west-2" {
+ providers = { aws = aws.eu-west-2 }
+ source = "../modules/defaults"
+}
+
+module "eu-west-3" {
+ providers = { aws = aws.eu-west-3 }
+ source = "../modules/defaults"
+}
+
+module "sa-east-1" {
+ providers = { aws = aws.sa-east-1 }
+ source = "../modules/defaults"
+}
+
+module "us-east-1" {
+ providers = { aws = aws.us-east-1 }
+ source = "../modules/defaults"
+}
+
+module "us-east-2" {
+ providers = { aws = aws.us-east-2 }
+ source = "../modules/defaults"
+}
+
+module "us-west-1" {
+ providers = { aws = aws.us-west-1 }
+ source = "../modules/defaults"
+}
+
+module "us-west-2" {
+ providers = { aws = aws.us-west-2 }
+ source = "../modules/defaults"
+}
+```
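+
+For the record, here is a hedged sketch of what all this boilerplate could collapse to once provider iteration lands in OpenTofu. This syntax is an assumption based on the feature as discussed at the time of writing, not something that works today:
+
+``` hcl
+variable "regions" {
+  type    = set(string)
+  default = ["ap-northeast-1", "eu-west-3", "us-east-1"] # and so on
+}
+
+# One provider configuration instantiated per region.
+provider "aws" {
+  alias    = "by_region"
+  for_each = var.regions
+
+  profile = var.environment
+  region  = each.value
+  default_tags { tags = { "managed-by" = "tofu" } }
+}
+
+# One module instance per region, wired to the matching provider.
+module "defaults" {
+  for_each  = var.regions
+  providers = { aws = aws.by_region[each.value] }
+  source    = "../modules/defaults"
+}
+```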
+
+## Conclusion
+
+Terraform is absolutely quirky at times, but it is not at fault here: the AWS provider and its magical default resources are.
diff --git a/content/blog/aws/secrets.md b/content/blog/aws/secrets.md
new file mode 100644
index 0000000..476d235
--- /dev/null
+++ b/content/blog/aws/secrets.md
@@ -0,0 +1,136 @@
+---
+title: Managing AWS secrets
+description: with the CLI and with terraform/opentofu
+date: 2024-08-13
+tags:
+- aws
+- opentofu
+- terraform
+---
+
+## Introduction
+
+Managing secrets in AWS is not an everyday task, so the specifics never stick in my memory when I need them, especially the `--name` and `--secret-id` CLI inconsistency. I was lacking some simple notes that would save me from searching the web in the future, so here they are.
+
+## CLI
+
+### Creating secrets
+
+From a simple string:
+
+``` shell
+aws --profile common secretsmanager create-secret \
+ --name test-string \
+ --secret-string 'test'
+```
+
+From a text file:
+
+``` shell
+aws --profile common secretsmanager create-secret \
+ --name test-text \
+ --secret-string "$(cat ~/Downloads/adyxax.2024-07-31.private-key.pem)"
+```
+
+For a binary file, we `base64` encode the data:
+
+``` shell
+aws --profile common secretsmanager create-secret \
+ --name test-binary \
+ --secret-binary "$(cat ~/Downloads/some-blob|base64)"
+```
+
+### Updating secrets
+
+Beware that all the other `aws secretsmanager` commands use the `--secret-id` flag instead of the `--name` flag we needed when creating the secret.
+
+Update a secret string with:
+
+``` shell
+aws --profile common secretsmanager update-secret \
+ --secret-id test-string \
+ --secret-string 'test'
+```
+
+### Reading secrets
+
+Listing:
+
+``` shell
+aws --profile common secretsmanager list-secrets | jq -r '[.SecretList[].Name]'
+```
+
+Getting a secret value:
+
+``` shell
+aws --profile common secretsmanager get-secret-value --secret-id test-string
+```
+
+### Deleting secrets
+
+``` shell
+aws --profile common secretsmanager delete-secret --secret-id test-string
+```
+
+## Terraform
+
+### Resource
+
+Secret string:
+
+``` hcl
+resource "random_password" "main" {
+ length = 64
+ special = false
+ lifecycle {
+ ignore_changes = [special]
+ }
+}
+
+resource "aws_secretsmanager_secret" "main" {
+ name = "grafana-admin-password"
+}
+
+resource "aws_secretsmanager_secret_version" "main" {
+ secret_id = aws_secretsmanager_secret.main.id
+ secret_string = random_password.main.result
+}
+```
+
+Secret binary:
+
+``` hcl
+resource "random_bytes" "main" {
+ length = 32
+}
+
+resource "aws_secretsmanager_secret" "main" {
+ name = "data-encryption-key"
+}
+
+resource "aws_secretsmanager_secret_version" "main" {
+ secret_id = aws_secretsmanager_secret.main.id
+ secret_binary = random_bytes.main.base64
+}
+```
+
+### Datasource
+
+``` hcl
+data "aws_secretsmanager_secret_version" "main" {
+ secret_id = "test"
+}
+```
+
+Using the datasource differs depending on whether it contains a `secret_string` or a `secret_binary`. In most cases you will know your secret data and therefore which attribute to use. If for some reason you do not, this might be one of the rare legitimate use cases for the [try function](https://developer.hashicorp.com/terraform/language/functions/try):
+
+``` hcl
+try(
+ data.aws_secretsmanager_secret_version.main.secret_binary,
+ data.aws_secretsmanager_secret_version.main.secret_string,
+)
+```
+
+## Conclusion
+
+Once upon a time I wrote many small and short articles like this one, but for some reason I stopped. I will try to pick up this habit again.
diff --git a/content/blog/cloudflare/importing-terraform.md b/content/blog/cloudflare/importing-terraform.md
new file mode 100644
index 0000000..7fc5dfd
--- /dev/null
+++ b/content/blog/cloudflare/importing-terraform.md
@@ -0,0 +1,59 @@
+---
+title: Importing cloudflare DNS records in terraform/opentofu
+description: a way to get the records IDs
+date: 2024-07-16
+tags:
+- cloudflare
+- opentofu
+- terraform
+---
+
+## Introduction
+
+Managing cloudflare DNS records using terraform/opentofu is easy enough, but importing existing records into your automation is not straightforward.
+
+## The problem
+
+Contrary to AWS, GCP and (I think) all other providers, a `cloudflare_record` terraform resource only specifies one potential value of the DNS record. Because of that, you cannot import a resource using a record's name, since a name can have multiple record values: you need the cloudflare record ID instead.
+
+Sadly, these IDs are elusive and I did not find a way to get them from the web UI dashboard. As best as I can tell, you have to query cloudflare's API for this information.
+
+## Querying the API
+
+Most examples around the Internet use the old way of authenticating with an email and an API key. The modern way is with an API token! An interesting fact is that, while not clearly documented, you can use it as a Bearer token. Here is the little script I wrote for this purpose:
+
+``` shell
+#!/usr/bin/env bash
+set -euo pipefail
+
+if [ "$#" -ne 3 ]; then
+ echo "usage: $(basename "$0") <zone-name> <record-type> <record-name>"
+ exit 1
+else
+ ZONE_NAME="$1"
+ RECORD_TYPE="$2"
+ RECORD_NAME="$3"
+fi
+
+if [ -z "${CLOUDFLARE_API_TOKEN:-}" ]; then
+ echo "Please export a CLOUDFLARE_API_TOKEN environment variable prior to running this script" >&2
+ exit 1
+fi
+
+BASE_URL="https://api.cloudflare.com"
+
+get () {
+ REQUEST="$1"
+ curl -s -X GET "${BASE_URL}${REQUEST}" \
+ -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" \
+ -H "Content-Type: application/json" | jq -r '.result[] | .id'
+}
+
+ZONE_ID=$(get "/client/v4/zones?name=${ZONE_NAME}")
+
+get "/client/v4/zones/${ZONE_ID}/dns_records?name=${RECORD_NAME}&type=${RECORD_TYPE}"
+```
+
+## Conclusion
+
+It works perfectly: with this script I managed to run my `tofu import cloudflare_record.factorio XXXX/YYYY` command and get on with my work.
diff --git a/content/blog/miscellaneous/generate-github-access-token-for-github-app.md b/content/blog/miscellaneous/generate-github-access-token-for-github-app.md
new file mode 100644
index 0000000..c08b92f
--- /dev/null
+++ b/content/blog/miscellaneous/generate-github-access-token-for-github-app.md
@@ -0,0 +1,67 @@
+---
+title: Generating a github access token for a github app in bash
+description: A useful script
+date: 2024-08-24
+tags:
+- bash
+- github
+---
+
+## Introduction
+
+Last week I had to find a way to generate a github access token for a github app.
+
+## The problem
+
+Github apps are the newest and recommended way to provide programmatic access to things that need to interact with github. You get some credentials that allow you to authenticate, then generate a JWT, which you can then exchange for an access token... Lovely!
+
+When developing an "app", all this complexity mostly makes sense, but when all you want is to run some script it really gets in the way. From my research, most people in this situation give up on github apps and either create a robot account or bite the bullet and create personal access tokens. The people who resist and try to do the right thing mostly end up with some nodejs code and quite a few dependencies.
+
+I needed something simpler.
+
+## The script
+
+I took a lot of inspiration from [this script](https://github.com/Nastaliss/get-github-app-pat/blob/main/generate_github_access_token.sh), cleaned it up and ended up with:
+
+``` shell
+#!/usr/bin/env bash
+# This script generates a github access token. It requires the following
+# environment variables:
+# - GITHUB_APP_ID
+# - GITHUB_APP_INSTALLATION_ID
+# - GITHUB_APP_PRIVATE_KEY
+set -euo pipefail
+
+b64enc() { openssl enc -base64 -A | tr '+/' '-_' | tr -d '='; }
+NOW=$(date +%s)
+
+HEADER=$(printf '{
+ "alg": "RS256",
+ "exp": %d,
+ "iat": %d,
+ "iss": "adyxax",
+ "kid": "0001",
+ "typ": "JWT"
+}' "$((NOW+10))" "${NOW}" | jq -r -c .)
+
+PAYLOAD=$(printf '{
+ "exp": %s,
+ "iat": %s,
+ "iss": %s
+}' "$((NOW + 10 * 59))" "$((NOW - 10))" "${GITHUB_APP_ID}" | jq -r -c .)
+
+SIGNED_CONTENT=$(printf '%s' "${HEADER}" | b64enc).$(printf '%s' "${PAYLOAD}" | b64enc)
+SIG=$(printf '%s' "${SIGNED_CONTENT}" | \
+ openssl dgst -binary -sha256 -sign <(printf "%s" "${GITHUB_APP_PRIVATE_KEY}") | b64enc)
+JWT=$(printf '%s.%s' "${SIGNED_CONTENT}" "${SIG}")
+
+curl -s --location --request POST \
+ "https://api.github.com/app/installations/${GITHUB_APP_INSTALLATION_ID}/access_tokens" \
+ --header "Authorization: Bearer $JWT" \
+ --header 'Accept: application/vnd.github+json' \
+ --header 'X-GitHub-Api-Version: 2022-11-28' | jq -r '.token'
+```
+
+## Conclusion
+
+It works, is simple and only requires bash, jq and openssl.
diff --git a/content/blog/terraform/email-dns-unused-zone.md b/content/blog/terraform/email-dns-unused-zone.md
new file mode 100644
index 0000000..cc8dc77
--- /dev/null
+++ b/content/blog/terraform/email-dns-unused-zone.md
@@ -0,0 +1,104 @@
+---
+title: Email DNS records for zones that do not send emails
+description: Automated with terraform/opentofu
+date: 2024-09-03
+tags:
+- cloudflare
+- DNS
+- opentofu
+- terraform
+---
+
+## Introduction
+
+There are multiple DNS records one needs to configure in order to setup and securely use a domain to send or receive emails: MX, DKIM, DMARC and SPF.
+
+An often overlooked fact is that you also need to configure some of these records even if you do not intend to use a domain to send emails. If you do not, scammers will spoof your domain to send fraudulent emails and your domain's reputation will suffer.
+
+## DNS email records you need
+
+### SPF
+
+The most important (and only required) record is a TXT record on the apex of your domain that advertises that no server may send emails for your domain:
+```
+"v=spf1 -all"
+```
+
+### MX
+
+If you do not intend to ever send emails, you probably do not intend to receive any either. Therefore you should consider removing all MX records from your zone. Oftentimes your registrar will have provisioned some, pointing to a free email infrastructure they provide along with your domain.
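+
+If you prefer being explicit rather than simply having no MX records, there is also the "null MX" convention from RFC 7505: a single MX record with priority 0 pointing to `.`, advertising that the domain accepts no mail. A sketch with the cloudflare provider (the resource name and zone key are illustrative, relying on the zone datasource defined further down):
+
+``` hcl
+resource "cloudflare_record" "null_mx" {
+  name     = "@"
+  type     = "MX"
+  priority = 0
+  value    = "."
+  zone_id  = data.cloudflare_zone.main["anne-so-et-julien.fr"].id
+}
+```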
+
+### DKIM
+
+You do not need DKIM records if you are not sending emails.
+
+### DMARC
+
+While not strictly necessary, I strongly recommend setting a DMARC record that instructs the world to explicitly reject all emails not matching the SPF policy:
+
+```
+"v=DMARC1;p=reject;sp=reject;pct=100"
+```
+
+## Terraform / OpenTofu code
+
+### Zones
+
+I use a map of simple objects to specify email profiles for my DNS zones:
+``` hcl
+locals {
+ zones = {
+ "adyxax.eu" = { emails = "adyxax" }
+ "adyxax.org" = { emails = "adyxax" }
+ "anne-so-et-julien.fr" = { emails = "no" }
+ }
+}
+
+data "cloudflare_zone" "main" {
+ for_each = local.zones
+
+ name = each.key
+}
+```
+
+### SPF
+
+Then I map each profile to SPF records:
+``` hcl
+locals {
+ spf = {
+ "adyxax" = "v=spf1 mx -all"
+ "no" = "v=spf1 -all"
+ }
+}
+
+resource "cloudflare_record" "spf" {
+ for_each = local.zones
+
+ name = "@"
+ type = "TXT"
+ value = local.spf[each.value.emails]
+ zone_id = data.cloudflare_zone.main[each.key].id
+}
+```
+
+### DMARC
+
+The same mapping system we had for SPF could be used here too, but I chose to keep things simple and within the scope of this article. My real setup has some clever tricks to centralize DMARC notifications to a single domain, which will be the subject of another post:
+
+``` hcl
+resource "cloudflare_record" "dmarc" {
+ for_each = { for name, info in local.zones :
+ name => info if info.emails == "no"
+ }
+
+ name = "@"
+ type = "TXT"
+ value = "v=DMARC1;p=reject;sp=reject;pct=100"
+ zone_id = data.cloudflare_zone.main[each.key].id
+}
+```
+
+## Conclusion
+
+Please keep your email DNS records tight and secure!
diff --git a/content/books/stormlight_archive/words-of-radiance-audiobook.md b/content/books/stormlight_archive/words-of-radiance-audiobook.md
new file mode 100644
index 0000000..03fa6dd
--- /dev/null
+++ b/content/books/stormlight_archive/words-of-radiance-audiobook.md
@@ -0,0 +1,9 @@
+---
+title: "Words of Radiance"
+date: 2024-08-14
+description: Brandon Sanderson
+---
+
+I just finished listening to the [Graphics Audio](https://www.graphicaudiointernational.net/the-stormlight-archive-2-download-series-set.html) adaptation of [Words of Radiance]({{< ref "words-of-radiance" >}}). Just like for [The Way of Kings]({{< ref "the-way-of-kings-audiobook" >}}), I must say it was a fantastic experience that I highly recommend. The production quality is just as good, and they kept the same actors! I was afraid the voices might not be consistent from one book to the next, but they were!
+
+It was a joy to go through this book again three and a half years later. I caught so many details and references that I missed on my first read; I really recommend revisiting the story in this new format.
diff --git a/search/go.mod b/search/go.mod
index af84411..64891e4 100644
--- a/search/go.mod
+++ b/search/go.mod
@@ -1,6 +1,6 @@
module git.adyxax.org/adyxax/www/search
-go 1.22.3
+go 1.23.0
require github.com/stretchr/testify v1.9.0