Challenge Alias for Letsencrypt

To renew my letsencrypt certificates I migrated most of them from the http(s)-based challenge to the DNS-based one.
The best tool for this is imho acme.sh.

But be careful: acme.sh changed its default CA to ZeroSSL.
So first set the default back to letsencrypt:
acme.sh --set-default-ca --server letsencrypt

Now to the best feature of the DNS challenge: validating a domain hosted at one DNS provider via the API of another DNS provider.

For example: I use Hetzner DNS API tokens with acme.sh and can generate a new token for every domain.
But I also have domains at INWX, where acme.sh would need my username and password -- that feels wrong, so I don't do it.
Instead I added a CNAME for my INWX domain: _acme-challenge.meinsack.click. CNAME _acme-challenge.meinsack.click.madflex.de.
Now I can use the Hetzner API to renew the certificate for meinsack.click via meinsack.click.madflex.de.
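
A quick sanity check that the CNAME is in place:

dig +short _acme-challenge.meinsack.click CNAME

This should print _acme-challenge.meinsack.click.madflex.de.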
The command for this:
acme.sh --issue --dns dns_hetzner --challenge-alias meinsack.click.madflex.de -d meinsack.click
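
The dns_hetzner hook reads the API token from an environment variable -- if I remember correctly it is HETZNER_Token, but better check the acme.sh dnsapi documentation for your version. The full run then looks roughly like this:

export HETZNER_Token="<token for madflex.de>"
acme.sh --issue --dns dns_hetzner --challenge-alias meinsack.click.madflex.de -d meinsack.click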


Use Parallel to split by line – follow up

After using the parallel pattern described in my previous blog post a few times and improving the speed by quite a bit, I had to write a follow-up.

The previous solution with -N 1 had too much IO wait. The easiest way to fix this was to hand each process a larger chunk of lines, e.g. -N 1000.
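
The pipeline from the previous post stays the same, only the -N value changes:

bzcat bigfile.json.bz2 | parallel -j 16 --pipe --block 100M -N 1000 python extract.py | gzip > output_`date +%s`.csv.gz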

For this to work, the script has to cope with more than one line at a time. The example from the last post now looks like this:

import json
import sys

def extract(lines):
    for line in lines:
        line = line.strip()
        if line.endswith(","):
            # if the file is json and not jsonl
            line = line[:-1]
        if line.startswith(("[", "]")):
            continue
        data = json.loads(line)
        value = data.get("value")
        if value:
            print(f"X:{value}", flush=True)

extract(sys.stdin.readlines())

The important changes are:

  • readlines() instead of readline() to get all lines given by stdin

  • a loop over the lines instead of only one string

  • continue instead of return to process the remaining lines too
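
A quick way to test the multi-line version on its own, without parallel:

bzcat bigfile.json.bz2 | head -n 10000 | python extract.py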

For some of my bigger files the speedup was more than 10x.

Use Parallel to split by line

The problem: process a big jsonl file line by line. The source file is bz2 compressed to save disk space.

Now use GNU parallel to split the file by line and run a command for every line.

bzcat bigfile.json.bz2 | parallel -j 16 --pipe --block 100M -N 1 python extract.py | gzip > output_`date +%s`.csv.gz

Parallel command options used:

-j 16

spawn 16 processes in parallel

--pipe

use pipe mode instead of the default argument mode

--block 100M

increase the block size from the default of 1M to 100M to be sure to get complete datasets -- this may not be necessary

-N 1

always give one dataset to the called process
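
A tiny illustration of what --pipe and -N do together (not part of the real pipeline): every input line becomes its own job, so wc -l should print 1 ten times.

seq 10 | parallel --pipe -N 1 wc -l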

Python usage example

The Python script reads from stdin and extracts the wanted information, for example:

import json
import sys

def extract(string):
    string = string.strip()
    if string.endswith(","):
        # if the file is json and not jsonl
        string = string[:-1]
    if string.startswith(("[", "]")):
        return
    data = json.loads(string)
    value = data.get("value")
    if value:
        print(f"X:{value}", flush=True)

extract(sys.stdin.readline())
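
To see the script in action without the whole pipeline, a made-up record on stdin is enough:

echo '{"value": 42}' | python extract.py

This prints X:42.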