Automate Compute on Hetzner

Currently I am experimenting with automatically building Raspberry Pi images, and the build process with QEMU and Packer is cooking my notebook. So why does this have to run on my notebook? Because my notebook is the fastest compute I currently have. But why not use a cloud server for that? Of course I could use an AWS EC2 instance or a Google Cloud Compute Engine server, but I wanted to try to automate a Hetzner VPS.

I used hcloud and installed it via the Arch Linux package. To run the commands we need an API token, which can be created on the Hetzner project website. For this I also created a new project, because I don't want a token lying around that can kill my production servers.

Setup of the hcloud environment:

# setup the project, this will ask for the token, "compute" is the name of the local project
hcloud context create compute

# add ssh-key
hcloud ssh-key create --name ssh-key --public-key-from-file ~/.ssh/
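If you juggle multiple projects, hcloud keeps one context per token, and it is worth verifying the setup before creating servers. A quick check, using the context and key names from above:

```shell
# list configured contexts; the active one is marked
hcloud context list

# switch the active context to the "compute" project
hcloud context use compute

# confirm the ssh key was registered in the project
hcloud ssh-key list
```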

Run a compute job on a new server

The structure I use here: a job folder is synced over to the server after creation, and a script inside it is run. The result is then expected to be in a result folder, which is synced back before the server is deleted. The job runs as root because it would need root privileges either way.

An example may look like this:

# install docker using the official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sh ./get-docker.sh

# run the actual job
cd /root/job/
docker build --tag packer-qemu-build .
docker run --rm --privileged -v /dev:/dev -v "$(pwd)":/build packer-qemu-build build packer.json

# copy results
mkdir ~/result
mv raspberry-pi.img ~/result

The script that creates the server, runs the job, and deletes the server:


# create the server; choose server size by selecting a type
hcloud server create --name compute1 --image debian-12 --type cpx11 --ssh-key ssh-key
IP=$(hcloud server ip compute1)

# remove any stale host key for this IP from known_hosts
ssh-keygen -R ${IP}

# wait until ssh login works; accept-new adds the host key to known_hosts
until ssh -o StrictHostKeyChecking=accept-new root@${IP} true >/dev/null 2>&1; do
    echo -n "."
    sleep 1
done
# copy job folder to server
rsync -az job root@${IP}:.

# run script on server
ssh root@${IP} "bash job/"

# download result folder
rsync -azvP root@${IP}:result .

# we are done, delete server
hcloud server delete compute1

There are different types of Hetzner servers; run hcloud server-type list to see them. For me, 2 cores and 2 GB of RAM seemed enough here. Parallel execution happens only for a short time, but the CPX11 still seems faster than the single-CPU CX11 type. I chose Debian 12 because I know Docker runs on it without any problems. This is probably the case for any Linux, but I didn't feel like experiments were needed here.

The cost for all of this is a few cents per hour. For me this is worth it not to heat up my notebook.

The downside of running a job this way is that it is not headless: your shell needs to stay open while it runs. This could be improved with tmux, but I had no interruptions while using this over the last two days, so no need to figure that out.
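If the shell staying open ever becomes a problem, the tmux route could look like this. A sketch, where the session name and the wrapper script run-job.sh (containing the create/sync/run/delete steps) are my own placeholders:

```shell
# start the whole job inside a detached tmux session,
# so the local terminal can be closed while it runs
tmux new-session -d -s compute './run-job.sh'

# reattach later to check on progress
tmux attach -t compute
```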