Homeassistant: Monitor other systems

I wanted to monitor a few other computers with my Homeassistant installation. The first is a PC at a different location that I use for GPU training and compute; it runs Archlinux. The other systems are Raspberry Pis, where I mainly care about disk usage and running out of memory.

There is a package for glances in Archlinux and in Raspberry Pi OS:

# archlinux
sudo pacman -S glances python-bottle python-dateutil
## gpu support
yay -S python-py3nvml

# Raspberry Pi OS / Debian
sudo apt install glances python3-bottle python3-dateutil

The Debian package pulls in a lot of dependencies I don't actually need, but on the other hand I don't want to build my own package or install into a virtualenv.

The second step of the installation is to prepare a systemd service. On both Archlinux and Debian/Raspberry Pi OS we need to edit the unit file, because the default starts glances with -s (server mode for the glances client), but we need -w (the web server that the Homeassistant integration talks to). On Debian we also need to remove the -B bind-address option so the service is reachable from another system.


# disable existing one (only needed for Debian/Raspberry Pi OS)
sudo systemctl stop glances
sudo systemctl disable glances
# copy the template
sudo cp /lib/systemd/system/glances.service /etc/systemd/system/glances-homeassistant.service
# change "glances -s" to "glances -w" and remove the "-B" if needed
sudo vim /etc/systemd/system/glances-homeassistant.service
# reload changed file
sudo systemctl daemon-reload
# start service
sudo systemctl start glances-homeassistant
# also start after reboot
sudo systemctl enable glances-homeassistant
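After editing, the relevant part of the unit file should look roughly like this (paths and the exact default options can differ between distributions; the commented line shows what the Debian default resembles):

```ini
# /etc/systemd/system/glances-homeassistant.service (relevant part)
[Service]
# Debian default was something like: ExecStart=/usr/bin/glances -s -B 127.0.0.1
ExecStart=/usr/bin/glances -w
```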

Finally we set up the sensor in Homeassistant using the Glances integration: enter the correct IP address, leave username/password empty, keep port 61208, set the version to 3, and disable SSL. Adding a username/password doesn't look hard, but I only use this inside my VPN and don't think this data needs extra protection.
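To check that the web server answers before adding it in Homeassistant, the REST API can be queried directly; /api/3/mem is one of the Glances v3 endpoints. A small sketch (the helper names are made up for illustration):

```python
import json
from urllib.request import urlopen


def parse_mem_percent(payload: bytes) -> float:
    """Extract the 'percent' field from a Glances /api/3/mem response."""
    return json.loads(payload)["percent"]


def glances_mem_percent(host: str, port: int = 61208) -> float:
    """Query the Glances web API started with 'glances -w'."""
    with urlopen(f"http://{host}:{port}/api/3/mem") as resp:
        return parse_mem_percent(resp.read())
```

If this returns a number, the Glances integration should be able to talk to the same endpoint.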

Example using the Glances Card:


And a very idle PC displayed in Homeassistant using the Entities Card:


Packer Image Building for RaspiOS Bookworm

Raspberry Pi OS changed a bit for the new release based on Debian Bookworm.

The first change is that the username needs to be set on the first start. This is not possible in my headless setup, so I needed to add a userconf.txt to set the username. I already set the password with an Ansible task, so this was easy to solve.
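For reference, userconf.txt is a single line in the format username:encrypted-password, placed on the boot partition of the SD card; the hash can be generated with openssl passwd -6. A sketch with placeholder values:

```
pi:$6$exampleSalt$exampleHashReplaceMe
```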

The second change is that NetworkManager is now the default and wpa_supplicant is deprecated. An info box in the documentation and no hint on what to do instead is actually a bad move from Raspberry Pi!


A thread in the Raspberry Pi forum helped: I just need to drop a valid nmconnection file into the right place, /etc/NetworkManager/system-connections/. Important: the file needs a uuid set and the correct permissions (readable only by root). So I added the creation of this file to my Ansible setup.
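A minimal .nmconnection sketch for a WPA-PSK network (ssid, psk, and the uuid are placeholders; a uuid can be generated with uuidgen). The file needs to be owned by root with mode 600, otherwise NetworkManager ignores it:

```ini
[connection]
id=home-wifi
uuid=11111111-2222-3333-4444-555555555555
type=wifi

[wifi]
mode=infrastructure
ssid=MyNetwork

[wifi-security]
key-mgmt=wpa-psk
psk=secret-passphrase

[ipv4]
method=auto

[ipv6]
method=auto
```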

Other changes, like Wayland instead of Xorg or PipeWire, are of no interest to me. I use Raspberry Pis only to run sensors or small services headless.

Longest common sequence

I needed a way to match banking statements -- exported from my bank and inserted into hledger -- to the correct accounts. The idea was to find common strings between a few statements of the same account and test them against statements that go into a different account. Spoiler: I was overthinking the problem. A simple "Aldi" in the cascade of ifs is enough. I proofread the automatic matches either way, so some errors are okay and can be fixed in the matching code later.

Still, I implemented a longest-common-sequence algorithm and a small webservice to find the common strings and test them against a negative list. The full code is in https://github.com/mfa/longest-common-sequence and it is currently hosted at https://lcs.madflex.de/. I will probably undeploy it in a few weeks.

Things I learned:

There is a SequenceMatcher readily available in Python's difflib. It compares two strings and returns the longest matching blocks via get_matching_blocks. I used combinations from itertools (also in the Python standard library) to run this for every pair of strings. The whole idea: collect the common strings of all string-pair combinations, then test each candidate against the given positive and negative examples. Positive candidates are kept by checking every match_string with all(map(lambda x: match_string in x, positive_examples)); candidates hitting a negative example are rejected with an any(...). The full code is in algorithm.py.
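The steps above can be sketched like this (a minimal condensation of the idea, not the actual code from algorithm.py; the function names are made up):

```python
from difflib import SequenceMatcher
from itertools import combinations


def common_substrings(strings, min_len=3):
    """Collect the longest matching blocks for every pair of strings."""
    found = set()
    for a, b in combinations(strings, 2):
        for block in SequenceMatcher(None, a, b).get_matching_blocks():
            if block.size >= min_len:
                found.add(a[block.a:block.a + block.size])
    return found


def good_matches(positives, negatives, min_len=3):
    """Keep candidates contained in every positive and in no negative example."""
    return {
        c for c in common_substrings(positives, min_len)
        if all(c in p for p in positives)
        and not any(c in n for n in negatives)
    }
```

For two statements like "ALDI SUED SAGT DANKE" and "ALDI SUED FIL 123" this yields the shared prefix "ALDI SUED " as a match candidate, which is then checked against the negative examples.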

The website looks like this: