HTTP push in Rust

Over the years I have deployed a lot of sensor-pushing Python scripts to ESPs, Raspberry Pis, and even some PCs. A system update a few weeks ago broke a virtualenv and I lost some data, so replacing some of the Python scripts with Rust seemed like a worthwhile endeavour. I started with the Nvidia GPU metrics because that is the system I lost data from.

The original Python code looks like this:

import requests
import nvidia_smi

nvidia_smi.nvmlInit()
handle = nvidia_smi.nvmlDeviceGetHandleByIndex(0)
gpu_temp = nvidia_smi.nvmlDeviceGetTemperature(handle, 0)
res = nvidia_smi.nvmlDeviceGetUtilizationRates(handle)
data = {
    "sensordatavalues": [
        {"value_type": "gpu_temperature", "value": "{:2.0f}".format(gpu_temp)},
        {"value_type": "gpu_load", "value": "{:2.0f}".format(res.gpu)},
    ]
}
r = requests.post("http://10.1.1.1/push/", headers={"Sensor": "ID"}, json=data)

The data structure is a decision from years ago (2015ish) and I don't want to change it now, so the Rust code has to produce the same payload. All pushes happen within a VPN, so there is no auth and no HTTPS. The Rust version takes the URL and the sensor id as CLI arguments. This wasn't necessary for the Python version, because it can easily be edited on the target system.

The code of the Rust version: https://github.com/mfa/sensor-push.
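For comparison, the same CLI-driven design can be sketched in Python with only the standard library. This is not the Rust implementation from the linked repo, just an illustration of the idea; the flag names --url and --sensor-id and the example readings are assumptions:

```python
import argparse
import json
import urllib.request


def build_payload(gpu_temp, gpu_load):
    # same data structure as the 2015-era pusher
    return {
        "sensordatavalues": [
            {"value_type": "gpu_temperature", "value": "{:2.0f}".format(gpu_temp)},
            {"value_type": "gpu_load", "value": "{:2.0f}".format(gpu_load)},
        ]
    }


def push(url, sensor_id, payload):
    # plain HTTP inside the VPN, no auth
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Sensor": sensor_id, "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)


def main():
    # url and sensor id come from the CLI, as in the Rust version
    parser = argparse.ArgumentParser()
    parser.add_argument("--url", required=True)
    parser.add_argument("--sensor-id", dest="sensor_id", required=True)
    args = parser.parse_args()
    # example readings; a real script would query NVML here
    push(args.url, args.sensor_id, build_payload(42.0, 17.0))

# invoked as e.g.: python push.py --url http://10.1.1.1/push/ --sensor-id ID
```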

What I learned when building it:

  • "when it compiles, it will probably run" is a cool guarantee

  • eglot / lsp based on rust-analyzer is really nice

  • writing Rust code still takes me a lot more time than Python - this will improve over time

  • a code review with an expert (a colleague of mine) helped me a lot

This satisfied my curiosity about trying something new. The next step is to deploy a version on a Raspberry Pi (arm64) and add more sensors.

Overpass API

How many restaurants in Stuttgart Mitte actually serve vegan food? This could be answered with Overpass QL on the overpass turbo website, but I wanted to do it from Python using OSMPythonTools. The upside of this Python package: it has a query builder, and Overpass QL doesn't come naturally, at least not to me. To get the bounding box I used bbox-tool. Additionally, I may want to load the results into a SpatiaLite SQLite database later, so already having GeoJSON simplifies that.

Annotated Python code:

import json
# pip install OSMPythonTools
from OSMPythonTools import overpass

api = overpass.Overpass()
# bbox for Stuttgart Mitte (roughly)
bbox_coords = [48.76605, 9.1657, 48.78508, 9.18995]
query = overpass.overpassQueryBuilder(
    bbox=bbox_coords,
    elementType=["node", "way"],
    # all restaurants that have a positive diet:vegan tag
    selector=['"amenity"="restaurant"', '"diet:vegan"~"yes|only"'],
    out="body",
    # include geometry, to export as GeoJson
    includeGeometry=True,
)
result = api.query(query, timeout=60)

data = []
# each element and the json version of it
for item_xml, item_json in zip(result.elements(), result.toJSON()["elements"]):
    # add the geometry to the json version
    item_json["geojson"] = item_xml.geometry()
    data.append(item_json)

# dump as json for later usage
filename = "vegan__" + "_".join(str(i) for i in bbox_coords) + ".json"
with open(filename, "w") as f:
    json.dump(data, f, indent=2)

One example result looks like this (random example; no testimonial):

{
  "type": "node",
  "id": 2998162386,
  "lat": 48.7727435,
  "lon": 9.1766046,
  "tags": {
    "amenity": "restaurant",
    "check_date:diet:vegan": "2022-09-25",
    "diet:vegan": "yes",
    "diet:vegetarian": "yes",
    "name": "reiskorn",
    "opening_hours": "Su-Th 17:00-22:00; Fr 17:00-23:00; Sa 12:00-23:00",
    "phone": "+497116647633",
    "website": "http://www.das-reiskorn.de/"
  },
  "geojson": {
    "type": "Point",
    "coordinates": [
      9.176605,
      48.772743
    ]
  }
}

The geojson part can be a point or a polygon, depending on whether the element was a node or a way.
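Since every dumped item already carries a geojson member, wrapping the results in a GeoJSON FeatureCollection (e.g. for a later SpatiaLite import) is a small transformation. A minimal sketch, using the restaurant node from above as sample input:

```python
import json


def to_feature_collection(items):
    # wrap the dumped Overpass elements in a GeoJSON FeatureCollection;
    # the element's geometry becomes the feature geometry, the OSM tags
    # (plus element type and id) become the feature properties
    features = []
    for item in items:
        features.append(
            {
                "type": "Feature",
                "geometry": item["geojson"],
                "properties": {
                    "osm_type": item["type"],
                    "osm_id": item["id"],
                    **item.get("tags", {}),
                },
            }
        )
    return {"type": "FeatureCollection", "features": features}


# sample element, shaped like the dumped results above
item = {
    "type": "node",
    "id": 2998162386,
    "tags": {"amenity": "restaurant", "name": "reiskorn"},
    "geojson": {"type": "Point", "coordinates": [9.176605, 48.772743]},
}
print(json.dumps(to_feature_collection([item]), indent=2))
```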

For this bounding box (Stuttgart Mitte) only 14 restaurants are found! 😞

Add thumbnails to a tarfile

I have shot quite a few GoPro images over the years. They are stored in one tarfile per recording session, and the folder name records where and when they were shot. With this many images I want a quick overview of the contents without processing the full-size images, so I generate thumbnails. And because the originals are in tarfiles, the thumbnails go into a thumbnail tarfile in the same folder.

This is the Python code using Pillow:

import io
import tarfile
from pathlib import Path
from PIL import Image

folder = Path.home() / "gopro/subfolder"

with (
    tarfile.open(folder / "images.tar") as tar_images,
    tarfile.open(folder / "thumbnails.tar", "w") as tar_thumbs,
):
    for member in tar_images:
        # skip directories and other non-file members
        if not member.isfile():
            continue
        name = member.name
        print(name)

        # get image
        fp = tar_images.extractfile(member)
        # generate thumbnail
        im = Image.open(fp)
        im.thumbnail((256, 256))

        with io.BytesIO() as f:
            # save to bytesio as jpeg
            im.save(f, format="JPEG")
            # generate a tarinfo object
            info = tarfile.TarInfo(name)
            # add length of data
            info.size = len(f.getvalue())
            # move file pointer back to 0
            f.seek(0, io.SEEK_SET)
            # add to thumbnail tarfile
            tar_thumbs.addfile(info, f)

The goal was to avoid temporary files on disk: the image is loaded directly from the tarfile, a thumbnail is generated and stored in a BytesIO object, and that is added to the thumbnail tarfile. The tarfiles don't have compression enabled, because compressing JPEGs doesn't save much space but costs CPU on every access.
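The same addfile/extractfile round trip also works for reading the thumbnails back without touching the disk. A self-contained sketch, with fake JPEG bytes and a made-up member name standing in for a real thumbnail tar:

```python
import io
import tarfile

# build a tiny thumbnail tar in memory (stand-in for thumbnails.tar)
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"\xff\xd8\xff\xe0 fake jpeg bytes"
    info = tarfile.TarInfo("GOPR0001.JPG")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# read one thumbnail back, again without extracting to disk
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    member = tar.getmember("GOPR0001.JPG")
    thumb_bytes = tar.extractfile(member).read()

print(len(thumb_bytes))  # size of the stored thumbnail in bytes
```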