Travel costs this year

From November 2021 until the end of November 2022 I had a BahnCard25, bought with a big discount (30€ off). At the beginning of 2022 I wrote about calculating whether the card is worth it. In the summer (June until the end of August) Germany had the 9€-Ticket, so I had no use for the BahnCard during that time. Still, the end result is that I saved 12.10€ by having the BahnCard25.

For 2023 I decided not to renew the BahnCard25. I would not have saved enough to justify the full price, and Germany will probably get the successor of the 9€-Ticket: the 49€-Ticket (starting in April 2023). But I still want to know whether a BahnCard25 would have been worth it. So I will track this with virtual postings, the same way as in 2022: adding how much money I would have saved in the months before April 2023 and whenever I use faster trains (IC, ICE).
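
Such a tracking entry with a virtual posting could look like this in the journal (the trip, amounts and account name are made up for illustration, not taken from my real journal):

2023-01-15 train ticket
    Expenses:Train:Deutsche Bahn     €19.90
    Assets:Checking                 €-19.90
    ; 25% of the fare - what a BahnCard25 would have saved
    (Virtual:BahnCard25-Savings)      €4.98

A balance report on the virtual account then sums up the hypothetical savings over the year.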

As preparation for the 49€-Ticket I wanted to know: how much money did I spend per month on public transport and regional trains?
For my data, the hledger command looks like this:

hledger balance ^Expenses:Train ^Expenses:ÖPNV --monthly -b 2022-01-01

                              ||    Jan  Feb    Mar     Apr     May     Jun  Jul    Aug     Sep     Oct     Nov
==============================++================================================================================
 Expenses:Train:Deutsche Bahn || €42.60    0      0       0  €55.30   €7.55    0      0  €21.35       0   €7.85
 Expenses:ÖPNV:VVS            || €10.89    0  €9.74  €33.49  €20.17  €19.60    0  €9.00  €22.38  €18.09  €29.99
------------------------------++--------------------------------------------------------------------------------
                              || €53.49    0  €9.74  €33.49  €75.47  €27.15    0  €9.00  €43.73  €18.09  €37.84

In June I paid the 9€ for June and July, and in August the 9€ for August. But in all three months I used public transport and regional trains a lot more than in previous years. I didn't track how much money I saved during that time because it was very obvious that 9€/month was worth it.
Looking at the table, it seems I would have paid more than 49€ in only two months. But as last summer showed, I'll probably use public transport and especially regional trains a lot more than I would without a flat-rate ticket. This time I plan to track the money I would have saved.

Stuendlich Bot

I wanted to have a message in my ActivityPub timeline every full hour.
So I wrote a bot that sends this message, with a few config options.
The bot is modeled after @zurvollenstunde@twitter.com.

Some design decisions for my version:
  • use the ActivityPub API, compatible with Mastodon, GotoSocial and others

  • use only core Python 3.7+: no requests, httpx or httpie - only urllib.request - so it can run anywhere (see the sketch below)

  • no real templating - a format string has to be enough

  • use dataclasses to access config values in a nicer way

  • allow setting the timezone
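
The core fits in a few lines. Here is a minimal sketch, assuming a Mastodon-compatible API; the instance URL, token, template and function names are placeholders, not the bot's real configuration or code:

# A minimal sketch of the core, assuming a Mastodon-compatible API.
# Instance URL, token and template are placeholders.
import json
import time
import urllib.parse
import urllib.request
from dataclasses import dataclass

@dataclass
class Config:
    instance: str                # e.g. "https://example.social" (placeholder)
    token: str                   # access token with write scope
    template: str                # format string, e.g. "Es ist {hour} Uhr."
    visibility: str = "private"  # followers-only; "direct" would be a DM

def wait_for_next_full_hour() -> int:
    # sleep until the next full hour; assumes a whole-hour UTC offset
    # (true for Europe/Berlin)
    now = time.time()
    next_hour = (int(now) // 3600 + 1) * 3600
    time.sleep(next_hour - now)
    return time.localtime(next_hour).tm_hour

def post_status(cfg: Config, hour: int) -> dict:
    # plain urllib.request from the standard library,
    # no third-party HTTP client
    data = urllib.parse.urlencode({
        "status": cfg.template.format(hour=hour),
        "visibility": cfg.visibility,
    }).encode()
    req = urllib.request.Request(
        cfg.instance + "/api/v1/statuses",
        data=data,
        headers={"Authorization": "Bearer " + cfg.token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)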

The bot: @stuendlich@cress.space (German, Europe/Berlin, sends private messages)

/images/stuendlich-screenshot.png

SQLite triggers as replacement for a materialized view

To speed up a database view in SQLite, a materialized view would be perfect, but SQLite has no materialized views. So let's build something similar that solves the speed problem by using table triggers.

Assume we have two tables: the first one holds the request of something, the second one the results, which come back a bit later. The ordering is guaranteed because the job that inserts the dataset into the table also makes the HTTP call requesting the result, which is then pushed back via HTTP. Before adding the triggers, a database view was used as the source for a table/plot.

Tables:

-- dataset sent for processing
CREATE TABLE "datasets" ("uid" TEXT, "blob" TEXT, "created" TEXT);

-- result coming back
CREATE TABLE "results" ("id" TEXT, "uid" TEXT, "result" TEXT, "modified" TEXT);

-- timing reports
CREATE TABLE "reports" ("uid" TEXT, "created" TEXT, "modified" TEXT, "delta_sec" FLOAT);

Now add two triggers: one that INSERTs a report row when a dataset is inserted, and one that UPDATEs it when the result is inserted:

CREATE TRIGGER IF NOT EXISTS insert_report AFTER INSERT ON datasets
BEGIN
INSERT INTO reports (created, uid) VALUES (NEW.created, NEW.uid);
END;

CREATE TRIGGER IF NOT EXISTS update_report AFTER INSERT ON results
BEGIN
UPDATE reports SET modified=NEW.modified, delta_sec=round((julianday(NEW.modified) - julianday(created)) * 86400, 3) WHERE uid=NEW.uid;
END;

Using julianday was my first working solution: julianday returns fractional days, so multiplying the difference by 86400 turns it into seconds, including fractions of a second. There might be a better way.

Now let's see if this works:

INSERT INTO datasets ("uid", "blob", "created") VALUES ('123', '"some json data here"', '2022-12-08 17:38:01.743674');

SELECT * FROM reports WHERE uid='123';
-- returns
-- 123|2022-12-08 17:38:01.743674||

INSERT INTO results ("id", "uid", "result", "modified") VALUES ('1', '123', '"some json result here"', '2022-12-08 17:38:02.12345');

SELECT * FROM reports WHERE uid='123';
-- returns
-- 123|2022-12-08 17:38:01.743674|2022-12-08 17:38:02.12345|0.38

The datetimes used here are generated in Python code because the milliseconds are important. The "id" is returned by the processing job.
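
For illustration, a minimal sketch of how such a timestamp can be generated and inserted from Python (schema as above; that the job uses UTC is my assumption):

# a minimal sketch, assuming the schema above; UTC is an assumption
import sqlite3
from datetime import datetime

con = sqlite3.connect("example.db")
# e.g. "2022-12-08 17:38:01.743674" - microsecond precision included
created = datetime.utcnow().isoformat(sep=" ")
con.execute(
    "INSERT INTO datasets (uid, blob, created) VALUES (?, ?, ?)",
    ("123", '"some json data here"', created),
)
con.commit()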

Now we have a table that can be used to plot processing times per time window - even for larger amounts of data.
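
A minimal sketch of the query side, bucketing the reports per hour as input for a plot (table and column names as defined above):

# average processing time per hour, ready for plotting
import sqlite3

con = sqlite3.connect("example.db")
rows = con.execute(
    "SELECT strftime('%Y-%m-%d %H:00', created) AS bucket,"
    " round(avg(delta_sec), 3), count(*)"
    " FROM reports WHERE delta_sec IS NOT NULL"
    " GROUP BY bucket ORDER BY bucket"
).fetchall()
for bucket, avg_sec, n in rows:
    print(bucket, avg_sec, n)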