At work, I have a working student who’s implementing some features in the various Golang applications that I build and maintain. I’m trying to pass some of my experience with real-world programming on to him, and one particular pull request review escalated into a blog post on API design, so I might as well share it here for archival purposes (and to fill the desolate wasteland that is my RSS feed).
The goal of the pull request was to add a retry function with exponential backoff to a utility library. His proposed API looked like:
//Retry takes a function (action) that returns an error, and two int64 values (x, y) as
//parameters and creates a retry loop with an exponential backoff such that on failure (error return),
//the action is called again after x seconds and this is incremented by a factor of 2 until y minutes
//then it keeps on repeating after y minutes till the action succeeds (no error).
func Retry(action func() error, x, y time.Duration) { ... }
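To make the described behavior concrete, here is a minimal sketch of what such a retry loop could do (the package header and the parameter names are my own additions for illustration; they are not part of his proposal):

package retry

import "time"

// Retry keeps calling action until it returns nil. After each failure, it sleeps
// before the next attempt, starting with initialDelay and doubling the sleep on
// every failure, capped at maxDelay.
func Retry(action func() error, initialDelay, maxDelay time.Duration) {
	wait := initialDelay
	for action() != nil {
		time.Sleep(wait)
		wait *= 2
		if wait > maxDelay {
			wait = maxDelay
		}
	}
}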
The General Data Protection Regulation comes into full force today. I
took the opportunity to add a data privacy statement to the blog. Here it is
in its full glory:
No system under my control records any personal data of users of this website.
This is true because I disabled the nginx access log a long time ago, and
because I restricted the nginx error log to not report 404 errors. So the logs
are basically empty now (except for alert messages from nginx). These are the
relevant lines in my nginx config.
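The directives in question look roughly like this (the error log path and the severity level are assumptions for the sake of the example, not a verbatim copy of my config):

access_log off;                            # no access log, so no IP addresses or user agents are recorded
error_log /var/log/nginx/error.log crit;   # only critical conditions and worse end up in the error log
log_not_found off;                         # 404 errors are not reported to the error log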
No idea why any blogger would be panicking about GDPR.
Side-note: Even though I choose not to wiretap my users’ browsers, I still
have analytics.
So my previous post made the front page of Hacker News.
Awesome! I have seen many websites collapse under the load of an HN crowd, so this is the perfect time to look at the monitoring.
1 megabit per second? That’s lower than I anticipated, even with all the size optimizations that I implemented (or let’s
just say, all the bloat that I purposefully did not add). Same goes for CPU usage: I’ve heard so many people on the
internet complain about how expensive TLS handshakes are, yet my virtual server handles all of this with less than six
percent of a single core. It’s barely even visible on the CPU usage chart, and completely drowned out by noise on the
load chart.
A month ago, danluu wrote about terminal and shell performance. In that post, he
measured the latency between a key being pressed and the corresponding character appearing in the terminal. Across
terminals, median latencies ranged between 5 and 45 milliseconds, with the 99.9th percentile going as high as 110 ms for
some terminals. Now I can see that more than 100 milliseconds is going to be noticeable, but I was certainly left
wondering: Can I really perceive a difference between 5 ms latency and 45 ms latency?
I've noticed that a lot of gifts people give to me do not really evoke the
positive feelings that they are supposed to. This happens often enough to
warrant a blog post, in which I go through different categories of
gifts and present my subjective verdict on each of them.
I want to emphasize that this lament is not directed towards anyone in
particular. However, there is a focus on types of gifts that are most commonly
given by family members and close relatives, since that’s where most of the
gift-giving happens.
I have previously noted that I get all my TLS certificates from Let’s Encrypt, but since my usage of the client
deviates quite a bit from the usual approach, I figured I should take a few minutes to describe my setup.
When you visit this blog, the connection will be encrypted and thus
tamper-proof thanks to a free TLS certificate from
Let’s Encrypt. They’re currently running a crowdfunding campaign to
fund their operational costs. Since I use their service extensively, I gave 50
dollars. If you, too, like the idea of a more secure and privacy-respecting
web, please consider giving generously.
As the first actual content on my new blog, let me tell you the story of how I went absolutely crazy.
On my private systems, I ship configuration as system packages. Every distribution has its own tooling and process for
building these packages, but I eventually grew tired of the ceremony involved, and wrote my own system package
compiler. Since I’m using Arch Linux everywhere, the first version generated only Pacman packages, but I
was determined to make it truly cross-distribution. The first step was support for Debian packages, which I implemented
in a mere two evenings (one for understanding the format, one for writing the generator).
Besides dpkg, the other widely deployed package format is RPM, so I set out to add support for it as well. If I could
write the Debian generator in two evenings, then surely RPM support wouldn't take that long, either. Little did I know
that I was embarking on a multi-month endeavor (including multiple week-long breaks to restore my sanity). To make
matters worse, I stubbornly refused to add dependencies or use existing tooling (i.e., the rpmbuild(1) command). I wanted
to serialize the format directly from my own code, like I did for Pacman and Debian packages.
I had known for a long time that I needed a new blog. I had one years ago in the cloud
(it’s still live), but I definitely wanted something self-hosted this time. I had a
brief look at static website generators, and quickly decided that (as usual) I wanted a custom-tailored solution.
The first iteration is an nginx instance serving static files rendered by a
tiny Go program. Content comes from a
GitHub repo and is pulled every few minutes. Good enough for a first shot.
I might replace the cronjob with a GitHub webhook later on, but only if the delay until the next cronjob
run annoys me enough.
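For now, the pulling is a plain cronjob; it could look roughly like this (path, interval, and the name of the render command are made up for this sketch, not copied from my crontab):

*/5 * * * *  cd /srv/blog-content && git pull --quiet && /usr/local/bin/render-blog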