Software problem: Missing virt-manager icons

virt-manager on my NixOS system was missing icons on most of its buttons, which made the application somewhat difficult to use. A silly little problem, apparently of category 3, that wasted several hours of my life.

I reported it as a NixOS bug. The NixOS people responded quickly and were helpful, so after a couple of days I found out the problem can be remedied by installing the humanity-icon-theme package. Hopefully the virt-manager NixOS package gets fixed so that users won’t have to deal with this problem again.
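Until the package is fixed, the workaround can be expressed as a configuration.nix fragment (a sketch, assuming the stock nixpkgs attribute names and that virt-manager is installed via systemPackages):

environment.systemPackages = with pkgs; [
  virt-manager
  humanity-icon-theme
];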

Debian builds with Podman

Since I no longer have Debian on my desktop computer, I had to arrange an environment for building Debian packages. I could use a virtual machine running Debian for that purpose but this would have several disadvantages:

  • It would consume an unnecessary amount of resources.
  • I would have to use a nested build environment such as cowbuilder anyway.
  • I would have to synchronize between my working directories and the virtual machine.

Since I’m moving to Podman, I created a Podman environment for clean Debian package builds. My Containerfile looks like this:

FROM debian:sid
COPY sources.list /etc/apt/
RUN apt-get update && \
    apt-get -y dist-upgrade && \
    apt-get -y install build-essential sudo && \
    apt-get clean
RUN adduser --disabled-password --gecos '' build

sources.list is a replacement for /etc/apt/sources.list, redirecting downloads to the apt proxy I use for all my Debian machines to save bandwidth. The build user is used to build the packages (to prevent installing files to e.g. /usr by mistake). sudo is needed to run commands as the build user with an available tty (this is a difference from su), which keeps the gpg password prompt happy. The following script takes a *.dsc file as its argument and performs the build inside the container:

#!/bin/sh -ex

dscfile="$1"
if [ -z "$dscfile" ] || [ -n "$2" ]; then
    echo "usage: $0 DSC-FILE"
    exit 1
fi
dscfile=$(basename "$dscfile")

user=build
# Directory to look the provided *.dsc file in:
packagedir=/debian
# Directory to put the built package to:
destdir=/debian
export DEB_BUILD_OPTIONS='parallel=8'

cd /home/$user
# Set up signing, the key to be used and pinentry mode to make
# password prompt working in the container:
cp -a /root/.gnupg .
cat >.gnupg/gpg.conf <<EOF
pinentry-mode loopback
EOF
chown -R build:build .gnupg
# Unpack the sources and install build dependencies:
sudo -u $user dpkg-source -x $packagedir/$dscfile
cd ${dscfile%%_*}-*
apt-get -y build-dep .
# Build the package:
sudo -u $user dpkg-buildpackage --changes-option=-S
# Set reasonable owner and group on the host and move the built files
# to the destination directory:
cd ..
chown root:root *
mv *.buildinfo *.changes *.deb *.dsc *.tar.* $destdir/
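As a side note, the cd command above relies on POSIX parameter expansion: ${dscfile%%_*} strips the longest suffix starting at the first underscore, turning a *.dsc file name into the source package name that prefixes the unpacked directory. A quick illustration (the file name is made up):

```shell
# ${var%%pattern} removes the longest suffix matching the pattern,
# i.e. everything from the first '_' on:
dscfile=hello_2.10-2.dsc
echo "${dscfile%%_*}"    # prints: hello
```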

Now the container image can be built:

podman build -t debian-build .

The last thing needed is a script to run the build from the host, in a container named debian-build, with the host package and destination directories in $HOME/debian/:


#!/bin/sh

podman rm -fi debian-build
podman run -it --name debian-build \
       -v $HOME/debian:/debian \
       -v $HOME/.gnupg:/root/.gnupg \
       debian-build / $*

Software problem: Lost network in a Podman container

I use Podman for running various stuff. It’s sometimes challenging due to problems of all categories. One special problem I experienced on one of my computers was that if a container was stopped and then started again, it lost its network interface; only the loopback interface remained available.

I searched the web and hoped it could be related to this bug, so it looked worth trying the newest Podman version. Installing a package from a development version is usually not something one wants to do in a stable installation of a Linux distribution, but on NixOS it should be harmless, so I gave it a try.

It turned out to be quite easy. The nixos-unstable channel must be added:

sudo nix-channel --add https://nixos.org/channels/nixos-unstable nixos-unstable
sudo nix-channel --update

And then it must be imported in /etc/nixos/configuration.nix (or wherever systemPackages are defined) and used for the given package:

{ config, pkgs, ... }:
let
  unstable = import <nixos-unstable> { };
in {
  environment.systemPackages = with pkgs; [
    unstable.podman …
  ];
}

So I got the latest and greatest Podman 3.2.2. Unfortunately it didn’t fix the problem.

Hmm, what now? How to find out what’s happening behind the scenes? There was no obvious error in the journal, and it’s not easy to debug complex tools composed of multiple components.

I noticed that a slirp4netns process was running after the first container run but not after the subsequent runs, so it could still be related to the bug above. What if I restart the container with podman restart? The network was still there. OK, so how about running the container in the background and stopping and starting it using podman stop/start? It still worked. Let’s attach to the container using podman attach and exit from there. After starting the container again, the network was gone. I see.
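For the record, the triggering sequence looked roughly like this (a sketch of the session; the image and container name are made up, and the behavior appeared only on NixOS):

$ podman run -dit --name nettest debian bash
$ podman exec nettest ip addr    # tap0 and lo are present
$ podman attach nettest          # type 'exit' here; the container stops
$ podman start nettest
$ podman exec nettest ip addr    # only lo remains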

Looks like a bug, but how is it possible that nobody has noticed such a relatively visible problem? Probably because it was present only on NixOS; I couldn’t reproduce it on CentOS or Debian. Searching the web again with more specific keywords didn’t help either. But stopping and starting the container again using podman restart brings the network back, which is a sufficient workaround.

I reported the behavior as a NixOS bug. I’m not sure about the software problem category, it may be any of 1, 2, 3. Let’s see what happens with the bug report.

Moving to Podman

Until recently, I used chroot, schroot and LXC to run my nested Linux environments. I’ve never used Docker, which has always looked like a single-purpose and quite limited tool to me. I started looking more closely at Podman last year and realized that this kind of technology may now be mature enough to be generally useful.

It’s important to set up the right storage driver before any containers are created in the storage; otherwise Podman is a huge resource hog in its default setup. The best option is a Btrfs or ZFS file system; then Podman can take advantage of easy and lightweight file system cloning and works well. The storage driver can be configured (for a non-root user) by creating a ~/.config/containers/storage.conf file with the following content (for Btrfs):

[storage]
driver = "btrfs"
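The whole setup can be sketched as a couple of shell commands (assuming the home directory is on Btrfs; remember this must happen before any images or containers exist in the storage):

```shell
# Write the per-user storage configuration read by rootless Podman;
# the [storage] section header is required by the file's TOML format.
mkdir -p ~/.config/containers
cat > ~/.config/containers/storage.conf <<'EOF'
[storage]
driver = "btrfs"
EOF
```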

I’ve been trying to port some of my environments (even from virtual machines where possible) to Podman over the last months. It’s not without problems and it’s sometimes hacky, but the benefit is that I can unify my nested environments under a single tool. Podman looks like an active project that’s hopefully not going to be abandoned anytime soon, so it’s worth investing in learning and using it.

I’ve already experienced switching from one container environment to another in the past, when I switched from Linux-VServer to LXC. At the time, I was annoyed by Linux-VServer features missing in LXC (while LXC brought few advantages over Linux-VServer beyond being supported in the kernel without patches, which was unfortunately important enough), but I could survive it. It’s different with Podman: it provides a lot of features, although it is not that easy to use and is apparently more prone to bugs due to its complexity.

Podman has looked promising and usable for my needs so far. I’m not yet done with the conversions (partly because Podman is not yet available in Debian stable and partly because of the golden rule of not touching things that work and must work), but I don’t think I’ll have to return to (s)chroot and LXC.

Using Podman has the following advantages for me:

  • Podman works on images, which are provided for all the major distributions (not every Linux distribution has an equivalent of debootstrap providing an easy way to install it into a chroot environment).
  • Almost everything can be done from the command line, there is no need to edit configuration files.
  • Many actions can be run under a normal user, without root privileges, while still having root rights inside the container.
  • Cooperation with systemd (although not without flaws).
  • There are a lot of things that can be done with Podman.

And the following disadvantages:

  • Podman can work only on images so it’s not possible to run it simply on an unpacked directory, the other tools are still needed for that purpose.
  • Everything must be done from the command line using various commands, which is not as easy as editing a single schroot or LXC configuration file.
  • AFAIK the only official way to change the configuration of a container is to commit its image, which is an expensive operation, and to create a new container from the new image. Again, it’s simpler in schroot or LXC: edit the configuration file and restart the container, leaving the container file system untouched.
  • There is no Podman in current Debian stable and installing it manually is a non-trivial action.
  • It’s not very transparent; things are hidden behind several different components, internal arrangements of data directories and configuration, and numerous man pages (oh, good manuals are rare these days).

Software problem: KDE Wallet

When I reinstalled my desktop computer, KDE started asking me for a wallet password, each time a short while after I started my web browser. It didn’t look right:

  • I don’t think I’ve ever set a wallet password, and I had deleted the whole KDE configuration after reinstalling the computer.
  • I couldn’t see any reason why my web browser should access the wallet just after starting.

The bogus password prompt kept popping up every day and it got harder to ignore it. I had two options:

  1. Disable the wallet in KDE settings.
  2. Fix the wallet.

Option 1 would be easier and would solve the problem immediately. However, I opted for 2 (a problem of category 1, user curiosity and naivety) because using the wallet could be useful and I also wondered what was trying to access the wallet and why.

It seems KDE Wallet has no usable documentation (another problem of category 1, trying to use software that needs documentation and doesn’t have any), which made the process even more cumbersome than necessary.

The first step was to get rid of the wallet I couldn’t access because it asked for a password I had never set. I found out this can be done in KDE Wallet Manager, where it’s possible to delete the wallet. OK, done.

The next step was to create a new wallet. When I was prompted for the wallet type, I opted for GPG encryption. Then I pressed the Next button and got an error dialog with the following message:

Seems that your system has no keys suitable for encryption. Please set-up at least one encryption key, then try again.

What? I have a GnuPG encryption key, so what’s the problem?

Looking around, I discovered there is an application called kgpg. On the first run, it didn’t look promising. But on the second run kgpg provided a somewhat better error message:

An error occurred while scanning your keyring:

gpg: Oops: keyid_from_fingerprint: no pubkey

Indeed, gpg reported the same error from the command line when listing my keys:

$ gpg -K
gpg: Oops: keyid_from_fingerprint: no pubkey

But it still listed the keys, so what was the problem? I verified that I could encrypt and decrypt files using my default key. After a lot of experiments with gpg commands, I came to the conclusion that my more than a decade old ~/.gnupg directory was no longer all right, perhaps damaged when it was converted from gpg version 1 to version 2. So I exported my private keys, deleted ~/.gnupg and imported the previously exported keys. The wallet still insisted on its error, but kgpg started working. I went through the kgpg wizard successfully and set up everything needed, but the wallet error still remained.

Maybe the problem was that the wallet didn’t understand my old keys. The question is why, when kgpg could see them fine. Nevertheless, it’s not a good idea to use my personal key in KDE, so I generated a new key pair. Then I could finally create a new wallet using that key. And only now, too late, KDE Wallet provided the reason why it couldn’t see my old key: it wasn’t set as ultimately trusted.

The seemingly innocent KDE Wallet password dialog popping up in my new installation led to quite a lot of wasted time, apparently due to problems of categories 2 and 3. The positive effects of it all are that I have cleaned up my ~/.gnupg (way too much; I’ll have to import some of my old stuff) and that I can now use KDE Wallet if it turns out to be of any use.

Software problem: Wahoo mobile application

I wanted to make a bike trip, planned it on komoot and, just before the trip, tried to synchronize the route to my Wahoo ELEMNT. It didn’t work; the Wahoo device couldn’t connect to WiFi. WiFi problems with Wahoo are not uncommon, so I tried to use the Wahoo mobile application to download my route.

The Wahoo application generally works but this time it welcomed me with the following screen:


What?! Why is an account needed to do things like upgrading firmware, changing my device settings (which can be done only in the application, not on the device itself), seeing incoming phone calls, or fetching routes from komoot? It must be either company stupidity or bad intent, clearly a software problem of category 2.

Nevertheless, what could I do at that moment? Creating an account requires agreeing to complicated privacy terms that, to my understanding, permit Wahoo to gather anything. The missing summary explaining the terms in an easy to understand way is probably a GDPR violation; it stopped me from trusting the company, and it stopped me from creating the account right then. My primary concern is that I don’t want to upload my recorded data anywhere. While Wahoo could technically already get it using their application before, maybe the whole point of the account is to get my explicit consent to do that. No, Wahoo, no.

BTW what’s the “seamless ELEMNT Experience” they promise thanks to the account? For example, that data can be transferred 100% wirelessly, without the need to connect the device via cables. What bullshit; don’t they realize that the device cannot be charged wirelessly? I have to connect the device to my computer anyway in order to charge it, and then it’s easier to transfer the data over the cable connection than via a mobile application and some web application.

Now back to the planned route: how to transfer it from komoot? komoot allows downloading routes, but only without embedded navigation instructions, which is not very useful. Another software problem of category 2, a software deficiency.

I couldn’t re-plan the route on my computer in BRouter because I had reinstalled the computer recently and BRouter wasn’t running there yet. I admit relying on proprietary SaaS is a problem of category 1, user stupidity. Well, it wasn’t a trip into the unknown and I decided to ride without navigation, which was OK after all.

But what’s the long-term solution? I’m against throwing away usable devices and replacing them with “better” ones, for multiple reasons. So I’ll probably create the account, to be able to use the device in the future, and I’ll connect the device to the application only when all the recorded tracks have been deleted from it. I hope Wahoo won’t remove the ability to transfer and delete the tracks privately in a future firmware update (I’ll try to avoid firmware updates if possible). And I’ll ask Wahoo from time to time to tell me what data they keep about me, to be sure. GDPR is a very useful tool for dealing with such companies.

I made a mistake when I bought a Polar device (their heart rate monitors are good but their other devices really suck and are miles behind the competition1). Now I can see I made another mistake when I bought a Wahoo device; I’ll never buy anything from Wahoo again. Instead of trying to save money, I should have bought a Garmin device, which is the only remaining option. Garmin isn’t anything great either, considering they produce and use proprietary FIT formats without freely accessible documentation, but at least the last time I checked, it was possible to use Garmin devices, unlike Wahoo or Polar, without any application. Can we still hope that one day a company will come along making sport devices that respect user freedoms? Looking at e.g. the smartphone market, I would say no. Most people are happy with what we have.



It’s unbelievable that one can still make a device that can be configured only using a proprietary Windows application, doesn’t have enough memory to store data from a multi-day bike trip, and can transfer recorded data only using the Windows application or, ehm, a cloud account. The primitive and confusing handling is another problem, but maybe some users like it.

Software problem: Emacs windows not properly maximized

I don’t like wasting screen space so the very first configuration action I do in newly installed desktop environments is making all windows maximized by default.

After I reinstalled my computer, I experienced a problem where new Emacs windows (or frames, in Emacs terminology) created after Emacs had started were not fully vertically maximized, being about two lines shorter than they should be. It can be easily remedied by unmaximizing and then maximizing them again, but since I use lots of Emacs windows, it’s quite annoying to do that all the time.

Some initial observations were:

  • The problem happened only for Emacs windows, not for other applications.
  • The same Emacs version running on a different computer with a different KDE version and on the same monitor worked fine.

This was weird: is it a bug in Emacs, in KDE, or in a combination of both? Or was it an environment issue? I also suspected Emacs could get confused by a font change during startup or some other configuration change. But running emacs -q proved that the problem happens also in the default configuration.

Searching the web didn’t help either. Having no better choice, I tried miscellaneous actions in Emacs until I discovered that the following Emacs setting fixes the problem:

(setq frame-resize-pixelwise t)

This doesn’t make much sense because:

  • The initial Emacs window is all right.
  • The additional Emacs windows were shorter by more than one line.
  • When a short window was unmaximized and maximized again, it got properly maximized.

So it looks like a bug, i.e. a software problem of category 3. It’s hard to say where, though, and it may be bound to special environment properties, so reporting it would probably require a lot of time. The workaround is good enough and I like the given setting anyway.

Software problem: DKIM

I managed to waste my time on yet another thing. I foolishly thought I should finally add DKIM to my mail server. Configuring it in the mail server was relatively easy. But once I started checking whether it actually works, I ran into trouble.

I started with the elliptic curve algorithm (ed25519). OpenDKIM on Debian stable failed to verify the signature due to an unknown algorithm. So I tried sending an e-mail to Gmail, and it reported its DKIM check as failed too. Not very encouraging, but Google is generally incompetent when it comes to e-mail handling1, so I suspected it could be their fault and tried further. The best explanation I could dig out of Gmail was “no key”, which could indicate that Gmail also doesn’t understand the ed25519 algorithm. Indeed, when I fed the failed message from Gmail to a newer version of OpenDKIM, it passed. OK, so I was apparently dealing with a software problem of category 2: an e-mail provider unable to handle modern standards even though it encourages e-mail senders to use DKIM.
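For reference, the DNS side of an ed25519 DKIM key is a short TXT record of roughly this shape (the selector, domain and key below are placeholders):

selector._domainkey.example.com. IN TXT "v=DKIM1; k=ed25519; p=<base64-encoded-public-key>"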

Since this experience indicated ed25519 is still too new, I tried to configure an additional RSA key. But my registrar, Hover, doesn’t accept TXT DNS records long enough. This would be OK if they accepted split DNS records. I managed to create a split record in Hover, apparently bypassing some checks by chance, because my further similar attempts were rejected in the web UI. But the record didn’t work and the corresponding name couldn’t be resolved. After many attempts I gave up; Hover apparently doesn’t support split DNS records, which is a software problem of category 2 again, a registrar DNS deficiency.
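For the record, the standard way to publish an RSA DKIM key that exceeds the 255-byte limit of a single TXT character-string is to split the record into multiple quoted strings, which resolvers concatenate into one value (a zone file sketch with placeholder data):

selector._domainkey.example.com. IN TXT ( "v=DKIM1; k=rsa; "
        "p=MIIBIjANBgkq<first part of the base64 key>"
        "<rest of the base64 key>" )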

So I have DKIM set up and up-to-date SMTP servers can enjoy it. For the rest, nothing has changed; they think DKIM signatures are still missing from my e-mails. I can’t see any reason to waste more time on it.



Just from my personal experience: crippling incoming e-mails by deduplication, a horrible filter system, and spam filtering that’s best disabled because of nonsense false positives.

Czech programming keyboard is back!

I recently reinstalled my computer and switched to a different Linux distribution. Because of the change, I got new (or different) versions of many desktop components. And when configuring my keyboard, I could see the Czech programming keyboard is available in X again. What a relief!

Software problems (and rsync failed to set permissions)

I regret that I have little opportunity to work on free software outside my job these days. One of the reasons is that I keep struggling with things that don’t work, a significant part of them being software problems.

Software problems can be categorized into three groups:

  1. User stupidity problems
  2. Software deficiencies
  3. Bugs

In today’s complex environments it’s often not clear which of the categories a given problem falls into. Nevertheless, a problem is a problem, and it (sometimes) must be solved, or at least a workaround must be found, regardless of the cause and all the unknowns.

I’m starting a series of posts about some of the software problems I experience. I have the following reasons to do that:

  • It’s a therapy. Clearly seeing what I waste time on helps me recognize that I’m objectively prevented from doing useful things, and perhaps find ways to avoid dealing with more or less useless things.
  • Defining and describing a problem often helps to solve it.
  • Some of the problems may be experienced by other users too and the solutions may help them.
  • It’s sort of my documentation of the problems.

Let’s start with something simple today. My backups to external media were breaking due to the following error:

rsync: [generator] failed to set permissions on “…”: Operation not supported (95)

The problem was apparently that symbolic links normally have 777 permissions, but on OpenAFS they have 755. A remedy is to add --no-perms after -a in the rsync command line options. That means rsync won’t preserve permissions at all, but that is not a big problem for this kind of backup, and it’s clearly better than failing backups. I’d say this problem falls into category 2.