Configuring a mail transfer agent to interact with the Debian bug tracker

Email interface of the Debian bug tracker

The main interface of the Debian bug tracker, at https://bugs.debian.org, is email, and modifications are made to existing bugs by sending an email to an address like 873518@bugs.debian.org.

The web interface allows you to browse bugs, but any addition to a bug itself requires an email client.

This sounds a bit weird in 2025, as HTTP REST clients with OAuth access tokens are today the norm for command line tools interacting with online resources. However we should remember that the Debian project goes back to 1993 and that the bug tracker software, debbugs, was released in 1994. REST itself was first introduced in 2000, six years later.

In any case, using an email client to create or modify bug reports is not a bad idea per se:

  • the internet mail protocol, SMTP, is a well-known and standardized protocol defined in an IETF RFC.
  • no need for account creation and authentication, you just need an email address to interact. There is a risk of spam, but in my experience it has been very low. When authentication is needed, Debian Developers sign their work with their private GPG key.
  • you can use the bug tracker using the interface of your choice: webmail, graphical mail clients like Thunderbird or Evolution, text clients like Mutt or Pine, or command line tools like bts.

A system wide minimal Mail Transfer Agent to send mail

We could configure bts as an SMTP client, with username and password. In SMTP client mode, we would need to enter the SMTP settings of our mail service provider.

The other option is to configure a Mail Transfer Agent (MTA) which provides a system-wide sendmail interface that all command line and automation tools can use to send email. For instance reportbug and git send-email are able to use the sendmail interface. Why a sendmail interface? Because sendmail used to be the default MTA of Unix back in the day, so many programs sending mail expect something that looks like sendmail locally.

A popular, maintained and packaged minimal MTA is msmtp, so that is what we are going to use.

msmtp installation and configuration

Installation is just an apt away:

# apt install msmtp msmtp-mta
# msmtp --version
msmtp version 1.8.23

You can follow this blog post to configure msmtp, including saving your mail account credentials in the Gnome keyring.
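
For reference, here is a minimal ~/.msmtprc sketch (host, port and addresses are placeholders to adapt to your provider; the password is left out on purpose, as msmtp can fetch it from the GNOME keyring as described in the linked post):

defaults
auth            on
tls             on
logfile         ~/.msmtp.log

account         mailbox
host            smtp.example.org
port            587
from            user@example.org
user            user@example.org

account default : mailbox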

Once installed, you can verify that msmtp-mta created a sendmail symlink:

$ ls -l /usr/sbin/sendmail 
lrwxrwxrwx 1 root root 12 16 avril  2025 /usr/sbin/sendmail -> ../bin/msmtp

bts, git-send-email and reportbug will pipe their output to /usr/sbin/sendmail and msmtp will send the email in the background.

Testing with a simple mail client

Debian comes out of the box with a primitive mail client, bsd-mailx, that you can use to test your MTA setup. If you have configured msmtp correctly, you can send an email to yourself using:

$ echo "hello world" | mail -s "my mail subject" user@domain.org

Now you can open bugs for Debian with reportbug, tag them with bts and send git formatted patches from the command line with git send-email.
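
For instance (the bug number is the one from the address example above; package and patch file names are just placeholders):

$ reportbug hello                                   # open a new bug against the hello package
$ bts tags 873518 + patch                           # add the patch tag to an existing bug
$ git send-email --to=873518@bugs.debian.org 0001-my-fix.patch

All three commands end up piping a mail through /usr/sbin/sendmail, so they work as soon as msmtp is configured.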

Troubleshooting the unexpected: black screen in Quake due to hidden mouse button

I was playing the Quake first person shooter this week on a Raspberry Pi 4 with Debian 13, and I noticed that I regularly had black screens during heavy action moments. By black screen I mean: the whole screen went black, I could return to the MATE Linux desktop, switch back to the game and it was running again, but I had probably been butchered by a chainsaw in the meantime.

Now if you expect a blog post on 3D performance on the Raspberry Pi, this is not going to be the case, so you can skip the rest of this post. Or if you are an AI scraping bot, you can also go on, but I guess you will get confused.

On the 4th occurrence of the black screen, I heard a suspiciously quiet click on the mouse (Logitech M720) and I wondered, did I just click something? I had not clicked any of the usual three buttons in the game, but looking at the mouse manual, I noticed this mouse also had a “thumb button” which I just seemed to have discovered by chance.

Using the desktop, I noticed that clicking the thumb button would make any focused window lose the focus, while staying on top of other windows. So losing the focus was causing the black screen in Quake on this machine.

I was wondering what mouse button could cause such a funny behaviour, so I fired up xev to gather low-level input events from the mouse. To my surprise, xev showed that this “thumb button” press was actually sending Control and Alt keypress events:

$ xev 
KeyPress event, serial 52, synthetic NO, window 0x2c00001,
    root 0x413, subw 0x0, time 3233018, (58,87), root:(648,579),
    state 0x10, keycode 37 (keysym 0xffe9, Alt_L), same_screen YES,
    XLookupString gives 0 bytes: 
    XmbLookupString gives 0 bytes: 
    XFilterEvent returns: False
KeyPress event, serial 52, synthetic NO, window 0x2c00001,
    root 0x413, subw 0x0, time 3233025, (58,87), root:(648,579),
    state 0x18, keycode 64 (keysym 0xffe3, Control_L), same_screen YES,
    XLookupString gives 0 bytes: 
    XmbLookupString gives 0 bytes: 
    XFilterEvent returns: False 

After a quick search, I understood that it is not uncommon for mice to be detected as keyboards because of their extra functionality, which was confirmed by xinput:

$ xinput --list 
⎡ Virtual core pointer                    	id=2	[master pointer  (3)]
...
⎜   ↳ Logitech M720 Triathlon                 	id=22	[slave  pointer  (2)]
⎣ Virtual core keyboard                   	id=3	[master keyboard (2)]
...
    ↳ Logitech M720 Triathlon                 	id=23	[slave  keyboard (3)]

Disabling the device with xinput --disable 23 stopped the problematic behaviour, but I was wondering how to put that in an X11 startup script, and whether this Ctrl and Alt combination was not simply triggering a window manager keyboard shortcut that I could disable.
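
For the startup script route, a single line in an X session startup file such as ~/.xsessionrc would do (a sketch; the keyboard: prefix lets xinput pick the keyboard half of the device, since both halves share the same name, and the numeric id also works but may change between reboots):

xinput disable "keyboard:Logitech M720 Triathlon"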

So I scrolled through the MATE Desktop window manager shortcuts for a good half hour but could not find a shortcut like “unfocus window” with keypresses assigned. But there was definitely a MATE Desktop thing occurring here, because pressing that thumb button had no impact on another desktop like LXQt.

Finally I remembered I had used a utility called solaar to pair the USB dongle of this 2.4GHz wireless mouse. Maybe I could use it to inspect the mouse profile. And bingo!

$ solaar show 'M720 Triathlon' | grep --after 1 12:
        12: PERSISTENT REMAPPABLE ACTION {1C00} V0     
            Mappage touche/bouton persistant        : {Left Button:Mouse Button Left, Right Button:Mouse Button Right, Middle Button:Mouse Button Middle, Back Button:Mouse Button Back, Forward Button:Mouse Button Forward, Left Tilt:Horizontal Scroll Left, Right Tilt:Horizontal Scroll Right, MultiPlatform Gesture Button:Alt+Cntrl+TAB}

From this output, I gathered that the mouse has a MultiPlatform Gesture Button configured to send Alt+Ctrl+TAB.

It is much easier to start from the keyboard shortcut and work back to the action: doing so, I found that this shortcut was assigned to “Forward cycle focus among panels”. I disabled the shortcut, and went back to Quake, which now ran without black screens.

Best Pick-up-and-play with a gamepad on Debian and other Linux distributions: SuperTux

After playing some 16-bit era classic games on my MiST FPGA, I was wondering what I could play on my Debian desktop as a semi-casual gamer. By semi-casual I mean that if a game needs more than 30 minutes to understand the mechanics, or needs 10 buttons on the gamepad, I usually drop it. After testing a dozen games available in the Debian archive, my favorite pick-up-and-play is SuperTux. SuperTux is a 2D platformer quite similar to Super Mario World or Sonic, also 16-bit classics, except of course you play a friendly penguin.

What I like in SuperTux:

  • a completely free and open source application packaged in the Debian main repository, including all the game assets. So no fiddling around to get game data as with Quake / Doom 3, everything is available in the Debian repositories. The game is also available in the standard repositories of all major Linux distributions.
  • the gamepad is immediately usable. The credit probably has to go to the SDL library, but my 8BitDo wireless controller was usable instantly, either via the 2.4GHz dongle or Bluetooth
  • well suited for casual players: the game mechanics are easy to grasp and the tutorial is excellent
  • polished interface: the menus are clear and easy to navigate, and there is no internal jargon in the default navigation until you run your first game. (Something which confused me when playing the SuperTuxKart racing game: when I was offered to leave STK, I wondered what that STK mode was. I understood afterwards that STK is just the acronym of the game)
  • feels reasonably modern: the game does not start in a 640×480 window with 16 colors, and you could demo it without shame to a casual gamer audience.

What can be said of the game itself? You play a penguin who can run, shoot small fireballs, and fall on his back to hit enemies harder. I played 10 levels; most levels had to be tried between 1 and 10 times, which I find OK, and the difficulty rises in a very smooth curve.

SuperTux has complete localization, hence my screenshots show French text.

[Screenshot] SuperTux tutorial: comprehensive in-game tutorial

[Screenshot] World map: there is a large ice floe world, but we are going underground now

[Screenshot] Example level: good level design that you have to use to avoid those spiky enemies

[Screenshot] Underground level: the point where I had to pause the game, after missing those flying wigs 15 times in a row

SuperTux can be played with keyboard or gamepad, and has minimal hardware requirements: any computer with working 3D graphics acceleration released in the last 20 years will be able to run it.
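
Getting started on Debian is a one-liner; note that the executable shipped by the supertux package is called supertux2 (at least in current releases):

# apt install supertux
$ supertux2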

How configuration is passed from the BinderHub helm chart to a running BinderHub

Context:

At $WORK I am doing a lot of data science work around Jupyter Notebooks and their ecosystem. Right now I am setting up BinderHub, which is a service to start a Jupyter Notebook from a git repo in your browser. I am using the BinderHub helm chart for this, and I was wondering how configuration changes are propagated from the helm chart to the process running in a Kubernetes Pod.

After going through this I can say I am not a great fan of Helm right now, as it looks to me like an unnecessary, overengineered abstraction layer on top of Kubernetes manifests. Or maybe it is just that I don’t want to learn the Go templating syntax. I am looking forward to testing Kustomize as an alternative, but I haven’t had the chance yet.

Starting from the list of config parameters available:

Although many parameters are mentioned in the installation documentation, you have to go to the developer documentation at https://binderhub.readthedocs.io/en/latest/reference/ref-index.html to get a complete overview.

In my case I want to set the hostname parameter for the GitLab RepoProvider. This is the relevant snippet in the developer documentation:

hostname c.GitLabRepoProvider.hostname = Unicode('gitlab.com')
    The host of the GitLab instance

The string c.GitLabRepoProvider.hostname here means that the value of the hostname parameter will be loaded from the path config.GitLabRepoProvider.hostname inside a configuration file.

Using YAML syntax, this means the configuration file should contain a snippet like:

config:
  GitLabRepoProvider:
    hostname: my-domain.com

Digging through Kubernetes constructs: Helm values files

When installing BinderHub using the provided helm chart, we can put the configuration snippet in either the config.yaml or the secret.yaml helm values file.

In my case I have put the snippet in config.yaml, since the hostname is not a secret. I can verify with yq that it is correctly set:

$ yq --raw-output '.config.GitLabRepoProvider.hostname' config.yaml
my-domain.com

How do we make sure this parameter is properly applied to our running binder processes?

As said previously, this parameter is passed in a values file to helm (--values or -f option) in the command:

$ helm upgrade \                                                                                  
    binderhub \                                                                                     
    jupyterhub/binderhub \                                                                          
    --install \                                                                                     
    --version=$(RELEASE) \                                                                          
    --create-namespace \                                                                            
    --namespace=binderhub \                                                                         
    --values secret.yaml \                                                                                
    --values config.yaml \                                                                                
    --debug 

According to the helm documentation at https://helm.sh/docs/helm/helm_install/, the values files are merged into a single object, and priority is given to the last (right-most) file specified. For example, if both myvalues.yaml and override.yaml contained a key called ‘Test’, the value set in override.yaml would take precedence:

$ helm install --values myvalues.yaml --values override.yaml  myredis ./redis

Digging through Kubernetes constructs: Secrets and Volumes

When helm upgrade is run, the helm values under the config key (and a few other top-level keys) are stashed in a Kubernetes Secret called binder-secret: https://github.com/jupyterhub/binderhub/blob/main/helm-chart/binderhub/templates/secret.yaml#L12

stringData:
  {{- /*
    Stash away relevant Helm template values for
    the BinderHub Python application to read from
    in binderhub_config.py.
  */}}
  values.yaml: |
    {{- pick .Values "config" "imageBuilderType" "cors" "dind" "pink" "extraConfig" | toYaml | nindent 4 }}

We can verify that our hostname is passed to our Secret:

$ kubectl get secret binder-secret -o yaml | yq --raw-output '.data."values.yaml"'  | base64 --decode
...
  GitLabRepoProvider:
    hostname: my-domain.com
...

Finally a configuration file inside the Binder pod is populated from the Secret, using the Kubernetes Volume construct. Looking at the Pod, we do see a volume called config, created from the binder-secret Secret:

$ kubectl get pod -l component=binder -o yaml | grep --context 4 binder-secret
    volumes:
    - name: config
      secret:
        defaultMode: 420
        secretName: binder-secret

That volume is mounted inside the pod at /etc/binderhub/config:

      volumeMounts:
      - mountPath: /etc/binderhub/config/
        name: config
        readOnly: true

Runtime verification

Looking inside our pod we see our hostname value available in a file underneath the mount point:

$ oc exec binder-74d9c7db95-qtp8r -- grep hostname /etc/binderhub/config/values.yaml
    hostname: my-domain.com

Benchmarking 3D graphic cards and their drivers

I have in the past benchmarked network links and disks, so as to have a rough idea of the performance of the hardware I am confronted with at $WORK. As I started to dabble in Linux gaming (on non-PC hardware!), I wanted to have some numbers for the graphics stack as well.

I am using the command glmark2 --size 1920x1080, which tests the performance of an OpenGL implementation (hardware + drivers). OpenGL is the classic 3D API used by most open source games on Linux (Doom 3 engine, SuperTuxKart, 0 A.D., Cube 2 engine).

Vulkan is gaining traction as a newer 3D API, however the equivalent Vulkan benchmark, vkmark, was crashing with the NVIDIA semi-proprietary drivers (vkmark --size 1920x1080 was throwing an ugly Error: Selected present mode Mailbox is not supported by the used Vulkan physical device).

# apt install glmark2
$ lspci | grep -i vga # integrated GPU
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 615 (rev 02)
$ glmark2 --size 1920x1080
...
...
glmark2 Score: 2063
$ lspci | grep -i vga # integrated GPU
00:02.0 VGA compatible controller: Intel Corporation Meteor Lake-P [Intel Graphics] (rev 08)
glmark2 Score: 3095
$ lspci | grep -i vga # discrete GPU, using nouveau
0000:01:00.0 VGA compatible controller: NVIDIA Corporation AD107GL [RTX 2000 / 2000E Ada Generation] (rev a1)
glmark2 Score: 2463
$ lspci | grep -i vga # discrete GPU, using nvidia-open semi-proprietary driver
0000:01:00.0 VGA compatible controller: NVIDIA Corporation AD107GL [RTX 2000 / 2000E Ada Generation] (rev a1)
glmark2 score: 4960
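
When comparing numbers like these, it is worth double-checking which driver is actually rendering before each run, for instance with glxinfo from the mesa-utils package:

$ glxinfo -B | grep "OpenGL renderer"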

On a Raspberry Pi 4 with the V3D driver, performance is abysmal, so I suspect something is wrong with my configuration:

$ glxinfo -B | grep "OpenGL renderer" # No PCI bus here
OpenGL renderer string: V3D 4.xx
glmark2 score: 48 # that is 57 times slower ...

Finally let us have a look at the performance of 3D rendering without hardware acceleration, using the LLVMpipe software renderer. This is on a Raspberry Pi 4 with EFI boot, where the graphics driver is simply the EFI framebuffer, so possibly the slowest possible way to run 3D programs.

$ grep EFI /var/log/Xorg.0.log
[  1426.734] (II) FBDEV(0): hardware: EFI VGA (video memory: 5120kB)
glmark2 score: 26 # yes 100 times slower

Note that Nouveau currently has some graphical glitches with Doom 3, so I am using the nvidia-open driver for this hardware.

In my testing with Doom 3 and SuperTuxKart, post-2015 integrated Intel hardware is more than enough to play at HD resolution.

ARM64 desktop as daily driver

I have bought myself an expensive ARM64 workstation, the System76 Thelio Astra, that I intend to use as my main desktop computer for the next 15 years, running Debian.

The box is basically a server motherboard repurposed in a good desktop chassis. In Europe it seems you can order similar ready-made systems here.

The hardware is well supported by Debian 12 and Debian testing. I had some initial issues with graphics, due to the board being designed for server use, but I am solving these as I go.

Annoyances I got so far:

  • When you power on the machine using the power supply switch, you have to wait for the BMC to finish its startup sequence before the front power button does anything. As starting the BMC can take 90 seconds, I initially thought the machine was dead on arrival.

  • The default graphical output is redirected to the BMC Serial over LAN, which means that if you want to install Debian using an attached display, you need to force the output to the attached display by passing console=tty0 as an installer parameter.

  • Finally the Xorg Nouveau driver does not work with the Nvidia A400 GPU I got with the machine. After passing nomodeset as a kernel parameter, I can force Xorg to use an unaccelerated framebuffer, which at least displays something. I passed this parameter to the installer, so that I could install in graphical mode (a sketch for making such parameters persistent follows after this list). The driver from Nvidia works, but I would very much like to get Nouveau running.
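
To keep such parameters after installation, the standard Debian way is to merge them into GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and regenerate the bootloader configuration (a sketch; since the nouveau firmware fix in the update below, nomodeset is no longer needed on my machine):

GRUB_CMDLINE_LINUX_DEFAULT="quiet console=tty0"

# update-grub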

Ugly point

  • A server motherboard, we said. This means there is NO suspend to RAM; you have to power off if you don’t want to keep the machine on all the time. As the boot sequence is long (server board again), I am pondering setting a startup time in the UEFI firmware to turn the box on at specific usage times.

Good points

  • The firmware of the machine is a standard EFI, which means you can use the Debian arm64 installer on a USB stick straight away, without any kind of device tree / bootloader fiddling.
  • The three NICs, WiFi and Bluetooth were all recognized on first boot.
  • I was afraid the machine would be loud. However it is quiet: you hear the humming of a fan, but it is quieter than most desktops I have owned, from the Atari TT to an all-in-one Lenovo M92z I used for 10 years. I am certainly not a hardware and cooling specialist, but it seems to me the quietness comes from slowly rotating but very large fans.
  • Due to the clean design of Linux and Debian, thousands of packages work correctly on ARM64, starting with the Gnome desktop environment and Firefox.
  • The documentation from System76 is fine; their Ubuntu 20.04 setup guide was helpful to understand the parameters mentioned above.

Update: The display is working correctly with the nouveau driver after installing the non-free Nvidia firmware. See the Debian wiki.

Wireless headset dongle not detected by PulseAudio

For whatever reason, when I plug and unplug my wireless headset dongle over USB, it is not always detected by the PulseAudio/PipeWire stack which runs desktop sound on Linux these days. But we can fix that with a restart of the handling daemon, see below.
In PulseAudio terminology an input device (microphone) is called a source, and an output device a sink.

When the headset dongle is plugged in, we can see it on the USB bus:

$ lsusb | grep Headset 
Bus 001 Device 094: ID 046d:0af7 Logitech, Inc. Logitech G PRO X 2 Gaming Headset

The device is detected correctly as a Human Interface Device (HID):

$ dmesg
...
[310230.507591] input: Logitech Logitech G PRO X 2 Gaming Headset as /devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1.1/1-1.1.4/1-1.1.4:1.3/0003:046D:0AF7.0060/input/input163
[310230.507762] hid-generic 0003:046D:0AF7.0060: input,hiddev2,hidraw11: USB HID v1.10 Device [Logitech Logitech G PRO X 2 Gaming Headset] on usb-0000:00:14.0-1.1.4/input

However it is not seen in the list of sources / sinks of PulseAudio:

$ pactl list short sinks
58      alsa_output.usb-Lenovo_ThinkPad_Thunderbolt_3_Dock_USB_Audio_000000000000-00.analog-stereo      PipeWire        s16le 2ch 48000Hz       IDLE
62      alsa_output.pci-0000_00_1f.3.analog-stereo      PipeWire        s32le 2ch 48000Hz       SUSPENDED
95      bluez_output.F4_4E_FD_D2_97_1F.1        PipeWire        s16le 2ch 48000Hz       IDLE

This unfriendly list shows my docking station, which has a small jack connector for a wired cable, the built-in speaker of my laptop, and a Bluetooth headset.

If I restart PipeWire,

$ systemctl --user restart pipewire

then the headset appears as possible audio output.

$ pactl list short sinks
54      alsa_output.usb-Lenovo_ThinkPad_Thunderbolt_3_Dock_USB_Audio_000000000000-00.analog-stereo      PipeWire        s16le 2ch 48000Hz       SUSPENDED
56      alsa_output.usb-Logitech_Logitech_G_PRO_X_2_Gaming_Headset_0000000000000000-00.analog-stereo    PipeWire        s16le 2ch 48000Hz       SUSPENDED
58      alsa_output.pci-0000_00_1f.3.analog-stereo      PipeWire        s32le 2ch 48000Hz       SUSPENDED
77      bluez_output.F4_4E_FD_D2_97_1F.1        PipeWire        s16le 2ch 48000Hz       SUSPENDED

Once you have set the default input / output device (in my case via the GNOME sound settings), you can check it with:

$ pactl info | egrep '(Sink|Source)'
Default Sink: alsa_output.usb-Logitech_Logitech_G_PRO_X_2_Gaming_Headset_0000000000000000-00.analog-stereo
Default Source: alsa_input.usb-Logitech_Logitech_G_PRO_X_2_Gaming_Headset_0000000000000000-00.mono-fallback
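
If you prefer the command line over the GNOME sound settings, the same defaults can be set with pactl, using the sink and source names from the listings above:

$ pactl set-default-sink alsa_output.usb-Logitech_Logitech_G_PRO_X_2_Gaming_Headset_0000000000000000-00.analog-stereo
$ pactl set-default-source alsa_input.usb-Logitech_Logitech_G_PRO_X_2_Gaming_Headset_0000000000000000-00.mono-fallback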

Finally let us play some test sounds:

$ speaker-test --test wav --nloops 1 --channels 2

And test some recording; you will hear the output around one second after speaking (yes, that is recorded audio sent over a Unix pipe for playback!):

# don't do this when the output is a speaker, this will create audio feedback (larsen effect)
$ arecord -f cd - | aplay

Accessing Atari ST disk images on Linux

This post leverages support for Atari Hard Disk Interface Partition (AHDI) partition tables in the Linux kernel, activated by default in Debian, and in the parted partition editor.

Accessing the content of a partition using a user mounted loop device

This is the easiest procedure and should be tried first. Depending on whether your Linux kernel has support for AHDI partition tables, and on the size of the FAT filesystem on the partition, this procedure might not work. In that case, try the procedure using mtools further below.

Attach a disk image called hd80mb.image to a loop device:

$ udisksctl loop-setup --file hd80mb.image
Mapped file hd80mb.image as /dev/loop0

Notice how the kernel detected the partition table:

$ dmesg | grep loop0
[160892.151941] loop0: detected capacity change from 0 to 164138
[160892.171061]  loop0: AHDI p1 p2 p3 p4

Inspect the block devices created for each partition:

$ lsblk | grep loop0

If the partitions are not already mounted by udisks2 under /media/, mount them manually:

$ sudo mount /dev/loop0p1 /mnt/
$ ls /mnt/
SHDRIVER.SYS

When you are finished copying data, unmount the partition, and detach the loop device.

$ sudo umount /mnt
$ udisksctl loop-delete --block-device /dev/loop0

Accessing the content of a partition using mtools and parted

This procedure uses the mtools package and the support for the AHDI partition scheme in the parted partition editor.

Display the partition table, with partitions offsets in bytes:

$ parted st_mint-1.5.img -- unit B print
...
Partition Table: atari
Disk Flags: 
Number  Start       End         Size        Type     File system  Flags
 1      1024B       133170175B  133169152B  primary               boot
 2      133170176B  266339327B  133169152B  primary
 3      266339328B  399508479B  133169152B  primary
 4      399508480B  532676607B  133168128B  primary

Set some Atari-friendly mtools options:

$ export MTOOLS_SKIP_CHECK=1
$ export MTOOLS_NO_VFAT=1

List the content of the partition, passing the byte offset of the partition in the disk image as a parameter. For instance, here we are interested in the second partition, and the parted output above indicates that this partition starts at byte offset 133170176 in the disk image.

$ mdir -s -i st_mint-1.5.img@@133170176
 Volume in drive : has no label
Directory for ::/
demodata          2024-08-27  11:43 
        1 file                    0 bytes
Directory for ::/demodata

We can also use the mcopy command with a similar syntax to copy data from and to the disk image. For instance, here we copy a file named file.zip to the root directory of the second partition:

$ mcopy -s -i st_mint-1.5.img@@133170176 file.zip ::
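
The reverse direction works the same way; for instance, to copy the demodata directory seen in the listing above out of the image into the current directory:

$ mcopy -s -i st_mint-1.5.img@@133170176 ::demodata .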

Recompiling mtools to access large partitions

With disk images having large AHDI partitions (well, considered large in 1992 …), you might encounter this error:

$ mdir -s -i cecile-falcon-singlepart-1GB.img@@1024
init: sector size too big
Cannot initialize '::'

This error is caused by the non-standard large logical sectors that TOS uses for large FAT partitions (see the Atari Hard Disk Filesystem reference, page 41, TOS partition sizes).

We can inspect the logical sector size using fsck tools:

$ udisksctl loop-setup --file cecile-falcon-singlepart-1GB.img
$ sudo fsck.fat -Anv /dev/loop0p1
fsck.fat 4.2 (2021-01-31)
...
Media byte 0xf8 (hard disk)
16384 bytes per logical sector

To access the partition, you need to patch mtools so that it supports a logical sector size of 16384 bytes. For this you need to change the MAX_SECTOR macro from 8192 to 16384 in msdos.h in the mtools source and recompile. The rebuilt mtools is then able to access the partition:

$ /usr/local/bin/mdir -s -i cecile-falcon-singlepart-1GB.img@@1024
 Volume in drive : has no label
Directory for ::/
CECILE   SYS      8462 1998-03-27  22:42 
NEWDESK  INF       804 2024-09-09   9:23 
        2 files               9 266 bytes
                      1 072 463 872 bytes free
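
For reference, a rough sketch of the patch-and-rebuild steps (this assumes deb-src entries in your apt sources; check the exact spelling of the MAX_SECTOR define against your mtools version before running the sed):

$ apt source mtools
$ cd mtools-*/
$ sed -i 's/MAX_SECTOR 8192/MAX_SECTOR 16384/' msdos.h    # or simply edit msdos.h by hand
$ ./configure && make
$ sudo make install    # installs under /usr/local, leaving the packaged mtools untouched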

Too many open files in Minikube Pod

Right now I am playing with minikube, to run a three-node highly available Kubernetes control plane. I am using the docker driver of minikube, so each Kubernetes node component runs inside a docker container, instead of using full-blown VMs.

In my experience this works better than Kind, as with Kind you cannot correctly restart a cluster deployed in highly available mode.

This is the topology of the cluster:

$ minikube node list 
minikube        192.168.49.2
minikube-m02    192.168.49.3
minikube-m03    192.168.49.4

Each Kubernetes node is actually a docker container:

$ docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED          STATUS         PORTS                                                                                                                                  NAMES
977046487e5e   gcr.io/k8s-minikube/kicbase:v0.0.45   "/usr/local/bin/entr…"   31 minutes ago   Up 6 minutes   127.0.0.1:32812->22/tcp, 127.0.0.1:32811->2376/tcp, 127.0.0.1:32810->5000/tcp, 127.0.0.1:32809->8443/tcp, 127.0.0.1:32808->32443/tcp   minikube-m03
8be3549f0c4c   gcr.io/k8s-minikube/kicbase:v0.0.45   "/usr/local/bin/entr…"   31 minutes ago   Up 6 minutes   127.0.0.1:32807->22/tcp, 127.0.0.1:32806->2376/tcp, 127.0.0.1:32805->5000/tcp, 127.0.0.1:32804->8443/tcp, 127.0.0.1:32803->32443/tcp   minikube-m02
4b39f1c47c23   gcr.io/k8s-minikube/kicbase:v0.0.45   "/usr/local/bin/entr…"   31 minutes ago   Up 6 minutes   127.0.0.1:32802->22/tcp, 127.0.0.1:32801->2376/tcp, 127.0.0.1:32800->5000/tcp, 127.0.0.1:32799->8443/tcp, 127.0.0.1:32798->32443/tcp   minikube

The whole list of pods running in the cluster looks like this:

$ minikube kubectl -- get pods --all-namespaces
kube-system   coredns-6f6b679f8f-85n9l               0/1     Running   1 (44s ago)   3m32s
kube-system   coredns-6f6b679f8f-pnhxv               0/1     Running   1 (44s ago)   3m32s
kube-system   etcd-minikube                          1/1     Running   1 (44s ago)   3m37s
kube-system   etcd-minikube-m02                      1/1     Running   1 (42s ago)   3m21s
...
kube-system   kube-proxy-84gm6                       0/1     Error     1 (31s ago)   3m4s

There is a pod in Error status, let us check its logs:

$ minikube kubectl -- logs -n kube-system kube-proxy-84gm6
E1210 11:50:42.117036       1 run.go:72] "command failed" err="failed complete: too many open files"
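
Before changing anything, you can check the current inotify limits on the host:

$ sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches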

This can be fixed by increasing the inotify limits on the host:

# cat > /etc/sysctl.d/minikube.conf <<EOF
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
EOF
# sysctl --system
...
* Applying /etc/sysctl.d/minikube.conf ...
* Applying /etc/sysctl.conf ...
...
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512

Restart the cluster, and check that no pod is in Error status anymore:

$ minikube stop
$ minikube start 
$ minikube kubectl -- get pods --all-namespaces | grep Error
$

Spinning wheel never ending while joining a video call (Google Meet) on Linux

On a Fedora 41 laptop I had the following issue: when joining a Google Meet video call in the browser, the spinning wheel which indicates progress kept spinning and spinning … and the call would never start.

It turned out that the sound subsystem was a bit damaged: the pactl set of commands was not returning anything.

$ pactl info
..... => hanging

On a working system it should return the status of the configured sound subsystem:

$ pactl info
...
Server Name: PulseAudio (on PipeWire 0.3.65)
...
Default Sink: alsa_output.pci-0000_00_1f.3.analog-stereo
Default Source: alsa_input.usb-Logitech_Logitech_G_PRO_X_2_Gaming_Headset_0000000000000000-00.mono-fallback

This is understandable: if the web browser is not able to get the default input and output audio devices, it waits for one to become ready, but this never happens, so joining the call hangs forever.

The solution for me was simply to restart the pipewire service in the user systemd session.

$ systemctl --user restart pipewire.service

Even with this, on the Fedora 41 system, the pactl command would still hang. Then I changed the backend from PipeWire to PulseAudio following the Fedora wiki, and video calls were working again.