April 26, 2025

John Goerzen

NNCPNET Can Optionally Exchange Internet Email

A few days ago, I announced NNCPNET, the email network based atop NNCP. NNCPNET lets anyone run a real mail server on a network that supports all sorts of topologies for transport, from Internet to USB drives. And verification is done at the NNCP protocol level, so a whole host of Internet email bolt-ons (SPF, DMARC, DKIM, etc.) are unnecessary.

Shortly after announcing NNCPNET, I added an Internet bridge. This lets you get your own DOMAIN.nncpnet.org domain, and from there route email to and from the Internet using a gateway node. Simple, effective, and a way to get real email to and from your laptop or Raspberry Pi without having to have a static IP, SPF, DMARC, DKIM, etc.

It’s a volunteer-run, free service. Give it a try!

26 April, 2025 01:01AM by John Goerzen

April 25, 2025

Simon Josefsson

GitLab Runner with Rootless Privilege-less Podman on riscv64

I host my own GitLab CI/CD runners, and find that having coverage on the riscv64 CPU architecture is useful for testing things. The HiFive Premier P550 seems to be a common hardware choice, and it can be purchased online. You also need a (mini-)ATX chassis, a power supply (~500W is more than sufficient), a PCIe-to-M.2 adapter and an NVMe storage device. Total cost per machine was around $8k/€8k for me. Assembly was simple: bolt everything together, connect ATX power, and connect the cables for the front panel, USB and audio. Be sure to toggle the physical power switch on the P550 before you close the box. The front-panel power button will start the machine. There is a P550 user manual available.

Below I will guide you through installing the GitLab Runner on the pre-installed Ubuntu 24.04 that ships with the P550, and configuring it to use Podman in rootless mode. Presumably you want to migrate to some other OS instead; hey Trisquel 13 riscv64, I’m waiting for you! I wouldn’t recommend using this machine for anything sensitive: there is an awful lot of non-free and/or vendor-specific software installed, and the hardware itself is young. I am not aware of any other riscv64 hardware that has been proven to be able to run a libre OS; all of them appear to require special patches and/or non-mainline kernels.

  • Log in on the console using username ‘ubuntu‘ and password ‘ubuntu‘. You will be asked to change the password, so do that.
  • Start a terminal, gain root with sudo -i and change the hostname:
    echo jas-p550-01 > /etc/hostname
  • Connect ethernet and run: apt-get update && apt-get dist-upgrade -u.
  • If your system doesn’t have valid MAC addresses (they show up as 8c:00:00:00:00:00 if you run ‘ip a’), you can fix this to avoid collisions if you install multiple P550’s on the same network. Connect the Debug USB-C connector on the back to one of the host’s USB-A ports. Use minicom (Ctrl-A X to exit) to talk to it.
apt-get install minicom
minicom -o -D /dev/ttyUSB3
#cmd: ifconfig
inet 192.168.0.2 netmask: 255.255.240.0
gatway 192.168.0.1
SOM_Mac0: 8c:00:00:00:00:00
SOM_Mac1: 8c:00:00:00:00:00
MCU_Mac: 8c:00:00:00:00:00
#cmd: setmac 0 CA:FE:42:17:23:00
The MAC setting will be valid after rebooting the carrier board!!!
MAC[0] addr set to CA:FE:42:17:23:00(ca:fe:42:17:23:0)
#cmd: setmac 1 CA:FE:42:17:23:01
The MAC setting will be valid after rebooting the carrier board!!!
MAC[1] addr set to CA:FE:42:17:23:01(ca:fe:42:17:23:1)
#cmd: setmac 2 CA:FE:42:17:23:02
The MAC setting will be valid after rebooting the carrier board!!!
MAC[2] addr set to CA:FE:42:17:23:02(ca:fe:42:17:23:2)
#cmd:
  • For reference, if you wish to interact with the MCU you may do that via OpenOCD and telnet, like the following (as root on the P550). You need to have the Debug USB-C connected to a USB-A host port.
apt-get install openocd
wget https://raw.githubusercontent.com/sifiveinc/hifive-premier-p550-tools/refs/heads/master/mcu-firmware/stm32_openocd.cfg
echo 'acc115d283ff8533d6ae5226565478d0128923c8a479a768d806487378c5f6c3 stm32_openocd.cfg' | sha256sum -c
openocd -f stm32_openocd.cfg &
telnet localhost 4444
...
  • Reboot the machine and log in remotely from your laptop. Gain root, set up SSH public-key authentication and disable SSH password logins.
echo 'ssh-ed25519 AAA...' > ~/.ssh/authorized_keys
sed -i 's;^#PasswordAuthentication.*;PasswordAuthentication no;' /etc/ssh/sshd_config
service ssh restart
  • With an NVMe device in the PCIe slot, create an LVM partition where the GitLab Runner will live:
parted /dev/nvme0n1 print
blkdiscard /dev/nvme0n1
parted /dev/nvme0n1 mklabel gpt
parted /dev/nvme0n1 mkpart jas-p550-nvm-02 ext2 1MiB 100% align-check optimal 1
parted /dev/nvme0n1 set 1 lvm on
partprobe /dev/nvme0n1
pvcreate /dev/nvme0n1p1
vgcreate vg0 /dev/nvme0n1p1
lvcreate -L 400G -n glr vg0
mkfs.ext4 -L glr /dev/mapper/vg0-glr

Now with a reasonable setup ready, let’s install the GitLab Runner. The following is adapted from gitlab-runner’s official installation documentation. The normal installation flow doesn’t work because they don’t publish riscv64 apt repositories, so you will have to perform upgrades manually.

# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner_riscv64.deb
# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner-helper-images.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner_riscv64.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner-helper-images.deb
echo '68a4c2a4b5988a5a5bae019c8b82b6e340376c1b2190228df657164c534bc3c3 gitlab-runner-helper-images.deb' | sha256sum -c
echo 'ee37dc76d3c5b52e4ba35cf8703813f54f536f75cfc208387f5aa1686add7a8c gitlab-runner_riscv64.deb' | sha256sum -c
dpkg -i gitlab-runner-helper-images.deb gitlab-runner_riscv64.deb

Remember the NVMe device? Let’s not forget to use it, to avoid wear and tear on the internal MMC root disk. Do this now, before any files appear in /home/gitlab-runner, or you will have to move them manually.

gitlab-runner stop
echo 'LABEL=glr /home/gitlab-runner ext4 defaults,noatime 0 1' >> /etc/fstab
systemctl daemon-reload
mount /home/gitlab-runner
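
To double-check that the new filesystem is actually mounted before the runner writes anything there, plain util-linux tooling is enough:

findmnt /home/gitlab-runner
df -h /home/gitlab-runner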

Next, register the runner and configure it. Replace the token glrt-REPLACEME below with the registration token you get from your GitLab project’s Settings -> CI/CD -> Runners -> New project runner. I used the tag ‘riscv64‘ and the hostname as the runner description.

gitlab-runner register --non-interactive --url https://gitlab.com --token glrt-REPLACEME --name $(hostname) --executor docker --docker-image debian:stable
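
To confirm the registration took, gitlab-runner can list and verify what it has configured (both are standard gitlab-runner subcommands):

gitlab-runner list
gitlab-runner verify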

Next we install Podman and configure gitlab-runner to use it as a non-root user.

apt-get install podman
gitlab-runner stop
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 gitlab-runner

You need to run some commands as the gitlab-runner user, but unfortunately some interaction between sudo/su and pam_systemd makes this harder than it should be. So you have to set up SSH for the user and log in via SSH to run the commands. Does anyone know of a better way to do this?

# on the p550:
cp -a /root/.ssh/ /home/gitlab-runner/
chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/.ssh/
# on your laptop:
ssh gitlab-runner@jas-p550-01
systemctl --user --now enable podman.socket
systemctl --user start podman.socket
loginctl enable-linger gitlab-runner
systemctl status --user podman.socket
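
If you want to sanity-check that the rootless API socket is answering, one way (assuming the default XDG runtime directory) is to point the podman client at it explicitly, still as the gitlab-runner user:

podman --remote --url unix:///run/user/$(id -u)/podman/podman.sock info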

We modify /etc/gitlab-runner/config.toml as follows; replace 997 with the user id shown by systemctl status above. See the feature flags documentation for more details.

[[runners]]
environment = ["FF_NETWORK_PER_BUILD=1", "FF_USE_FASTZIP=1"]
...
[runners.docker]
host = "unix:///run/user/997/podman/podman.sock"

Note that unlike the documentation I do not add the ‘privileged = true‘ parameter here. I will come back to this later.

Restart the system, then confirm that pushing a .gitlab-ci.yml with a job that uses the riscv64 tag, like the following, works properly.

dump-env-details-riscv64:
  stage: build
  image: riscv64/debian:testing
  tags: [ riscv64 ]
  script:
    - set

Your gitlab-runner should now be receiving jobs and running them in rootless podman. You may view the log using journalctl as follows:

journalctl --follow _SYSTEMD_UNIT=gitlab-runner.service

To stop the graphical environment and disable some unnecessary services, you can use:

systemctl set-default multi-user.target
systemctl disable openvpn cups cups-browsed sssd colord

At this point, things were working fine and I was running many successful builds. Now starts the fun part with operational aspects!

I had a problem when running buildah to build a new container from within a job, and noticed that aardvark-dns was crashing. You can use the Debian ‘aardvark-dns‘ binary instead.

wget http://ftp.de.debian.org/debian/pool/main/a/aardvark-dns/aardvark-dns_1.14.0-3_riscv64.deb
echo 'df33117b6069ac84d3e97dba2c59ba53775207dbaa1b123c3f87b3f312d2f87a aardvark-dns_1.14.0-3_riscv64.deb' | sha256sum -c
mkdir t
cd t
dpkg -x ../aardvark-dns_1.14.0-3_riscv64.deb .
mv /usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns.ubuntu
mv usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns.debian
# point podman at the Debian binary (assumed final step; adjust to taste)
ln -sf aardvark-dns.debian /usr/lib/podman/aardvark-dns

My setup uses podman in rootless mode without passing the --privileged parameter or any --cap-add parameters to add non-default capabilities. This is sufficient for most builds. However, if you try to create a container using buildah from within a job, you may see errors like this:

Writing manifest to image destination
Error: mounting new container: mounting build container "8bf1ec03d967eae87095906d8544f51309363ddf28c60462d16d73a0a7279ce1": creating overlay mount to /var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/merged, mount_data="lowerdir=/var/lib/containers/storage/overlay/l/I3TWYVYTRZ4KVYCT6FJKHR3WHW,upperdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/diff,workdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/work,volatile": using mount program /usr/bin/fuse-overlayfs: unknown argument ignored: lazytime
fuse: device not found, try 'modprobe fuse' first
fuse-overlayfs: cannot mount: No such file or directory
: exit status 1

According to the GitLab Runner security considerations, you should not enable the ‘privileged = true’ parameter, and the alternative appears to be running Podman as root with privileged=false. Indeed, setting privileged=true as in the following example solves the problem, as I suppose running as root would too.

[[runners]]
environment = ["FF_NETWORK_PER_BUILD=1", "FF_USE_FASTZIP=1"]
[runners.docker]
privileged = true

Can we do better? After some experimentation, and reading open issues with suggested capabilities and configuration snippets, I ended up with the following configuration. It runs podman in rootless mode (as the gitlab-runner user) without --privileged, but adds the CAP_SYS_ADMIN capability and exposes the /dev/fuse device. Still, this runs as a non-root user on the machine, so I think it is an improvement compared to using --privileged, and also compared to running podman as root.

[[runners]]
environment = ["FF_NETWORK_PER_BUILD=1", "FF_USE_FASTZIP=1"]
[runners.docker]
host = "unix:///run/user/997/podman/podman.sock"
privileged = false
cap_add = ["SYS_ADMIN"]
devices = ["/dev/fuse"]

Still, I worry about the security properties of such a setup, so I only enable these settings for a separately configured runner instance that I use when I need this docker-in-docker (oh, I meant buildah-in-podman) functionality. I found one article discussing Rootless Podman without the privileged flag that suggests --isolation=chroot, but I have yet to make this work. Suggestions for improvement are welcome.
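
For reference, a hypothetical .gitlab-ci.yml job exercising the buildah-in-podman path on such a separately configured runner; the riscv64-podman tag and the Containerfile in the repository root are assumptions for illustration, not part of my setup:

build-image-riscv64:
  stage: build
  image: riscv64/debian:testing
  tags: [ riscv64-podman ]
  script:
    - apt-get update && apt-get install -y buildah
    - buildah build -t scratch-test .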

Happy Riscv64 Building!

25 April, 2025 06:30PM by simon

Ian Wienand

Avoiding layer shift on Ender V3 KE after pause

With (at least) the V1.1.0.15 firmware on the Ender V3 KE 3d printer the PAUSE macro will cause the print head to run too far on the Y axis, which causes a small layer shift when the print returns. I guess the idea is to expose the build plate as much as possible by moving the head as far to the side and back as possible, but the overrun and consequent belt slip unfortunately makes it mostly useless; the main use of this probably being to switch filaments for two colour prints.

Luckily you can fairly easily enable root access on the control pad from the settings menu. After doing this you can ssh to its IP address with the default password Creality2023.

From there you can modify the /usr/data/printer_data/config/gcode_macro.cfg file (vi is available) to change the details of the PAUSE macro. Find the section [gcode_macro PAUSE] and modify {% set y_park = 255 %} to a more reasonable value like 150. Save the file and reboot the pad so the printing daemons restart.
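
The relevant fragment ends up looking roughly like this; everything else in the stock macro stays untouched, and the surrounding lines are only indicative:

[gcode_macro PAUSE]
...
gcode:
    ...
    {% set y_park = 150 %}    # was 255; keeps the head from overrunning the Y axis
    ...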

On PAUSE this then moves the head to the far left about half-way down, which works fine for filament changes. Hopefully a future firmware version will update this; I will update this post if I find it does.

c.f. Ender 3 V3 KE shifting layers after pause

25 April, 2025 11:30AM by Ian Wienand

Bits from Debian

Debian Project Leader election 2025 is over, Andreas Tille re-elected!

The voting period and tally of votes for the Debian Project Leader election have just concluded, and the winner is Andreas Tille, who has been elected for the second time. Congratulations!

Out of a total of 1,030 developers, 362 voted. As usual in Debian, the voting method used was the Condorcet method.

More information about the result is available in the Debian Project Leader Elections 2025 page.

Many thanks to Andreas Tille, Gianfranco Costamagna, Julian Andres Klode, and Sruthi Chandran for their campaigns, and to our Developers for voting.

The new term for the project leader started on April 21st and will expire on April 20th 2026.

25 April, 2025 10:05AM by Jean-Pierre Giraud

April 24, 2025

Dirk Eddelbuettel

RQuantLib 0.4.26 on CRAN: Small Updates

A new minor release 0.4.26 of RQuantLib arrived on CRAN this morning, and has just now been uploaded to Debian too.

QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for nearly twenty-two years (!!) as it was one of the first packages I uploaded to CRAN.

This release of RQuantLib brings updated Windows build support taking advantage of updated Rtools, thanks to a PR by Tomas Kalibera. We also updated expected results for three of the ‘schedule’ tests (in a way that is dependent on the upstream library version) as the just-released QuantLib 1.38 differs slightly.

Changes in RQuantLib version 0.4.26 (2025-04-24)

  • Use system QuantLib (if found by pkg-config) on Windows too (Tomas Kalibera in #192)

  • Accommodate same test changes for schedules in QuantLib 1.38

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

24 April, 2025 10:27PM

Jonathan McDowell

Local Voice Assistant Step 1: An ATOM Echo voice satellite

Back when I set up my home automation I ended up with one piece that used an external service: Amazon Alexa. I’d rather not have done this, but voice control is extremely convenient, both for us and for guests. Since then Home Assistant has done a lot of work in developing the capability of a local voice assistant - 2023 was their Year of Voice. I’ve had brief looks at this in the past, but never quite had the time to dig into setting it up, and was put off by the fact a lot of the setup instructions were just “Download our prebuilt components”. While I admire the efforts to get Home Assistant fully packaged for Debian I accept that’s a tricky proposition, and settle for running it in a venv on a Debian stable container. Voice requires a lot more binary components, and I want to have “voice satellites” in more than one location, so I set about trying to understand a bit better what I was deploying, and actually building the binary bits myself.

This is the start of a write-up of that. I’ll break it into a bunch of posts, trying to cover one bit in each, because otherwise this will get massive. Let’s start with some requirements:

  • All local processing; no call-outs to external services
  • Ability to have multiple voice satellites in the house
  • A desire to do wake word detection on the satellites, to avoid lots of network audio traffic all the time
  • As clean an install on a Debian stable based system as possible
  • Binaries built locally
  • No need for a GPU

My house server is an AMD Ryzen 7 5700G, so my expectation was that I’d have enough local processing power to be able to do this. That turned out to be a valid assumption - speech to text really has come a long way in recent years. I’m still running Home Assistant 2024.3.3 - the last one that supports (but complains about) Python 3.11. Trixie has started the freeze process, so once it releases I’ll look at updating the HA install. For now what I have has turned out to be Good Enough, but I know there have been improvements upstream I’m missing.

Finally, before I get into the details, I should point out that if you just want to get started with a voice assistant on Home Assistant and don’t care about what’s under the hood, there are a bunch of more user friendly details on Home Assistant’s site itself, and they have pre-built images you can just deploy.

My first step was sorting out a “voice satellite”. This is the device that actually has a microphone and speaker and communicates with the main Home Assistant setup. I’d seen the post about a $13 voice assistant, and as a result had an ATOM Echo sitting on my desk I hadn’t got around to setting up.

Here, I’m going to skip delving into exactly what’s going on under the hood, even though we’re compiling locally. This is a constrained embedded device and, while I’m familiar with the ESP32 IDF build system, I just accepted that using ESPHome and letting it do its thing was the quickest way to get up and running. It is possible to do this all via the web with a pre-built image, but I wanted to change the wake word to “Hey Jarvis” rather than the default “Okay Nabu”, and that was a good reason to bother doing a local build. We’ll get into actually building a voice satellite on Debian in later posts.

I started with the default upstream assistant config and tweaked it a little for my setup:

diff of my configuration tweaks
$ diff -u m5stack-atom-echo.yaml assistant.yaml
--- m5stack-atom-echo.yaml    2025-04-18 13:41:21.812766112 +0100
+++ assistant.yaml  2025-01-20 17:33:24.918585244 +0000
@@ -1,7 +1,7 @@
 substitutions:
-  name: m5stack-atom-echo
+  name: study-atom-echo
   friendly_name: M5Stack Atom Echo
-  micro_wake_word_model: okay_nabu  # alexa, hey_jarvis, hey_mycroft are also supported
+  micro_wake_word_model: hey_jarvis  # alexa, hey_jarvis, hey_mycroft are also supported
 
 esphome:
   name: ${name}
@@ -16,15 +16,26 @@
     version: 4.4.8
     platform_version: 5.4.0
 
+# Enable logging
 logger:
+
+# Enable Home Assistant API
 api:
+  encryption:
+    key: "TGlrZVRoaXNJc1JlYWxseUl0Rm9vbGlzaFBlb3BsZSE="
 
 ota:
   - platform: esphome
-    id: ota_esphome
+    password: "itsnotarealthing"
 
 wifi:
+  ssid: "My Wifi Goes Here"
+  password: "AndThePasswordGoesHere"
+
+  # Enable fallback hotspot (captive portal) in case wifi connection fails
   ap:
+    ssid: "Study-Atom-Echo Fallback Hotspot"
+    password: "ThisIsRandom"
 
 captive_portal:


(I note that the current upstream config has moved on a bit since I first did this, but I double checked the above instructions still work at the time of writing. I end up pinning ESPHome to the right version below due to that.)

It turns out to be fairly easy to setup ESPHome in a venv and get it to build + flash the image for you:

Instructions for building + flashing ESPHome to ATOM Echo
noodles@sevai:~$ python3 -m venv esphome-atom-echo
noodles@sevai:~$ . esphome-atom-echo/bin/activate
(esphome-atom-echo) noodles@sevai:~$ cd esphome-atom-echo/
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$  pip install esphome==2024.12.4
Collecting esphome==2024.12.4
  Using cached esphome-2024.12.4-py3-none-any.whl (4.1 MB)
…
Successfully installed FontTools-4.57.0 PyYAML-6.0.2 appdirs-1.4.4 attrs-25.3.0 bottle-0.13.2 defcon-0.12.1 esphome-2024.12.4 esphome-dashboard-20241217.1 freetype-py-2.5.1 fs-2.4.16 gflanguages-0.7.3 glyphsLib-6.10.1 glyphsets-1.0.0 openstep-plist-0.5.0 pillow-10.4.0 platformio-6.1.16 protobuf-3.20.3 puremagic-1.27 ufoLib2-0.17.1 unicodedata2-16.0.0
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome compile assistant.yaml 
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
…
Linking .pioenvs/study-atom-echo/firmware.elf
/home/noodles/.platformio/packages/toolchain-xtensa-esp32@8.4.0+2021r2-patch5/bin/../lib/gcc/xtensa-esp32-elf/8.4.0/../../../../xtensa-esp32-elf/bin/ld: missing --end-group; added as last command line option
RAM:   [=         ]  10.6% (used 34632 bytes from 327680 bytes)
Flash: [========  ]  79.8% (used 1463813 bytes from 1835008 bytes)
Building .pioenvs/study-atom-echo/firmware.bin
Creating esp32 image...
Successfully created esp32 image.
esp32_create_combined_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
Wrote 0x176fb0 bytes to file /home/noodles/esphome-atom-echo/.esphome/build/study-atom-echo/.pioenvs/study-atom-echo/firmware.factory.bin, ready to flash to offset 0x0
esp32_copy_ota_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
==================================================================================== [SUCCESS] Took 130.57 seconds ====================================================================================
INFO Successfully compiled program.
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome upload --device /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0 assistant.yaml 
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
…
INFO Upload with baud rate 460800 failed. Trying again with baud rate 115200.
esptool.py v4.7.0
Serial port /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
Connecting....
Chip is ESP32-PICO-D4 (revision v1.1)
Features: WiFi, BT, Dual Core, 240MHz, Embedded Flash, VRef calibration in efuse, Coding Scheme None
Crystal is 40MHz
MAC: 64:b7:08:8a:1b:c0
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Auto-detected Flash size: 4MB
Flash will be erased from 0x00010000 to 0x00176fff...
Flash will be erased from 0x00001000 to 0x00007fff...
Flash will be erased from 0x00008000 to 0x00008fff...
Flash will be erased from 0x00009000 to 0x0000afff...
Compressed 1470384 bytes to 914252...
Wrote 1470384 bytes (914252 compressed) at 0x00010000 in 82.0 seconds (effective 143.5 kbit/s)...
Hash of data verified.
Compressed 25632 bytes to 16088...
Wrote 25632 bytes (16088 compressed) at 0x00001000 in 1.8 seconds (effective 113.1 kbit/s)...
Hash of data verified.
Compressed 3072 bytes to 134...
Wrote 3072 bytes (134 compressed) at 0x00008000 in 0.1 seconds (effective 383.7 kbit/s)...
Hash of data verified.
Compressed 8192 bytes to 31...
Wrote 8192 bytes (31 compressed) at 0x00009000 in 0.1 seconds (effective 813.5 kbit/s)...
Hash of data verified.

Leaving...
Hard resetting via RTS pin...
INFO Successfully uploaded program.


And then you can watch it boot (this is mine already configured up in Home Assistant):

Watching the ATOM Echo boot
$ picocom --quiet --imap lfcrlf --baud 115200 /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
I (29) boot: ESP-IDF 4.4.8 2nd stage bootloader
I (29) boot: compile time 17:31:08
I (29) boot: Multicore bootloader
I (32) boot: chip revision: v1.1
I (36) boot.esp32: SPI Speed      : 40MHz
I (40) boot.esp32: SPI Mode       : DIO
I (45) boot.esp32: SPI Flash Size : 4MB
I (49) boot: Enabling RNG early entropy source...
I (55) boot: Partition Table:
I (58) boot: ## Label            Usage          Type ST Offset   Length
I (66) boot:  0 otadata          OTA data         01 00 00009000 00002000
I (73) boot:  1 phy_init         RF data          01 01 0000b000 00001000
I (81) boot:  2 app0             OTA app          00 10 00010000 001c0000
I (88) boot:  3 app1             OTA app          00 11 001d0000 001c0000
I (96) boot:  4 nvs              WiFi data        01 02 00390000 0006d000
I (103) boot: End of partition table
I (107) esp_image: segment 0: paddr=00010020 vaddr=3f400020 size=58974h (362868) map
I (247) esp_image: segment 1: paddr=0006899c vaddr=3ffb0000 size=03400h ( 13312) load
I (253) esp_image: segment 2: paddr=0006bda4 vaddr=40080000 size=04274h ( 17012) load
I (260) esp_image: segment 3: paddr=00070020 vaddr=400d0020 size=f5cb8h (1006776) map
I (626) esp_image: segment 4: paddr=00165ce0 vaddr=40084274 size=112ach ( 70316) load
I (665) boot: Loaded app from partition at offset 0x10000
I (665) boot: Disabling RNG early entropy source...
I (677) cpu_start: Multicore app
I (677) cpu_start: Pro cpu up.
I (677) cpu_start: Starting app cpu, entry point is 0x400825c8
I (0) cpu_start: App cpu up.
I (695) cpu_start: Pro cpu start user code
I (695) cpu_start: cpu freq: 160000000
I (695) cpu_start: Application information:
I (700) cpu_start: Project name:     study-atom-echo
I (705) cpu_start: App version:      2024.12.4
I (710) cpu_start: Compile time:     Apr 18 2025 17:29:39
I (716) cpu_start: ELF file SHA256:  1db4989a56c6c930...
I (722) cpu_start: ESP-IDF:          4.4.8
I (727) cpu_start: Min chip rev:     v0.0
I (732) cpu_start: Max chip rev:     v3.99 
I (737) cpu_start: Chip rev:         v1.1
I (742) heap_init: Initializing. RAM available for dynamic allocation:
I (749) heap_init: At 3FFAE6E0 len 00001920 (6 KiB): DRAM
I (755) heap_init: At 3FFB8748 len 000278B8 (158 KiB): DRAM
I (761) heap_init: At 3FFE0440 len 00003AE0 (14 KiB): D/IRAM
I (767) heap_init: At 3FFE4350 len 0001BCB0 (111 KiB): D/IRAM
I (774) heap_init: At 40095520 len 0000AAE0 (42 KiB): IRAM
I (781) spi_flash: detected chip: gd
I (784) spi_flash: flash io: dio
I (790) cpu_start: Starting scheduler on PRO CPU.
I (0) cpu_start: Starting scheduler on APP CPU.
[I][logger:171]: Log initialized
[C][safe_mode:079]: There have been 0 suspected unsuccessful boot attempts
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 0 cached, 1 written, 0 failed
[I][app:029]: Running through setup()...
[C][esp32_rmt_led_strip:021]: Setting up ESP32 LED Strip...
[D][template.select:014]: Setting up Template Select
[D][template.select:023]: State from initial (could not load stored index): On device
[D][select:015]: 'Wake word engine location': Sending state On device (index 1)
[D][esp-idf:000]: I (100) gpio: GPIO[39]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 

[D][binary_sensor:034]: 'Button': Sending initial state OFF
[C][light:021]: Setting up light 'M5Stack Atom Echo 8a1bc0'...
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:041]:   Color mode: RGB
[D][template.switch:046]:   Restored state ON
[D][switch:012]: 'Use listen light' Turning ON.
[D][switch:055]: 'Use listen light': Sending state ON
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:047]:   State: ON
[D][light:051]:   Brightness: 60%
[D][light:059]:   Red: 100%, Green: 89%, Blue: 71%
[D][template.switch:046]:   Restored state OFF
[D][switch:016]: 'timer_ringing' Turning OFF.
[D][switch:055]: 'timer_ringing': Sending state OFF
[C][i2s_audio:028]: Setting up I2S Audio...
[C][i2s_audio.microphone:018]: Setting up I2S Audio Microphone...
[C][i2s_audio.speaker:096]: Setting up I2S Audio Speaker...
[C][wifi:048]: Setting up WiFi...
[D][esp-idf:000]: I (206) wifi:
[D][esp-idf:000]: wifi driver task: 3ffc8544, prio:23, stack:6656, core=0
[D][esp-idf:000]: 

[D][esp-idf:000][wifi]: I (1238) system_api: Base MAC address is not set

[D][esp-idf:000][wifi]: I (1239) system_api: read default base MAC address from EFUSE

[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi firmware version: ff661c3
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi certification version: v7.0
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1286) wifi:
[D][esp-idf:000][wifi]: config NVS flash: enabled
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1297) wifi:
[D][esp-idf:000][wifi]: config nano formating: disabled
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1317) wifi:
[D][esp-idf:000][wifi]: Init data frame dynamic rx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1338) wifi:
[D][esp-idf:000][wifi]: Init static rx mgmt buffer num: 5
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1348) wifi:
[D][esp-idf:000][wifi]: Init management short buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1368) wifi:
[D][esp-idf:000][wifi]: Init dynamic tx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1389) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer size: 1600
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1399) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer num: 10
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1419) wifi:
[D][esp-idf:000][wifi]: Init dynamic rx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000]: I (1441) wifi_init: rx ba win: 6

[D][esp-idf:000]: I (1441) wifi_init: tcpip mbox: 32

[D][esp-idf:000]: I (1450) wifi_init: udp mbox: 6

[D][esp-idf:000]: I (1450) wifi_init: tcp mbox: 6

[D][esp-idf:000]: I (1460) wifi_init: tcp tx win: 5760

[D][esp-idf:000]: I (1471) wifi_init: tcp rx win: 5760

[D][esp-idf:000]: I (1481) wifi_init: tcp mss: 1440

[D][esp-idf:000]: I (1481) wifi_init: WiFi IRAM OP enabled

[D][esp-idf:000]: I (1491) wifi_init: WiFi RX IRAM OP enabled

[C][wifi:061]: Starting WiFi...
[C][wifi:062]:   Local MAC: 64:B7:08:8A:1B:C0
[D][esp-idf:000][wifi]: I (1513) phy_init: phy_version 4791,2c4672b,Dec 20 2023,16:06:06

[D][esp-idf:000][wifi]: I (1599) wifi:
[D][esp-idf:000][wifi]: mode : sta (64:b7:08:8a:1b:c0)
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1600) wifi:
[D][esp-idf:000][wifi]: enable tsf
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1605) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1

[D][esp-idf:000][wifi]: 

[D][wifi:482]: Starting scan...
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 1 cached, 0 written, 0 failed
[W][micro_wake_word:151]: Wake word detection can't start as the component hasn't been setup yet
[D][esp-idf:000][wifi]: I (1646) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1

[D][esp-idf:000][wifi]: 

[W][component:157]: Component wifi set Warning flag: scanning for networks
…
[I][wifi:617]: WiFi Connected!
…
[D][wifi:626]: Disabling AP...
[C][api:026]: Setting up Home Assistant API server...
[C][micro_wake_word:062]: Setting up microWakeWord...
[C][micro_wake_word:069]: Micro Wake Word initialized
[I][app:062]: setup() finished successfully!
[W][component:170]: Component wifi cleared Warning flag
[W][component:157]: Component api set Warning flag: unspecified
[I][app:100]: ESPHome version 2024.12.4 compiled on Apr 18 2025, 17:29:39
…
[C][logger:185]: Logger:
[C][logger:186]:   Level: DEBUG
[C][logger:188]:   Log Baud Rate: 115200
[C][logger:189]:   Hardware UART: UART0
[C][esp32_rmt_led_strip:187]: ESP32 RMT LED Strip:
[C][esp32_rmt_led_strip:188]:   Pin: 27
[C][esp32_rmt_led_strip:189]:   Channel: 0
[C][esp32_rmt_led_strip:214]:   RGB Order: GRB
[C][esp32_rmt_led_strip:215]:   Max refresh rate: 0
[C][esp32_rmt_led_strip:216]:   Number of LEDs: 1
[C][template.select:065]: Template Select 'Wake word engine location'
[C][template.select:066]:   Update Interval: 60.0s
[C][template.select:069]:   Optimistic: YES
[C][template.select:070]:   Initial Option: On device
[C][template.select:071]:   Restore Value: YES
[C][gpio.binary_sensor:015]: GPIO Binary Sensor 'Button'
[C][gpio.binary_sensor:016]:   Pin: GPIO39
[C][light:092]: Light 'M5Stack Atom Echo 8a1bc0'
[C][light:094]:   Default Transition Length: 0.0s
[C][light:095]:   Gamma Correct: 2.80
[C][template.switch:068]: Template Switch 'Use listen light'
[C][template.switch:091]:   Restore Mode: restore defaults to ON
[C][template.switch:057]:   Optimistic: YES
[C][template.switch:068]: Template Switch 'timer_ringing'
[C][template.switch:091]:   Restore Mode: always OFF
[C][template.switch:057]:   Optimistic: YES
[C][factory_reset.button:011]: Factory Reset Button 'Factory reset'
[C][factory_reset.button:011]:   Icon: 'mdi:restart-alert'
[C][captive_portal:089]: Captive Portal:
[C][mdns:116]: mDNS:
[C][mdns:117]:   Hostname: study-atom-echo-8a1bc0
[C][esphome.ota:073]: Over-The-Air updates:
[C][esphome.ota:074]:   Address: study-atom-echo.local:3232
[C][esphome.ota:075]:   Version: 2
[C][esphome.ota:078]:   Password configured
[C][safe_mode:018]: Safe Mode:
[C][safe_mode:020]:   Boot considered successful after 60 seconds
[C][safe_mode:021]:   Invoke after 10 boot attempts
[C][safe_mode:023]:   Remain in safe mode for 300 seconds
[C][api:140]: API Server:
[C][api:141]:   Address: study-atom-echo.local:6053
[C][api:143]:   Using noise encryption: YES
[C][micro_wake_word:051]: microWakeWord:
[C][micro_wake_word:052]:   models:
[C][micro_wake_word:015]:     - Wake Word: Hey Jarvis
[C][micro_wake_word:016]:       Probability cutoff: 0.970
[C][micro_wake_word:017]:       Sliding window size: 5
[C][micro_wake_word:021]:     - VAD Model
[C][micro_wake_word:022]:       Probability cutoff: 0.500
[C][micro_wake_word:023]:       Sliding window size: 5

[D][api:103]: Accepted 192.168.39.6
[W][component:170]: Component api cleared Warning flag
[W][component:237]: Component api took a long time for an operation (58 ms).
[W][component:238]: Components should block for at most 30 ms.
[D][api.connection:1446]: Home Assistant 2024.3.3 (192.168.39.6): Connected successfully
[D][ring_buffer:034]: Created ring buffer with size 2048
[D][micro_wake_word:399]: Resetting buffers and probabilities
[D][micro_wake_word:195]: State changed from IDLE to START_MICROPHONE
[D][micro_wake_word:107]: Starting Microphone
[D][micro_wake_word:195]: State changed from START_MICROPHONE to STARTING_MICROPHONE
[D][esp-idf:000]: I (11279) I2S: DMA Malloc info, datalen=blocksize=1024, dma_buf_count=4

[D][micro_wake_word:195]: State changed from STARTING_MICROPHONE to DETECTING_WAKE_WORD


That’s enough to get a voice satellite that can be configured up in Home Assistant; you’ll need the ESPHome Integration added, then for the noise_psk key you use the same string as I have under api/encryption/key in my diff above (obviously do your own, I used dd if=/dev/urandom bs=32 count=1 | base64 to generate mine).

If you’re like me and a compulsive VLANer and firewaller even within your own network then you need to allow Home Assistant to connect on TCP port 6053 to the ATOM Echo, and also allow access to/from UDP port 6055 on the Echo (it’ll send audio from that port to Home Assistant, then receive back audio to the same port).
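
As a sketch of what that might look like with nftables (the inet filter table, forward chain and the $HA_IP/$ECHO_IP placeholders are assumptions about your layout, not taken from my config):

nft add rule inet filter forward ip saddr $HA_IP ip daddr $ECHO_IP tcp dport 6053 accept
nft add rule inet filter forward ip saddr $HA_IP ip daddr $ECHO_IP udp dport 6055 accept
nft add rule inet filter forward ip saddr $ECHO_IP ip daddr $HA_IP udp sport 6055 accept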

At this point you can now shout “Hey Jarvis, what time is it?” at the Echo, and the white light will turn flashing blue (indicating it’s heard the wake word). Which means we’re ready to teach Home Assistant how to do something with the incoming audio.

24 April, 2025 06:34PM

April 23, 2025

Dirk Eddelbuettel

qlcal 0.0.15 on CRAN: Calendar Updates

The fifteenth release of the qlcal package arrived at CRAN today, following the QuantLib 1.38 release this morning.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, its complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.

This release synchronizes qlcal with the QuantLib release 1.38.

Changes in version 0.0.15 (2025-04-23)

  • Synchronized with QuantLib 1.38 released today

  • Calendar updates for China, Hongkong, Thailand

  • Minor continuous integration update

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

23 April, 2025 06:12PM

Thomas Lange

FAI 6.4 and new ISO images available

The new FAI release 6.4 comes with some nice new features.

It now supports installing the Xfce edition of Linux Mint 22.1 'Xia'. There's now an additional Linux Mint ISO [1] which does an unattended Linux Mint installation via FAI and does not need a network connection because all packages are available on the ISO.

The package_config configurations now support arbitrary boolean expressions with FAI classes like this:

PACKAGES install UBUNTU && XORG && ! MINT

If you use the command ifclass in customization scripts you can now also use these expressions.
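
A minimal sketch of how that might look inside a customization script, assuming ifclass accepts the boolean expression as a single quoted argument just like package_config does:

#! /bin/bash
# hypothetical FAI customization-script snippet
if ifclass "UBUNTU && XORG && ! MINT"; then
    echo "Xorg-related customization for Ubuntu, skipped on Mint"
fi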

The tool fai-kvm for starting a KVM virtual machine now uses UEFI variables if the VM is started in a UEFI environment, so boot settings are preserved across reboots.

For the installation of Rocky Linux and AlmaLinux in a UEFI environment some configuration files were added.

New ISO images [2] are available, but it may take some time until the FAIme service [3] supports customized Linux Mint images.

23 April, 2025 01:21PM

Michael Prokop

Lessons learned from running an open source project for 20 years @ GLT25

Time flies by so quickly, it’s >20 years since I started the Grml project.

I’m giving a (german) talk about the lessons learned from 20 years of running the Grml project this Saturday, 2025-04-26 at the Grazer Linuxtage (Graz/Austria). Would be great to see you there!

23 April, 2025 06:11AM by mika

Russell Coker

Last Post About the Yoga Gen3

Just over a year ago I bought myself a Thinkpad Yoga Gen 3 [1]. That is a nice machine and I really enjoyed using it. But a few months ago it started crashing and would often play some music on boot. The music is a diagnostic code that can be interpreted by the Lenovo Android app. Often the music translated to “code 0284 TCG-compliant functionality-related error” which suggests a motherboard problem. So I bought a new motherboard.

The system still crashes with the new motherboard. It seems to only crash when on battery, which suggests that a power issue might be causing the crashes. I configured the BIOS to disable the TPM, which avoided the TCG messages and tunes on boot, but it still crashes.

An additional problem is that the design of the Yoga series is that the keys retract when the system is opened past 180 degrees and when the lid is closed. After the motherboard replacement about half the keys don’t retract which means that they will damage the screen more when the lid is closed (the screen was already damaged from the keys when I bought it).

I think that spending more money on trying to fix this would be a waste. So I’ll use it as a test machine and I might give it to a relative who needs a portable computer to be used when on power only.

For the moment I’m back to the Thinkpad X1 Carbon Gen 5 [2]. Hopefully the latest kernel changes to zswap and the changes to Chrome to suspend unused tabs will make up for more RAM use in other areas. Currently it seems to be giving decent performance with 8G of RAM and I usually don’t notice any difference from the Yoga Gen 3.

Now I’m considering getting a Thinkpad X1 Carbon Extreme with a 4K display. But they seem a bit expensive at the moment. Currently there’s only one on ebay Australia for $1200ono.

23 April, 2025 05:11AM by etbe

Dirk Eddelbuettel

RInside 0.2.19 on CRAN: Mostly Maintenance

A new release 0.2.19 of RInside arrived on CRAN and in Debian today. RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by Rcpp.

This release fixes a minor bug that got tickled (after a decade and a half of RInside) by environment variables (which we parse at compile time and encode in a C/C++ header file as constants) built using double quotes. CRAN currently needs that on one or two platforms, and RInside was erroring. This has been addressed. In the two years since the last release we also received two kind PRs updating the Qt examples to Qt6. And as always we also updated a few other things around the package.

The list of changes since the last release:

Changes in RInside version 0.2.19 (2025-04-22)

  • The qt example now supports Qt6 (Joris Goosen in #54 closing #53)

  • CMake support was refined for more recent versions (Joris Goosen in #55)

  • The sandboxed-server example now states more clearly that RINSIDE_CALLBACKS needs to be defined

  • More routine update to package and continuous integration.

  • Some now-obsolete checks for C++11 have been removed

  • When parsing environment variables, use of double quotes is now supported

My CRANberries also provide a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page, or to issues tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

23 April, 2025 12:40AM

April 22, 2025

Melissa Wen

2025 FOSDEM: Don't let your motivation go, save time with kworkflow

2025 was my first year at FOSDEM, and I can say it was an incredible experience where I met many colleagues from Igalia who live around the world, and also many friends from the Linux display stack who are part of my daily work and contributions to DRM/KMS. In addition, I met new faces and recognized others with whom I had interacted on some online forums and we had good and long conversations.

During FOSDEM 2025 I had the opportunity to present about kworkflow in the kernel devroom. Kworkflow is a set of tools that help kernel developers with their routine tasks and it is the tool I use for my development tasks. In short, every contribution I make to the Linux kernel is assisted by kworkflow.

The goal of my presentation was to spread the word about kworkflow. I aimed to show how the suite consolidates good practices and recommendations of the kernel workflow in short commands. These commands are easily configurable and memorized for your current work setup, or for your multiple setups.

For me, Kworkflow is a tool that accommodates the needs of different agents in the Linux kernel community. Active developers and maintainers are the main target audience for kworkflow, but it is also inviting for users and user-space developers who just want to report a problem and validate a solution without needing to know every detail of the kernel development workflow.

Something I didn’t emphasize during the presentation, but would like to correct here, is that the main author and developer of kworkflow is my colleague at Igalia, Rodrigo Siqueira. To be honest, my contributions are mostly requesting and validating new features, fixing bugs, and sharing scripts to increase feature coverage.

So, the video and slide deck of my FOSDEM presentation are available for download here.

And, as usual, you will find in this blog post the script of this presentation and more detailed explanation of the demo presented there.


Kworkflow at FOSDEM 2025: Speaker Notes and Demo

Hi, I’m Melissa, a GPU kernel driver developer at Igalia and today I’ll be giving a very inclusive talk to not let your motivation go by saving time with kworkflow.

So, you’re a kernel developer, or you want to be a kernel developer, or you don’t want to be a kernel developer. But you’re all united by a single need: you need to validate a custom kernel with just one change, and you need to verify that it fixes or improves something in the kernel.

And that’s a given change for a given distribution, or for a given device, or for a given subsystem…

Look to this diagram and try to figure out the number of subsystems and related work trees you can handle in the kernel.

So, whether you are a kernel developer or not, at some point you may come across this type of situation:

There is a userspace developer who wants to report a kernel issue and says:

  • Oh, there is a problem in your driver that can only be reproduced by running this specific distribution. And the kernel developer asks:
  • Oh, have you checked if this issue is still present in the latest kernel version of this branch?

But the userspace developer has never compiled and installed a custom kernel before. So they have to read a lot of tutorials and kernel documentation to create a kernel compilation and deployment script. Finally, the reporter managed to compile and deploy a custom kernel and reports:

  • Sorry for the delay, this is the first time I have installed a custom kernel. I am not sure if I did it right, but the issue is still present in the kernel of the branch you pointed out.

And then, the kernel developer needs to reproduce this issue on their side, but they have never worked with this distribution, so they just created a new script, but the same script created by the reporter.

What’s the problem of this situation? The problem is that you keep creating new scripts!

Every time you change distribution, architecture, hardware, or project - even within the same company - the development setup may change, and you create yet another script for your new kernel development workflow!

You know, you have a lot of babies, you have a collection of “my precious scripts”, like Sméagol (Lord of the Rings) with the precious ring.

Instead of creating and accumulating scripts, save yourself time with kworkflow. Here is a typical script that many of you may have. This is a Raspberry Pi 4 script and contains everything you need to memorize to compile and deploy a kernel on your Raspberry Pi 4.

With kworkflow, you only need to memorize two commands, and those commands are not specific to the Raspberry Pi. They are the same commands for different architectures, kernel configurations and target devices.

What is kworkflow?

Kworkflow is a collection of tools and software combined to:

  • Optimize Linux kernel development workflow.
  • Reduce time spent on repetitive tasks, since we are spending our lives compiling kernels.
  • Standardize best practices.
  • Ensure reliable data exchange across the kernel workflow. For example: two people describe the same setup but are not seeing the same thing; kworkflow can ensure both actually have the same kernel, modules and options enabled.

I don’t know if you will get this analogy, but kworkflow is for me a megazord of scripts. You are combining all of your scripts to create a very powerful tool.

What are the main features of kworkflow?

There are many, but these are the most important for me:

  • Build & deploy custom kernels across devices & distros.
  • Handle cross-compilation seamlessly.
  • Manage multiple architecture, settings and target devices in the same work tree.
  • Organize kernel configuration files.
  • Facilitate remote debugging & code inspection.
  • Standardize Linux kernel patch submission guidelines. You don’t need to double-check the documentation, and Greg doesn’t need to tell you that you are not following the Linux kernel guidelines.
  • Upcoming: Interface to bookmark, apply and “reviewed-by” patches from mailing lists (lore.kernel.org).

This is the list of commands you can run with kworkflow. The first subset is to configure your tool for various situations you may face in your daily tasks.

# Manage kw and kw configurations
kw init             - Initialize kw config file
kw self-update (u)  - Update kw
kw config (g)       - Manage kw configuration files

The second subset is to build and deploy custom kernels.

# Build & Deploy custom kernels
kw kernel-config-manager (k) - Manage kernel .config files
kw build (b)        - Build kernel
kw deploy (d)       - Deploy kernel image (local/remote)
kw bd               - Build and deploy kernel

We have some tools to manage and interact with target machines.

# Manage and interact with target machines
kw ssh (s)          - SSH support
kw remote (r)       - Manage machines available via ssh
kw vm               - QEMU support

To inspect and debug a kernel.

# Inspect and debug
kw device           - Show basic hardware information
kw explore (e)      - Explore string patterns in the work tree and git logs
kw debug            - Linux kernel debug utilities
kw drm              - Set of commands to work with DRM drivers

To automate best practices for patch submission (code style, maintainers, and the correct list of recipients and mailing lists for a change), ensuring we send the patch to whoever is interested in it.

# Automatize best practices for patch submission
kw codestyle (c)    - Check code style
kw maintainers (m)  - Get maintainers/mailing list
kw send-patch       - Send patches via email

And the last one, the upcoming patch hub.

# Upcoming
kw patch-hub        - Interact with patches (lore.kernel.org)

How can you save time with Kworkflow?

So how can you save time building and deploying a custom kernel?

First, you need a .config file.

  • Without kworkflow: You may be manually extracting and managing .config files from different targets and saving them with different suffixes to link the kernel to the target device or distribution, or any descriptive suffix to help identify which is which. Or even copying and pasting from somewhere.
  • With kworkflow: you can use the kernel-config-manager command, or simply kw k, to store, describe and retrieve a specific .config file very easily, according to your current needs.

Then you want to build the kernel:

  • Without kworkflow: You are probably now memorizing a combination of commands and options.
  • With kworkflow: you just need kw b (kw build) to build the kernel with the correct settings for cross-compilation, compilation warnings, cflags, etc. It also shows some information about the kernel, like number of modules.

Finally, to deploy the kernel in a target machine.

  • Without kworkflow: You might be doing things like: SSH connecting to the remote machine, copying and removing files according to distributions and architecture, and manually updating the bootloader for the target distribution.
  • With kworkflow: you just need kw d, which does a lot of things for you, like: deploying the kernel, preparing the target machine for the new installation, listing available kernels and uninstalling them, creating a tarball, rebooting the machine after deploying the kernel, etc.

You can also save time on debugging kernels locally or remotely.

  • Without kworkflow: you do: SSH, manual setup and trace enablement, copy & paste of logs.
  • With kworkflow: more straightforward access to debug utilities: events, trace, dmesg.

You can save time on managing multiple kernel images in the same work tree.

  • Without kworkflow: you may be cloning the same repository multiple times so you don’t lose compiled files when changing kernel configuration or compilation options, and manually managing build and deployment scripts.
  • With kworkflow: you can use kw env to isolate multiple contexts in the same worktree as environments, so you can keep different configurations in the same worktree and switch between them easily without losing anything from the last time you worked in a specific context.

Finally, you can save time when submitting kernel patches. In kworkflow, you can find everything you need to wrap your changes in patch format and submit them to the right list of recipients, those who can review, comment on, and accept your changes.

This is a demo that the lead developer of the kw patch-hub feature sent me. With this feature, you will be able to check out a series on a specific mailing list, bookmark those patches in the kernel for validation, and when you are satisfied with the proposed changes, you can automatically submit a reviewed-by for that whole series to the mailing list.


Demo

Now a demo of how to use kw environment to deal with different devices, architectures and distributions in the same work tree without losing compiled files, build and deploy settings, .config file, remote access configuration and other settings specific for those three devices that I have.

Setup

  • Three devices:
    • laptop (debian x86 intel local)
    • SteamDeck (steamos x86 amd remote)
    • RaspberryPi 4 (raspbian arm64 broadcomm remote)
  • Goal: To validate a change on DRM/VKMS using a single kernel tree.
  • Kworkflow commands:
    • kw env
    • kw d
    • kw bd
    • kw device
    • kw debug
    • kw drm

Demo script

In the same terminal and worktree.

First target device: Laptop (debian|x86|intel|local)
$ kw env --list # list environments available in this work tree
$ kw env --use LOCAL # select the environment of local machine (laptop) to use: loading pre-compiled files, kernel and kworkflow settings.
$ kw device # show device information
$ sudo modinfo vkms # show VKMS module information before applying kernel changes.
$ <open VKMS file and change module info>
$ kw bd # compile and install kernel with the given change
$ sudo modinfo vkms # show VKMS module information after kernel changes.
$ git checkout -- drivers
Second target device: RaspberryPi 4 (raspbian|arm64|broadcomm|remote)
$ kw env --use RPI_64 # move to the environment for a different target device.
$ kw device # show device information and kernel image name
$ kw drm --gui-off-after-reboot # set the system to not load graphical layer after reboot
$ kw b # build the kernel with the VKMS change
$ kw d --reboot # deploy the custom kernel in a Raspberry Pi 4 with Raspbian 64, and reboot
$ kw s # connect with the target machine via ssh and check the kernel image name
$ exit
Third target device: SteamDeck (steamos|x86|amd|remote)
$ kw env --use STEAMDECK # move to the environment for a different target device
$ kw device # show device information
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output
$ kw debug --dmesg --follow --history --cmd="modprobe -r vkms" # run a command and show the related dmesg output
$ <add a printk with a random msg to appear on dmesg log>
$ kw bd # deploy and install custom kernel to the target device
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output after build and deploy the kernel change

Q&A

Most of the questions raised at the end of the presentation were actually suggestions and additions of new features to kworkflow.

The first participant, who is also a kernel maintainer, asked about two features: (1) automating the retrieval of patches from patchwork (or lore) and triggering the process of building, deploying and validating them using the existing workflow, and (2) bisecting support. Both are very interesting features. The first one fits well in the patch-hub subproject, which is under development, and I had actually made a similar request a couple of weeks before the talk. The second is an already existing request in the kworkflow GitHub project.

Another request was to use kexec and avoid rebooting the kernel for testing. Reviewing my presentation I realized I wasn’t very clear that kworkflow doesn’t support kexec. As I replied, what it does is install the modules, which you can load/unload for validation; for built-in parts, you need to reboot into the new kernel.

Another two questions: one about Android Debug Bridge (ADB) support instead of SSH, and another about support for alternative ways of booting when the custom kernel ends up broken and you only have one kernel image there. Kworkflow doesn't manage this yet, but I agree it would be a very useful feature for embedded devices. On the Raspberry Pi 4, kworkflow mitigates this issue by preserving the distro kernel image and using the config.txt file to set a custom kernel for booting. There is no ADB support either, and as I don't currently see users of KW working with Android, I don't think we will have this support any time soon, unless we find new volunteers and increase the pool of contributors.

The last two questions were regarding the status of b4 integration, which is under development, and other debugging features that the tool doesn't support yet.

Finally, when Andrea and I were changing turns on the stage, he suggested adding support for virtme-ng to kworkflow, so I opened an issue to track this feature request in the project's GitHub.

With all these questions and requests, I could see the general need for a tool that integrates the variety of kernel developer workflows, as proposed by kworkflow. There are also still many cases left for kworkflow to cover.

Despite the high demand, this is a completely voluntary project and it is unlikely that we will be able to meet these needs given the limited resources. We will keep trying our best in the hope we can increase the pool of users and contributors too.

22 April, 2025 07:30PM

Joey Hess

offgrid electric car

Eight months ago I came up my rocky driveway in an electric car, with the back full of solar panel mounting rails. I didn't know how I'd manage to keep it charged. I got the car earlier than planned, with my offgrid solar upgrade only beginning. There's no nearby EV charger, and winter was coming, bringing less solar power every day. Still, it was the right time to take a leap to offgrid EV life.

My existing 1 kilowatt solar array could only charge the car for 5 miles of range on a good day. Here's my first try at charging the car offgrid:

first feeble charging offgrid

It was not worth charging the car that way: the house battery tended to get drained while doing it, and adding cycles to that battery is not desirable. So that was only a proof of concept; I knew I'd need to upgrade.

My goal with the upgrade was to charge the car directly from the sun, even when it was cloudy, using the house battery only to skate over brief darker periods (like a thunderstorm). By mid October, I had enough solar installed to do that (5 kilowatts).

me standing in front of solar fence

first charging from solar fence

Using this, in 2 days I charged the car up from 57% to 82%, and took off on a celebratory road trip to Niagara Falls, where I charged the car from hydro power from a dam my grandfather had engineered.

When I got home, it was November. Days were getting ever shorter. My solar upgrade was only 1/3rd complete and could charge the car 30-some miles per day, but only on a good day, and weather was getting worse. I came back with a low state of charge (both car and me), and needed to get back to full in time for my Thanksgiving trip at the end of the month. I decided to limit my trips to town.

charging up gradually through the month of November

This kind of medium term planning about car travel was new to me. But not too unusual for offgrid living. You look at the weather forecast and make some rough plans, and get to feel connected to the natural world a bit more.

December is the real test for offgrid solar, and honestly this was a bit rough, with a road trip planned for the end of the month. I did the usual holiday stuff but otherwise holed up at home a bit more than I usually would. Charging was limited and the cold made it charge less efficiently.

bleak December charging

Still, I was busy installing more solar panels, and by winter solstice, was back to charging 30 miles on a good day.

Of course, from there on out things improved. In January and February I was able to charge up easily enough for my usual trips despite the cold. By March the car was often getting full before I needed to go anywhere, and I was doing long round trips without bothering to fast charge along the way, coming home low, knowing even cloudy days would let it charge up enough.

That brings me up to today. The car is 80% full and heading up toward 100% for a long trip on Friday. Despite the sky being milky white today with no visible sun, there's plenty of power to absorb, and the car charger turned on at 11 am with the house battery already full.

My solar upgrade is only 2/3rds complete, and I also have not yet installed my inverter upgrade, so the car can currently only charge at 9 amps despite much more solar power often being available. So I'm looking forward to how next December goes with my full planned solar array and faster charging.

But first, a summer where I expect the car will mostly be charged up and ready to go at all times, and the only car expense will be fast charging on road trips!


By the way, the code I've written to automate offgrid charging that runs only when there's enough solar power is here.

And here are the charging graphs for the other months. All told, it's charged 475 kWh offgrid, enough to drive more than 1500 miles.

January
February
March
April

22 April, 2025 04:45PM

April 21, 2025

Gunnar Wolf

Want your title? Here, have some XML!

As it seems ChatGPT would phrase it… Sweet Mother of God!

I received a mail from my University’s Scholar Administrative division informing me that my doctoral degree has been granted and issued (yayyyyyy! 👨‍🎓), and that before printing the corresponding documents, I should review that all of the information is correct.

Attached to the mail, I found they sent me a very friendly and welcoming XML file, which stated it followed the schema at https://www.siged.sep.gob.mx/titulos/schema.xsd… Wait! There is nothing to be found at that address! Well, never mind, I can make sense out of an XML document, right?

XML sample

Of course, who needs an XSD schema? Everybody can parse through the data in an XML document, right? Of course, it took me close to five seconds to spot a minor mistake (in the finish and start dates of my previous degree), for which I mailed the relevant address…

But… What happens if I try to understand the world as seen by 9.8 out of 10 people getting a title from UNAM, in all of its different disciplines (scientific, engineering, humanities…)? Some people will have no clue about what to do with an XML file. Fortunately, the mail has a link to a very useful tutorial (roughly translated by myself):

The attached file has an XML extension, so in order to visualize it, you must open it with a text editor such as Notepad or Sublime Text. In case you have any questions on how to open the file, please refer to the following guide: https://www.dgae.unam.mx/guia_abrir_xml.html

Seriously! Asking people getting a title in just about any area of knowledge to… install Sublime Text to validate the content of an XML file (that includes the oh-so-very-readable signature of some university bureaucrat).

Of course, for many years Mexican people have been getting XML files by mail (for any declared monetary exchange, i.e. buying goods or offering services), but those are always sent together with a rendering of the XML as a personalized PDF. And yes — the PDF is there only to give the human receiving the file an easier time understanding it. Who thought a bare XML was a good idea? 😠

21 April, 2025 06:33PM

Louis-Philippe Véronneau

One last Bookworm for the road — report from the Montreal 2025 BSP

Hello, hello, hello!

This report for the Bug Squashing Party we held in Montreal on March 28-29th is very late ... but better late than never? We're now at our fifth BSP in a row [1], which is both nice and somewhat terrifying.

Have I really been around for five Debian releases already? Geez...

This year, around 13 different people showed up, including some brand new folks! All in all, we ended up working on 77 bugs, 61 of which have since been closed.

This is somewhat skewed by the large number of Lintian bugs I closed by merging and releasing the very many patches submitted by Maytham Alsudany (hello Maytham!), but that was still work :D

For our past few events, we have been renting a space at Ateliers de la transition socio-écologique. This building used to be a nunnery (thus the huge cross on the top floor), but has since been transformed into a multi-faceted project.

A drawing of the building where the BSP was hosted

BSPs are great and this one was no exception. You should try to join an upcoming event or to organise one if you can. It is loads of fun and you will be helping the Debian project release its next stable version sooner!

As always, thanks to Debian for granting us a budget for the food and to rent the venue.

Pictures

Here are a bunch of pictures of the BSP, mixed in with some other pictures I took at this venue during a previous event.

Some of the people present on Friday, in the smaller room we had that day

A picture of a previous event, which includes many of the folks present at the BSP and the larger room we used on Saturday

A sticker on the door of the bathroom with text saying 'All Employees Must Wash Away Sin Before Returning To Work', a tongue-in-cheek reference to the building's previous purpose

A wall with posters for upcoming events

A drawing on one of the single-occupancy rooms in the building, warning people the door can't be opened from the inside (yikes!)

A table at the entrance with many flyers for social and political events


  1. See our previous BSPs in 2017, 2019, 2021 and 2023

21 April, 2025 05:00AM by Louis-Philippe Véronneau

April 20, 2025

Russ Allbery

Review: Up the Down Staircase

Review: Up the Down Staircase, by Bel Kaufman

Publisher: Vintage Books
Copyright: 1964, 1991, 2019
Printing: 2019
ISBN: 0-525-56566-3
Format: Kindle
Pages: 360

Up the Down Staircase is a novel (in an unconventional format, which I'll describe in a moment) about the experiences of a new teacher in a fictional New York City high school. It was a massive best-seller in the 1960s, including a 1967 movie, but seems to have dropped out of the public discussion. I read it from the library sometime in the late 1980s or early 1990s and have thought about it periodically ever since. It was Bel Kaufman's first novel.

Sylvia Barrett is a new graduate with a master's degree in English, where she specialized in Chaucer. As Up the Down Staircase opens, it is her first day as an English teacher in Calvin Coolidge High School. As she says in a letter to a college friend:

What I really had in mind was to do a little teaching. "And gladly wolde he lerne, and gladly teche" — like Chaucer's Clerke of Oxenford. I had come eager to share all I know and feel; to imbue the young with a love for their language and literature; to instruct and to inspire. What happened in real life (when I had asked why they were taking English, a boy said: "To help us in real life") was something else again, and even if I could describe it, you would think I am exaggerating.

She instead encounters chaos and bureaucracy, broken windows and mindless regulations, a librarian who is so protective of her books that she doesn't let any students touch them, a school guidance counselor who thinks she's Freud, and a principal whose sole interaction with the school is to occasionally float through on a cushion of cliches, dispensing utterly useless wisdom only to vanish again.

I want to take this opportunity to extend a warm welcome to all faculty and staff, and the sincere hope that you have returned from a healthful and fruitful summer vacation with renewed vim and vigor, ready to gird your loins and tackle the many important and vital tasks that lie ahead undaunted. Thank you for your help and cooperation in the past and future.

Maxwell E. Clarke
Principal

In practice, the school is run by James J. McHare, Clarke's administrative assistant, who signs his messages JJ McH, Adm. Asst. and who Sylvia immediately starts calling Admiral Ass. McHare is a micro-managing control freak who spends the book desperately attempting to impose order over school procedures, the teachers, and the students, with very little success. The title of the book comes from one of his detention slips:

Please admit bearer to class—

Detained by me for going Up the Down staircase and subsequent insolence.

JJ McH

The conceit of this book is that, except for the first and last chapters, it consists only of memos, letters, notes, circulars, and other paper detritus, often said to come from Sylvia's wastepaper basket. Sylvia serves as the first-person narrator through her long letters to her college friend, and through shorter but more frequent exchanges via intraschool memo with Beatrice Schachter, another English teacher at the same school, but much of the book lies outside her narration. The reader has to piece together what's happening from the discarded paper of a dysfunctional institution.

Amid the bureaucratic and personal communications, there are frequent chapters with notes from the students, usually from the suggestion box that Sylvia establishes early in the book. These start as chaotic glimpses of often-misspelled wariness or open hostility, but over the course of Up the Down Staircase, some of the students become characters with fragmentary but still visible story arcs. This remains confusing throughout the novel — there are too many students to keep them entirely straight, and several of them use pseudonyms for the suggestion box — but it's the sort of confusion that feels like an intentional authorial choice. It mirrors the difficulty a teacher has in piecing together and remembering the stories of individual students in overstuffed classrooms, even if (like Sylvia and unlike several of her colleagues) the teacher is trying to pay attention.

At the start, Up the Down Staircase reads as mostly-disconnected humor. There is a strong "kids say the darnedest things" vibe, which didn't entirely work for me, but the send-up of chaotic bureaucracy is both more sophisticated and more entertaining. It has the "laugh so that you don't cry" absurdity of a system with insufficient resources, entirely absent management, and colleagues who have let their quirks take over their personalities. Sylvia alternates between incredulity and stubbornness, and I think this book is at its best when it shows the small acts of practical defiance that one uses to carve out space and coherence from mismanaged bureaucracy.

But this book is not just a collection of humorous anecdotes about teaching high school. Sylvia is sincere in her desire to teach, which crystallizes around, but is not limited to, a quixotic attempt to reach one delinquent that everyone else in the school has written off. She slowly finds her footing, she has a few breakthroughs in reaching her students, and the book slowly turns into an earnest portrayal of an attempt to make the system work despite its obvious unfitness for purpose. This part of the book is hard to review. Parts of it worked brilliantly; I could feel myself both adjusting my expectations alongside Sylvia to something less idealistic and also celebrating the rare breakthrough with her. Parts of it were weirdly uncomfortable in ways that I'm not sure I enjoyed. That includes Sylvia's climactic conversation with the boy she's been trying to reach, which was weirdly charged and ambiguous in a way that felt like the author's reach exceeding their grasp.

One thing that didn't help my enjoyment is Sylvia's relationship with Paul Barringer, another of the English teachers and a frustrated novelist and poet. Everyone who works at the school has found their own way to cope with the stress and chaos, and many of the ways that seem humorous turn out to have a deeper logic and even heroism. Paul's, however, is to retreat into indifference and alcohol. He is a believable character who works with Kaufman's themes, but he's also entirely unlikable. I never understood why Sylvia tolerated that creepy asshole, let alone kept having lunch with him. It is clear from the plot of the book that Kaufman at least partially understands Paul's deficiencies, but that did not help me enjoy reading about him.

This is a great example of a book that tried to do something unusual and risky and didn't entirely pull it off. I like books that take a risk, and sometimes Up the Down Staircase is very funny or suddenly insightful in a way that I'm not sure Kaufman could have reached with a more traditional novel. It takes a hard look at what it means to try to make a system work when it's clearly broken and you can't change it, and the way all of the characters arrive at different answers that are much deeper than their initial impressions was subtle and effective. It's the sort of book that sticks in your head, as shown by the fact I bought it on a whim to re-read some 35 years after I first read it. But it's not consistently great. Some parts of it drag, the characters are frustratingly hard to keep track of, and the emotional climax points are odd and unsatisfying, at least to me.

I'm not sure whether to recommend it or not, but it's certainly unusual. I'm glad I read it again, but I probably won't re-read it for another 35 years, at least.

If you are considering getting this book, be aware that it has a lot of drawings and several hand-written letters. The publisher of the edition I read did a reasonably good job formatting this for an ebook, but some of the pages, particularly the hand-written letters, were extremely hard to read on a Kindle. Consider paper, or at least reading on a tablet or computer screen, if you don't want to have to puzzle over low-resolution images.

The 1991 trade paperback had a new introduction by the author, reproduced in the edition I read as an afterword (which is a better choice than an introduction). It is a long and fascinating essay from Kaufman about her experience with the reaction to this book, culminating in a passionate plea for supporting public schools and public school teachers. Kaufman's personal account adds a lot of depth to the story; I highly recommend it.

Content note: Self-harm, plus several scenes that are closely adjacent to student-teacher relationships. Kaufman deals frankly with the problems of mostly-poor high school kids, including sexuality, so be warned that this is not the humorous romp that it might appear on first glance. A couple of the scenes made me uncomfortable; there isn't anything explicit, but the emotional overtones can be pretty disturbing.

Rating: 7 out of 10

20 April, 2025 03:43AM

April 19, 2025

Ahmed Siam

My first post and writing plans

This is my first post in this blog and I think it will be useful to share what I will write about during the next months.

Here are some titles:

  • My Debian experimental internship experience as an intern.
  • Using IRC: What, Why and How.
  • How to internationalize CLI tools written in C++ using ICU4C.

If you are interested in such topics, feel free to subscribe to my RSS feed and/or follow me on any of my social media accounts.

Stay tuned!

19 April, 2025 09:52AM

April 18, 2025

Sven Hoexter

Trixie Upgrade and X11 Clipboard Manager Madness

Due to my own laziness and a few functionality issues, my "for work" laptop is still using a 15+ year old setup with X11 and awesome. Since trixie is now starting its freeze, it's time to update that odd machine as well and look at the fallout. Good news: it's mostly my own resistance to change that required a kick in the back to move on.

Clipboard Manager Madness

For the past decade or so I used parcellite which served me well. Now that is no longer available in trixie and I started to look into one of the dead end streets of X11 related tooling, searching for an alternative.

Parcellite

Seems upstream is doing sporadic fixes, but holds on tight to GTK2. The Debian package was patched to be GTK3 compatible, but has unfixed FTBFS issues with GCC 14.

clipit

Next I checked for a parcellite fork named clipit, and that's when it started to get funky. It's packaged in Debian, QA maintained, and recently received at least two uploads to keep it working. I installed it and found it greets me with a nag screen telling me I should migrate to diodon. The real clipit tool is still shipped as a binary named clipit.real, so if you know that you can still use it. To achieve the nag screen it depends on zenity, and to ease the migration it depends on diodon. Two things I do not really need. Also, the package description prominently mentions that you should not use the package.

diodon

The nag screen of clipit made me look at diodon. It claims it was written for the Ubuntu Unity desktop, and I've no idea how alive and relevant that still is. While there is still something on launchpad, it seems to receive only sporadic commits on github. Not sure if it's dead or just feature complete.

Interim Solution: clipit

Settled with clipit for now, but decided to fork the Debian package to remove the nag screen and the dependency on diodon and zenity (package build). My hope is to convert this last X11 setup to wayland within the lifetime of trixie.

I also contacted the last uploader regarding a removal of the nag screen, who then brought in the last maintainer, the one who added the nag screen. While I first thought clipit was somewhat maintained upstream, Andrej quickly pointed out that this is not really the case. Still, that leaves us in trixie with a rather odd situation. For the second stable release in a row we now ship a package that recommends moving to a different tool while still shipping the original tool. Plus, it's getting patched by some of its users who refuse to migrate to the alternative envisioned by the former maintainer.

VirtualBox and moving to libvirt

I always liked the GUI of VirtualBox, and it really made desktop virtualization easy. But with Linux 6.12, which enables KVM by default, it seems to get even more painful to get it up and running. In the past I just took the latest release from unstable and rebuilt that one on the current stable. Currently the last release in unstable is 7.0.20, while the Linux 6.12 fixes only started to appear in VirtualBox 7.1.4 and later. The good thing is that with virt-manager and the whole libvirt ecosystem there is a good enough replacement available, and it works fine with related tooling like vagrant. There are instructions available on how to set it up. I can only add that it makes sense to export VAGRANT_DEFAULT_PROVIDER=libvirt in your .bashrc to make that provider change permanent.

18 April, 2025 05:00PM

April 17, 2025

Simon Josefsson

Verified Reproducible Tarballs

Remember the XZ Utils backdoor? One factor that enabled the attack was poor auditing of the release tarballs for differences compared to the Git version controlled source code. This proved to be a useful place to distribute malicious data.

The differences between release tarballs and upstream Git sources are typically vendored and generated files. Lots of them. Auditing all source tarballs in a distribution for similar issues is hard and boring work for humans. Wouldn’t it be better if that human auditing time could be spent auditing the actual source code stored in upstream version control instead? That’s where auditing time would help the most.

Are there better ways to address the concern about differences between version control sources and tarball artifacts? Let’s consider some approaches:

  • Stop publishing (or at least stop building from) source tarballs that differ from version control sources.
  • Create recipes for how to derive the published source tarballs from version control sources. Verify that independently from upstream.

While I like the properties of the first solution, and have made effort to support that approach, I don’t think normal source tarballs are going away any time soon. I am concerned that it may not even be a desirable complete solution to this problem. We may need tarballs with pre-generated content in them for various reasons that aren’t entirely clear to us today.

So let’s consider the second approach. It could help while waiting for more experience with the first approach, to see if there are any fundamental problems with it.

How do you know that the XZ release tarballs were actually derived from their version control sources? The same for Gzip? Coreutils? Tar? Sed? Bash? GCC? We don’t know this! I am not aware of any automated or collaborative effort to perform this independent confirmation. Nor am I aware of anyone attempting to do this on a regular basis. We would want to be able to do this in the year 2042 too. I think the best way to reach that is to do the verification continuously in a pipeline, fixing bugs as time passes. The current state of the art seems to be that people audit the differences manually and hope to find something. I suspect many package maintainers ignore the problem and take the release source tarballs and trust upstream about this.

We can do better.

I have launched a project to set up a GitLab pipeline that invokes per-release scripts to rebuild that release artifact from git sources. Currently it only contains recipes for projects that I released myself, releases which were done in a controlled way with considerable care to make reproducing the tarballs possible. The project homepage is here:

https://gitlab.com/debdistutils/verify-reproducible-releases

The project is able to reproduce the release tarballs for Libtasn1 v4.20.0, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, and GNU SASL v2.2.2. You can see this in a recent successful pipeline. All of those releases were prepared using Guix, and I’m hoping the Guix time-machine will make it possible to keep re-generating these tarballs for many years to come.

I spent some time trying to reproduce the current XZ release tarball for version 5.8.1. That would have been a nice example, wouldn’t it? First I had to somehow mimic upstream’s build environment. The XZ release tarball contains GNU Libtool files that are identified with version 2.5.4.1-baa1-dirty. I initially assumed this was due to the maintainer having installed libtool from git locally (after making some modifications) and made the XZ release using it. Later I learned that it may actually be coming from ArchLinux, which ships with this particular libtool version. It seems weird for a distribution to use libtool built from a non-release tag, and furthermore to apply patches to it, but things are what they are. I made some effort to set up an ArchLinux build environment, however the now-current Gettext version in ArchLinux seems to be more recent than the one that was used to prepare the XZ release. I don’t know enough ArchLinux to set up an environment corresponding to an earlier version of ArchLinux, which would be required to finish this. I gave up; maybe the XZ release wasn’t prepared on ArchLinux after all. Actually XZ became a good example for this writeup anyway: while you would think this should be trivial, the fact is that it isn’t! (There is another aspect here: fingerprinting the versions used to prepare release tarballs allows you to infer what kind of OS maintainers are using to make releases on, which is interesting on its own.)

I made some small attempts to reproduce the tarball for GNU Shepherd version 1.0.4 too, but I still haven’t managed to complete it.

Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days? Bonus points for wrapping it up as a merge request to my project.

Happy Supply-Chain Security Hacking!

17 April, 2025 07:24PM by simon

Scarlett Gately Moore

KDE Applications 25.04 Snaps and Kubuntu Plucky Puffin 25.04 Released!

Very busy releasetastic week! The versions being the same is a complete coincidence 😆

https://kde.org/announcements/gear/25.04.0

Which can be downloaded here: https://snapcraft.io/publisher/kde !

In addition to all the regular testing, I am testing our snaps in a non-KDE environment, and so far it is not looking good in Xubuntu. We have kernel/glibc crashes on startup for some and on file open for others. I am working on a hopeful fix.

Next week I will have (I hope) my final surgery. If you can spare any change to help bring me over the finish line, I will be forever grateful 🙂

17 April, 2025 07:00PM by sgmoore

Petter Reinholdtsen

Gearing up OpenSnitch for a 1.6.8 release in Trixie

Sadly, the interactive application firewall OpenSnitch has in practice been unmaintained in Debian for a while. A few days ago I decided to do something about it, and today I am happy with the result. This package monitors network traffic going in and out of a Linux machine, and shows a popup dialog to the logged in desktop user, asking to approve or deny any new connections. It has proved very valuable in discovering programs calling home, giving me more control of how information leaks out of my Linux machine.

So far the new version is only available in Debian experimental, but I plan to upload it to unstable as soon as I know it is working on a few more machines, and make sure the new version makes it into the next stable release of Debian. The package freeze is approaching, and there is not a lot of time left. If you read this blog post, I hope you can be one of the testers.

The new version should be using eBPF on architectures where this is working (amd64 and arm64), and fall back to /proc/ probing where the opensnitch-ebpf-modules package is missing (so far only armhf; an unrelated bug blocks building on riscv64 and s390x). Using eBPF should provide more accurate attribution of the packages responsible for network traffic from short lived processes, which sometimes were unavailable in /proc/ when opensnitch tried to probe for information. I have limited experience with the new version, having used it myself for a day or so. It is easily backportable to Debian 12 Bookworm without code changes; all it needs is a simple 'debuild' thanks to the optional build dependencies.

Due to a misfeature of llc on armhf, there is no eBPF support available there. I have not investigated the details, nor reported any bug yet, but for some reason -march=bpf is an unknown option on this architecture, causing the build in the ebpf_prog subdirectory to fail.

The package is maintained under the umbrella of Debian Go team, and you can meet the current maintainers on the #debian-golang and #opensnitch IRC channels on irc.debian.org.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

17 April, 2025 05:50PM

Jonathan Dowland

Hledger UI themes

Last year I intended to write an update on my use of hledger, but that was waylaid for various reasons and I need to revisit how (if) I'm using it, so that's put off for longer. I do want to mention one contribution I made upstream: a dark theme for the UI, and some unfinished work on consistent colours.

Consistent terminal colours are an interesting issue: the most common terminal colour modes (8 and 256) use indexing into a palette, but the definition of the colours is ambiguous: the 8-colour palette is formally specified by ANSI as names (red, green, etc.); the 256-colour palette is effectively defined by xterm (a useful chart) but I'm not sure all terminal emulators that support it have chosen the same colour values.

A consequence of indexed-colour is that the end-user may redefine what the colour values are. Whether this is a good thing or a bad thing depends on your point of view. As an end-user, it's attractive to be able to tune the colour scheme; but as a software author, it means you have no real idea what your users are going to see, and matters like ensuring contrast are impossible.

Some terminals support 24-bit "true" colour, in which the colours are specified as an RGB triplet. Using these means the software author can be reasonably sure all users will see the same thing (for a fungible definition of "same"), at the cost of user configurability. However, since it's less well supported, we start having to worry about fallback behaviour.
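To make the difference concrete, here's a tiny standalone C snippet (nothing to do with hledger-ui's actual code, just an illustration) that prints "red" three ways: via the ANSI 8-colour index, the xterm 256-colour index, and a 24-bit RGB triplet. The first two are subject to whatever palette the user has configured; the last is not, but only works on terminals that support it:

    /* Illustrative only: print "red" using three terminal colour modes. */
    #include <stdio.h>

    int
    main(void)
    {
        printf("\033[31mred (8-colour palette, index 1)\033[0m\n");
        printf("\033[38;5;196mred (256-colour palette, index 196)\033[0m\n");
        printf("\033[38;2;255;0;0mred (24-bit RGB 255,0,0)\033[0m\n");
        return 0;
    }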

In the case of hledger-ui, which provides several colour schemes, that's probably OK, because the user configurability is achieved by choosing one of the schemes (or writing your own, in extremis). However, the dark theme I contributed uses the 8-colour palette, in common with the other themes, and my explorations into using predictable colours are unfinished.

17 April, 2025 09:35AM

Arturo Borrero González

My experience in the Debian LTS and ELTS projects

Debian

Last year, I decided to start participating in the Debian LTS and ELTS projects. It was a great opportunity to engage in something new within the Debian community. I had been following these projects for many years, observing their evolution and how they gained traction both within the ecosystem and across the industry.

I was curious to explore how contributors were working internally — especially how they managed security patching and remediation for older software. I’ve always felt this was a particularly challenging area, and I was fortunate to experience it firsthand.

As of April 2025, the Debian LTS project was primarily focused on providing security maintenance for Debian 11 Bullseye. Meanwhile, the Debian ELTS project was targeting Debian 8 Jessie, Debian 9 Stretch, and Debian 10 Buster.

During my time with the projects, I worked on a variety of packages and CVEs. Some of the most notable ones include:

There are several technical highlights I’d like to share — things I learned or had to apply while participating:

  • CI/CD pipelines: We used CI/CD pipelines on salsa.debian.org all the time to automate tasks such as building, linting, and testing packages. For any package I worked on that lacked CI/CD integration, setting it up became my first step.

  • autopkgtest: There’s a strong emphasis on autopkgtest as the mechanism for running functional tests and ensuring that security patches don’t introduce regressions. I contributed by both extending existing test suites and writing new ones from scratch.

  • Toolchain complexity for older releases: Working with older Debian versions like Jessie brought some unique challenges. Getting a development environment up and running often meant troubleshooting issues with sbuild chroots, qemu images, and other tools that don’t “just work” like they tend to on Debian stable.

  • Community collaboration: The people involved in LTS and ELTS are extremely helpful and collaborative. Requests for help, code reviews, and general feedback were usually answered quickly.

  • Shared ownership: This collaborative culture also meant that contributors would regularly pick up work left by others or hand off their own tasks when needed. That mutual support made a big difference.

  • Backporting security fixes: This is probably the most intense —and most rewarding— activity. It involves manually adapting patches to work on older codebases when upstream patches don’t apply cleanly. This requires deep code understanding and thorough testing.

  • Upstream collaboration: Reaching out to upstream developers was a key part of my workflow. I often asked if they could provide patches for older versions or at least review my backports. Sometimes they were available, but most of the time, the responsibility remained on us.

  • Diverse tech stack: The work exposed me to a wide range of programming languages and frameworks—Python, Java, C, Perl, and more. Unsurprisingly, some modern languages (like Go) are less prevalent in older releases like Jessie.

  • Security team interaction: I had frequent contact with the core Debian Security Team—the folks responsible for security in Debian stable. This gave me a broader perspective on how Debian handles security holistically.

In March 2025, I decided to scale back my involvement in the projects due to some changes in my personal life. Still, this experience has been one of the highlights of my career, and I would definitely recommend it to others.

I’m very grateful for the warm welcome I received from the LTS/ELTS community, and I don’t rule out the possibility of rejoining the LTS/ELTS efforts in the future.

The Debian LTS/ELTS projects are currently coordinated by folks at Freexian. Many thanks to Freexian and sponsors for providing this opportunity!

17 April, 2025 09:00AM

April 16, 2025

Otto Kekäläinen

Going Full-Time as an Open Source Developer

After careful consideration, I’ve decided to embark on a new chapter in my professional journey. I’ve left my position at AWS to dedicate at least the next six months to developing open source software and strengthening digital ecosystems. My focus will be on contributing to Linux distributions (primarily Debian) and other critical infrastructure components that our modern society depends on, but which may not receive adequate attention or resources.

The Evolution of Open Source

Open source won. Over the 25+ years I’ve been involved in the open source movement, I’ve witnessed its remarkable evolution. Today, Linux powers billions of devices — from tiny embedded systems and Android smartphones to massive cloud datacenters and even space stations. Examine any modern large-scale digital system, and you’ll discover it’s built upon thousands of open source projects.

I feel the priority for the open source movement should no longer be increasing adoption, but rather solving how to best maintain the vast ecosystem of software. This requires building robust institutions and processes to secure proper resourcing and ensure the collaborative development process remains efficient and leads to ever-increasing quality of software.

What is Special About Debian?

Debian, established in 1993 by Ian Murdock, stands as one of these institutions that has demonstrated exceptional resilience. There is no single authority, but instead a complex web of various stakeholders, each with their own goals and sources of funding. Every idea needs to be championed at length to a wide audience and implemented through a process of organic evolution.

Thanks to this approach, Debian has been consistently delivering production-quality, universally useful software for over three decades. Having been a Debian Developer for more than ten years, I’m well-positioned to contribute meaningfully to this community.

If your organization relies on Debian or its derivatives such as Ubuntu, and you’re interested in funding cyber infrastructure maintenance by sponsoring Debian work, please don’t hesitate to reach out. This could include package maintenance and version currency, improving automated upgrade testing, general quality assurance and supply chain security enhancements.

Best way to reach me is by e-mail otto at debian.org. You can also book a 15-minute chat with me for a quick introduction.

Grow or Die

My four-year tenure as a Software Development Manager at Amazon Web Services was very interesting. I’m grateful for my time at AWS and proud of my team’s accomplishments, particularly for creating an open source contribution process that got Amazon from zero to the largest external contributor to the MariaDB open source database.

During this time, I got to experience and witness a plethora of interesting things. I will surely share some of my key learnings in future blog posts. Unfortunately, the rate of progress in this mammoth 1.5 million employee organization was slowing down, and I didn’t feel I was learning much new in the last few years. This realization, combined with the opportunity cost of not spending enough time on new cutting-edge technology, motivated me to take this leap.

Being a full-time open source developer may not be financially the most lucrative idea, but I think it is an excellent way to force myself to truly assess what is important on a global scale and what areas I want to contribute to.

Working fully on open source presents a fascinating duality: you’re not bound by any external resource or schedule limitations, and the progress you make is directly proportional to how much energy you decide to invest. Yet, you also depend on collaboration with people you might never meet and who are not financially incentivized to collaborate. This will undoubtedly expose me to all kinds of challenges. But what would be better for fostering holistic personal growth? I know that deep down in my DNA, I am not made to stay cozy or to do easy things. I need momentum.

OK, let’s get going 🙂

16 April, 2025 12:00AM

April 15, 2025

Jonathan Dowland

submitted

Today I submitted my PhD thesis, 8 years since I started (give or take). Next step, Viva.

Normal service may resume shortly…

15 April, 2025 03:43PM

Dirk Eddelbuettel

AsioHeaders 1.30.2-1 on CRAN: New Upstream

Another new (stable) release of the AsioHeaders package arrived at CRAN just now. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

The update last week, kindly prepared by Charlie Gao, had overlooked / not covered one other nag discovered by CRAN. This new release, based on the current stable upstream release, covers it.

The short NEWS entry for AsioHeaders follows.

Changes in version 1.30.2-0 (2025-04-15)

  • Upgraded to Asio 1.30.2 (Dirk in #13 fixing #12)

  • Added two new badges to README.md

Thanks to my CRANberries, there is a diffstat report for this release. Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

15 April, 2025 11:05AM

Russell Coker

What Desktop PCs Need

It seems to me that we haven’t had much change in the overall design of desktop PCs since floppy drives were removed, and modern PCs still have bays the size of 5.25″ floppy drives despite having nothing modern that can fit in such spaces other than DVD drives (which aren’t really modern) and carriers for 4*2.5″ drives, both of which most people don’t use. We had the PC System Design Guide [1] which was last updated in 2001 and which should have been updated more recently to address some of these issues; the thing that most people will find familiar in that standard is the colours for audio ports. Microsoft developed the Legacy Free PC [2] concept which was a good one. There’s a lot of things that could be added to the list of legacy stuff to avoid: TPM 1.2, 5.25″ drive bays, inefficient PSUs, hardware that doesn’t sleep when idle or which prevents the CPU from sleeping, VGA and DVI ports, ethernet slower than 2.5Gbit, and video that doesn’t include HDMI 2.1 or DisplayPort 2.1 for 8K support. There are recently released high-end PCs on sale right now with 1Gbit ethernet as standard, and hardly any PCs support resolutions above 4K properly.

Here are some of the things that I think should be in a modern PC System Design Guide.

Power Supply

The power supply is a core part of the computer and its central location dictates the layout of the rest of the PC. GaN PSUs are more power efficient and therefore require less cooling. A 400W USB power supply is about 1/4 the size of a standard PC PSU and doesn’t have a cooling fan. A new PC standard should include less space for the PSU except for systems with multiple CPUs or that are designed for multiple GPUs.

A Dell T630 server has an option of a 1600W PSU that is 20*8.5*4cm = 680cc. The typical dimensions of an ATX PSU are 15*8.6*14cm = 1806cc. The SFX (small form factor variant of ATX) PSU is 12.5*6.3*10cm = 787cc. There is a reason for the ATX and SFX PSUs having a much worse ratio of power to size and that is the airflow. Server class systems are designed for good airflow and can efficiently cool the PSU with less space and they are also designed for uses where people are less concerned about fan noise. But the 680cc used for a 1600W Dell server PSU that predates GaN technology could be used for a modern GaN PSU that supplies the ~600W needed for a modern PC while being quiet. There are several different smaller size PSUs for name-brand PCs (where compatibility with other systems isn’t needed) that have been around for ~20 years but there hasn’t been a standard so all white-box PC systems have had really large PSUs.

PCs need USB-C PD ports that can charge a laptop etc. There are phones that can draw 80W for fast charging and it’s not unreasonable to expect a PC to be able to charge a phone at its maximum speed.

GPUs should have USB-C alternate mode output and support full USB functionality over the cable as well as PD that can power the monitor. Having a monitor with a separate PSU, a HDMI or DP cable to the PC, and a USB cable between PC and monitor is an annoyance. There should be one cable between PC and monitor, and then keyboard, mouse, etc should connect to the monitor.

All devices that are connected to a PC should use USB-C for power connection. That includes monitors that are using HDMI or DisplayPort for video, desktop switches, home Wifi APs, printers, and speakers (even when using line-in for the audio signal). The European Commission Common Charger Directive is really good but it only covers portable devices, keyboards, and mice.

Motherboard Features

The latest versions of Wifi and Bluetooth on the motherboard (this is becoming a standard feature).

On-motherboard video that supports 8K resolution. An option of a PCIe GPU is a good thing to have, but it would be nice if the motherboard had enough video capabilities to satisfy most users. There are several options for video that have a higher resolution than 4K, and making things just work at 8K means that there will be less e-waste in future.

ECC RAM should be a standard feature on all motherboards, having a single bit error cause a system crash is a MS-DOS thing, we need to move past that.

There should be built-in hardware for monitoring the system status that is better than BIOS beeps on boot. Lenovo laptops have a feature for having the BIOS play a tune on a serious error, with an Android app to decode the meaning of the tune; we could have a standard for this. For desktop PCs there should be a standard for LCD status displays similar to the ones on servers; this would be cheap if everyone did it.

Case Features

The way the Framework Laptop can be expanded with modules is really good [3]. There should be something similar for PC cases. While you can buy USB devices for these things, they are messy and risk getting knocked out of their sockets when moving cables around. While the Framework laptop expansion cards are much more expensive than other devices with similar functions that are aimed at a mass market, if there was a standard for PCs then the devices to fit them would become cheap.

The PC System Design Guide specifies colors for ports (which is good) but not the feel of them. While some ports like Ethernet ports allow someone to feel which way the connector should go it isn’t possible to easily feel which way a HDMI or DisplayPort connector should go. It would be good if there was a standard that required plastic spikes on one side or some other way of feeling which way a connector should go.

GPU Placement

In modern systems it’s fairly common to have a tall heatsink on the CPU with a fan to blow air in at the front and out the back of the PC. The GPU (which often dissipates twice as much heat as the CPU) has fans blowing air in sideways and not out the back. This gives some sort of compromise between poor cooling and excessive noise. What we need is to have air blown directly through a GPU heatsink and out of the case. One option for a tower case that needs minimal changes is to have the PCIe slot nearest the bottom of the case used for the GPU and a grille in the bottom to allow air to go out; the case could have feet to keep it a few cm above the floor or desk. Another possibility is to have a PCIe slot parallel to the rear surface of the case (at right angles to the other PCIe slots).

A common case with desktop PCs is to have the GPU use more than half the total power of the PC. The placement of the GPU shouldn’t be an afterthought, it should be central to the design.

Is a PCIe card even a good way of installing a GPU? Could we have a standard GPU socket on the motherboard next to the CPU socket and use the same type of heatsink and fan for GPU and CPU?

External Cooling

There are a range of aftermarket cooling devices for laptops that push cool air in the bottom or suck it out the side. We need to have similar options for desktop PCs. I think it would be ideal to have standard attachments for airflow on the front and back of tower PCs. The larger a fan is, the slower it can spin to give the same airflow and therefore the less noise it will produce. Instead of just relying on 10cm fans at the front and back of a PC to push air in and suck it out, you could have a conical rubber duct connected to a 30cm diameter fan. That would allow quieter fans to do most of the work in pushing air through the PC and also allow the hot air to be directed somewhere suitable. When doing computer work in summer it’s not great to have a PC sending 300+W of waste heat into the room you are in. If it could be directed out a window that would be good.

Noise

For restricting the noise of PCs we have industrial relations legislation that seems to basically require that workers not be exposed to noise louder than a blender, so if a PC is quieter than that then it’s OK. For name brand PCs there are specs about how much noise is produced, but there are usually caveats like “under typical load” or “with a typical feature set” that excuse them from liability if the noise is louder than expected. It doesn’t seem possible for someone to own a PC, determine that the noise from it is acceptable, and then buy another that is close to the same.

We need regulations about this, and the EU seems the best jurisdiction for it as they cover the purchase of a lot of computer equipment that is also sold without change in other countries. The regulations also need to cover updates; for example, I have a Dell T630 which is unreasonably loud, and Dell support doesn’t have much incentive to be particularly helpful about it. BIOS updates routinely tweak things like fan speeds without the developers having an incentive to keep it as quiet as it was when it was sold.

What Else?

Please comment about other things you think should be standard PC features.

15 April, 2025 10:19AM by etbe

April 13, 2025

Keith Packard

sanitizer-fun

Fun with -fsanitize=undefined and Picolibc

Both GCC and Clang support the -fsanitize=undefined flag which instruments the generated code to detect places where the program wanders into parts of the C language specification which are either undefined or implementation defined. Many of these are also common programming errors. It would be great if there were sanitizers for other easily detected bugs, but for now, at least the undefined sanitizer does catch several useful problems.
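As a minimal illustration (a made-up example, not picolibc code), the following program performs a signed overflow that the sanitizer flags at run time when built with something like gcc -O2 -fsanitize=undefined demo.c; the exact diagnostic text differs between GCC and Clang:

    /* demo.c -- deliberately trigger signed integer overflow so that
     * -fsanitize=undefined has something to report. */
    #include <limits.h>
    #include <stdio.h>

    int
    main(void)
    {
        volatile int x = INT_MAX;   /* volatile keeps the compiler from folding it away */
        x = x + 1;                  /* undefined behaviour: signed overflow */
        printf("%d\n", x);
        return 0;
    }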

Supporting the sanitizer

The sanitizer can be built to either trap on any error or call handlers. In both modes, the same problems are identified, but when trap mode is enabled, the compiler inserts a trap instruction and doesn't expect the program to continue running. When handlers are in use, each identified issue is tagged with a bunch of useful data and then a specific sanitizer handling function is called.

The specific functions are not all that well documented, nor are the parameters they receive. Maybe this is because both compilers provide an implementation of all of the functions they use and don't really expect external implementations to exist? However, to make these useful in an embedded environment, picolibc needs to provide a complete set of handlers that support all versions of both gcc and clang, as the compiler-provided versions depend upon specific C (and C++) libraries.
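To give a feel for what these handlers look like, here is a rough sketch of one of them. The function name is one that both compilers emit calls to, but the layout of the data argument is my assumption based on reading the compiler runtimes rather than any documented ABI, and the real picolibc handlers are considerably more thorough:

    /* Sketch only: the struct layout below is an assumption, not a
     * stable documented interface. */
    #include <stdint.h>
    #include <stdio.h>

    struct source_location {
        const char *file;
        uint32_t    line;
        uint32_t    column;
    };

    struct overflow_data {
        struct source_location loc;
        /* a pointer to a type descriptor follows in the real layout */
    };

    void
    __ubsan_handle_add_overflow(void *data, uintptr_t lhs, uintptr_t rhs)
    {
        struct overflow_data *d = data;
        (void) lhs;
        (void) rhs;
        printf("signed addition overflow at %s:%u:%u\n",
               d->loc.file, (unsigned) d->loc.line, (unsigned) d->loc.column);
    }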

Of course, programs can be built in trap-on-error mode, but that makes it much more difficult to figure out what went wrong.

Fixing Sanitizer Issues

Once the sanitizer handlers were implemented, picolibc could be built with them enabled and all of the picolibc tests run to uncover issues within the library.

As with the static analyzer adventure from last year, the vast bulk of sanitizer complaints came from invoking undefined or implementation-defined behavior in harmless ways:

  • Computing pointers past &array[size+1]. I found no cases where the resulting pointers were actually used, but the mere computation is still undefined behavior. These were fixed by adjusting the code to avoid computing pointers like this. The result was clearer code, which is good.

  • Signed arithmetic overflow in PRNG code. There are several linear congruential PRNGs in the library which used signed integer arithmetic. The rand48 generator carefully used unsigned short values. Of course, in C, the arithmetic performed on them is done with signed ints if int is wider than short. C specifies signed overflow as undefined, but both gcc and clang generate the expected code anyways. The fixes here were simple; just switch the computations to unsigned arithmetic, adjusting types and inserting casts as required.

  • Passing pointers to the middle of a data structure. For example, free takes a pointer to the start of an allocation. The management structure appears just before that in memory, and computing its address appears to be undefined behavior to the compiler. The only fix I could do here was to disable the sanitizer in functions doing these computations -- the sanitizer was mis-detecting correct code and it doesn't provide a way to skip checks on a per-operator basis.

  • Null pointer plus or minus zero. C says that any arithmetic with the NULL pointer is undefined, even when the value being added or subtracted is zero. The fix here was to create a macro, enabled only when the sanitizer is enabled, which checks for this case and skips the arithmetic.

  • Discarded computations which overflow. A couple of places computed a value, then checked if that would have overflowed and discard the result. Even though the program doesn't depend upon the computation, its mere presence is undefined behavior. These were fixed by moving the computation into an else clause in the overflow check. This inserts an extra branch instruction, which is annoying.

  • Signed integer overflow in math code. There's a common pattern in various functions that want to compare against 1.0. Instead of using the floating point equality operator, they do the computation using the two 32-bit halves with ((hi - 0x3ff00000) | lo) == 0. It's efficient, but because most of these functions store the 'hi' piece in a signed integer (to make checking the sign bit fast), the result is undefined when hi is a large negative value. These were fixed by inserting casts to unsigned types as the results were always tested for equality.
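As an illustration of that last fix (a made-up helper, not the actual picolibc code), the 1.0 comparison stays bit-exact while the subtraction moves into unsigned arithmetic:

    /* hi/lo are the high and low 32-bit halves of a double; the result
     * is non-zero only when the value is exactly 1.0.  Casting hi to
     * uint32_t makes any wrap-around in the subtraction well defined. */
    #include <stdint.h>

    static int
    is_exactly_one(int32_t hi, uint32_t lo)
    {
        return (((uint32_t) hi - 0x3ff00000u) | lo) == 0;
    }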

Signed integer shifts

This is one area where the C language spec is just wrong.

For left shift, before C99, it worked on signed integers as a bit-wise operator, equivalent to the operator on unsigned integers. After that, left shift of negative integers became undefined. Fortunately, it's straightforward (if tedious) to work around this issue by just casting the operand to unsigned, performing the shift and casting it back to the original type. Picolibc now has an internal macro, lsl, which does this:

    #define lsl(__x,__s) ((sizeof(__x) == sizeof(char)) ?                   \
                          (__typeof(__x)) ((unsigned char) (__x) << (__s)) :  \
                          (sizeof(__x) == sizeof(short)) ?                  \
                          (__typeof(__x)) ((unsigned short) (__x) << (__s)) : \
                          (sizeof(__x) == sizeof(int)) ?                    \
                          (__typeof(__x)) ((unsigned int) (__x) << (__s)) :   \
                          (sizeof(__x) == sizeof(long)) ?                   \
                          (__typeof(__x)) ((unsigned long) (__x) << (__s)) :  \
                          (sizeof(__x) == sizeof(long long)) ?              \
                          (__typeof(__x)) ((unsigned long long) (__x) << (__s)) : \
                          __undefined_shift_size(__x, __s))

Right shift is significantly more complicated to implement. What we want is an arithmetic shift with the sign bit being replicated as the value is shifted rightwards. C defines no such operator. Instead, right shift of negative integers is implementation defined. Fortunately, both gcc and clang define the >> operator on signed integers as arithmetic shift. Also fortunately, C hasn't made this undefined, so the program itself doesn't end up undefined.

The trouble with arithmetic right shift is that it is not equivalent to right shift of unsigned values. Here's what Per Vognsen came up with using standard C operators:

    int
    __asr_int(int x, int s) {
        return x < 0 ? ~(~x >> s) : x >> s;
    }

When the value is negative, we invert all of the bits (making it positive), shift right, then flip all of the bits back. Both GCC and Clang seem to compile this to a single asr instruction. This function is replicated for each of the five standard integer types and then the set of them wrapped in another sizeof-selecting macro:

    #define asr(__x,__s) ((sizeof(__x) == sizeof(char)) ?           \
                          (__typeof(__x))__asr_char(__x, __s) :       \
                          (sizeof(__x) == sizeof(short)) ?          \
                          (__typeof(__x))__asr_short(__x, __s) :      \
                          (sizeof(__x) == sizeof(int)) ?            \
                          (__typeof(__x))__asr_int(__x, __s) :        \
                          (sizeof(__x) == sizeof(long)) ?           \
                          (__typeof(__x))__asr_long(__x, __s) :       \
                          (sizeof(__x) == sizeof(long long)) ?      \
                          (__typeof(__x))__asr_long_long(__x, __s):   \
                          __undefined_shift_size(__x, __s))

The lsl and asr macros use sizeof instead of the type-generic mechanism to remain compatible with compilers that lack type-generic support.

Once these macros were written, they needed to be applied where required. To preserve the sanitizer's ability to detect genuine programming errors, they were applied only where necessary, not blindly across the whole codebase.

There are a couple of common patterns in the math code using shift operators. One is when computing the exponent value for subnormal numbers.

for (ix = -1022, i = hx << 11; i > 0; i <<= 1)
    ix -= 1;

This code computes the exponent by shifting the significand left by 11 bits (the width of the exponent field) and then incrementally shifting it one bit at a time until the sign flips, which indicates that the most-significant bit is set. Use of the pre-C99 definition of the left shift operator is intentional here; so both shifts are replaced with our lsl operator.
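
With the lsl macro in place, the converted loop might look roughly like this (a sketch of the shape of the change, not a verbatim quote from the picolibc source):

for (ix = -1022, i = lsl(hx, 11); i > 0; i = lsl(i, 1))
    ix -= 1;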

In the implementation of pow, the final exponent is computed as the sum of the two exponents, both of which are in the allowed range. The resulting sum is then tested to see if it is zero or negative to see if the final value is sub-normal:

hx += n << 20;
if (hx >> 20 <= 0)
    /* do sub-normal things */

In this case, the exponent adjustment, n, is a signed value, so that shift is replaced with the lsl macro. The test needs the sign of the shifted value to be computed correctly, so the right shift is replaced with the asr macro.
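
After the conversion, the fragment might read roughly as follows (again a sketch, assuming the lsl and asr macros shown earlier are in scope):

hx += lsl(n, 20);
if (asr(hx, 20) <= 0)
    /* do sub-normal things */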

Because the right shift operation is implementation defined rather than undefined, we only use our fancy macro above when the undefined behavior sanitizer is enabled. The lsl macro, on the other hand, should have zero cost and avoids undefined behavior, so it is always used.

Actual Bugs Found!

The goal of this little adventure was both to make it possible to use the undefined behavior sanitizer with picolibc and to use the sanitizer to identify bugs in the library code. I fully expected that most of the effort would be spent masking harmless undefined behavior instances, but was hopeful that the effort would also uncover real bugs in the code. I was not disappointed. Through this work, I found (and fixed) eight bugs in the code:

  1. setlocale/newlocale didn't check for NULL locale names

  2. qsort was using uintptr_t to swap data around. On MSP430 in 'large' mode, that's a 20-bit type inside a 32-bit representation.

  3. random() was returning values in int range rather than long.

  4. m68k assembly for memcpy was broken for sizes > 64kB.

  5. freopen returned NULL, even on success

  6. The optimized version of memrchr was always performing unaligned accesses.

  7. String to float conversion had a table missing four values. This caused an array access overflow which resulted in imprecise values in some cases.

  8. vfwscanf mis-parsed floating point values by assuming that wchar_t was unsigned.

Sanitizer Wishes

While it's great to have a way to detect places in your C code which evoke undefined and implementation defined behaviors, it seems like this tooling could easily be extended to detect other common programming mistakes, even where the code is well defined according to the language spec. An obvious example is in unsigned arithmetic. How many bugs come from this seemingly innocuous line of code?

    p = malloc(sizeof(*p) * c);

Because sizeof returns an unsigned value, the resulting computation never results in undefined behavior, even when the multiplication wraps around, so even with the undefined behavior sanitizer enabled, this bug will not be caught. Clang seems to have an unsigned integer overflow sanitizer which should do this, but I couldn't find anything like this in gcc.
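
For completeness, here is one hedged sketch of how such a multiplication can be guarded; the function name is invented for illustration and this is not code from picolibc. The __builtin_mul_overflow extension is available in both gcc and clang, and calloc(c, sizeof(*p)) is a portable alternative that performs the same check internally.

    #include <stdlib.h>

    /* Allocate 'count' elements of 'size' bytes each, refusing requests
     * whose total byte count would wrap around. */
    static void *
    checked_alloc(size_t count, size_t size)
    {
        size_t bytes;

        if (__builtin_mul_overflow(count, size, &bytes))
            return NULL;        /* the multiplication would have wrapped */
        return malloc(bytes);
    }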

Summary

The undefined behavior sanitizers present in clang and gcc both provide useful diagnostics which uncover some common programming errors. In most cases, replacing undefined behavior with defined behavior is straightforward, although the lack of an arithmetic right shift operator in standard C is irksome. I recommend that anyone using C give them a try.

13 April, 2025 09:24PM

Michael Prokop

OpenSSH penalty behavior in Debian/trixie #newintrixie

This topic came up at a customer of mine in September 2024, when working on Debian/trixie support. Since then I have wanted to blog about it to make people aware of this new OpenSSH feature and behavior. I finally found some spare minutes at Debian’s BSP in Vienna, so here we are. :)

Some of our Q/A jobs failed to run against Debian/trixie; in the debug logs we found:

debug1: kex_exchange_identification: banner line 0: Not allowed at this time

This “Not allowed at this time” message pointed to a new OpenSSH feature. OpenSSH introduced options to penalize undesirable behavior with version 9.8p1; see the OpenSSH Release Notes and also the sshd source code.

FTR, on the SSH server side, you’ll see messages like these:

Apr 13 08:57:11 grml sshd-session[2135]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55792 ssh2 [preauth]
Apr 13 08:57:11 grml sshd-session[2135]: Disconnecting authenticating user root 10.100.15.42 port 55792: Too many authentication failures [preauth]
Apr 13 08:57:12 grml sshd-session[2137]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55800 ssh2 [preauth]
Apr 13 08:57:12 grml sshd-session[2137]: Disconnecting authenticating user root 10.100.15.42 port 55800: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd-session[2139]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55804 ssh2 [preauth]
Apr 13 08:57:13 grml sshd-session[2139]: Disconnecting authenticating user root 10.100.15.42 port 55804: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd-session[2141]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55810 ssh2 [preauth]
Apr 13 08:57:13 grml sshd-session[2141]: Disconnecting authenticating user root 10.100.15.42 port 55810: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55818 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55824 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55838 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55854 on [10.100.15.230]:22 penalty: failed authentication

This feature certainly is useful and has its use cases. But if you, for example, run automated checks to ensure that specific logins aren’t working, be careful: you might hit the penalty feature and lock yourself out, and consecutive checks then don’t behave as expected. Your login checks might fail, but only because the penalty behavior kicked in; the login you’re verifying might still work underneath, you just aren’t actually testing it any more. Furthermore, legitimate traffic from systems which accept connections from many users or sit behind shared IP addresses, like NAT and proxies, could be denied.

To disable this new behavior you can set PerSourcePenalties no in your sshd_config, but there are also further configuration options available; see the PerSourcePenalties and PerSourcePenaltyExemptList settings in sshd_config(5) for further details.
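
For example, something along these lines in /etc/ssh/sshd_config (the exempted network below is purely illustrative):

# Disable the penalty behaviour entirely ...
PerSourcePenalties no

# ... or keep it enabled and exempt known monitoring/check hosts instead:
#PerSourcePenalties yes
#PerSourcePenaltyExemptList 10.100.15.0/24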

13 April, 2025 02:05PM by mika

Ben Hutchings

FOSS activity in March 2025

13 April, 2025 04:38AM by Ben Hutchings

FOSS activity in February 2025

13 April, 2025 04:30AM by Ben Hutchings

FOSS activity in November 2024

13 April, 2025 04:23AM by Ben Hutchings

April 12, 2025

Kalyani Kenekar

Nextcloud Installation HowTo: Secure Your Data with a Private Cloud

Nextcloud is an open-source software suite that enables you to set up and manage your own cloud storage and collaboration platform. It offers a range of features similar to popular cloud services like Google Drive or Dropbox but with the added benefit of complete control over your data and the server where it’s hosted.

I wanted to have a look at Nextcloud and the steps needed to set up my own instance, with a PostgreSQL-based database together with NGinx as the webserver to serve the WebUI. Before doing a full production setup I wanted to play around locally with all the needed steps, and I worked them all out within a KVM machine.

While doing this I wrote down some notes, mostly to document for myself what I need to do to get a Nextcloud installation running and usable. So this manual describes how to set up a Nextcloud installation on Debian 12 Bookworm based on NGinx and PostgreSQL.

Nextcloud Installation

Install PHP and PHP extensions for Nextcloud

Nextcloud is basically a PHP application, so we need to install PHP packages to get it working in the end. The following steps are based on the upstream documentation about how to install your own Nextcloud instance.

Installing the virtual package php on a Debian Bookworm system would pull in the corresponding meta package php8.2. This package would in turn pull in the package libapache2-mod-php8.2 as a dependency, which would then also pull in the apache2 webserver as a dependent package. This is something I didn’t want, as I want to use the NGinx that is already installed on the system instead.

To achieve this we need to explicitly exclude the package libapache2-mod-php8.2 from the list of packages to install. This is done by appending a hyphen - to the end of the package name, so we use libapache2-mod-php8.2- within the package list, which tells apt to ignore this package as a dependency. I ended up with this call to get all needed dependencies installed:

$ sudo apt install php php-cli php-fpm php-json php-common php-zip \
  php-gd php-intl php-curl php-xml php-mbstring php-bcmath php-gmp \
  php-pgsql libapache2-mod-php8.2-
  • Check php version (optional step)

    $ php -v

PHP 8.2.28 (cli) (built: Mar 13 2025 18:21:38) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.2.28, Copyright (c) Zend Technologies
    with Zend OPcache v8.2.28, Copyright (c), by Zend Technologies
  • After installing all the packages, edit the php.ini file:

    $ sudo vi /etc/php/8.2/fpm/php.ini

  • Change the following settings per your requirements:

max_execution_time = 300
memory_limit = 512M
post_max_size = 128M
upload_max_filesize = 128M
  • To make these settings effective, restart the php-fpm service

    $ sudo systemctl restart php8.2-fpm


Install PostgreSQL, Create a database and user

This manual assumes we will use a PostgreSQL server on localhost; if you have a server instance on some remote site, you can skip the installation step here.

$ sudo apt install postgresql postgresql-contrib postgresql-client

  • Check version after installation (optional step):

    $ sudo -i -u postgres

    $ psql --version

  • This output will be seen:

    psql (15.12 (Debian 15.12-0+deb12u2))

  • Exit the PSQL shell by using the command \q.

    postgres=# \q

  • Exit the CLI of the postgres user:

    postgres@host:~$ exit

Create a PostgreSQL Database and User:

  1. Create a new PostgreSQL user (Use a strong password!):

    $ sudo -u postgres psql -c "CREATE USER nextcloud_user PASSWORD '1234';"

  2. Create new database and grant access:

    $ sudo -u postgres psql -c "CREATE DATABASE nextcloud_db WITH OWNER nextcloud_user ENCODING=UTF8;"

  3. (Optional) Check if we can now connect to the database server and the database in detail (you will be asked for the password of the database user!). If this is not working it makes no sense to proceed further! In that case we need to fix the access first!

    $ psql -h localhost -U nextcloud_user -d nextcloud_db

    or

    $ psql -h 127.0.0.1 -U nextcloud_user -d nextcloud_db

  • Log out from postgres shell using the command \q.

Download and install Nextcloud

  • Use the following command to download the latest version of Nextcloud:

    $ wget https://download.nextcloud.com/server/releases/latest.zip

  • Extract file into the folder /var/www/html with the following command:

    $ sudo unzip latest.zip -d /var/www/html

  • Change ownership of the /var/www/html/nextcloud directory to www-data.

    $ sudo chown -R www-data:www-data /var/www/html/nextcloud

Configure NGinx for Nextcloud to use a certificate

In case you want to use a self-signed certificate, e.g. if you just play around with setting up Nextcloud locally for testing purposes, you can do the following steps.

  • Generate the private key and certificate:

    $ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout nextcloud.key -out nextcloud.crt

    $ sudo cp nextcloud.crt /etc/ssl/certs/ && sudo cp nextcloud.key /etc/ssl/private/

  • If you want or need to use the service of Let’s Encrypt (or similar), drop the step above and create your required key data by using this command:

    $ sudo certbot --nginx -d nextcloud.your-domain.com

    You will need to adjust the path to the key and certificate in the next step!

  • Change the NGinx configuration:

    $ sudo vi /etc/nginx/sites-available/nextcloud.conf

  • Add the following snippet into the file and save it.

# /etc/nginx/sites-available/nextcloud.conf
upstream php-handler {
    #server 127.0.0.1:9000;
    server unix:/run/php/php8.2-fpm.sock;
}

# Set the `immutable` cache control options only for assets with a cache
# busting `v` argument

map $arg_v $asset_immutable {
    "" "";
    default ", immutable";
}

server {
    listen 80;
    listen [::]:80;
    # Adjust this to the correct server name!
    server_name nextcloud.local;

    # Prevent NGinx HTTP Server Detection
    server_tokens off;

    # Enforce HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443      ssl http2;
    listen [::]:443 ssl http2;
    # Adjust this to the correct server name!
    server_name nextcloud.local;

    # Path to the root of your installation
    root /var/www/html/nextcloud;

    # Use Mozilla's guidelines for SSL/TLS settings
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/
    # Adjust the usage and paths of the correct key data! E.g. if you want to use Let's Encrypt key material!
    ssl_certificate /etc/ssl/certs/nextcloud.crt;
    ssl_certificate_key /etc/ssl/private/nextcloud.key;
    # ssl_certificate /etc/letsencrypt/live/nextcloud.your-domain.com/fullchain.pem; 
    # ssl_certificate_key /etc/letsencrypt/live/nextcloud.your-domain.com/privkey.pem;

    # Prevent NGinx HTTP Server Detection
    server_tokens off;

    # HSTS settings
    # WARNING: Only add the preload option once you read about
    # the consequences in https://hstspreload.org/. This option
    # will add the domain to a hardcoded list that is shipped
    # in all major browsers and getting removed from this list
    # could take several months.
    #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;

    # set max upload size and increase upload timeout:
    client_max_body_size 512M;
    client_body_timeout 300s;
    fastcgi_buffers 64 4K;

    # Enable gzip but do not remove ETag headers
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml text/javascript application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/wasm application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    # Pagespeed is not supported by Nextcloud, so if your server is built
    # with the `ngx_pagespeed` module, uncomment this line to disable it.
    #pagespeed off;

    # The settings allows you to optimize the HTTP2 bandwidth.
    # See https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/
    # for tuning hints
    client_body_buffer_size 512k;

    # HTTP response headers borrowed from Nextcloud `.htaccess`
    add_header Referrer-Policy                   "no-referrer"       always;
    add_header X-Content-Type-Options            "nosniff"           always;
    add_header X-Frame-Options                   "SAMEORIGIN"        always;
    add_header X-Permitted-Cross-Domain-Policies "none"              always;
    add_header X-Robots-Tag                      "noindex, nofollow" always;
    add_header X-XSS-Protection                  "1; mode=block"     always;

    # Remove X-Powered-By, which is an information leak
    fastcgi_hide_header X-Powered-By;

    # Set .mjs and .wasm MIME types
    # Either include it in the default mime.types list
    # and include that list explicitly or add the file extension
    # only for Nextcloud like below:
    include mime.types;
    types {
        text/javascript js mjs;
        application/wasm wasm;
    }

    # Specify how to handle directories -- specifying `/index.php$request_uri`
    # here as the fallback means that NGinx always exhibits the desired behaviour
    # when a client requests a path that corresponds to a directory that exists
    # on the server. In particular, if that directory contains an index.php file,
    # that file is correctly served; if it doesn't, then the request is passed to
    # the front-end controller. This consistent behaviour means that we don't need
    # to specify custom rules for certain paths (e.g. images and other assets,
    # `/updater`, `/ocs-provider`), and thus
    # `try_files $uri $uri/ /index.php$request_uri`
    # always provides the desired behaviour.
    index index.php index.html /index.php$request_uri;

    # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
    location = / {
        if ( $http_user_agent ~ ^DavClnt ) {
            return 302 /remote.php/webdav/$is_args$args;
        }
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # Make a regex exception for `/.well-known` so that clients can still
    # access it despite the existence of the regex rule
    # `location ~ /(\.|autotest|...)` which would otherwise handle requests
    # for `/.well-known`.
    location ^~ /.well-known {
        # The rules in this block are an adaptation of the rules
        # in `.htaccess` that concern `/.well-known`.

        location = /.well-known/carddav { return 301 /remote.php/dav/; }
        location = /.well-known/caldav  { return 301 /remote.php/dav/; }

        location /.well-known/acme-challenge    { try_files $uri $uri/ =404; }
        location /.well-known/pki-validation    { try_files $uri $uri/ =404; }

        # Let Nextcloud's API for `/.well-known` URIs handle all other
        # requests by passing them to the front-end controller.
        return 301 /index.php$request_uri;
    }

    # Rules borrowed from `.htaccess` to hide certain paths from clients
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/)  { return 404; }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console)                { return 404; }

    # Ensure this block, which passes PHP files to the PHP process, is above the blocks
    # which handle static assets (as seen below). If this block is not declared first,
    # then NGinx will encounter an infinite rewriting loop when it prepend `/index.php`
    # to the URI, resulting in a HTTP 500 error response.
    location ~ \.php(?:$|/) {
        # Required for legacy support
        rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|ocs-provider\/.+|.+\/richdocumentscode(_arm64)?\/proxy) /index.php$request_uri;

        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        set $path_info $fastcgi_path_info;

        try_files $fastcgi_script_name =404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;

        fastcgi_param modHeadersAvailable true;         # Avoid sending the security headers twice
        fastcgi_param front_controller_active true;     # Enable pretty urls
        fastcgi_pass php-handler;

        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;

        fastcgi_max_temp_file_size 0;
    }

    # Serve static files
    location ~ \.(?:css|js|mjs|svg|gif|png|jpg|ico|wasm|tflite|map|ogg|flac)$ {
        try_files $uri /index.php$request_uri;
        # HTTP response headers borrowed from Nextcloud `.htaccess`
        add_header Cache-Control                     "public, max-age=15778463$asset_immutable";
        add_header Referrer-Policy                   "no-referrer"       always;
        add_header X-Content-Type-Options            "nosniff"           always;
        add_header X-Frame-Options                   "SAMEORIGIN"        always;
        add_header X-Permitted-Cross-Domain-Policies "none"              always;
        add_header X-Robots-Tag                      "noindex, nofollow" always;
        add_header X-XSS-Protection                  "1; mode=block"     always;
        access_log off;     # Optional: Don't log access to assets
    }

    location ~ \.woff2?$ {
        try_files $uri /index.php$request_uri;
        expires 7d;         # Cache-Control policy borrowed from `.htaccess`
        access_log off;     # Optional: Don't log access to assets
    }

    # Rule borrowed from `.htaccess`
    location /remote {
        return 301 /remote.php$request_uri;
    }

    location / {
        try_files $uri $uri/ /index.php$request_uri;
    }
}
  • Symlink the configuration from sites-available to sites-enabled.

    $ ln -s /etc/nginx/sites-available/nextcloud.conf /etc/nginx/sites-enabled/

  • Restart NGinx and access the URI in the browser.

  • Go through the installation of Nextcloud.

  • The user created in the installation dialog should be named e.g. administrator or similar; that user will get administrative access rights in Nextcloud!

  • To adjust the database connection details you have to edit the file $install_folder/config/config.php. In the example within this post that means you would need to modify /var/www/html/nextcloud/config/config.php to control or change the database connection.

---%<---
    'dbname' => 'nextcloud_db',
    'dbhost' => 'localhost', #(Or your remote PostgreSQL server address if you have.)
    'dbport' => '',
    'dbtableprefix' => 'oc_',
    'dbuser' => 'nextcloud_user',
    'dbpassword' => '1234', #(The password you set for database user.)
--->%---

After the installation and setup of the Nextcloud PHP application there are more steps to be done. Have a look in the WebUI to see which additional steps are needed, such as creating a cronjob or tuning some more PHP configuration settings.
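
As a hint for the cronjob part: the Nextcloud background job is usually triggered every few minutes via the www-data user's crontab, roughly like this (the interval and path depend on your setup; the Nextcloud admin documentation has the authoritative version):

    # crontab -u www-data -e
    */5 * * * * php -f /var/www/html/nextcloud/cron.php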

If you’ve done everything correctly you should see a login page similar to this:

Login Page of your Nextcloud instance


Optional other steps for more enhanced configuration modifications

Move the data folder to somewhere else

The data folder is the root folder for all user content. By default it is located in $install_folder/data, so in our case here it is in /var/www/html/nextcloud/data.

  • Move the data directory outside the web server document root.

    $ sudo mv /var/www/html/nextcloud/data /var/nextcloud_data

  • Ensure correct access permissions (mostly not needed if you just move the folder).

    $ sudo chown -R www-data:www-data /var/nextcloud_data

    $ sudo chown -R www-data:www-data /var/www/html/nextcloud/

  • Update the Nextcloud configuration:

    1. Open the config/config.php file of your Nextcloud installation.

      $ sudo vi /var/www/html/nextcloud/config/config.php

    2. Update the ‘datadirectory’ parameter to point to the new location of your data directory.

  ---%<---
     'datadirectory' => '/var/nextcloud_data'
  --->%---
  • Restart NGinx service:

    $ sudo systemctl restart nginx

Make the installation available for multiple FQDNs on the same server

  • Adjust the Nextcloud configuration to listen and accept requests for different domain names. Configure and adjust the key trusted_domains accordingly.

    $ sudo vi /var/www/html/nextcloud/config/config.php

  ---%<---
    'trusted_domains' => 
    array (
      0 => 'domain.your-domain.com',
      1 => 'domain.other-domain.com',
    ),
  --->%---
  • Create and adjust the needed site configurations for the webserver; a minimal sketch follows after this list.
  • Restart the NGinx unit.
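
As a minimal sketch for the webserver part (the domain names are just placeholders), the server_name directives in the NGinx configuration can simply list all FQDNs that should be served:

    # In both the port 80 and the port 443 server blocks of nextcloud.conf:
    server_name nextcloud.your-domain.com nextcloud.other-domain.com;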

An error message about .ocdata might occur

  • .ocdata is not found inside the data directory

    • Create file using touch and set necessary permissions.

      $ sudo touch /var/nextcloud_data/.ocdata

      $ sudo chown -R www-data:www-data /var/nextcloud_data/

The password for the administrator user is unknown

  1. Log in to your server:

    • SSH into the server where your PostgreSQL database is hosted.
  2. Switch to the PostgreSQL user:

    • $ sudo -i -u postgres
  3. Access the PostgreSQL command line

    • psql
  4. List the databases: (If you’re unsure which database is being used by Nextcloud, you can list all of them with the \l command.)

    • \l
  5. Switch to the Nextcloud database:

    • Switch to the specific database that Nextcloud is using.
    • \c nextcloud_db
  6. Reset the password for the Nextcloud database user:

    • ALTER USER nextcloud_user WITH PASSWORD 'new_password';
  7. Exit the PostgreSQL command line:

    • \q
  8. Verify Database Configuration:

    • Check the database connection details in the config.php file to ensure they are correct.

      sudo vi /var/www/html/nextcloud/config/config.php

    • Replace nextcloud_db, nextcloud_user, and your_password with your actual database name, user, and password.

---%<---
    'dbname' => 'nextcloud_db',
    'dbhost' => 'localhost', #(or your PostgreSQL server address)
    'dbport' => '',
    'dbtableprefix' => 'oc_',
    'dbuser' => 'nextcloud_user',
    'dbpassword' => '1234', #(The password you set for nextcloud_user.)
--->%---
  9. Restart NGinx and access the UI through the browser.

12 April, 2025 06:30PM

April 11, 2025

Reproducible Builds

Reproducible Builds in March 2025

Welcome to the third report in 2025 from the Reproducible Builds project. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. Debian bookworm live images now fully reproducible from their binary packages
  2. “How NixOS and reproducible builds could have detected the xz backdoor”
  3. LWN: Fedora change aims for 99% package reproducibility
  4. Python adopts PEP standard for specifying package dependencies
  5. OSS Rebuild real-time validation and tooling improvements
  6. SimpleX Chat server components now reproducible
  7. Three new scholarly papers
  8. Distribution roundup
  9. An overview of “Supply Chain Attacks on Linux distributions”
  10. diffoscope & strip-nondeterminism
  11. Website updates
  12. Reproducibility testing framework
  13. Upstream patches

Debian bookworm live images now fully reproducible from their binary packages

Roland Clobus announced on our mailing list this month that all the major desktop variants (ie. Gnome, KDE, etc.) can be reproducibly created for Debian bullseye, bookworm and trixie from their (pre-compiled) binary packages.

Building reproducible Debian live images does not require building from reproducible source code, but this is still a remarkable achievement. Some large proportion of the binary packages that comprise these live images can be (and were) built reproducibly, but live image generation works at a higher level. (By contrast, “full” or end-to-end reproducibility of a bootable OS image will, in time, require both the compile-the-packages and the build-the-bootable-image stages to be reproducible.)

Nevertheless, in response, Roland’s announcement generated significant congratulations as well as some discussion regarding the finer points of the terms employed: a full outline of the replies can be found here.

The news was also picked up by Linux Weekly News (LWN) as well as on Hacker News.


How NixOS and reproducible builds could have detected the xz backdoor

Julien Malka aka luj published an in-depth blog post this month with the highly-stimulating title “How NixOS and reproducible builds could have detected the xz backdoor for the benefit of all”.

Starting with a dive into the relevant technical details of the XZ Utils backdoor, Julien’s article goes on to describe how we might avoid the xz “catastrophe” in the future by building software from trusted sources and building trust into untrusted release tarballs by way of comparing sources and leveraging bitwise reproducibility, i.e. applying the practices of Reproducible Builds.

The article generated significant discussion on Hacker News as well as on Linux Weekly News (LWN).


LWN: Fedora change aims for 99% package reproducibility

Linux Weekly News (LWN) contributor Joe Brockmeier has published a detailed round-up on how Fedora change aims for 99% package reproducibility. The article opens by mentioning that although Debian has “been working toward reproducible builds for more than a decade”, the Fedora project has now:

…progressed far enough that the project is now considering a change proposal for the Fedora 43 development cycle, expected to be released in October, with a goal of making 99% of Fedora’s package builds reproducible. So far, reaction to the proposal seems favorable and focused primarily on how to achieve the goal—with minimal pain for packagers—rather than whether to attempt it.

The Change Proposal itself is worth reading:

Over the last few releases, we [Fedora] changed our build infrastructure to make package builds reproducible. This is enough to reach 90%. The remaining issues need to be fixed in individual packages. After this Change, package builds are expected to be reproducible. Bugs will be filed against packages when an irreproducibility is detected. The goal is to have no fewer than 99% of package builds reproducible.

Further discussion can be found on the Fedora mailing list as well as on Fedora’s Discourse instance.


Python adopts PEP standard for specifying package dependencies

Python developer Brett Cannon reported on Fosstodon that PEP 751 was recently accepted. This design document has the purpose of describing “a file format to record Python dependencies for installation reproducibility”. As the abstract of the proposal writes:

This PEP proposes a new file format for specifying dependencies to enable reproducible installation in a Python environment. The format is designed to be human-readable and machine-generated. Installers consuming the file should be able to calculate what to install without the need for dependency resolution at install-time.

The PEP, which itself supersedes PEP 665, mentions that “there are at least five well-known solutions to this problem in the community”.


OSS Rebuild real-time validation and tooling improvements

OSS Rebuild aims to automate rebuilding upstream language packages (e.g. from PyPI, crates.io, npm registries) and publish signed attestations and build definitions for public use.

OSS Rebuild is now attempting rebuilds as packages are published, shortening the time to validating rebuilds and publishing attestations.

Aman Sharma contributed classifiers and fixes for common sources of non-determinism in JAR packages.

Improvements were also made to some of the core tools in the project:

  • timewarp for simulating the registry responses from sometime in the past.
  • proxy for transparent interception and logging of network activity.
  • and stabilize, yet another nondeterminism fixer.


SimpleX Chat server components now reproducible

SimpleX Chat is a privacy-oriented decentralised messaging platform that eliminates user identifiers and metadata, offers end-to-end encryption and has a unique approach to decentralised identity. Starting from version 6.3, however, Simplex has implemented reproducible builds for its server components. This advancement allows anyone to verify that the binaries distributed by SimpleX match the source code, improving transparency and trustworthiness.


Three new scholarly papers

Aman Sharma of the KTH Royal Institute of Technology of Stockholm, Sweden published a paper on Build and Runtime Integrity for Java (PDF). The paper’s abstract notes that “Software Supply Chain attacks are increasingly threatening the security of software systems” and goes on to compare build- and run-time integrity:

Build-time integrity ensures that the software artifact creation process, from source code to compiled binaries, remains untampered. Runtime integrity, on the other hand, guarantees that the executing application loads and runs only trusted code, preventing dynamic injection of malicious components.

Aman’s paper explores solutions to safeguard Java applications and proposes some novel techniques to detect malicious code injection. A full PDF of the paper is available.


In addition, Hamed Okhravi and Nathan Burow of Massachusetts Institute of Technology (MIT) Lincoln Laboratory along with Fred B. Schneider of Cornell University published a paper in the most recent edition of IEEE Security & Privacy on Software Bill of Materials as a Proactive Defense:

The recently mandated software bill of materials (SBOM) is intended to help mitigate software supply-chain risk. We discuss extensions that would enable an SBOM to serve as a basis for making trust assessments thus also serving as a proactive defense.

A full PDF of the paper is available.


Lastly, congratulations to Giacomo Benedetti of the University of Genoa for publishing their PhD thesis. Titled Improving Transparency, Trust, and Automation in the Software Supply Chain, Giacomo’s thesis:

addresses three critical aspects of the software supply chain to enhance security: transparency, trust, and automation. First, it investigates transparency as a mechanism to empower developers with accurate and complete insights into the software components integrated into their applications. To this end, the thesis introduces SUNSET and PIP-SBOM, leveraging modeling and SBOMs (Software Bill of Materials) as foundational tools for transparency and security. Second, it examines software trust, focusing on the effectiveness of reproducible builds in major ecosystems and proposing solutions to bolster their adoption. Finally, it emphasizes the role of automation in modern software management, particularly in ensuring user safety and application reliability. This includes developing a tool for automated security testing of GitHub Actions and analyzing the permission models of prominent platforms like GitHub, GitLab, and BitBucket.


Distribution roundup

In Debian this month:


The IzzyOnDroid Android APK repository reached another milestone in March, crossing the 40% coverage mark — specifically, more than 42% of the apps in the repository are now reproducible.

Thanks to funding by NLnet/Mobifree, the project was also able to put more time into their tooling. For instance, developers can now easily run their own verification builder in “less than 5 minutes”. This currently supports Debian-based systems, but support for RPM-based systems is incoming. Further work is in the pipeline, including documentation, guidelines and helpers for debugging.


Fedora developer Zbigniew Jędrzejewski-Szmek announced a work-in-progress script called fedora-repro-build which attempts to reproduce an existing package within a Koji build environment. Although the project’s README file lists a number of fields that “will always or almost always vary” (and there is a non-zero number of other known issues), this is an excellent first step towards full Fedora reproducibility (see above for more information).


Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for his work there.


An overview of Supply Chain Attacks on Linux distributions

Fenrisk, a cybersecurity risk-management company, has published a lengthy overview of Supply Chain Attacks on Linux distributions. Authored by Maxime Rinaudo, the article asks:

[What] would it take to compromise an entire Linux distribution directly through their public infrastructure? Is it possible to perform such a compromise as simple security researchers with no available resources but time?


diffoscope & strip-nondeterminism

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 290, 291, 292 and 293 to Debian:

  • Bug fixes:

    • file(1) version 5.46 now returns XHTML document for .xhtml files such as those found nested within our .epub tests. []
    • Also consider .aar files as APK files, at least for the sake of diffoscope. []
    • Require the new, upcoming, version of file(1) and update our quine-related testcase. []
  • Codebase improvements:

    • Ensure all calls to our_check_output in the ELF comparator have the potential CalledProcessError exception caught. [][]
    • Correct an import masking issue. []
    • Add a missing subprocess import. []
    • Reformat openssl.py. []
    • Update copyright years. [][][]

In addition, Ivan Trubach contributed a change to ignore the st_size metadata entry for directories as it is essentially arbitrary and introduces unnecessary or even spurious changes. []


Website updates

Once again, there were a number of improvements made to our website this month, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In March, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Add links to two related bugs about buildinfos.debian.net. []
    • Add an extra sync to the database backup. []
    • Overhaul description of what the service is about. [][][][][][]
    • Improve the documentation to indicate that we need to fix the synchronisation pipes. [][]
    • Improve the statistics page by breaking down output by architecture. []
    • Add a copyright statement. []
    • Add a space after the package name so one can search for specific packages more easily. []
    • Add a script to work around/implement a missing feature of debrebuild. []
  • Misc:

    • Run debian-repro-status at the end of the chroot-install tests. [][]
    • Document that we have unused diskspace at Ionos. []

In addition:

And finally, node maintenance was performed by Holger Levsen [][][] and Mattia Rizzolo [][].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

11 April, 2025 10:00PM

Bits from Debian

Bits from the DPL

Dear Debian community,

this is bits from DPL for March (sorry for the delay, I was waiting for some additional input).

Conferences

In March, I attended two conferences, each with a distinct motivation.

I joined FOSSASIA to address the imbalance in geographical developer representation. Encouraging more developers from Asia to contribute to Free Software is an important goal for me, and FOSSASIA provided a valuable opportunity to work towards this.

I also attended Chemnitzer Linux-Tage, a conference I have been part of for over 20 years. To me, it remains a key gathering for the German Free Software community –a place where contributors meet, collaborate, and exchange ideas.

I have a remark about submitting an event proposal to both FOSDEM and FOSSASIA:

    Cross distribution experience exchange

As Debian Project Leader, I have often reflected on how other Free Software distributions address challenges we all face. I am interested in discussing how we can learn from each other to improve our work and better serve our users. Recognizing my limited understanding of other distributions, I aim to bridge this gap through open knowledge exchange. My hope is to foster a constructive dialogue that benefits the broader Free Software ecosystem. Representatives of other distributions are encouraged to participate in this BoF –whether as contributors or official co-speakers. My intention is not to drive the discussion from a Debian-centric perspective but to ensure that all distributions have an equal voice in the conversation.

This event proposal was part of my commitment from my 2024 DPL platform, specifically under the section "Reaching Out to Learn". Had it been accepted, I would have also attended FOSDEM. However, both FOSDEM and FOSSASIA rejected the proposal.

In hindsight, reaching out to other distribution contributors beforehand might have improved its chances. I may take this approach in the future if a similar opportunity arises. That said, rejecting an interdistribution discussion without any feedback is, in my view, a missed opportunity for collaboration.

FOSSASIA Summit

The 14th FOSSASIA Summit took place in Bangkok. As a leading open-source technology conference in Asia, it brings together developers, startups, and tech enthusiasts to collaborate on projects in AI, cloud computing, IoT, and more.

With a strong focus on open innovation, the event features hands-on workshops, keynote speeches, and community-driven discussions, emphasizing open-source software, hardware, and digital freedom. It fosters a diverse, inclusive environment and highlights Asia's growing role in the global FOSS ecosystem.

I presented a talk on Debian as a Global Project and led a packaging workshop. Additionally, to further support attendees interested in packaging, I hosted an extra self-organized workshop at a hacker café, initiated by participants eager to deepen their skills.

There was another Debian related talk given by Ananthu titled "The Herculean Task of OS Maintenance - The Debian Way!"

To further my goal of increasing diversity within Debian –particularly by encouraging more non-male contributors– I actively engaged with attendees, seeking opportunities to involve new people in the project. Whether through discussions, mentoring, or hands-on sessions, I aimed to make Debian more approachable for those who might not yet see themselves as contributors. I was fortunate to have the support of Debian enthusiasts from India and China, who ran the Debian booth and helped create a welcoming environment for these conversations. Strengthening diversity in Free Software is a collective effort, and I hope these interactions will inspire more people to get involved.

Chemnitzer Linuxtage

The Chemnitzer Linux-Tage (CLT) is one of Germany's largest and longest-running community-driven Linux and open-source conferences, held annually in Chemnitz since 2000. It has been my favorite conference in Germany, and I have tried to attend every year.

Focusing on Free Software, Linux, and digital sovereignty, CLT offers a mix of expert talks, workshops, and exhibitions, attracting hobbyists, professionals, and businesses alike. With a strong grassroots ethos, it emphasizes hands-on learning, privacy, and open-source advocacy while fostering a welcoming environment for both newcomers and experienced Linux users.

Despite my appreciation for the diverse and high-quality talks at CLT, my main focus was on connecting with people who share the goal of attracting more newcomers to Debian. Engaging with both longtime contributors and potential new participants remains one of the most valuable aspects of the event for me.

I was fortunate to be joined by Debian enthusiasts staffing the Debian booth, where I found myself among both experienced booth volunteers –who have attended many previous CLT events– and young newcomers. This was particularly reassuring, as I certainly can't answer every detailed question at the booth. I greatly appreciate the knowledgeable people who represent Debian at this event and help make it more accessible to visitors.

As a small point of comparison –while FOSSASIA and CLT are fundamentally different events– the gender ratio stood out. FOSSASIA had a noticeably higher proportion of women compared to Chemnitz. This contrast highlighted the ongoing need to foster more diversity within Free Software communities in Europe.

At CLT, I gave a talk titled "Tausend Freiwillige, ein Ziel" (Thousand Volunteers, One Goal), which was video recorded. It took place in the grand auditorium and attracted a mix of long-term contributors and newcomers, making for an engaging and rewarding experience.

Kind regards Andreas.

11 April, 2025 10:00PM by Andreas Tille

Gunnar Wolf

Culture as a positive freedom

This post is an unpublished review for La cultura libre como libertad positiva
Please note: This review is not meant to be part of my usual contributions to ACM's «Computing Reviews». I do want, though, to share it with people that follow my general interests and such stuff.

This article was published almost a year ago, and I read it just after relocating from Argentina back to Mexico. I came from a country starting to realize the shock it meant to be ruled by an autocratic, extreme right-wing president willing to overrun its Legislature and bent on destroying the State itself — not too different from what we are now witnessing on a global level.

I have been a strong proponent and defender of Free Software and of Free Culture throughout my adult life. And I have been a Socialist since my early teenage years. I cannot say there is a strict correlation between them, but there is a big intersection of people and organizations who align with both sides — and Ártica (and Mariana Fossatti) are clearly among them.

Freedom is a word that has brought us many misunderstandings over the past several decades. We will say that Freedom can only go hand-in-hand with Equality, Fairness and Tolerance. But the extreme-right wing (is it still bordering Fascism, or has it finally embraced it as its true self?) that has grown so much in many countries over the last years also seems to have appropriated the term, even taking it as their definition. In English (particularly, in USA English), liberty is a more patriotic term, and freedom is more personal (although the term used for the market is free market); in Spanish, we conflate them both under libre.

Mariana refers to a third blog, by Rolando Astarita, where the author introduces the concepts of positive and negative freedom/liberties. Astarita characterizes negative freedom as an individual’s possibility to act without interference or coercion, limited only by other people’s freedom, while positive freedom is the real capacity to exercise one’s autonomy and achieve self-realization; this does not depend on a person alone, but on different social conditions. Astarita understands the Marxist tradition to emphasize positive freedom.

Mariana brings this definition to our usual discussion on licensing: if we follow negative freedom, we will understand free licenses as the idea of access without interference to cultural or information goods, as long as it’s legal (in order not to infringe others’ property rights). Licensing is seen as a private contract, and each individual can grant access to and use of their works at will.

The previous definition might be enough for many but, she says, it is missing something important. The practical effect of many individuals renouncing a bit of control over their property rights produces, collectively, the common goods. These constitute a pool of knowledge or culture that is no longer an individual, contractual issue, but grows and becomes social and collective. Negative freedom does not go further, but positive liberty allows us to broaden the horizon, and takes us to a notion of free culture that, by strengthening the commons, widens social rights.

She closes the article by stating (and I’ll happily sign as if they were my own words) that we are Free Culture militants «not only because it affirms the individual sovereignty to deliver and receive cultural resources, in an intellectual property framework guaranteed by the state. Our militancy is of widening the cultural enjoying and participation to the collective through the defense of common cultural goods» (…) «We want to build Free Culture for a Free Society. But a Free Society is not a society of free owners, but a society emancipated from the structures of economic power and social privilege that block this potential collective».

11 April, 2025 02:41PM

Bits from Debian

DebConf25 Registration and Call for Proposals are open

The 26th edition of the Debian annual conference will be held in Brest, France, from July 14th to July 20th, 2025. The main conference will be preceded by DebCamp, from July 7th to July 13th. We invite everyone interested to register for the event to attend DebConf25 in person. You can also submit a talk or event proposal if you're interested in presenting your work in Debian at DebConf25.

Registration can be done by creating an account on the DebConf25 website and clicking on "Register" in the profile section.

As always, basic registration is free of charge. If you are attending the conference in a professional capacity or as a representative of your company, we kindly ask that you consider registering in one of our paid categories. This helps cover the costs of organizing the event while also helping to subsidize the attendance of other community members. The last day to register with guaranteed swag is 9th June.

We encourage eligible individuals to apply for a diversity bursary. Travel, food, and accommodation bursaries are available. More details can be found on the bursary information page. The last day to apply for a bursary is April 14th. Applicants should receive feedback on their bursary application by April 25th.

The call for proposals for talks, discussions and other activities is also open. To submit a proposal, you need to create an account on the website and click the "Submit Talk Proposal" button in the profile section. The last day to submit and have your proposal considered for the main conference schedule, with video coverage guaranteed, is May 25th.

DebConf25 is also looking for sponsors; if you are interested or think you know of others who would be willing to help, please get in touch with sponsors@debconf.org.

All important dates can be found on the link here.

See you in Brest!

11 April, 2025 10:00AM by Anupa Ann Joseph, Sahil Dhiman

Reproducible Builds (diffoscope)

diffoscope 294 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 294. This version includes the following changes:

[ Chris Lamb ]
* Correct longstanding issue where many ">"-based version tests used in
  conditional fixtures were broken due to the lack of a __gt__ method.
  Thanks, Colin Watson! (Closes: #1102658)
* Address a long-hidden issue in the test_versions testsuite where we weren't
  actually testing ">" as it was masked by the tests for equality in the
  testsuite.
* Update copyright years.

You find out more by visiting the project homepage.

11 April, 2025 12:00AM

April 10, 2025

Thorsten Alteholz

My Debian Activities in March 2025

Debian LTS

This was my hundred-twenty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4096-1] librabbitmq security update to fix one CVE related to credential visibility when using tools on the command line.
  • [DLA 4103-1] suricata security update to fix several CVEs related to bypass of HTTP-based signatures, mishandling of multiple fragmented packets, logic errors, infinite loops, buffer overflows, unintended file access and use of large amounts of memory.

Last but not least I started to work on the second batch of fixes for suricata CVEs and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eightieth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1360-1] ffmpeg security update to fix three CVEs in Stretch related to out-of-bounds read, assert errors and NULL pointer dereferences.
  • [ELA-1361-1] ffmpeg security update to fix four CVEs in Buster related to out-of-bounds read, assert errors and NULL pointer dereferences.
  • [ELA-1362-1] librabbitmq security update to fix two CVEs in Stretch and Buster related to heap memory corruption due to integer overflow and credential visibility when using the tools on the command line.
  • [ELA-1363-1] librabbitmq security update to fix one CVE in Jessie related to credential visibility when using the tools on the command line.
  • [ELA-1367-1] suricata security update to fix five CVEs in Buster related to bypass of HTTP-based signature, mishandling of multiple fragmented packets, logic errors, infinite loops and buffer overflows.

Last but not least I started to work on the second batch of fixes for suricata CVEs and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

  • cups-filters to make it work with a new upstream version of qpdf again.

This work is generously funded by Freexian!

Debian Matomo

This month I uploaded new packages or new upstream or bugfix versions of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded new packages or new upstream or bugfix versions of:

Unfortunately I had a rather bad experience with package hijacking this month. Of course errors can always happen, but when I am forced into a discussion about the advantages of hijacking, I am speechless about such self-centered behavior. Oh fellow Debian Developers, is it really that hard to acknowledge a fault and tidy up afterwards? What a sad trend.

Debian IoT

Unfortunately I didn’t find any time to work on this topic.

Debian Mobcom

This month I uploaded new upstream or bugfix versions of almost all packages. First I uploaded them to experimental and afterwards to unstable to get the latest upstream versions into Trixie.

misc

This month I uploaded new packages or new upstream or bugfix versions of:

meep and meep-mpi-default are no longer supported on 32bit architectures.

FTP master

This month I accepted 343 and rejected 38 packages. The overall number of packages that got accepted was 347.

10 April, 2025 10:42PM by alteholz

John Goerzen

Announcing the NNCPNET Email Network

From 1995 to 2019, I ran my own mail server. It began with a UUCP link, an expensive long-distance call for me then. Later, I ran a mail server in my apartment, then ran it as a VPS at various places.

But running an email server got difficult. You can’t just run it on a residential IP. Now there’s SPF, DKIM, DMARC, and TLS to worry about. I recently reviewed mail hosting services, and don’t get me wrong: I still use one, and probably will, because things like email from my bank are critical.

But we’ve lost the ability to tinker, to experiment, to have fun with email.

Not anymore. NNCPNET is an email system that runs atop NNCP. I’ve written a lot about NNCP, including a less-ambitious article about point-to-point email over NNCP 5 years ago. NNCP is to UUCP what ssh is to telnet: a modernization, with modern security and features. NNCP is an asynchronous, onion-routed, store-and-forward network. It can use as a transport anything from the Internet to a USB stick.

NNCPNET is a set of standards, scripts, and tools to facilitate a broader email network using NNCP as the transport. You can read more about NNCPNET on its wiki!

The “easy mode” is to use the Docker container (multi-arch, so you can use it on your Raspberry Pi) I provide, which bundles:

  • Exim mail server
  • NNCP
  • Verification and routing tools I wrote. Because NNCP packets are encrypted and signed, we get sender verification “for free”; my tools ensure the From: header corresponds with the sending node.
  • Automated nodelist tools; it will request daily nodelist updates and update its configurations accordingly, so new members can be communicated with
  • Integration with the optional, opt-in Internet email bridge

It is open to all. The homepage has a more extensive list of features.

I even have mailing lists running on NNCPNET; see the interesting addresses page for more details.

There is extensive documentation, and of course the source to the whole thing is available.

The gateway to Internet SMTP mail is off by default, but can easily be enabled for any node. It is a full participant, in both directions, with SPF, DKIM, DMARC, and TLS.

You don’t need any inbound ports for any of this. You don’t need an always-on Internet connection. You don’t even need an Internet connection at all. You can run it from your laptop and still use Thunderbird to talk to it via its optional built-in IMAP server.

10 April, 2025 12:52AM by John Goerzen

April 09, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

AsioHeaders 1.28.2-1 on CRAN: New Upstream

A new release of the AsioHeaders package arrived at CRAN earlier today. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

This update brings a new upstream version which helps the three dependent packages using AsioHeaders to remain compliant at CRAN, and has been prepared by Charlie Gao. Otherwise I made some routine updates to the packaging since the last release in late 2022.

The short NEWS entry for AsioHeaders follows.

Changes in version 1.28.2-1 (2025-04-08)

  • Standard maintenance to CI and other packaging aspects

  • Upgraded to Asio 1.28.2 (Charlie Gao in #11 fixing #10)

Thanks to my CRANberries, there is a diffstat report for this release. Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

09 April, 2025 01:50AM

Taavi Väänänen

Writing a custom rsync server to automatically update a static site

Inspired by some friends,1 I too wanted to make a tiny website telling which event I am at this exact moment. Thankfully I already had another toy project with that information easily available, so generating the web page was a matter of just querying that project's API and feeding that data to an HTML template.

Now the obvious way to host that would be to hook up the HTML-generating code to a web server, maybe add some caching for the API calls, and then route external HTTPS traffic to it. However, that'd a) require that server to be constantly available to serve traffic, and b) be boring.

For context: I have an existing setup called staticweb, which is effectively a fancy name for a couple of Puppet-managed servers that run Apache httpd to serve static web pages and have a bunch of systemd timers running rsync to ensure they're serving the same content. It works really well and I use it for things ranging from my website or whyisbetabroken.com to things like my internal apt repository.

Now, there are two ways to get new content into that mechanism: it can be manually pushed in from e.g. a CI job, or the system can be configured to periodically pull it from a separate server. The latter mechanism was initially created so that I could pull the Debian packages from my separate reprepro server into the staticweb setup. It turns out that the latter makes a really neat method for handling other dynamically-generated static sites as well.

So, for my "where is Taavi at" site, I ended up writing the server part in Go, and included an rsync server using the gokrazy/rsync package. Initially I just implemented a static temporary directory with a timer to regularly update the HTML file in it, but then I got an even more cursed idea: what if the HTML was dynamically generated when an rsync client connected to the server? So I did just that.

For deployment, I slapped the entire server part in a container and deployed it to my Kubernetes cluster. The rsync server is exposed directly as a service to my internal network with no authentication or encryption - I think that's fine since that's a read-only service in a private LAN and the resulting HTML is going to be publicly exposed anyway. (Thanks to some DNS magic, just creating a LoadBalancer Service object with a special annotation is enough to have a DNS name provisioned for the assigned IP address, which is neat.)
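For illustration, a staticweb host could consume this with a plain rsync pull; this is just a sketch, and the DNS name, module name and destination directory below are made-up placeholders rather than the real ones:

# Hypothetical names: "whereis.svc.example" is the DNS name provisioned for the
# LoadBalancer service, "site" is the rsync module exposed by the Go server, and
# the destination is the document root served by Apache httpd on the staticweb host.
rsync --archive --delete rsync://whereis.svc.example/site/ /srv/www/whereis/

Run from a systemd timer, that is essentially the same pull mechanism the staticweb setup already uses to fetch packages from the reprepro server.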

Overall the setup works nicely, at least for now. I need to add some sort of cache so that I don't fetch unchanged information from the API for every update. And I guess I could write some cursed rsyncd reverse proxy with per-module rules if I end up creating more sites like this, to avoid creating new LoadBalancer services for each of them.


  1. Mostly from Sammy's where.fops.at

09 April, 2025 12:00AM by Taavi Väänänen (hi@taavi.wtf)

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: Preparations for Trixie, Updated debvm, DebConf 25 registration website updates and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-03

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Preparing for Trixie, by Raphaël Hertzog

As we are approaching the trixie freeze, it is customary for Debian developers to review their packages and clean them up in preparation for the next stable release.

That’s precisely what Raphaël did with publican, a package that had not seen any change since the last Debian release and that partially stopped working along the way due to a major Perl upgrade. While upstream’s activity is close to zero, hope is not yet entirely gone as the git repository moved to a new location a couple of months ago and contained the required fix. Raphaël also developed another fix to avoid an annoying warning that was seen at runtime.

Raphaël also ensured that the last upstream version of zim was uploaded to Debian unstable, and developed a fix for gnome-shell-extension-hamster to make it work with GNOME 48 and thus ensure that the package does not get removed from trixie.

Abseil and re2 transition in Debian, by Stefano Rivera

One of the last transitions to happen for trixie was an update to abseil, bringing it up to 202407. This library is a dependency for one of Freexian’s customers, as well as blocking newer versions of re2, a package maintained by Stefano.

The transition had been stalled for several months while some issues with reverse dependencies were investigated and dealt with. It took a final push to make the transition happen, including fixing a few newly discovered problems downstream. The abseil package’s autopkgtests were (trivially) broken by newer cmake versions, and some tests started failing on PPC64 (a known issue upstream).

debvm uploaded, by Helmut Grohne

debvm is a command line tool for quickly creating a Debian-based virtual machine for testing purposes. Over time, it accumulated quite a few minor issues as well as CI failures. The most notorious one was an ARM32 failure present since August. It was diagnosed down to a glibc bug by Tj and Chris Hofstaedtler and little has happened since then. To have debvm work somewhat, it now contains a workaround for this situation. Few changes are expected to be noticeable, but related tools such as apt, file, linux, passwd, and qemu required quite a few adaptations all over the place. Much of the necessary debugging was contributed by others.

DebConf 25 Registration website, by Stefano Rivera and Santiago Ruano Rincón

DebConf 25, the annual Debian developer conference, is now open for registration. Other than preparing the conference website, getting there always requires some last minute changes to the software behind the registration interface and this year was no exception. Every year, the conference is a little different to previous years, and has some different details that need to be captured from attendees. And every year we make minor incremental improvements to fix long-standing problems.

New concepts this year included: brunch, the closing talks on the departure day, venue security clearance, partial contributions towards food and accommodation bursaries, and attendee-selected bursary budgets.

Miscellaneous contributions

  • Helmut uploaded guess-concurrency incorporating feedback from others.
  • Helmut reacted to rebootstrap CI results and adapted it to cope with changes in unstable.
  • Helmut researched real world /usr-move fallout though little was actually attributable. He also NMUed systemd unsuccessfully.
  • Helmut sent 12 cross build patches.
  • Helmut looked into undeclared file conflicts in Debian more systematically and filed quite some bugs.
  • Helmut attended the cross/bootstrap sprint in Würzburg. A report of the event is pending.
  • Lucas worked on the CFP and tracks definition for DebConf 25.
  • Lucas worked on some bits involving Rails 7 transition.
  • Carles investigated why the job piuparts on salsa-ci/pipeline was passing but was failing on piuparts.debian.org for simplemonitor package. Created an issue and MR with a suggested fix, under discussion.
  • Carles improved the documentation of salsa-ci/pipeline: added documentation for different variables.
  • Carles made debian-history package reproducible (with help from Chris Lamb).
  • Carles updated simplemonitor package (new upstream version), prepared a new qdacco version (fixed bugs in qdacco, packaged with the upgrade from Qt 5 to Qt 6).
  • Carles reviewed and submitted translations to Catalan for adduser, apt, shadow, apt-listchanges.
  • Carles reviewed, created merge-requests for translations to Catalan of 38 packages (using po-debconf-manager tooling). Created 40 bug reports for some merge requests that haven’t been actioned for some time.
  • Colin Watson fixed 59 RC bugs (including 26 packages broken by the long-overdue removal of dh-python’s dependency on python3-setuptools), and upgraded 38 packages (mostly Python-related) to new upstream versions.
  • Colin worked with Pranav P to track down and fix a dnspython autopkgtest regression on s390x caused by an endianness bug in pylsqpack.
  • Colin fixed a time-based test failure in python-dateutil that would have triggered in 2027, and contributed the fix upstream.
  • Colin fixed debconf to automatically use the noninteractive frontend if stdin is not a terminal.
  • Stefano bisected and fixed a pypy translation regression on Debian stable and older on 32-bit ARM.
  • Emilio coordinated and helped finish various transitions in light of the transition freeze.
  • Thorsten Alteholz uploaded cups-filters to fix an FTBFS with a new upstream version of qpdf.
  • With the aim of enhancing the support for packages related to Software Bill of Materials (SBOMs) in recent industrial standards, Santiago has worked on finishing the packaging of and uploaded CycloneDX python library. There is on-going work about SPDX python tools, but it requires (build-)dependencies currently not shipped in Debian, such as owlrl and pyshacl.
  • Anupa worked with the Publicity team to announce the Debian 12.10 point release.
  • Anupa with the support of Santiago prepared an announcement and announced the opening of CfP and Registrations for DebConf 25.

09 April, 2025 12:00AM by Anupa Ann Joseph

April 08, 2025

Petter Reinholdtsen

Some notes on Linux LUKS cracking

A few months ago, I found myself in the unfortunate position that I had to try to recover the password used to encrypt a Linux hard drive. Tonight a few friends of mine asked for details on this effort. I guess it is a good idea to expose the recipe I found to a wider audience, so here are a few relevant links and key findings. I've forgotten a lot, so part of this is taken from memory.

I found a good recipe in a blog post written in 2019 by diverto, titled Cracking LUKS/dm-crypt passphrases. I tried both the John the Ripper approach, where it generated password candidates and passed them to cryptsetup, and the luks2jack.py approach (which did not work for me, if I remember correctly), but believe I had the most success with the hashcat approach. I had it running for several days on my Thinkpad X230 laptop from 2012. I do not remember the exact hash rate, but when I tested it again just now on the same machine by running "hashcat -a 0 hashcat.luks longlist --force", I got a hash rate of 7 per second. Testing it on a newer machine with a 32 core AMD CPU, I got a hash rate of 289 per second. Using the ROCM OpenCL approach on the same machine I managed to get a hash rate of 2821 per second.

Session..........: hashcat                                
Status...........: Quit
Hash.Mode........: 14600 (LUKS v1 (legacy))
Hash.Target......: hashcat.luks
Time.Started.....: Tue Apr  8 23:06:08 2025 (1 min, 10 secs)
Time.Estimated...: Tue Apr  8 23:12:49 2025 (5 mins, 31 secs)
Kernel.Feature...: Pure Kernel
Guess.Base.......: File (/usr/share/dict/bokmål)
Guess.Queue......: 1/1 (100.00%)
Speed.#1.........:     2821 H/s (8.18ms) @ Accel:128 Loops:128 Thr:32 Vec:1
Recovered........: 0/1 (0.00%) Digests (total), 0/1 (0.00%) Digests (new)
Progress.........: 0/935405 (0.00%)
Rejected.........: 0/0 (0.00%)
Restore.Point....: 0/935405 (0.00%)
Restore.Sub.#1...: Salt:0 Amplifier:0-1 Iteration:972928-973056
Candidate.Engine.: Device Generator
Candidates.#1....: A-aksje -> fiskebil
Hardware.Mon.#1..: Temp: 73c Fan: 77% Util: 99% Core:2625MHz Mem: 456MHz Bus:16

Note that for this last test I picked the largest word list I had on my machine (dict/bokmål) as a fairly random word list and not because it is useful for cracking my particular use case from a few months ago.
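For anyone wanting to reproduce the rough workflow, a minimal sketch looks like the following; the device path and word list are placeholders, and mode 14600 is the LUKS v1 mode visible in the session output above:

# Take a copy of the LUKS header (including key slots) from the encrypted partition.
cryptsetup luksHeaderBackup /dev/sdX2 --header-backup-file hashcat.luks

# Dictionary attack (-a 0) against the LUKS v1 header (-m 14600) using a word list.
hashcat -m 14600 -a 0 hashcat.luks /usr/share/dict/wordlist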

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

08 April, 2025 09:20PM

April 07, 2025

Scarlett Gately Moore

KDE Snap Updates, Kubuntu Updates, More life updates!

Icy morning Witch Wells AzIcy morning Witch Wells Az

Life:

Last week we were enjoying springtime; this week winter has made a comeback! Good news on the broken arm front: the infection is gone, so they can finally deal with the break itself again. I will have a less invasive surgery on April 25th to pull the bones back together so they can properly knit! If you can spare any change please consider a donation to my continued healing and recovery, or just support my work 🙂

Kubuntu:

While testing the Beta I came across some crashy apps (namely PIM) due to apparmor. I have uploaded fixed profiles for kmail, akregator, akonadiconsole, konqueror, and tellico.

KDE Snaps:

Added sctp support in Qt https://invent.kde.org/neon/snap-packaging/kde-qt6-core-sdk/-/commit/bbcb1dc39044b930ab718c8ffabfa20ccd2b0f75

This will allow me to finish a pyside6 snap and fix FreeCAD build.

Changed build type to Release in the kf6-core24-sdk which will reduce the size of kf6-core24 significantly.

Fixed a few startup errors in kf5-core24 and kf6-core24 snapcraft-desktop-integration.

Soumyadeep fixed wayland icons in https://invent.kde.org/neon/snap-packaging/kf6-core-sdk/-/merge_requests/3

KDE Applications 25.03.90 RC released to --candidate (I know it says 24.12.3, the version won’t be updated until the 25.04.0 release)

Kasts core24 fixed in --candidate

Kate now core24 with Breeze theme! --candidate

Neochat: Fixed missing QML and 25.04 dependencies in --candidate

Kdenlive now with Glaxnimate animations! --candidate

Digikam 8.6.0 now with scanner support in --stable

Kstars 3.7.6 released to --stable for realz, removed store rejected plugs.

Thanks for stopping by!

07 April, 2025 12:13PM by sgmoore

April 05, 2025

Russell Coker

HP z840

Many PCs with DDR4 RAM have started going cheap on ebay recently. I don’t know how much of that is due to Windows 11 hardware requirements and how much is people replacing DDR4 systems with DDR5 systems.

I recently bought a z840 system on ebay; it’s much like the z640 that I recently made my workstation [1] but is designed strictly as a 2 CPU system. The z640 can run with 2 CPUs if you have a special expansion board for a second CPU which is very expensive on eBay and which doesn’t appear to have good airflow potential for cooling. The z840 also has a slightly larger case which supports more DIMM sockets and allows better cooling.

The z640 and z840 take the same CPUs if you use the E5-2xxx series of CPU that is designed for running in 2-CPU mode. The z840 runs DDR4 RAM at 2400 as opposed to 2133 for the z640 for reasons that are not explained. The z840 has more PCIe slots which includes 4*16x slots that support bifurcation.

The z840 that I have has the HP Z-Cooler [2] installed. The coolers are mounted on a 45 degree angle (the model depicted at the right top of the first page of that PDF) and the system has a CPU shroud with fans that mount exactly on top of the CPU heatsinks and duct the hot air out without going over other parts. The technology of the z840 cooling is very impressive. When running two E5-2699A CPUs which are listed as “145W typical TDP” with all 44 cores in use the system is very quiet. It’s noticeably louder than the z640 but is definitely fine to have at your desk. In a typical office you probably wouldn’t hear it when it’s running full bore. If I was to have one desktop PC or server in my home the z840 would definitely be the machine I choose for that.

I decided to make the z840 a build server to share the resource with friends and to use for group coding projects. I often have friends visit with laptops to work on FOSS stuff and a 44 core build server is very useful for that.

The system is by far the fastest system I’ve ever owned even though I don’t have fast storage for it yet. But 256G of RAM allows enough caching that storage speed doesn’t matter too much.

Here is building the SE Linux “refpolicy” package on the z640 with E5-2696 v3 CPU and the z840 with two E5-2699A v4 CPUs:

257.10user 47.18system 1:40.21elapsed 303%CPU (0avgtext+0avgdata 416408maxresident)k
66904inputs+1519912outputs (74major+8154395minor)pagefaults 0swaps

222.15user 24.17system 1:13.80elapsed 333%CPU (0avgtext+0avgdata 416192maxresident)k
5416inputs+0outputs (64major+8030451minor)pagefaults 0swaps

Here is building Warzone2100 on the z640 and the z840:

6887.71user 178.72system 16:15.09elapsed 724%CPU (0avgtext+0avgdata 1682160maxresident)k
1555480inputs+8918768outputs (114major+27133734minor)pagefaults 0swaps

6055.96user 77.05system 8:00.20elapsed 1277%CPU (0avgtext+0avgdata 1682100maxresident)k
117640inputs+0outputs (46major+11460968minor)pagefaults 0swaps

It seems that the refpolicy package can’t use many more than 18 cores as it is only 37% faster when building with 44 cores available. Building Warzone is slightly more than twice as fast so it can really use all the available cores. According to Passmark the E5-2699A v4 is 22% faster than the E5-2696 v3.

I highly recommend buying a z640 if you see one at a good price.

05 April, 2025 10:52AM by etbe

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Cisco 2504 password extraction

I needed this recently, so I took a trip into Ghidra and learned enough to pass it on:

If you have an AireOS-based wireless controller (Cisco 2504, vWLC, etc.; basically any of the now-obsolete Cisco WLC series), and you need to pick out the password, you can go look in the XML files in /mnt/application/xml/aaaapiFileDbCfgData.xml (if you have a 2504, you can just take out the CompactFlash card and mount the fourth partition or run strings on it; if it's a vWLC you can use the disk image similarly). You will find something like (hashes have been changed to not leak my own passwords :-) ):

    <userDatabase index="0" arraySize="2048">
      <userName>61646d696e000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000</userName>
      <serviceType>6</serviceType>
      <WLAN-id>0</WLAN-id>
      <accountCreationTimestamp>946686833</accountCreationTimestamp>
      <passwordStore>
        <ps_type>PS_STATIC_AES128CBC_SHA1</ps_type>
        <iv>3f7b4fcfcd3b944751a8614ebf80a0a0</iv>
        <mac>874d482bbc56b24ee776e80bbf1f5162</mac>
        <max_passwd_len>50</max_passwd_len>
        <passwd_len>16</passwd_len>
        <passwd>8614c0d0337989017e9576b82662bc120000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000</passwd>
      </passwordStore>
      <telnetEnable>1</telnetEnable>
    </userDatabase>

“userName” is obviously just “admin” in plain hex. Ignore the HMAC; it's seemingly only used for integrity checking. The password is encrypted with a static key embedded in /sbin/switchdrvr, namely 834156f9940f09c0a8d00f019f850005. So you can just ask OpenSSL to decrypt it:

> printf $( echo '8614c0d0337989017e9576b82662bc12' | sed 's/\(..\)/\\x&/g' ) | openssl aes-128-cbc -d -K 834156f9940f09c0a8d00f019f850005 -iv 3f7b4fcfcd3b944751a8614ebf80a0a0 | xxd -g 1
00000000: 70 61 73 73 77 6f 72 64                          password

And voila. (There are some other passwords floating around there in the XML files, where I believe that this master key is used to encrypt other keys, and occasionally things seem to be double-hex-encoded, but I haven't really bothered looking at it.)

When you have the actual key, it's easy to just search for it and see that others have found the same thing, but for “show run” output, so searching for e.g. “PS_STATIC_AES128CBC_SHA1” found nothing. But now at least you know.

Update: Just to close the loop: The contents of <mac> is an HMAC-SHA1 of the concatenation of 00 00 00 01 <iv> <passwd> (supposedly maybe 01 00 00 00 instead, depending on the endianness of the underlying system; both MIPS and x86 controllers exist), where <passwd> is the encrypted password (without the extra tacked-on zeros), and the HMAC key is 44C60835E800EC06FFFF89444CE6F789. So it's doubly useless for password cracking; just decrypt the plaintext password instead. :-)
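If you want to check that interpretation yourself, the HMAC can be recomputed with openssl in the same style as the decryption above; the hex strings below are the (altered) IV and encrypted password from the XML snippet, so with real data the output should match the <mac> field:

# 00000001 prefix, then the IV, then the encrypted password; HMAC-SHA1 keyed with the static HMAC key.
printf $( echo '000000013f7b4fcfcd3b944751a8614ebf80a0a08614c0d0337989017e9576b82662bc12' | sed 's/\(..\)/\\x&/g' ) | openssl dgst -sha1 -mac HMAC -macopt hexkey:44C60835E800EC06FFFF89444CE6F789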

05 April, 2025 09:57AM

Russell Coker

More About the HP ML110 Gen9 and z640

In May 2021 I bought a ML110 Gen9 to use as a deskside workstation [1]. I started writing this post in April 2022 when it had been my main workstation for almost a year. While this post was in a draft state in Feb 2023 I upgraded it to an 18 core E5-2696 v3 CPU [2]. It’s now March 2025 and I have replaced it.

Hardware Issues

My previous state with this was not having adequate cooling to allow it to boot and not having a PCIe power cable for a video card. As an experiment I connected the CPU fan to the PCIe fan power and discovered that all power and monitoring wires for the CPU and PCIe fans are identical. This allowed me to buy a CPU fan which was cheaper ($26.09 including postage) and easier to obtain than a PCIe fan (presumably due to CPU fans being more commonly used and manufactured in larger quantities). I had to be creative in attaching the CPU fan as its cable wasn’t long enough to reach the usual location for a PCIe fan. The PCIe fan also required a baffle to direct the air to the right place which annoyingly HP apparently doesn’t ship with the low-end servers, so I made one from a Corn Flakes packet and duct tape.

The Wikipedia page listing AMD GPUs lists many newer ones that draw less than 80W and don’t need a PCIe power cable. I ordered a Radeon RX560 4G video card which cost $246.75. It only uses 8 lanes of PCIe but that’s enough for me, the only 3D game I play is Warzone 2100 which works well at 4K resolution on that card. It would be really annoying if I had to just spend $246.75 to get the system working, but I had another system in need of a better video card which had a PCIe power cable so the effective cost was small. I think of it as upgrading 2 systems for $123 each.

The operation of the PCIe video card was a little different than non-server systems. The built in VGA card displayed the hardware status at the start and then kept displaying that after the system had transitioned to PCIe video. This could be handy in some situations if you know what it’s doing but was confusing initially.

Booting

One insidious problem is that when booting in “legacy” mode the boot process takes an unreasonably long time and often hangs; the UEFI implementation on this system seems much more reliable and also supports booting from NVMe.

Even with UEFI the boot process on this system was slow. Also the early stage of the power on process involves fans being off and the power light flickering which leads you to think that it’s not booting and needs to have the power button pressed again – which turns it off. The Dell power on sequence of turning most LEDs on and instantly running the fans at high speed leaves no room for misunderstanding. This is also something that companies making electric cars could address. When turning on a machine you should never be left wondering if it is actually on.

Noise

This was always a noisy system. When I upgraded the CPU from an 8 core with 85W “typical TDP” to an 18 core with 145W “typical TDP” it became even louder. Then over time as dust accumulated inside the machine it became louder still until it was annoyingly loud outside the room when all 18 cores were busy.

Replacement

I recently blogged about options for getting 8K video to work on Linux [3]. This requires PCIe power which the z640s have (all the ones I have seen have it, though I don’t know if all that HP made have it) and which the cheaper models in the ML-110 line don’t have. Since then I have ordered an Intel Arc card which apparently has 190W TDP. There are adaptors to provide PCIe power from SATA or SAS power which I could have used, but having an E5-2696 v3 CPU that draws 145W [4] and a GPU that draws 190W [4] in a system with a 350W PSU doesn’t seem viable.

I replaced it with one of the HP z640 workstations I got in 2023 [5].

The current configuration of the z640 has 3*32G RDIMMs compared to the ML110 having 8*32G, going from 256G to 96G is a significant decrease but most tasks run well enough like that. A limitation of the z640 is that when run with a single CPU it only has 4 DIMM slots which gives a maximum of 512G if you get 128G LRDIMMs, but as all DDR4 DIMMs larger than 32G are unreasonably expensive at this time the practical limit is 128G (which costs about $120AU). In this case I have 96G because the system I’m using has a motherboard problem which makes the fourth DIMM slot unusable. Currently my desire to get more than 96G of RAM is less than my desire to avoid swapping CPUs.

At this time I’m not certain that I will make my main workstation the one that talks to an 8K display. But I really want to keep my options open and there are other benefits.

The z640 boots faster. It supports PCIe bifurcation (with a recent BIOS) so I now have 4 NVMe devices in a single PCIe slot. It is very quiet, the difference is shocking. I initially found it disconcertingly quiet.

The biggest problem with the z640 is having only 4 DIMM sockets and the particular one I’m using has a problem limiting it to 3. Another problem with the z640 when compared to the ML110 Gen9 is that it runs the RAM at 2133 while the ML110 runs it at 2400, that’s a significant performance reduction. But the benefits outweigh the disadvantages.

Conclusion

I have no regrets about buying the ML-110. It was the only DDR4 ECC system that was in the price range I wanted at the time. If I knew that the z640 systems would run so quietly then I might have replaced it earlier. But it was only late last year that 32G DIMMs became affordable, before then I had 8*16G DIMMs to give 128G because I had some issues of programs running out of memory when I had less.

05 April, 2025 09:13AM by etbe

April 04, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

Naming things revisited

How long has it been since you last saw a conversation over different blogs syndicated at the same planet? Well, it’s one of the good memories of the early 2010s. And there is an opportunity to re-engage! 😃

I came across Evgeni’s post “naming things is hard” in Planet Debian. So, what names have I given my computers?

I have had many since the mid-1990s. I also had several during the decade before that, but before Linux, my computers didn’t have a formal name. Naming my computers something nice is something Linux gave me.

I have forgotten many. Some of the names I have used:

  • My years in Iztacala: I worked as a sysadmin between 1999 and 2003. When I arrived, we already had two servers, campus and tlali, and one computer pending installation, ollin. The credit for their names is not mine.
    • campus: A mighty SPARCstation 5! Because it was the main (and for some time, the only!) server in our campus.
    • tlali: A regular PC used as a Linux server. “Tlali” means something like lands in náhuatl, the prehispanic language spoken in central Mexico. My workplace was Iztacala, which translates as “the place where there are white houses”; “tlali” and “cali” are related words.
    • ollin: was a big IBM RS/6000 system running AIX. It came to us, probably already obsolete, as a (useless) donation from Fundación UNAM; I don’t recall the exact model, but it looked very much like this one. We had no software for it, and frankly… never really got it to be productive. Funnily, its name “Ollin” means “movement” in Náhuatl. I added some servers to the lineup during the two years I was in Iztacala:
    • tlamantli: An Alpha 21164 server that doubled as my desktop. Given the tradition in Iztacala of naming things in Náhuatl, but trying to be somewhat funny, tlamantli just means a thing; I understand the word is usually bound to a quantifier.
    • tepancuate: A regular PC system we set up with OpenBSD as a firewall. It means “wall” in Náhuatl.
  • Following the first CONSOL (National Free Software Conference), I was invited to work as a programmer at UPN, Universidad Pedagógica Nacional in 2003–2004. There I was not directly in charge of any of the servers (I mostly used ajusco, managed by Víctor, named after the mountain on whose slopes our campus was). But my only computer there was:
    • shmate: meaning “old rag” in Yiddish. The word shmate is used like thingy, although it would usually mean an old and slightly worn-out thingy. It was quite a nice machine, though. I had a Pentium 4 with 512MB RAM, not bad for 2003!
  • I started my present work at Instituto de Investigaciones Económicas, UNAM 20 years ago(!), in 2005. Here I am a systems administrator, so naturally I am in charge of the servers. And over the years, we have had a fair share of machines:
    • mosca: is my desktop. It has changed hardware several times (of course) over the years, but it’s still the same Debian Sid install I did in January 2005 (I must have reinstalled once, when I got it replaced by an AMD64). Its name is the Spanish name for the common fly. I have often used it to describe my work, since I got in the early 1990s an automated bilingual translator called TRANSLATE; it came on seven 5.25” floppies. As a teenager, I somehow got my hands on a copy, and installed it in my 80386SX. Fed it its own README to see how it fared. And the first sentence made me burst in laughter: «TRANSLATE performs on the fly translation» ⇒ «TRADUCE realiza traducción sobre la mosca». Starting then, I always think of «on the fly» as «sobre la mosca». As Groucho said, I guess… Time flies like an arrow, but fruit flies like a banana.
    • lafa: When I got there, we didn’t have any servers; for some time, I took one of the computer lab’s systems to serve our web page and receive mail. But when we got some budget approved, we bought a fsckin-big server. Big as in four-rack-units. Double CPUs (not multicore, but two independent early Xeon CPUs, if I’m not mistaken; still, it was a 32-bit system). לאפה (lafa) is a big, more flexible kind of Arab bread than pita; I loved it when I lived in Israel. And there is an album (and song) by Teapacks, an Israeli group I am very fond of, «hajaim shelja belafa» (your life in a lafa), saying, «hey, brother! Your life is in a lafa. You throw everything in a big pita. You didn’t have time to chew, you already swallowed it».
    • joma: Our firewall. חומה means wall in Hebrew.
    • baktun: lafa was great, but over the years, it got old. After many years, I finally got the Institute to buy a second server. We got it in December 2012. There was a lot of noise around then because the world was supposed to end on 2012.12.21, as the Mayan calendar reached a full long cycle. This long cycle is called baktun. So, it was fitting as the name of the new server.
    • teom: As lafa was almost immediately decommissioned and turned into a virtual machine in the much bigger baktun, I wanted to split services, make off-hardware backups, and such. Almost two years later, my request was approved and we bought a second server. But instead of buying it from a “regular” provider, we got it off a stash of machines bought by our university’s central IT entity. To my surprise, it had the exact same hardware configuration as baktun, bought two years earlier. Even the serial number was absurdly close. So, I had it as baktun’s long-lost twin. Hence, תְאוֹם (transliterated as teom), the Hebrew word for twin. About a year after teom arrived in my life, my twin children were also born, but their naming followed a completely different logic process than my computers 😉
  • At home or on the road: I am sure I am missing several systems over the years.
    • pato: The earliest system I had that I remember giving a name to. I built a 80386SX in 1991, buying each component separately. The box had a 1-inch square for integrators to put their branding — And after some time, I carefully printed and applied a label that said Catarmáquina PATO (the first word, very small). Pato (duck) is how we’d call a no-brand system. Catarmáquina because it was the system where I ran my BBS, CatarSYS (1992-1994).
    • malenkaya: In 2008 I got a 9” Acer Aspire One netbook (Atom N270 i386, 1GB RAM). I really loved that machine! Although it was quite limited, it was my main computer while on the road for almost five years. malenkaya means small (for female) in Russian.
    • matlalli: After malenkaya started being too limited for my regular use, I bought its successor Acer Aspire One model. This one was way larger (10.1 inches screen) and I wasn’t too happy about it at the beginning, but I ended up loving it. So much, in fact, that we bought at least four very similar such computers for us and our family members. This computer was dirt cheap, and endured five further years of lugging everywhere. matlalli is due to its turquoise color: it is the Náhuatl word for blue or green.
    • cajita: In 2014 I got a beautiful Cubox i4 Pro computer. It took me some time to get it to boot and be generally useful, but it ended up being my home server for many years, until I had a power supply malfunction which bricked it. cajita means little box in Spanish.
    • pitentzin: Another 10.1” Acer Aspire One (the last in the lineup; the CPU is a Celeron 877, so it does run AMD64, and it supports up to 16GB RAM, I think I have it with 12). We originally bought it for my family in Argentina, but they didn’t really use it much, and after a couple of years we got it back. We decided it would be the computer for the kids, at least for the time being. And although it is a 2013 laptop, it’s still our everyday media station driver. Oh, and the name pitentzin? Náhuatl for children.
    • tliltik: In 2018, I bought a second-hand Thinkpad X230. It was my daily driver for about three years. I reflashed its firmware with CoreBoot, and repeated the experience for seven people IIRC in DebConf18. With it, I learned to love the Thinkpad keyboard. Naturally for a thinkpad, tliltik means black in Náhuatl.
    • uesebe: When COVID struck, we were all sent home, and my university lent me a nice recently bought Intel i7 HP laptop. At first, I didn’t want to mess up its Windows install (so I set up a USB-drive-based installation, hence the name uesebe); when it was clear the lockdown was going to be long (and that tliltik had too many aches to be used for my daily work), I transferred the install to its HDD and used it throughout the pandemic, until mid 2022.
    • bolex: I bought this computer for my father in 2020. After he passed away in May 2022, I took his computer, and named it bolex because that’s the brand of the 8mm cinema camera he loved and had since 1955, and with which he created most of his films. It is really an entry-level machine, though (a single-core, dual-threaded Celeron), and it was too limited when I started distance-teaching again, so I had to store it as an emergency system.
    • yogurtu: During the pandemic, I spent quite a bit of time fiddling with the Raspberry Pi family. But all in all, while they are nice machines for many uses, they are too limited to be daily drivers. Or even enough to take e.g. to DebConf and have them be my conference computer. I bought an almost-new-but-used (≈2 year old) Yoga C630 ARM laptop. I often brag about my happy experience with it, and how it brings a reasonably powerful ARM Linux system to my everyday life. In our last DebConf, I didn’t even pick up my USB-C power connector every day; the battery just lasts over ten hours of active work. But I’m not here doing ads, right? yogurtu naturally is derived from the Yoga brand it has, but is taken from Yogurtu Nghé, a fictional character by the Argentinian comical-musical group Les Luthiers, that has marked my life.
    • misnenet: Towards mid 2023, when it was clear that bolex would not be a good daily driver, and considering we would be spending six months in Argentina, I bought a new desktop system. It seems I have something for small computers: I decided for a refurbished HP EliteDesk 800 G5 Mini i7 system. I picked it because, at close to 18×18×3.5cm it perfectly fits in my DebConf18 bag. A laptop, it is clearly not, but it can easily travel with me when needed. Oh, and the name? Because for this model, HP uses different enclosures based on the kind of processor: The i3 model has a flat, black aluminum top… But mine has lots of tiny holes, covering two areas of roughly 15×7cm, with a tiny hole every ~2mm, and with a solid strip between them. Of course, מִסנֶנֶת (misnenet, in Hebrew) means strainer.

04 April, 2025 07:17PM

hackergotchi for Guido Günther

Guido Günther

Booting an Android custom kernel on a Pixel 3a for QMI debugging

As you might know I'm not much of an Android user (let alone developer) but in order to figure out how something low level works you sometimes need to peek at how vendor kernels handle this. For that it is often useful to add additional debugging.

One such case is QMI communication going on in Qualcomm SOCs. Joel Selvaraj wrote some nice tooling for this.

To make use of this, a rooted device and a small kernel patch are needed, and what would be a no-brainer with Linux Mobile took me a moment to get working on Android. Here are the steps I took on a Pixel 3a to first root the device via Magisk, then build the patched kernel and put that into a boot.img to boot it.

Flashing the factory image

If you still have Android on the device you can skip this step.

You can get Android 12 from developers.google.com. I've downloaded sargo-sp2a.220505.008-factory-071e368a.zip. Then put the device into Fastboot mode (Power + Vol-Down), connect it to your PC via USB, unzip/unpack the archive and reflash the phone:

unpack sargo-sp2a.220505.008-factory-071e368a.zip
./flash-all.sh

This wipes your device! I had to run it twice since it would time out on the first run. Note that this unpacked zip contains another zip (image-sargo-sp2a.220505.008.zip) which will become useful below.

Enabling USB debugging

Now boot Android and enable Developer mode by going to Settings → About, then touching Build Number (at the very bottom) 7 times.

Go back one level, then go to System → Developer Options and enable "USB Debugging".

Obtaining boot.img

There are several ways to get boot.img. If you just flashed Android above then you can fetch boot.img from the already mentioned image-sargo-sp2a.220505.008.zip:

unzip image-sargo-sp2a.220505.008.zip boot.img

If you want to fetch the exact boot.img from your device you can use TWRP (see the very end of this post).

Becoming root with Magisk

Being able to su via adb will later be useful to fetch kernel logs. For that we first download Magisk as APK. At the time of writing v28.1 is current.

Once downloaded we upload the APK and the boot.img from the previous step onto the phone (which needs to have Android booted):

adb push Magisk-v28.1.apk /sdcard/Download
adb push boot.img /sdcard/Download

In Android open the Files app, navigate to /sdcard/Download and install the Magisk APK by opening the APK.

We now want to patch boot.img to get su via adb to work (so we can run dmesg). This happens by hitting Install in the Magisk app, then "Select a file to patch". You then select the boot.img we just uploaded.

The installation process will create a magisk_patched-<random>.img in /sdcard/Download. We can pull that file via adb back to our PC:

adb pull /sdcard/Download/magisk_patched-28100_3ucVs.img

Then reboot the phone into fastboot (adb reboot bootloader) and flash it (this is optional see below):

fastboot flash boot magisk_patched-28100_3ucVs.img

Now boot the phone again, open the Magisk app, go to SuperUser at the bottom and enable Shell.

If you now connect to your phone via adb again and now su should work:

adb shell
su

As noted above if you want to keep your Android installation pristine you don't even need to flash this Magisk enabled boot.img. I've flashed it so I have su access for other operations too. If you don't want to flash it you can still test boot it via:

fastboot boot magisk_patched-28100_3ucVs.img

and then perform the same adb shell su check as above.

Building the custom kernel

For our QMI debugging to work we need to patch the kernel a bit and place that in boot.img too. So let's build the kernel first. For that we install the necessary tools (which are thankfully packaged in Debian) and fetch the Android kernel sources:

sudo apt install repo android-platform-tools-base kmod ccache build-essential mkbootimg
mkdir aosp-kernel && cd aosp-kernel
repo init -u https://android.googlesource.com/kernel/manifest -b android-msm-bonito-4.9-android12L
repo sync

With that we can apply Joel's kernel patches and also compile in the touch controller driver so we don't need to worry if the modules in the initramfs match the kernel. The kernel sources are in private/msm-google. I've just applied the diffs on top with patch and modified the defconfig and committed the changes. The resulting tree is here.

We then build the kernel:

PATH=/usr/sbin:$PATH ./build_bonito.sh

The resulting kernel is at ./out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb.

In order to boot that kernel I found it to be the simplest to just replace the kernel in the Magisk patched boot.img as we have that already. In case you have already deleted that for any reason we can always fetch the current boot.img from the phone via TWRP (see below).

Preparing a new boot.img

To replace the kernel in our Magisk enabled magisk_patched-28100_3ucVs.img from above with the just built kernel we can use mkbootimg for that. I basically copied the steps we're using when building the boot.img on the Linux Mobile side:

ARGS=$(unpack_bootimg --format mkbootimg --out tmp --boot_img magisk_patched-28100_3ucVs.img)
CLEAN_PARAMS="$(echo "${ARGS}" | sed -e "s/ --cmdline '.*'//" -e "s/ --board '.*'//")"
cp android-kernel/out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb tmp/kernel
mkbootimg -o "boot.patched.img" ${CLEAN_PARAMS} --cmdline "${ARGS}"

This will give you a boot.patched.img with the just built kernel.

Boot the new kernel via fastboot

We can now boot the new boot.patched.img. No need to flash that onto the device for that:

fastboot boot boot.patched.img

Fetching the kernel logs

With that we can fetch the kernel logs with the debug output via adb:

adb shell su -c 'dmesg -t' > dmesg_dump.xml

or already filtering out the QMI commands:

adb shell su -c 'dmesg -t'  | grep "@QMI@" | sed -e "s/@QMI@//g" &> sargo_qmi_dump.xml

That's it. You can apply this method for testing out other kernel patches as well. If you want to apply the above to other devices you basically need to make sure you patch the right kernel sources, the other steps should be very similar.

In case you just need a rooted boot.img for sargo you can find a patched one here.

If this procedure can be improved / streamlined somehow please let me know.

Appendix: Fetching boot.img from the phone

If, for some reason you lost boot.img somewhere on the way you can always use TWRP to fetch the boot.img currently in use on your phone.

First get TWRP for the Pixel 3a. You can boot that directly by putting your device into fastboot mode, then running:

fastboot boot twrp-3.7.1_12-1-sargo.img

Within TWRP select BackupBoot and backup the file. You can then use adb shell to locate the backup in /sdcard/TWRP/BACKUPS/ and pull it:

adb pull /sdcard/TWRP/BACKUPS/97GAY10PWS/2025-04-02--09-24-24_SP2A220505008/boot.emmc.win

You now have the device's boot.img on your PC and can e.g. replace the kernel or make modifications to the initramfs.

04 April, 2025 04:46PM

hackergotchi for Johannes Schauer Marin Rodrigues

Johannes Schauer Marin Rodrigues

To boldly build what no one has built before

Last week, we (Helmut, Jochen, Holger, Gioele and josch) met in Würzburg for a Debian crossbuilding & bootstrap sprint. We would like to thank Angestöpselt e. V. for generously providing us with their hacker space which we were able to use exclusively during the four-day-sprint. We’d further like to thank Debian for their sponsorship of accommodation of Helmut and Jochen.

The most important topics that we worked on together were:

  • publicity and funding for bootstrappable and cross-buildable Debian, driven by Gioele, including the creation of a list of usecases and slogans [everyone]
  • proof-of-concept for substituting coreutils with alternative implementations such as busybox, toybox or uutils [Helmut, Jochen, josch]
  • writing a patch for documenting the Multi-Arch field in Debian policy #749826 [Helmut, Holger, Jochen, josch]
  • turning build profile spec text into a patch for Debian policy #757760 [Helmut, Jochen, josch]

Our TODO items for after the sprint are:

  • josch needs to fix bootstrap.debian.net
  • josch exports the package lists computed by bootstrap.debian.net in a machine readable format for Holger
  • writing a mail to d-devel about making coreutils non-essential

In addition to what was already listed above, people worked on the following tasks specifically:

  • Holger now wants a crossbootstrap pkg set for reproducible builds.
  • Holger worked on some reproducible builds issues, uploaded ~10 sequoia related packages and did a devscripts upload.
  • Jochen worked on creating initrds
  • Jochen helped Holger with sequoia/rust packaging
  • Jochen worked on sbuild
  • Jochen discussed cross bootstrapping with Helmut and josch
  • Jochen fixed bugs in devscripts (debrebuild/debootstrap, build-rdeps, proxy.py)
  • Jochen worked on reproduce.d.n
  • Jochen worked on src:kokkos resulting in #1101487
  • Gioele gathered information and material for possible funding for bootstrapping-related projects.
  • Gioele ported src:libreplaygain from cdbs to dh.
  • Helmut dug into lingering debvm issues some. Jochen tracked down the ARM32 autopkgtest regression to #1079443 which is now worked around.
  • Helmut collected feedback on linux-libc-dev being a:all.
  • Helmut collected feedback on dropping libcrypt-dev from build-essential and initiated work with Santiago Vila
  • Helmut collected feedback on how sbuild would want to interface with a better build containment
  • josch reviewed and merged the following MRs:
  • josch worked on making the Debian Linux kernel packaging use hooks installed in /usr/share/kernel/*.d and gathered feedback from the other sprint participants on how best to move this forward, culminating in the opening of #1101733 against src:linux.

Thank you all for attending this sprint, for making it so productive and for the amazing atmosphere and enlightening discussions!

04 April, 2025 10:17AM

hackergotchi for Evgeni Golov

Evgeni Golov

naming things is hard

I got a new laptop (a Lenovo Thinkpad X1 Carbon Gen 12, more on that later) and as always with new pets, it needed a name.

My naming scheme is roughly "short japanese words that somehow relate to the machine".

The current (other) machines at home are (not all really in use):

  • Thinkpad X1 Carbon G9 - tanso (炭素), means carbon
  • Thinkpad T480s - yatsu (八), means 8, as it's a T480s
  • Thinkpad X201s - nana (七), means 7, as it was my first i7 CPU
  • Thinkpad X61t - obon (御盆), means tray, which in German is "Tablett" and is close to "tablet"
  • Thinkpad X300 - atae (与え) means gift, as it was given to me at a very low price, almost a gift
  • Thinkstation P410 - kangae (考え), means thinking, and well, it's a Thinkstation
  • self-built homeserver - sai (さい), means dice, which in German is "Würfel", which is the same as cube, and the machine used to have an almost cubic case
  • Raspberry Pi 4 - aita (開いた), means open, it's running OpenWRT
  • Sun Netra T1 - nisshoku (日食), means solar eclipse
  • Apple iBook G4 13 - ringo (林檎), means apple

Then, I happen to rent a few servers:

  • ippai (一杯), means "a cup full", the VM is hosted at "netcup.de"
  • genshi (原子), means "atom", the machine has an Atom CPU
  • shokki (織機), means loom, which in German is Webstuhl or Webmaschine, and it's the webserver

I also had machines in the past, that are no longer with me:

  • Thinkpad X220 - rodo (労働) means work, my first work laptop
  • Thinkpad X31 - chiisai (小さい) means small, my first X series
  • Thinkpad Z61m - shinkupaddo (シンクパッド) means Thinkpad, my first Thinkpad

And also servers from the past:

  • chikara (力) means power, as it was a rather powerful (for that time) Xeon server
  • hozen (保全), means preservation, it was a backup host

So, what shall I call the new one? It will be "juuni" (十二), which means 12. Creative, huh?

04 April, 2025 07:59AM by evgeni

April 03, 2025

hackergotchi for Gregor Herrmann

Gregor Herrmann

Debian MountainCamp, Innsbruck, 16–18 May 2025

the days are getting warmer (in the northern hemisphere), debian is getting colder, & quite a few debian events are taking place.

in innsbruck, we are organizing MountainCamp, an event in the tradition of SunCamp & SnowCamp: no schedule, no talks, meet other debian people, fix bugs, come up with crazy ideas, have fun, develop things.

interested? head over to the information & signup page on the debian wiki.

03 April, 2025 09:42PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

I was hoping to go to debconf but the frequent travel is painful for me right now that I probably won't make it.

I was hoping to go to debconf but the frequent travel is painful for me right now, so I probably won't make it.

03 April, 2025 01:29AM by Junichi Uekawa

April 02, 2025

Paul Wise

FLOSS Activities March 2025

Changes

Issues

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

02 April, 2025 01:04AM

April 01, 2025

hackergotchi for Colin Watson

Colin Watson

Free software activity in March 2025

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

OpenSSH

Changes in dropbear 2025.87 broke OpenSSH’s regression tests. I cherry-picked the fix.

I reviewed and merged patches from Luca Boccassi to send and accept the COLORTERM and NO_COLOR environment variables.

Python team

Following up on last month, I fixed some more uscan errors:

  • python-ewokscore
  • python-ewoksdask
  • python-ewoksdata
  • python-ewoksorange
  • python-ewoksutils
  • python-processview
  • python-rsyncmanager

I upgraded these packages to new upstream versions:

  • bitstruct
  • django-modeltranslation (maintained by Freexian)
  • django-yarnpkg
  • flit
  • isort
  • jinja2 (fixing CVE-2025-27516)
  • mkdocstrings-python-legacy
  • mysql-connector-python (fixing CVE-2025-21548)
  • psycopg3
  • pydantic-extra-types
  • pydantic-settings
  • pytest-httpx (fixing a build failure with httpx 0.28)
  • python-argcomplete
  • python-cymem
  • python-djvulibre
  • python-ecdsa
  • python-expandvars
  • python-holidays
  • python-json-log-formatter
  • python-keycloak (fixing a build failure with httpx 0.28)
  • python-limits
  • python-mastodon (in the course of which I found #1101140 in blurhash-python and proposed a small cleanup to slidge)
  • python-model-bakery
  • python-multidict
  • python-pip
  • python-rsyncmanager
  • python-service-identity
  • python-setproctitle
  • python-telethon
  • python-trio
  • python-typing-extensions
  • responses
  • setuptools-scm
  • trove-classifiers
  • zope.testrunner

In bookworm-backports, I updated python-django to 3:4.2.19-1.

Although Debian’s upgrade to python-click 8.2.0 was reverted for the time being, I fixed a number of related problems anyway since we’re going to have to deal with it eventually:

dh-python dropped its dependency on python3-setuptools in 6.20250306, which was long overdue, but it had quite a bit of fallout; in most cases this was simply a question of adding build-dependencies on python3-setuptools, but in a few cases there was a missing build-dependency on python3-typing-extensions which had previously been pulled in as a dependency of python3-setuptools. I fixed these bugs resulting from this:

We agreed to remove python-pytest-flake8. In support of this, I removed unnecessary build-dependencies from pytest-pylint, python-proton-core, python-pyzipper, python-tatsu, python-tatsu-lts, and python-tinycss, and filed #1101178 on eccodes-python and #1101179 on rpmlint.

There was a dnspython autopkgtest regression on s390x. I independently tracked that down to a pylsqpack bug and came up with a reduced test case before realizing that Pranav P had already been working on it; we then worked together on it and I uploaded their patch to Debian.

I fixed various other build/test failures:

I enabled more tests in python-moto and contributed a supporting fix upstream.

I sponsored Maximilian Engelhardt to reintroduce zope.sqlalchemy.

I fixed various odds and ends of bugs:

I contributed a small documentation improvement to pybuild-autopkgtest(1).

Rust team

I upgraded rust-asn1 to 0.20.0.

Science team

I finally gave in and joined the Debian Science Team this month, since it often has a lot of overlap with the Python team, and Freexian maintains several packages under it.

I fixed a uscan error in hdf5-blosc (maintained by Freexian), and upgraded it to a new upstream version.

I fixed python-vispy: missing dependency on numpy abi.

Other bits and pieces

I fixed debconf should automatically be noninteractive if input is /dev/null.

I fixed a build failure with GCC 15 in yubihsm-shell (maintained by Freexian).

Prompted by a CI failure in debusine, I submitted a large batch of spelling fixes and some improved static analysis to incus (#1777, #1778) and distrobuilder.

After regaining access to the repository, I fixed telegnome: missing app icon in ‘About’ dialogue and made a new 0.3.7 release.

01 April, 2025 12:17PM by Colin Watson