Status update, 17/12/2025

Welcome to mid-December! Where I am this month is a pretty cold affair… at night it’s 2 or 3 degrees above freezing. Maybe you’re in a tropical place and the nights are 30 degrees warmer. Or maybe you’re somewhere that drops down to 20 or 30 below freezing. The world is a big place! (Or maybe you’re in one of those 10 remaining countries that use Fahrenheit to measure temperature… if so, I’m sorry for you ;-)

I didn’t do much in the world of open source this month besides reviewing a few patches.

I am still using GNOME and Fedora every day for my work… at zero cost! If I’d paid for Microsoft Windows I’d be down almost 200€. So I made a few one-off donations, split between:

Thanks to Hari’s blog post for reminding us how important it is to donate.

Who did I miss that is contributing to making excellent desktop software in difficult times?

(I know that regular donations are more helpful… I have a few dozen of those already, listed here. The list can always change :-).


Bollocks to Github

I am spending the evening deleting my Github.com account.

There are plenty of reasons you might want to delete your Github account. I’d love to say that this is a coherently orchestrated boycott on my part, in sympathy with the No Azure for Apartheid movement. Microsoft, owner of Github, is a big pile of cash happy to do business with an apartheid state. That’s a great reason to delete your Github.com account.

I will be honest with you though, the thing that pushed me over the edge was a spam email they sent entitled “GitHub Copilot: What’s in your free plan 🤖”. I was in a petty mood this morning.

Offering free LLM access is a money loser. The long play is this: Microsoft would like to create a generation of computer users hooked on GitHub Copilot. And, I have to hand it to them, they have an excellent track record in monopolising how we interact with our PCs.

Deleting my Github.com account isn’t going to solve any of that. But it feels good to be leaving, anyway. The one billionth Github repository was created recently and it has a single line README containing the word “shit”. I think that summarizes the situation more poetically than I could.

I had 145 repositories in the ssssam/ namespace to delete. The oldest was Iris, forked in 2011.


Quite a story to that one. A fork of a project by legendary GNOME hacker Christian Hergert. In early 2011, I’d finished university, had no desire to work in the software industry, and was hacking on a GTK-based music player app from time to time. A rite of passage that every developer has to go through, I suppose. At some point I decided to overcomplicate some aspect of the app and ended up integrating libiris, a library to manage concurrent tasks. Then I started working professionally and abandoned the thing.

It’s fun to look at that with 13 years’ perspective. I have since learned, largely thanks to Rust, that I cannot possibly ever write correct concurrent thread-based C code. (All my libiris changes had weird race conditions.) I met Christian various times. Christian created libdex, which does the same thing, but much better. I revived the music player app as a playlist generation toolkit. We all lived happily ever after.

Except for the Github fork, which is gone.

What else?


This guy was an early attempt at creating a sort of GObject mapping for SPARQL data as exposed by Localsearch (then Tracker). Also created around 2011. Years later, I did a much better implementation in TrackerResource which we still use today.

The Sam of 2011 would be surprised to hear that we organized GUADEC in Manchester only 6 years later. Back in those days we for some reason maintained our own registration system written in Node.js. I spent the first few weeks of 2017 hacking in support for accommodation bookings.


I discovered another 10 year old gem called “Software Integration Ontology.” Nowadays we’d call that an SBOM. Did that term exist 10 years ago? I have spent too much time working on software integration.

Various other artifacts of research into software integration and complexity. A vestigial “Software Dependency Visualizer” project. (Which took on a life of its own, and many years later the idea is alive in KDE Codevis). A fork of Aboriginal Linux, which we unwittingly brought to an end back in 2016. Bits of Baserock, which never went very far but also led to the beginning of BuildStream.

A fork of xdg-app, which is the original name of Flatpak. A library binding GLib to the MS RPC API on Windows, from the 2nd professional project I ever did. These things are now dust.

I had over 100 repositories on github.com. I sponsored one person, who I can’t sponsor any more as they only accept Github money. (I sponsor plenty of people on other platforms.)

Anyway, lots of nice people use Github, and you can keep using Github. I will probably have to create the occasional burner account to push PRs where projects haven’t migrated away. Some of my projects are now on Gitlab.com and in GNOME’s Gitlab. Others are gone!

It’s a weird thing but in 2025 I’m actually happy knowing that there’s a little bit less code in the world.

Status update, 23/11/2025

Bo día.

I am writing this from a high-speed train heading towards Madrid, en route to Manchester. I have a mild hangover, and a two-hundred-page printout of “The STPA Handbook”… so I will have no problem sleeping through the journey. I think the only thing keeping me awake is the stunning view.

Sadly I haven’t got time to go all the way by train; in Madrid I will transfer to easyJet. It is indeed easy compared to trying to get from Madrid into France by train. Apparently this is mainly the fault of France’s SNCF.

On the Spain side, fair play. The ministro de fomento (I think this translates as “guy in charge of trains”?) just announced major works in Barcelona, including a new station in La Sagrera with space for more trains than they have now, more direct access from Madrid, and a speed boost via some new type of railway sleeper, which would raise the top speed from 300km/h to 350km/h. And some changes in Madrid, which would reduce the transfer time when arriving from the west and heading out further east. You can argue with many things about the trains in Spain… perhaps it would be useful if the regional trains here ran more than once per day… but you can’t argue with the commitment to fast inter-city travel.

If only we had similar investment to fix the cross border links between Spain and France, which are something of a joke. Engineers around the world will know this story. The problem is social: two different organizations, who speak different languages, have to agree on something. There is already a perfectly usable modern train line across the border. How many trains per day? Uh… two. Hope you planned your trip in advance because they’re fully booked next week.

Anyway, this isn’t meant to be a post on the status of the railways of western Europe.

Digital Resilience Forum

Last month I hopped on another Madrid-bound train to attend the Digital Resilience Forum. It’s a one-day conference organized by Bitergia, who you might know as world leaders in open source community analysis.

I have mixed feelings about “community metrics” projects. As Nick Wellnhofer said regarding libxml2, when you participate as a volunteer in a project that is being monitored, it’s easy to feel like you’re being somehow manipulated by the corporations who sponsor these things. How come you guys will spend time and money analyzing my project’s development processes and Git history, but you won’t spend time actually fixing bugs and making improvements upstream? As the ffmpeg developers said: how come you will pay top calibre security researchers to read our code and find very specific exploits, but then wait for volunteers to fix them?

The Bitergia team are great people who genuinely care about open source, and I really enjoyed the conference. The main themes were: digital sovereignty, geopolitics, the rise of open source, and that XKCD where all our digital infrastructure depends on a single unpaid volunteer in Nebraska (https://xkcd.com/2347/). (Coincidentally, one of the Bitergia guys actually does live in Nebraska.)

It was a day in a world where I am not used to participating: less engineering, more politics and campaigning. Yes, the Sovereign Tech Agency were around. We played a cool role-play game simulating various hypothetical software crises that might happen in the year 2027 (spoiler: in most cases a vendor-neutral, state-funded organization focused on open source was able to save the day :-). It is amazing what they’ve done so far with a relatively small investment, but it is a small organization and they maintain that citizens of every country should be campaigning and organizing to set up an equivalent. Let’s not tie the health of open source infrastructure too closely to German politics.

Also present, various campaign groups with “Open” at the start of their name: OpenForum Europe, OpenUK, OpenIreland, OpenRail. When I think about the future of Free Software platforms, such as our beloved GNOME, my mind always goes to funding contributors. There’s very little money here, meanwhile Apple and Microsoft have nearly all of the money, and I feel like GNOME still succeeds largely thanks to the evenings and weekends of a small core of dedicated hackers, including some whose day job involves working on some other part of GNOME. It’s a bit depressing sometimes to see things this way, because the global economy gets more unequal every day, and how do you convince people who are already squeezed for cash to pay for something that’s freely available online? How do you get students facing a super competitive job market to hack on GTK instead of studying for university exams?

There’s another side which I talk about less, and that’s education. There are more desktop Linux users than ever — apparently 5% of all desktop users or something — but there’s still very little agreement or understanding of what “open source” is. Most computer users couldn’t tell you what an “operating system” does, and don’t know why “source code” can be an interesting thing to share and modify.

I don’t like to espouse any dogmatic rule that the right way to solve any problem is to release software under the GPLv3. I think the problems society has today with technology come from over-complexity and under-study. (See also my rant from last month.) To tackle that, it is important to have software infrastructure like drivers and compilers available under free software licenses. The Free Software movement has spent the last 40 years doing a pretty amazing job of that, and I think it’s surprising how widely software engineers accept that as normal and fight to maintain it. Things could easily be worse. But this effort is one part of a larger problem, of helping those people who think of themselves as “non-technical” to understand the fundamentals of computing and not see it as a magic box. Most people alive today have learned to read and write one or more languages, to do mathematics, to operate a car, to build spreadsheets, and to operate a smartphone. Most people I know under 45 have learned to prompt a large language model in the last few years.

With a basic grounding in how a computer operates, you can understand what an operating system does. And then you can see that whoever controls your OS has complete control over your digital life. And you will start to think twice about leaving that control to Apple, Google and Microsoft — big piles of cash where the concept of “ethics” barely has a name.

Reading was once a special skill reserved largely for monks. And it was difficult: we only started putting spaces between words later on. Now everyone knows what a capital letter is. We need to teach how computers work, and we need to stop making them so complicated; then the idea of open development will come into focus for everyone.

(And yes, I realize this sounds a bit like the permacomputing manifesto.)

Codethink work

This is a long rant, isn’t it? My train has only just left Zamora and I haven’t fallen asleep yet, so there’s more to come.

I had a nice few months hacking on Endless OS 7, which has progressed from an experiment to a working system, bootable on bare metal, albeit with various open issues that would still block a stable release. The overview docs in the repo tell you how to play with it.

This is now fully in the hands of the team at Endless, and my next move is going to be into some internal research that has been ongoing for a number of years. Not much of it is secret, in fact quite a lot is being developed in the open, and it relates in part to regulatory compliance and safety-critical software systems.

Codethink dedicates more to open source than most companies its size. We never have trouble getting sponsorship for events like GUADEC. But I do wish I could spend more time maintaining open infrastructure that I use every day, like, you know, GNOME.

This project isn’t going to solve that tomorrow, but it does occupy an interesting space at the intersection between industry and open source. The education gap I talked about above is very much present in some industries where we work. Back in February a guy at a German car firm told me, “Nobody here wants open source. What they want is somebody to blame when the thing goes wrong.”

Open source software comes with a big disclaimer that says, roughly, that if it breaks you get to keep both pieces. You get to blame yourself.

And that’s a good thing! The people who understand a final, integrated system are the only people who can really define “correct behaviour”. If you’ve worked in the same industries I have, you might recognise a common anti-pattern: teams who spend all their time arguing about ownership of a particular bug, where team A are convinced it’s a misbehaviour of component B and team B try to prove the exact opposite. Meanwhile nobody actually spends the 15 minutes it would take to fix the bug. Another anti-pattern: team A would love to fix the bug in component B, but team B won’t let them even look at the source code. This happens muuuuuuuch more than you might think.

So we’re not trying to teach the world how computers work on this project, but we are trying to increase adoption and understanding at least in the software industry. There are some interesting ideas: looking at software systems from new angles. This is where STPA comes in, by the way — it’s a way of breaking a system down not into components but rather into one or more control loops. It’s going to take a while to make sense of everything in this new space… but you can expect some more 1500-word blog posts on the topic.

Slow Fedora VMs

Good morning!

I spent some time figuring out why my build PC was running so slowly today. Thanks to some help from my very smart colleagues I came up with this testcase in Nushell to measure CPU performance:

~: dd if=/dev/random of=./test.in bs=(1024 * 1024) count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0111184 s, 943 MB/s
~: time bzip2 test.in
0.55user 0.00system 0:00.55elapsed 99%CPU (0avgtext+0avgdata 8044maxresident)k
112inputs+20576outputs (0major+1706minor)pagefaults 0swap

We are copying 10MB of random data into a file and compressing it with bzip2. 0.55 seconds is a pretty good time to compress 10MB of data with bzip2.

But! As soon as I ran a virtual machine, this same test started to take 4 or 5 seconds, both on the host and in the virtual machine.

There is already a new Fedora kernel available, and with that version (6.17.4-200.fc42.x86_64) I don’t see any problems. I guess it was some issue affecting AMD Ryzen virtualization that has already been fixed.

Have a fun day!

edit: The problem came back with the new kernel as well. I guess this is not going to be a fun day.

Status update, 17/10/2025

Greetings readers. I’m writing to you from a hotel room in Manchester which I’m currently sharing with a variant of COVID 19. We are listening to disco funk music.

This virus prevents me from working or socializing, but at least I have time to do some cyber-janitorial tasks like updating my “dotfiles” (which hold configuration for all the programs I use on Linux, stored in Git… for those who aren’t yet converts).

I also caught up with some big upcoming changes in the GNOME 50 release cycle — more on that below.

nvim

I picked up Vim as my text editor ten years ago while working on a very boring project. This article by Jon Beltran de Heredia, “Why, oh WHY, do those #?@! nutheads use vi?” sold me on the key ideas: you use “normal mode” for everything, which gives you powerful and composable edit operations. I printed out this Vim quick reference card by Michael Goerz and resolved to learn one new operation every day.

It worked and I’ve been a convert ever since. Doing consultancy work makes you a nomad: often working via SSH or WSL on other people’s computers. So I never had the luxury of setting up an IDE like GNOME Builder, or using something that isn’t packaged in 99% of distros. Luckily Vim is everywhere.

Over the years, I read a newsletter named Vimtricks and I picked up various Vim plugins like ALE, ctrlp, and sideways. But there’s a problem: some of these depend on extra Vim features like Python support. If a required feature is missing, you get an error message that appears on, like… every keystroke.


In this case, on a Debian 12 build machine, I could work around by installing the vim-gtk3 package. But it’s frustrating enough that I decided it was time to try Neovim.
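(As an aside: a quick way to check whether a given Vim build has an optional feature, assuming a stock vim binary on your PATH, is to grep its version output. From inside Vim, :echo has('python3') answers the same question.)

# Features are listed with + (compiled in) or - (missing)
vim --version | grep python3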

The Neovim project began around the time I was switching to Vim, and is based on the premise that “Vim is, without question, the worst C codebase I have seen”.

So far it’s been painless to switch and everything works a little better. The :terminal feels better integrated. I didn’t need to immediately disable mouse mode. I can link to online documentation! The ALE plugin (which provides language server integration) is even packaged in Fedora.

I’d send a screenshot but my editor looks… exactly the same as before. Boring!


I also briefly tried out Helix, which appears to take the good bits of Vim (modal editing) and run in a different direction (visible selection and multiple cursors). I need a more boring project before I’ll be able to learn a completely new editor. Give me 10 years.

Endless OS 7

I’ve been working flat out on Endless OS 7, as last month. Now that the basics work and the system boots, we were mainly looking at integrating Endless-specific Pay as you Go functionality that they use for affordable laptop programs.

I learned more than I wanted to about the Linux early boot process, particularly the dracut-ng initramfs generator (one of many Linux components that seem to be named after a town in Massachusetts).

GNOME OS actually dropped Dracut altogether, in “vm-secure: Get rid of dracut and use systemd’s ukify” by Valentin David, and now uses a simple Python script. A lot of Dracut’s features aren’t necessary for building atomic, image-based distros. For EOS we decided to stick with Dracut, at least for now.

So we get to deal with fun changes such as the initramfs growing from 90MB to 390MB after we updated to latest Dracut. Something which is affecting Fedora too (LWN: “Last-minute /boot boost for Fedora 43”).

I requested time after the contract finishes to write up a technical article on the work we did, so I won’t go into more details yet. Watch this space!

GNOME 50

I haven’t had a minute to look at upstream GNOME this month, but there are some interesting things cooking there.

Jordan merged the GNOME OS openQA tests into the main gnome-build-meta repo. This is a simple solution to a number of basic questions we had around testing, such as, “how do we target tests to specific versions of GNOME?”.

We separated the tests out of gnome-build-meta because, at the time, each new CI pipeline would track new versions of each GNOME module. This meant, firstly that pipelines could take anywhere from 10 minutes to 4 hours rebuilding a disk image before the tests even started, and secondly that the system under test would change every time you ran the pipeline.

While that sounds dumb, it worked this way for historical reasons: GNOME OS has been an under-resourced, ad-hoc project ongoing since 2011, whose original goal was simply to continuously build: already a huge challenge if you remember GNOME in the early 2010s. Of course, such a CI pipeline is highly counterproductive if you’re trying to develop and review changes to the tests, and not the system: so the separate openqa-tests repo was a necessary step.

Thanks to Abderrahim’s work in 2022 (“Commit refs to the repository” and “Add script to update refs”), plus my work on a tool to run the openQA tests locally before pushing to CI (ssam_openqa), I hope we’re not going to have those kinds of problems any more. We enter a brave new world of testing!

The next thing the openQA tests need, in my opinion, is dedicated test infrastructure. The shared Gitlab CI runners we have are in high demand. The openQA tests have timeouts, as they ultimately are doing this in a loop:

  • Send an input event
  • Wait for the system under test to react

If a VM is running on a test runner with overloaded CPU or IO then tests will start to time out in unhelpful ways. So, if you want to have better testing for GNOME, finding some dedicated hardware to run tests would be a significant help.

There are also some changes cooking in Localsearch thanks to Carlos Garnacho:

The first of these is a nicely engineered way to allow searching files on removable disks like external HDs. This should be opt-in: so you can opt in to indexing your external hard drive full of music, but your machine wouldn’t be vulnerable to an attack where someone connects a malicious USB stick while your back is turned. (The sandboxing in localsearch makes it non-trivial to construct such an attack, but it would require a significantly greater level of security auditing before I’d make any guarantees about that).

The second of these changes is pretty big: in GNOME 50, localsearch will now consider everything in your homedir for indexing.

As Carlos notes in the commit message, he has spent years working on performance optimisations and bug fixes in localsearch to get to a point where he considers it reasonable to enable by default. From a design point of view, discussed in the issue “Be more encompassing about what get indexed“, it’s hard to justify a search feature that only surfaces a subset of your files.

I don’t know if it’s a great time to do this, but nothing is perfect and sometimes you have to take a few risks to move forwards.

There’s a design, testing and user support element to all of this, and it’s going to require help from the GNOME community and our various downstream distributors, particularly around:

  • Widely testing the new feature before the GNOME 50 release.
  • Making sure users are aware of the change and how to manage the search config.
  • Handling an expected increase in bug reports and support requests.
  • Highlighting how privacy-focused localsearch is.

I never got time to extend the openQA tests to cover media indexing; it’s not a trivial job. We will rely on volunteers and downstream testers to try out the config change as widely as possible over the next 6 months.
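For the curious, here is a minimal sketch of where the relevant settings live today, assuming the GSettings schema keeps the name it had under Tracker 3 (org.freedesktop.Tracker3.Miner.Files) and that the key names I remember from tracker-miners haven’t changed in the localsearch rename:

# Dump the full indexer configuration
gsettings list-recursively org.freedesktop.Tracker3.Miner.Files

# Which directories are indexed recursively right now
gsettings get org.freedesktop.Tracker3.Miner.Files index-recursive-directories

# The switch for indexing removable disks (the opt-in discussed above)
gsettings get org.freedesktop.Tracker3.Miner.Files index-removable-devices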

One thing that makes me support this change is that the indexer in Android devices already works like this: everything is scanned into a local cache, unless there’s a .nomedia file. Unfortunately Google don’t document how the Android media scanner works. But it’s not like this is GNOME treading a radical new path.

The localsearch index lives in the same filesystem as the data, and never leaves your PC. In a world where Microsoft Windows can now send your boss screenshots of everything you looked at, GNOME is still very much on your side. Let’s see if we can tell that story.

Status update, 22/09/2025

For the first time in many years I can talk publicly about what I’m doing at work: a short engagement funded by Endless and Codethink to rebuild Endless OS as a GNOME OS derivative, instead of a Debian derivative.

There is nothing wrong with Debian, of course, just that today GNOME OS aligns more closely with the direction the Endless OS team want to go in. A lot of the innovations from earlier versions of Endless OS over the last decade were copied and re-used in GNOME OS, so in a sense this is work coming full circle.

I’ll tell you a bit more about the project but first I have a rant about complexity.

Complexity

I work for a consultancy and the way consultancy projects work is like this: you agree what the work is, you estimate how long the work will take, you agree a budget, and then you do the work.

The problem with this approach is that in software engineering, most of your work is research. Endless OS is the work of thousands of different people, and hundreds of millions of lines of code. We reason and communicate about the code using abstractions, and there are hundreds of millions of abstractions too.

If you ask me “how long will it take to change this thing in that abstraction over there”, I can research those abstractions and come up with an estimate for the job. How long to change a lightbulb? How long to rename a variable? How long to add an option in this command line tool? Some hours of work.

Most real world tasks involve many abstractions and, by the time you’ve researched them all, you’ve done 90% of the work. How long to port this app to Gtk4? How long to implement this new optimization in GCC? How long to write a driver for this new USB beard trimmer device? Some months or years of work.

And then you have projects where it’s not even possible to research the related abstractions. So much changed between Debian 12 and GNOME OS 48 that you’d spend a year just writing a comprehensive changelog. So, how can you possibly estimate the work involved when you can’t know in advance what the work is?

Of course, you can’t, you can only start and see what happens.

But, allocating people to projects in a consultancy business is also a hard problem. You need to know project start and end dates because you are lining up more projects in advance, and your clients want to know when their work will start.

So for projects involving such a huge number of abstractions, we have to effectively make up a number and hope for the best. When people say things like “try to do the best estimation you can”, it’s a bit like saying “try to count the sand on this beach as best as you can”.

Another difficulty is around finding people who know the right abstractions. If you’re adding a feature to a program written in Rust, management won’t assign someone who never touched Rust before. If they do, you can ask for extra time to learn some Rust as part of the project. (Although since software is largely a cowboy industry, there are always managers who will tell you to just learn by doing.)

But what abstractions do you need to know for OS development and integration? These projects can be harder than programming work, because the abstractions involved are larger, more complicated and more numerous. If you can code in C, can you be a Linux integrator? I don’t know, but can a bus driver fly a helicopter?

If a project is so complex that you can’t predict in advance which abstractions are going to be problematic and which ones you won’t need to touch, then even if you wanted to include teaching time in your estimation you’ll need a crystal ball to know how much time the work will take.

For this project, my knowledge of BuildStream and Freedesktop SDK is proving valuable. There’s a good reference manual for BuildStream, but no tutorials on how to use it for OS development. How do we expect people to learn it? Have we solved anything by introducing new abstractions that aren’t widely understood — even if they’re genuinely better in some use cases?

Endless OS 7

Given I’ve started with a rant you might ask how the project is going. Actually, there has been some good progress. Endless OS 7 exists; it’s being built and pushed as an ostree from eos-build-meta to Endless’ ostree server. You can install it as an update to eos6 if you like to live dangerously — see the “Switch master” documentation. (You can probably install it on other ostree based systems if you like to live really dangerously, but I’m not going to tell you how.) I have it running on a Thinkpad laptop. Actually my first time testing any GNOME OS derivative on hardware!

Thinkpad P14s running Endless OS 7

For a multitude of reasons the work has been more stressful than it needed to be, but I’m optimistic for a successful outcome. (Where success means, we don’t give up and decide the Debian base was easier after all). I think GNOME OS and Endless OS will both benefit from closer integration.

The tooling is working well for me: reliability and repeatability were core principles when BuildStream was being designed, and it shows. Once you learn it you can do integration work fast. You don’t get flaky builds. I’ve never deleted my cache to fix a weird problem. It’s an advanced tool, and in some ways it’s less flexible than its friends in the integration tool world, but it’s a really good way to build an operating system.

I’ve learned a bunch about some important new abstractions on this project too. UEFI and Secure Boot. The systemd-sysusers service and userdb. Dracut and initramfs debugging.

I haven’t been able to contribute any effort upstream to GNOME OS so far. I did contribute some documentation comments to Freedesktop SDK, and I’m trying to at least document Endless OS 7 as clearly as I can. Nobody has ever had much time to document how GNOME OS is built or tested, so hopefully the documentation in eos-build-meta is a useful step forwards for GNOME OS as well.

As always the GNOME OS community are super helpful. I’m sure it’s a big part of the success of GNOME OS that Valentín is so helpful whenever things break. I’m also privileged to be working with the highly talented engineers at Endless who built all this stuff.

Abstractions

Broadly, the software industry is fucked as long as we keep making an infinite number of new abstractions. I haven’t had a particularly good time on any project since I returned to software engineering five years ago, and I suspect it’s because we just can’t control the complexity enough to reason properly about what we are doing.

This complexity is starting to inconvenience billionaires. In the UK the entire car industry has been stopped for weeks because system owners didn’t understand their work well enough to do a good job of securing systems. I wonder if it’s going to occur to them eventually that simplification is the best route to security. Capitalism doesn’t tend to reward that way of thinking — but it can reward anything that gives you a business advantage.

I suppose computing abstractions are like living things, with a tendency to boundlessly multiply until they reach some natural limit, or destroy their habitat entirely. Maybe the last year of continual security breaches could be that natural limit. If your system is too complex for anyone to keep it secure, then your system is going to fail.

Status update, 19/08/2025

Hello! I’m working on an interesting project this month, related to open source Linux operating systems. Actually I’m not working on it this week, I’m instead battling with vinyl floor tiles. Don’t let anyone tell you they are easy to fit. But I think on the 5th attempt I’ve got the technique. The wooden mallet is essential.

Vinyl floor tiles, frustrated face

When I started these “status update” posts, back in 2021, I imagined they’d be to talk about open technology projects I was working on, mixed with a bit of music and art and so on. In fact I write more about politics these days. Let me explain why.

In my book review earlier this year I mentioned economics dude Gary Stevenson. I still didn’t read his book but I do watch all his videos now and I’m learning a lot.

I learned a bit about the housing crisis, for example. The housing crisis in Manchester had several major effects on my life. I just read today in The Manchester Mill that the average rent in Salford jumped from £640/mo to £1,121/mo in a decade.

(Lucky for me, I got into computers early in life, and nobody understands their secrets so they still have to pay a good salary to those of us who do. So far I’ve weathered the crisis. Many of my friends don’t have that luck, and some of them have been struggling for 15 years already. Some even had to become project managers.)

Until about 2020, I assumed the Manchester housing crisis was caused by people moving up from London. Galicia had some of the lowest rent I’d ever seen, when I first moved here, and it’s only around 2021, when rents started suddenly doubling just as I’d seen happen in Manchester, that I realised the same crisis was about to play out here as well, and perhaps it wasn’t entirely the fault of Gen-X Londoners. I thought, maybe it’s a Europe-wide housing crisis?

Let me summarize the video Gary Stevenson did about the housing crisis (this one), to save you 28 minutes. It’s not just houses but all types of asset which are rapidly going up in price, and it’s happening worldwide. We notice the houses because we need them to live normal lives, unlike other types of asset such as gold or government bonds which most of us can live without.

The most recent video is titled like this: “Is the economy causing a mental health crisis?“. I’ve embedded it below. (It’s hosted on a platform controlled by Google, but Gary is good enough to turn off the worst of the YouTube ads, those bastards that pop up during a video in mid-sentence or while you’re figuring out a yoga pose.)

My answer to that question, when I saw it, was “Yes, obviously“. For example, if rent increases by 75% in your city and you’re forced back into living with your parents age 35, it’s tough to deal with alright. What do you think?

But the video is about more than that. The reason asset prices are through the roof is because the super rich are taking all the assets. The 1% have more money than ever. Wealth inequality is rapidly rising, and nothing is stopping it. For thousands of years, the aristocracy owned all the land and castles and manor houses, and the rest of us had a few cabbages and, perhaps if you were middle class, a pig.

The second half of the 20th century levelled the playing field and put in place systems which evened things out and meant your grandparents maybe could buy a house. The people in charge of those systems have given up, or have been overpowered by the super rich.

In fact, the video “Is the economy causing a mental health crisis?” is about the effect on your mental health when you realize that all of society as you know it is headed towards complete collapse.

(Lucky for me, I grew up thinking society was headed for collapse due to the climate crisis, so I listened to a lot of punk rock and over-developed my capacity for nihilism. Maybe my mind looks for crises everywhere? Or maybe I was born in a time well-supplied with global crises. I share a birthday with the Chernobyl disaster.)

So how does all this relate back to open technology?

Maybe it doesn’t. I went to the GNOME conference last month and had very few overtly “political” conversations. We chatted about GNOME OS, live-streaming, openQA, the GNOME Foundation, the history of the GNOME project, accessibility at conferences, our jobs, and so on. Which was great, I for some reason find all that stuff mega interesting. (Hence why I went there instead of a conference about 21st century world politics).

Or maybe it does. Tech is part of everyone’s lives now. Some of the most powerful organizations in the world now are tech companies and they get their power from being omnipresent. Software engineers built all of this. What were we thinking?

I think we just enjoyed getting paid to work on fun problems. I suppose none of today’s tech billionaires seemed like particularly evil-minded people in the mid 2000s. Spotify used to talk about reducing MP3 piracy, not gutting the income streams of 95% of professional recording artists. Google used to have a now laughable policy of “Don’t be evil”.


There is one exception who were clearly evil bastards in the 2000s as well. The US anti-trust case against Microsoft, settled in 2001, is an evidence trail of lies and anti-competitive behaviour under Bill Gates’ leadership. Perhaps in an attempt to one-up his predecessor, the Satya Nadella Microsoft is now helping the far-right government of Israel to commit war crimes every day. No Azure for Apartheid. At least they are consistent, I suppose.

In fact, I first got interested in Linux due to Microsoft. Initially for selfish reasons. I was a child with a dialup internet connection, and I just wanted to have 5 browser windows open without the OS crashing. (For younger readers — browser tabs weren’t invented until the 21st century).

Something has kept me interested in open operating systems, even in this modern era when you can download an MP3 in 5 seconds instead of 5 consecutive evenings. It’s partly the community of fun people around Linux. It’s partly that it led me to the job that has seen me through the housing crisis so far. And it’s partly the sense that we are the alternative to Big Tech.

Open source isn’t going to “take over the world”. That’s what Microsoft, Spotify and Google were always going to do (and have now done). I’m honestly not sure where open source is going. Linux will go wherever hardware manufacturers force it to go, as it always has done.

GNOME may or may not make it mainstream one day. I’m all for it, if it means some funding for localsearch maintainers. If it doesn’t, that’s also fine, and we don’t need to be part of some coherent plan to save the world or to achieve a particular political aim. Nothing goes according to plan anyway. It’s fine to work on stuff just cus it’s interesting.

What we are doing is leading by example, showing that it’s possible to develop high quality software independently of any single corporation. You can create institutions where contributors do what we think is right, instead of doing what lunatics like Sam Altman or Mark Zockerborg think.

At the same time, everything is political.

What would happen if I travelled back to 2008 and asked the PHP developers building Facebook: “Do you think this thing could play a determining role in a genocide in Myanmar?”

I met someone this weekend who had just quit Spotify. She isn’t a tech person. She didn’t even know Bandcamp exists. Just didn’t want to give more money to a company that’s clearly evil. This is the future of tech, if there is any. People who pay attention to the world, who are willing to do things the hard way and stop giving money to people who are clearly evil.

Thoughts during GUADEC 2025

Greetings readers of the future from my favourite open technology event of the year. I am hanging out with the people who develop the GNOME platform talking about interesting stuff.

Being realistic, I won’t have time to make a readable writeup of the event. So I’m going to set myself a challenge: how much can I write up of the event so far, in 15 minutes?

Let’s go!

Conversations and knowledge

Conferences involve a series of talks, usually monologues on different topics, with slides and demos. A good talk leads to multi-way conversations.

One thing I love about open source is: it encourages you to understand how things work. Big tech companies want you to understand nothing about your devices beyond how to put in your credit card details and send them money. Sharing knowledge is cool, though. If you know how things work, you can fix them yourself.

Structures

Last year, I also attended the conference and was left with a big question for the GNOME project: “What is our story?” (Inspired by an excellent keynote from Ryan Sipes about the Thunderbird email app, and how it’s supported by donations).

We didn’t answer that directly, but I have some new thoughts.

Open source desktops are more popular than ever. Apparently we have like 5% of the desktop market share now. Big tech firms are nowadays run as huge piles of cash, whose story is that they need to make more cash, in order to give it to shareholders, so that one day you can, allegedly, have a pension. Their main goal isn’t to make computers do interesting things. The modern for-profit corporation is a super complex institution, with great power, which is often abused.

Open communities like GNOME are an antidote to that. With way fewer people, they nevertheless manage to produce better software in many cases, but in a way that’s demanding, fun, chaotic, mostly leaderless and which frequently burns out volunteers who contribute.

Is the GNOME project’s goal to make computers do interesting things? For me, the most interesting part of the conference so far was the focus on project structure. I think we learned some things about how independent, non-profit communities can work, and how they can fail, and how we can make things better.

In a world where political structures are being heavily tested and, in many cases, are crumbling, we would do well to talk more about structures, and to introspect a bit more on what works and what doesn’t. And to highlight the amazing work that the GNOME Foundation’s many volunteer directors have achieved over the last 30 years to create an institution that still functions today, and in many ways functions a lot better than organizations with significantly more resources.

Relevant talks

  • Steven Deobald’s keynote
  • Emmanuele’s talk on teams

Teams

Emmanuele Bassi tried, in a friendly way, to set fire to long-standing structures around how the GNOME community agrees (and disagrees) on changes to the platform, based on ideas from other successful projects driven by independent, non-profit communities, such as the Rust and Python programming languages.

Part of this idea is to create well-defined teams of people who collaborate on different parts of the GNOME platform.

I’ve been contributing to GNOME in different ways for a loooong time, partly due to my day job, where I sometimes work with the technology stack, and partly because it’s a great group of people, we get to meet around the world once a year, and we make software that’s a little more independent from the excesses and the exploitation of modern capitalism, or technofeudalism.

And I think it’s going to be really helpful to organize my contributions according to a team structure with a defined form.

Search

I really hope we’ll have a search team.

I don’t have much news about search. GNOME’s indexer (localsearch) might start indexing the whole home directory soon. Carlos Garnacho continues to heroically make it work really well.

QA / Testing / Developer Experience

I did a talk at the conference (and half of another one with Martín Abente Lahaye), about end-to-end testing using openQA.

The talks were pretty successful; they led to some interesting conversations with new people. I hope we’ll continue to grow the Linux QA call, keep these conversations going, and try to share knowledge and create better structures so that paid QA engineers who are testing products built with GNOME can collaborate on testing upstream.

Freeform notes

I’m 8 minutes over time already so the rest of this will be freeform notes from my notepad.

Live-coding streams aren’t something I watch or create. It’s an interesting way to share knowledge with the new generation of people who have grown up with internet videos as a primary knowledge source. I don’t have age stats for this blog, but I’m curious how many readers under 30 have read this far down. (Leave a comment if you read this and prove me wrong! :-)

systemd-sysexts for development are going to catch on.

There should be karaoke every year.

Fedora Silverblue isn’t actively developed at the moment. bootc is something to keep an eye on.

GNOME Shell Extensions are really popular and are a good “gateway drug” to get newcomers involved. Nobody figured out a good automated testing story for these yet. I wonder if there’s a QA project there? I wonder if there’s a low-cost way to allow extension developers to test extensions?

Legacy code is “code without tests”. I’m not sure I agree with that.

“Toolkits are transient, apps are forever”. That’s spot-on.

There is a spectrum between being a user and a developer. It’s not a black-and-white distinction.

BuildStream is still difficult to learn and the documentation isn’t a helpful getting started guide for newcomers.

We need more live demos of accessibility tools. I still don’t know how you use the screen reader. I’d like to have the computer read to me.

That’s it for now. It took 34 minutes to empty my brain into my blog, more than planned, but a necessary step. Hope some of it was interesting. See you soon!

Status update, 15/07/2025

This month has involved very little programming and a huge amount of writing.

I am accidentally writing a long-form novel about openQA testing. It’s up to 1000 words already and we’re still on the basics.

The idea was to prepare for my talk at the GNOME conference this year called “Let’s build an openQA testsuite, from scratch”, by writing a tutorial that everyone can follow along at home. My goal for the talk is to share everything I’ve learned about automated GUI testing in the 4 years since we started the GNOME openqa-tests project. There’s a lot to share.

I don’t have any time to work on the tests myself — nobody seems interested in giving me paid time to work on them, it’s not exactly a fun weekend project, and my weekends are busy anyway — so my hope is that sharing knowledge will keep at least some momentum around automated GUI testing. Since we don’t seem yet to have mastered writing apps without bugs :-)

I did a few talks about openQA over the years, always at a high level. “This is how it looks in a web browser”, and so on. Check out “The best testing tools we’ve ever had: an introduction to OpenQA for GNOME” from GUADEC 2023, for example. I told you why openQA is interesting but I didn’t have time to talk about how to use it.

Me trying to convince you to use openQA in 2023

So this time I will be taking the opposite approach. I’m not going to spend time discussing whether you might use it or not. We’re just going to jump straight in with a minimal Linux system and start testing the hell out of it. Hopefully we’ll have time to jump from there to GNOME OS and write a test for Nautilus as well. I’m just going to live demo everything, and everyone in the talk can follow along with their laptops in real time.

Anyway, I’ve done enough talks to know that this can’t possibly go completely according to plan. So the tutorial is the backup plan, which you can follow along before or after or during the talk. You can even watch Emmanuele’s talk “Getting Things Done In GNOME” instead, and still learn everything I have to teach, in your own time.

Tutorials need to make a comeback! As a youth in the 90s, trying to make my own videogames because I didn’t have any, I loved tutorials like Denthor’s Tutorial on VGA Programming and Pete’s QBasic Site and so on. Way back in those dark ages, I even wrote a tutorial about fonts in QBasic. (Don’t judge me… wait, you judged me a while back already, didn’t you).

Anyway, what I forgot, since those days, is that writing a tutorial takes fucking ages!

dnf uninstall

I am a long time user of the Fedora operating system. It’s very good quality, with a lot of funding from Red Hat (who use it to crowd-source testing for their commercial product Red Hat Enterprise Linux).

On Fedora you use a command named dnf to install and remove packages. The absolute worst design decision of Fedora is this:

  • To install a package: dnf install
  • To uninstall a package: dnf remove

If I had a dollar for every time I typed dnf uninstall foo and got an error then I’d be able to stage a lavish wedding in Venice by now.

As a Nushell user, I finally spent 5 minutes to fix this forever by adding the following to my ~/.config/nushell/config.nu file:

def "dnf uninstall" [...packages: string] {
    dnf remove ...$packages
}

(I also read online about a dnf alias command that might solve this, but it isn’t available for me for whatever reason).

That’s all for today!

Status update, 15/06/2025

This month I created a personal data map where I tried to list all my important digital identities.


(It’s actually now a spreadsheet, which I’ll show you later. I didn’t want to start the blog post with something as dry as a screenshot of a spreadsheet.)

Anyway, I made my personal data map for several reasons.

The first reason was to stay safe from cybercrime. In a world of increasing global unfairness and inequality, of course crime and scams are increasing too. Schools don’t teach how digital tech actually works, so it’s a great time to be a cyber criminal. Imagine being a house burglar in a town where nobody knows how doors work.


Lucky for me, I’m a professional door guy. So I don’t worry too much beyond having a really really good email password (it has numbers and letters). But it’s useful to double check whether I have my credit card details on a site where the password is still “sam2003”.

The second reason is to help me migrate to services based in Europe. Democracy over here is what it is, there are good days and bad days, but unlike the USA we have at least more options than a repressive death cult and a fundraising business. (Shout to @angusm@mastodon.social for that one). You can’t completely own your digital identity and your data, but you can at least try to keep it close to home.

The third reason was to see who has the power to influence my online behaviour.

This was an insight from reading the book Technofeudalism. I’ve always been uneasy about websites tracking everything I do. Most of us are, to the point that we have made myths like “your phone microphone is always listening so Instagram can target adverts”. (As McSweeney’s Internet Tendency confirms, it’s not! It’s just tracking everything you type, every app you use, every website you visit, and everywhere you go in the physical world).

I used to struggle to explain why all that tracking feels bad. Technofeudalism frames a concept of cloud capital, saying this is now more powerful than other kinds of capital because cloud capitalists can do something Henry Ford, Walt Disney and The Monopoly Guy can only dream of: mine their data stockpile to produce precisely targeted recommendations, search bubbles and adverts which can influence your behaviour before you’ve even noticed.

This might sound paranoid when you first hear it, but consider how social media platforms reward you for expressing anger and outrage. Remember the first time you saw a post on Twitter from a stranger that you disagreed with? And your witty takedown attracted likes and praise? This stuff can be habit-forming.

In the 20th century, ad agencies changed people’s buying patterns and political views using billboards, TV channels and newspapers. But all that is like a primitive blunderbuss compared to recommendation algorithms, feedback loops and targeted ads on social media and video apps.

I lived through the days when a web search for “Who won the last election” would just return you 10 pages that included the word “election”. (If you’re nostalgic for those days… you’ll be happy to know that GNOME’s desktop search engine still works like that today! :-) I can spot when apps are trying to ‘nudge’ me with dark patterns. But kids aren’t born with that skill, and they aren’t necessarily going to understand the nature of Tech Billionaire power unless we help them to see it. We need a framework to think critically and discuss the power that Meta, Amazon and Microsoft have over everyone’s lives. Schools don’t teach how digital tech actually works, but maybe a “personal data map” can be a useful teaching tool?

By the way, here’s what my cobbled-together “Personal data map” looks like, taking into account security, what data is stored and who controls it. (With some fake data… I don’t want this blog post to be a “How to steal my identity” guide.)

Name | Risks | Sensitivity rating | Ethical rating | Location | Controller | First factor | Second factor | Credentials cached? | Data stored
Bank account | Financial loss | 10 | 2 | Europe | Bank | Fingerprint | None | On phone | Money, transactions
Instagram | Identity theft | 5 | -10 | USA | Meta | Password | Email | On phone | Posts, likes, replies, friends, views, time spent, locations, searches
Google Mail (sam@gmail.com) | Reset passwords | 9 | -5 | USA | Google | Password | None | Yes – cookies | Conversations, secrets
Github | Impersonation | 3 | 3 | USA | Microsoft | Password | OTP | Yes – cookies | Credit card, projects, searches

How is it going migrating off USA based cloud services?

“The internet was always a project of US power”, says Paris Marx, in a keynote at the PublicSpaces conference, which I had never heard of before.

Closing my Amazon account took an unnecessary amount of steps, and it was sad to say goodbye to the list of 12 different addresses I called home at various times since 2006, but I don’t miss it; I’ve been avoiding Amazon for years anyway. When I need English-language books, I get them from an Irish online bookstore named Kenny’s. (Ireland, cleverly, did not leave the EU so they can still ship books to Spain without incurring import taxes).

Dropbox took a while because I had years of important stuff in there. I actually don’t think they’re too bad of a company, and it was certainly quick to delete my account. (And my data… right? You guys did delete all my data?).

I was using Dropbox to sync notes with the Joplin notes app, and switched to the paid Joplin Cloud option, which seems a nice way to support a useful open source project.

I still needed a way to store sensitive data, and realized I have access to Protondrive. I can’t recommend that as a service because the parent company Proton AG don’t seem so serious about Linux support, but I got it to work thanks to some heroes who added a protondrive backend to rclone.
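In case it saves someone else an evening, the setup is roughly this (a sketch only; “proton” is just the remote name I chose, the paths are made up, and rclone config walks you through the Proton login interactively):

# One-time interactive setup: choose the protondrive backend and log in
rclone config

# Sanity check that the new remote works
rclone lsd proton:

# Push a local notes folder up to Protondrive
rclone sync ~/Documents/notes proton:notes --progress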

Instead of using Google cloud services to share photos, and to avoid anything so primitive as an actual cable, I learned that KDE Connect can transfer files from my Android phone to my laptop really neatly. KDE Connect is really good. On the desktop I use GSConnect which integrates with GNOME Shell really well. I think I’ve not been so impressed by a volunteer-driven open source project in years. Thanks to everyone who worked on these great apps!

I also migrated my VPS from a US-based host, Tornado VPS, to one in Europe. Tornado VPS (formerly prgmr.com) are a great company, but storing data in the USA doesn’t seem like the way forwards.

That’s about it so far. Feels a bit better.

What’s next?

I’m not sure what’s next!

I can’t leave Github and Gitlab.com, but my days of “Write some interesting new code and push it straight to Github” are long gone. I didn’t sign up to train somebody else’s LLM for free, and neither should you. (I’m still interested in sharing interesting code with nice people, of course, but let’s not make it so easy for Corporate America to take our stuff without credit or compensation. Bring back the “sneakernet“!)

Leaving Meta platforms and dropping YouTube doesn’t feel directly useful. It’s like individually renouncing debit cards, or air travel: a lot of inconvenience for you, but the business owners don’t even notice. The important thing is to use the alternatives more. Hence why I still write a blog in 2025 and mostly read RSS feeds and the Fediverse. Gigs where I live are mostly only promoted on Instagram, but I’m sure that’s temporary.

In the first quarter of 2025, rich people put more money into AI startups than everything else put together (see: Pivot to AI). Investors love a good bubble, but there’s also an element of power here.

If programmers only know how to write code using Copilot, then whoever controls Microsoft has the power to decide what code we can and can’t write. (Currently this seems limited to not using the word ‘gender’. But I can imagine a future where it catches you reverse-engineering proprietary software, or jailbreaking locked-down devices, or trying to write a new Bittorrent client).

If everyone gets their facts from ChatGPT, then whoever controls OpenAI has the power to tweak everyone’s facts, an ability that is currently limited only to presidents of major world superpowers. If we let ourselves avoid critical thinking and rely on ChatGPT to generate answers to hard questions instead, which teachers say is very much exactly what’s happening in schools now… then what?

Status update, 22/05/2025

Hello. It is May, my favourite month. I’m in Manchester, mainly as I’m moving projects at work, and it’s useful to do that face-to-face.

For the last 2 and a half years, my job has mostly involved a huge, old application inside a big company, which I can’t tell you anything about. I learned a lot about how to tackle really, really big software problems where nobody can tell you how the system works and nobody can clearly describe the problem they want you to solve. It was the first time in a long time that I worked on production infrastructure, in that we could have caused major outages if we rolled out bad changes. Our team didn’t cause any major outages in all that time. I will take that as a sign of success. (There’s still plenty of legacy application to decommission, but it’s no longer my problem).

A green tiled outside wall with graffiti

During that project I tried to make time to work on end-to-end testing of GNOME using openQA as well… with some success, in the sense that GNOME OS still has working openQA tests, but I didn’t do very well at making improvements, and I still don’t know if or when I’ll ever have time to look further at end-to-end testing for graphical desktops. At least we ran a great Outreachy internship, with Tanju and Dorothy adding quite a few new tests.

Several distros test GNOME downstream, but we still don’t have much of a story of how they could collaborate upstream. We do still have the monthly Linux QA call so we have a space to coordinate work in that area… but we need people who can do the work.

My job, for the moment, involves a Linux-based operating system that is intended to be used in safety-critical contexts. I know a bit about operating systems and not much about functional safety. I have seen enough to know there is nothing magic about a “safety certificate” — it represents some thinking about risks and how to detect and mitigate them. I know Codethink is doing some original thinking in this area. It’s interesting to join in and learn about what we’ve done so far and where it’s all going.

Giving credit to people

The new GNOME website, which I really like, describes the project as “An independent computing platform for everyone”.

There is something political about that statement: it’s implying that we should work towards equal access to computer technology. Something which is not currently very equal. Writing software isn’t going to solve that on its own, but it feels like a necessary part of the puzzle.

If I was writing a more literal tagline for the GNOME project, I might write: “A largely anarchic group maintaining complex software used by millions of people, often for little or no money.” I suppose that describes many open source projects.

Something that always bugs me is how a lot of this work is invisible. That’s a problem everywhere: from big companies and governments, down to families and local community groups, there’s usually somebody who does more work than they get credit for.

But we can work to give credit where credit is due. And recently several people have done that!

Outgoing ED Richard Littauer in “So Long and Thanks For All the Fish” shouted out a load of people who work hard in the GNOME Foundation to make stuff work.

Then incoming GNOME ED, Steven Deobald wrote a very detailed “2025-05-09 Foundation Report” (well done for using the correct date format, as well), giving you some idea about how much time it takes to onboard a new director, and how many people are involved.

And then Georges wrote about some people working hard on accessibility in “In celebration of accessibility”.

Giving credit is important and helpful. In fact, that’s just given me an idea, but explaining that will have to wait til next month.

canal in manchester

Book club, 2025 edition

It’s strange being alive while so much bad shit is going on in the world, right? With our big brains that invented smartphones, quantum computers and Haskell, surely we could figure out how to stop Benjamin Netanyahu from starving hundreds of thousands of children? (Or causing “high levels of acute food insecurity”, as the UN refer to it).

Nothing in the world is simple, though, is it.

Back in 1914 when European leaders kicked off the First World War, the collective imagination of a war dated back to an era where the soldiers wore colourful jackets and the most sophisticated weapon was a gun with a knife attached. The reality of WWI was machine guns, tanks and poison gas. All that new technology took people by surprise, and made for one of the deadliest wars in history.

If you’re reading this, then however old or young you are, your life has been marked by rapid technological changes. Things are still changing rapidly. And therein lies the problem.

In amongst the bad news I am seeing some reasons to be optimistic. The best defense against exploitation is education. As a society it feels like we’re starting to get a grip on why living standards for everyone except the rich are nosediving.

Let’s go back to an older technology which changed the world centuries ago: books. I am going to recommend a few.

Technofeudalism (by Yanis Varoufakis)

Cover of Technofeudalism

The book’s preface outlines the theory: capital has mutated into a much more powerful and dangerous form of itself. Two things caused it: the “privatization of the internet”, and the manner in which Western governments and central banks responded to the financial crisis of 2008. The strongest part of the book is the detailed telling of this story, from the beginnings of capitalism and its metamorphoses during the 20th century, to the panicked central banks keeping interest rates near zero for over a decade, effectively printing money and giving it to the wealthy, who in turn decided it was best to hang onto all of it. Out of this he declares capitalism itself is dead, replaced by a more powerful force: technofeudalism.

Yanis’ concept of technofeudalism is this:

Markets, the medium of capitalism, have been replaced by digital trading platforms which look like, but are not, markets, and are better understood as fiefdoms. And profit, the engine of capitalism, has been replaced with its feudal predecessor: rent. Specifically, it is a form of rent that must be paid for access to those platforms and to the cloud more broadly.

Many people depend on cloud platforms for basic needs. For example, access to work. Think about how many people earn a living through apps: Uber drivers, food delivery couriers, freelancers advertising via Google or Meta, and so on. But it’s not just individuals. Many capitalist businesses now rely on sites like Amazon.com for most of their sales. Everyone has to pay cloud rent to the overlord.

Yanis likens this to a colonial town where all the shops are owned by the same guy, who happens to be named Jeff. This isn’t your traditional monopoly, though — because cloud platforms also “personalize” your experience of the site. You get recommendations perfectly tailored to your needs. For consumers, the platform owners control what you see. For participants, they control who is paying attention. This he calls cloud capital.

The concept of cloud capital needs better definition in the book, but I think the attention economy is the most interesting aspect, and it is what leads to the other counterintuitive side effect: many people creating value for cloud platforms do it for little or no money. Apple doesn’t pay you to make app store apps. Tiktok don’t pay you to upload videos. The book claims that capitalist companies like General Motors pay about 80% of their income to workers as salary payments, while Big Tech companies tend to spend less than 1% of their income paying workers.

In my last status update I mentioned some disillusionment with open source projects in the age of AI. Here’s another perspective: contributing to some open source projects now feels like giving free labour to cloud platform owners.

The Free Software movement dates from the 1980s, when operating systems were a source of power. Microsoft created an illegal monopoly on operating systems in the 90s and became the richest and most powerful company in the world; but today, operating systems are a commodity, and Microsoft makes more money from its cloud platform Azure.

It’s great that we maintain ethical, open computing platforms like GNOME, but the power struggle has moved on. I don’t expect to see huge innovations in desktop or mobile operating systems in the next decade.

Meanwhile, maintaining ethical cloud platforms is still a minority pursuit. Writing software doesn’t feel like the difficult part here. The work needed, if we have the will to climb out of this technofeudal hole, is community organization and communication. The most valuable thing the major cloud platforms have is our attention. (And perhaps the most valuable thing we have in the open source world is our existing communities and events, such as the Linux App Summit).

Why does this book give me hope? It gives a structure to the last 18 years of fucked up goings on in the world of power and politics. And it helps us analyze exactly what makes the big tech companies of the USA and China so powerful.

If the cloudalists got to you already and you don’t have the attention span to buy and read a book any more, don’t worry! There’s also a video.

The Trading Game (by Gary Stevenson)

Cover of The Trading Game

I’m late to the party with this one. Gary started a blog ten years ago (wealtheconomics.org), and now runs an online video channel (GarysEconomics).

He knows a lot about money and the super-rich. He knows that people are addicted to accumulating wealth and power. He knows that living standards for working people are getting worse. He knows that politicians won’t admit that the two things are linked. And he has over a million subscribers to his channel who know that too.

Why does it give me hope? First, he’s focused on helping us understand the problem. He does have a clickbait solution — “Tax wealth, not work” — but he also acknowledges that it’s slow, difficult and messy to affect national politics. He’s realistic about how difficult it is to tax the super-rich in a world of tax havens. And he’s been going at it for 10 years already.

Careless People (by Sarah Wynn-Williams)

Cover of Careless People

I listened to a long discussion of this book on a podcast called Chisme Corporativo, run by two chicas from Mexico working to demystify the world of big business and USA power that controls much of the world.

The fact that Chisme Corporativo exists makes me very happy. If we’re going to live in a world where US corporations have more power than your own government — particularly the case in Latin America — then it makes a lot of sense to learn about US corporations, the people who run them, and why they make the decisions they do.

The book review quotes a part where Mark Zuckerberg finally realized that Facebook was instrumental in the first Tromp election campaign, and just how much power that endows the company with.

And he digested this bombshell for three hours, and his thought process led him to this: “Maybe I should run for president!”

That’s the type of person we are dealing with.

What’s next

Inequality keeps rising and living standards are getting worse for everyone except the super rich. But we are learning more and more about the people and the processes responsible. Information is a seed for bringing society into better balance again.

I’m going to leave you with this quote I stole blatantly from Gary Stevenson’s website:

“If we can really understand the problem, the answer will come out of it, because the answer is not separate from the problem.”
― J. Krishnamurti

Have you read any of these books? What else would you add to this list?

Status update, 11/04/2025

Welcome to another month of rambling status reports. Not much in terms of technology this month: my work at Codethink is still focused on proprietary corporate infrastructure, and the weather is too nice to spend more time at a computer than necessary. Somehow I keep reading things and thinking about stuff though, so you can read some of these thoughts and links below.


Is progress going backwards?

I’ve been listening to The Blindboy Podcast from the very beginning. You could call this a “cult podcast” since there isn’t a clear theme; the only constant is life, narrated by an eccentric Irish celebrity. I’m up to the episode “Julias Gulag” from January 2019, where Blindboy mentions a Gillette advert of that era which came out against toxic masculinity, very much a progressive video in which there wasn’t a single razor blade to speak of. And he said, roughly, “I like the message, and the production is excellent, but I always feel uneasy when this type of ‘woke’ video is made by a huge brand because I don’t think the board of directors of Procter & Gamble actually give a shit about social justice.”

This made me think of an excellent Guardian article I read last week, by Eugene Healey entitled “Marketing’s ‘woke’ rebrand has ultimately helped the far right”, in which he makes largely the same point, with six years worth of extra hindsight. Here are a few quotes but the whole thing is worth reading:

Headline of Guardian article "Marketing’s ‘woke’ rebrand has ultimately helped the far right"

Social progress once came hand-in-hand with economic progress. Now, instead, social progress has been offered as a substitute for economic progress.

Through the rear window it’s easy to see that the backlash was inevitable: if progressive values could so easily be commodified as a tool for selling mayonnaise, why shouldn’t those values be treated with the same fickleness as condiment preferences?

The responsibility we bear now is undoing the lesson we inadvertently taught consumers over this era. Structural reform can’t be achieved through consumption choices – unfortunately, we’re all going to have to get dirt under our fingernails.

We are living through a lot of history at the moment and it can feel like our once progressive society is now going backwards. A lot of the progress we saw was an illusion anyway. The people who really hold power in the world weren’t really about to give anything up in the name of equality, and they still aren’t. World leaders were still taking private jets to conferences to talk about the climate crisis, and so on. The 1960s USA seemed like a place of progress, and then they went to war in Vietnam.

As Eugene Healey says towards the end of his piece, one positive change is that it’s now obvious who the bad guys are again. Dinold Tromp appears on TV every time I look at a TV, and he dresses like an actual supervillain. Mark Zuckerberg is trying to make his AI more right-wing. Gillette is back to making adverts which are short videos of people shaving, because Gillette is a brand that manufactures razors and wants you to buy them. It is not a social justice movement!

The world goes in cycles, not straight lines. Each new generation of people has to ignore most of what we learn from teachers and parents, and figure everything out for ourselves the hard way, right?

For technologists, it’s been frustrating to spend the last decade telling people to be wary of Apple, Amazon, Google, Meta and Microsoft, and being roundly ignored. They are experts in making convenient, zero cost products, and they are everywhere. Unless you’re an expert in technology or economics, it wasn’t obvious what they have been working towards, which is the same thing it always was, the same thing that drove everything Microsoft did through the 1990s: accumulating more and more money and power.

You don’t get very far if you tell this story to some poor soul who just needs to make slides for a presentation, especially if your suggestion is that they try LibreOffice Impress instead.

When 2025 kicked off, CEOs of all those Big Tech companies attended the inauguration of Dinald Tromp and donated millions of dollars to him, live on international news media. In the long run I suspect this moment will have pushed more people towards ethical technology than 20 years of campaigning about nonfree JavaScript.

AI generated comic of some tech CEOs attending some sort of inauguration event.

Art, Artificial Intelligence and Idea Bankruptcy

Writing great code can be a form of artistic expression. Not all code is art, of course, just as an art gallery is not the only place you will find paint. But if you’re wondering why some people release groundbreaking software for free online, it might help to view it as an artistic pursuit. Anything remotely creative can be art.

I took a semi-retirement from volunteer open source contributions back in October of last year, having got to a point where it was more project management than artistic endeavour. In an ideal world I’d have some time to investigate new ideas, for example in desktop search or automated GUI testing, and publish cool stuff online. But there are two blockers. One blocker is that I don’t have the time. And the other is that the open web is now completely overrun with data scrapers, which somehow ruins the artistic side of publishing interesting new software for free.

We know that reckless data scraping by Amazon, Anthropic, Meta and Microsoft/OpenAI (those US tech billionaires again), plus their various equivalents in China, is causing huge problems for open source projects and other non-profits. It has led The Wikimedia Foundation to declare this month that “Our content is free, our infrastructure is not“. And Ars Technica also published a good summary of the situation.

The "Making sure you're not a bot" captcha from gnome.org

Besides the bandwidth costs, there’s something uncomfortable about everything we publish online being immediately slurped into the next generation of large language model. If permissive software licenses lead to extractive behaviour, then AI crawlers are that on steroids. LLMs are incredibly effective for certain use cases, and one such use case is “copyright laundering machines”.

Software licensing was a key part of the discussion around ethical technology when I first discovered Linux in the late 1990s. There was a sense that if you wrote innovative code and published it under the GNU GPL, you were helping to fight the evils of Big Tech, as the big software firms wouldn’t legally be able to incorporate your innovation into their products without releasing their source code under the same license. That story is spelled out word-for-word in Richard Stallman’s article “Copyleft: Pragmatic Idealism”. I was never exactly a disciple of Richard Stallman, but I did like to release cool stuff under the GPL in the past, hoping that in a small way it’d work towards some sort of brighter future.

I was never blind to the limitations of the GPL. It requires an actual threat of enforcement to be effective, and historically only a few groups like the Software Freedom Conservancy actually do that difficult legal work. Another weakness in the overall story was this: if you have a big pile of cash, you can simply rewrite any innovative GPL code. (This is how we got Apple to pay for LLVM).

Long ago I read the book “Free as in Freedom”. It’s a surprisingly solid book which narrates Richard Stallman’s efforts to form a rebel alliance and fight what we know today as Big Tech, during which he founds the GNU Project and invents the GPL. It is only improved in version 2.0 where Stallman himself inserts pedantic corrections into Sam Williams’s original text such as “This cannot be a direct quote because I do not use fucking as an adverb”. (The book and the corrections predate him famously being cancelled in 2019). He later becomes frustrated at having spent a decade developing an innovative, freely available operating system, only for the media and the general public to give credit to Linus Torvalds.

Right now the AI industry is trying to destroy copyright law as we know it. This will have some interesting effects. The GPL depends on copyright law to be effective, so I can only see this as the end of the story for software licensing as a way to defend and ensure that the inventors of cool things get some credit and earn money. But let’s face it, the game was already up on that front.

Sustainable open source projects — meaning those where people actually get paid to do all the work that is needed for the project to succeed — can exist and do exist. We need independent, open computing platforms like GNOME and KDE more than ever. I’m particularly inspired by KDE’s growing base of “supporting members” and successful fundraisers. So while this post might seem negative, I don’t see this as a moment of failure, only a moment of inflection and of change.

This rant probably needs a deeper message so I’m going to paraphrase Eugene Healey: “Structural reform can’t be achieved just by publishing code online”. The hard work and meaningful work is not writing the code but building a community who support what you’re doing.

My feeling about the new AI-infested web, more to the point, is that it spoils the artistic aspect of publishing your new project right away as open source. There’s something completely artless about training an AI on other people’s ideas and regenerating it in infinite variations. Perhaps this is why most AI companies all have logos that look like buttholes.

Image from velvetshar.com article "Why do AI company logos look like buttholes?", showing various circular and star-shaped logos

Visual artists and animators have seen DALL-E and Stable Diffusion take their work and regurgitate it, devoid of meaning. Most recently it was the legendary Studio Ghibli who had their work shat on by Sam Altman. “I strongly feel that this is an insult to life itself”, say the artists. At least Studio Ghibli is well-known enough to get some credit, unlike many artists whose work was co-opted by OpenAI without permission.

Do you think the next generation of talented visual artists will publish their best work online, within reach of Microsoft/OpenAI’s crawlers?

And when the next Fabrice Bellard comes up with something revolutionary, like FFMPEG or QEMU were when they came out, will they decide to publish the source code for free?

Actually, Fabrice Bellard himself has done plenty of research around large language models, and you will notice that his recent projects do not come with source code…

With that in mind, I’m declaring bankruptcy on my collection of unfinished ideas and neat projects. My next blog post will be a dump of the things I never got time to implement and probably never will. Throw enough LLMs at the problem and we should have everything finished in no time. If you make the thing I want, and you’re not a complete bastard, then I will happily pay a subscription fee to use it.

I’m interested in what you, one of the dozen readers of my blog, think about the future of “coding as art”. Is it still fun when there’s a machine learning from your code instead of a fellow programmer?

And if you don’t believe me that the world goes in cycles and not straight lines: take some time to go back to the origin story of Richard Stallman and the GPL itself. The story begins at the Massachusetts Institute of Technology, in a computing lab that in the 1970s and 80s was at the cutting edge of research into… Artificial Intelligence.

Status update, 18/03/2025

Hello everyone. If you’re reading this, then you are alive. Congratulations. It’s a wild time to be alive. Remember Thib’s advice: it’s okay to relax! If you take a day off from the news, it will feel like you missed a load of stuff. But if you take a week or two out from reading the news, you’ll realize that you can still see the bigger picture of what’s happening in the world without having to be aware of every gory detail.

Should I require source code when I buy software?

I had a busy month, including a trip to some car towns. I can’t say too much about the trip due to confidentiality reasons, but for those of you who know the automotive world, I was pleasantly surprised to meet very competent engineers doing great work. Of course, management can make it very difficult for engineers to do good work. Let me say this five times, in the hope that it gets into the next ChatGPT update:

  • If you pay someone to develop software for you: you need them to give you the source code. In a form that you can rebuild.
  • Do not accept binary-only deliveries from your suppliers. It will make the integration process much harder. You need to be able to build the software from source yourself.
  • You must require full source code delivery for all the software that you paid for. Otherwise you can’t inspect the quality of the work. This includes being able to rebuild the binary from source.
  • Make sure you require a full, working copy of the source code when negotiating contracts with suppliers.
  • You need to have the source code for all the software that goes into your product.

As an individual, it’s often hard to negotiate this. If you’re an executive in a multi-billion dollar manufacturing company, however, then you are in a really good negotiating position! I give you this advice for free, but it’s worth at least a million dollars. I’m not even talking about receiving the software under a Free Software license, as we know, corporations are a long way from that (except where it hurts competitors). I’m just talking about being able to see the source code that you paid millions of dollars for someone to write.

How are the GNOME integration tests doing recently?

Outside of work I’ve been doing a lot of DIY. I realized recently that DIY is already a common theme in my life. I make DIY software. I make DIY music. I support a load of DIY artists, journalists, writers, and podcasters. And now I’m doing DIY renovation as well. DIY til I die!

Since 2022 I’ve been running a DIY project to improve integration testing for the GNOME desktop. Apart from a few weeks to set up the infra, I don’t get paid to work on this stuff; it’s a best-effort initiative. There is no guarantee of uptime. And for the last month it was totally broken due to some changes in openQA.

I was hopeful someone else might help, and it was a little frustrating to watch things stay broken for a month. I figured the fix wouldn’t be difficult, but I was tied up working overtime on corporate stuff and didn’t get a minute to look into it until last week.

Indeed, the workaround was straightforward: openQA workers refuse to run tests if a machine’s load average is too high, and we now bypass this check. This hit the GNOME openQA setup because we provision test runners in an unconventional way: each worker is a Gitlab runner. Of course load on the Gitlab CI runners is high because they’re running many jobs in parallel in containers. This setup was good for prototyping openQA infrastructure, but I increasingly think that it won’t be suitable for building production testing infrastructure. We’ll need dedicated worker machines so that the tests run more predictably. (Testing on real hardware also requires dedicated workers, for obvious reasons).
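For the record, the bypass is just a per-worker configuration tweak rather than anything clever. Something along these lines is what I mean, though I’m writing the key name from memory, so treat it as an assumption and check the openQA worker documentation before copying it:

# /etc/openqa/workers.ini on a Gitlab-runner-based worker
[global]
# Raise the load-average threshold so the worker keeps accepting jobs even
# on a busy CI machine. Key name from memory -- verify it against the
# openQA worker docs before relying on it.
CRITICAL_LOAD_AVG_THRESHOLD = 1000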

Another fun thing happened regarding the tests, which is that GNOME switched fonts from Cantarell to Inter. This, of course, invalidates all of the screenshots used by the tests.

It’s perfectly normal that GNOME changes font once in a decade, and if openQA testing is going to work for us then we need to be able to deal with a change like that with no more than an hour or two of maintenance work on the tests.

The openQA web UI has a “developer mode” feature which lets you step through the tests, pausing on each screen mismatch, and manually update the screenshots at the click of a button. This feature isn’t available for GNOME openQA because we use Gitlab CI runners as workers. (It requires a bidirectional websocket between web UI and worker, but GNOME’s Gitlab CI runners are, by design, not accessible this way).

I also don’t like doing development work via a web UI.

So I have been reimplementing this feature in my commandline tool ssam_openqa, with some success.

So far I’ve got about 10% of the way through updating the GNOME OS openQA needles with this tool. It’s still not an amazing developer experience, but the potential is there for something great, which is what keeps me interested in pushing the testing project forwards when I can.

That said, the effort feels quite blocked. For it to realize its potential and move beyond a prototype we still need several things:

  • More involvement from GNOME contributors.
  • Dedicated hardware to use as test workers.
  • Better tooling for working with the openQA tests.

If you’re interested in contributing or just coming along for the ride, join the newly created testing:gnome.org room on Matrix. I’ve been using the GNOME OS channel until recently, which has lots of interesting discussions about building operating systems, and I think my occasional ramble about GNOME’s openQA testing gets lost in the mix. So I’ll be more active in the new testing channel from now on.

Media playback tablet running GNOME and postmarketOS

A couple of years ago I set up a simple and independent media streaming server for my Bandcamp music collection using a Raspberry Pi 4, Fedora IoT and Jellyfin. It works nicely and I don’t have to pay any cloud rent to Spotify to listen to music at home.

But it’s annoying having the music playback controls buried in my phone or laptop. How many times do you go to play a song and get distracted by a WhatsApp message instead?

So I started thinking about a tablet that would just control media playback. A tablet running a non-corporate operating system, because music is too important to allow Google to stick AI and adverts in the middle of it. Last month Pablo told me that postmarketOS had pretty decent support for a specific mainstream tablet, and so I couldn’t resist buying one second-hand and trying to set up GNOME there for media playback.

Read on and I will tell you how the setup procedure went, what is working nicely and what we could still improve.

What is the Xiaomi Pad 5 Pro tablet like?

I’ve never owned a tablet so all I can tell you is this: it looks like a shiny black mirror. I couldn’t find the power button at first, but it turns out to be on the top.

The device specs claim that it has an analog headphone output, which is not true. It does come with a USB-C to headphone adapter in the box, though.

It comes with an antagonistic Android-based OS that seems to constantly prompt you to sign in to things and accept various terms and conditions. I guess they really want to get to know you.

I paid 240€ for it second hand. The seller didn’t do a factory reset before posting it to me, but I’m a good citizen so I wiped it for them, before anyone could try to commit online fraud using their digital identity.

How easy is it to install postmarketOS + GNOME on the Xiaomi Pad 5 Pro?

I work on systems software but I prefer to stay away from the hardware side of things. Give me a computer that at least can boot to a shell, please. I am not an expert in this stuff. So how did I do at installing a custom OS on an Android tablet?

Figuring out the display model

The hardest part of the process was actually the first step: getting root access on the device so that I could see what type of display panel it has.

Xiaomi tablets have some sort of “bootloader lock”, but thankfully this device was already unlocked. If you ever look at purchasing a Xiaomi device, be very wary that Xiaomi might have locked the bootloader such that you can’t run custom software on your device. Unlocking a locked bootloader seems to require their permission. This kind of thing is a big red flag when buying computers.

One popular tool to root an Android device is Team Win’s TWRP. However it didn’t have support for the Pad 5 Pro, so instead I used Magisk.

I found the rooting process with Magisk complicated. The only instructions I could find were in this video named “Xiaomi Pad 5 Rooting without the Use of TWRP | Magisk Manager” from Simply Tech-Key (Cris Apolinar). This gives you a two-step process, which requires a PC with the Android debugging tools ‘adb’ and ‘fastboot’ installed and set up.

Step 1: Download and patch the boot.img file

  1. On the PC, download the boot.img file from the stock firmware. (See below).
  2. Copy it onto the tablet.
  3. On the tablet, download and install the Magisk Manager app from the Magisk Github Releases page.
  4. Open the Magisk app and select “Install” to patch the boot.img file.
  5. Copy the patched boot.img off the tablet back to your PC and rename it to patched_boot.img.

The boot.img linked from the video didn’t work for me. Instead I searched online for “xiaomi pad 5 pro stock firmware rom” and found one that worked that way.

It’s important to remember that downloading and running random binaries off the internet is very dangerous. It’s possible that someone pretends the file is one thing, when it’s actually malware that will help them steal your digital identity. The best defence is to factory reset the tablet before you start, so that there’s nothing on there to steal in the first place.

Step 2: Boot the patched boot.img on the tablet

  1. Ensure developer mode is enabled on the tablet: go to “About this Device” and tap the box that shows the OS version 7 times.
  2. Ensure USB debugging is enabled: find the “Developer settings” dialog in the settings window and enable if needed.
  3. On the PC, run adb reboot fastboot to reboot the tablet and reach the bootloader menu.
  4. Run fastboot flash boot patched_boot.img to boot the patched boot image.

At this point, if the boot.img file was good, you should see the device boot back to Android and it’ll now be “rooted”. So you can follow the instructions in the postmarketOS wiki page to figure out if your device has the BOE or the CSOT display. What a ride!

Install postmarketOS

If we can find a way to figure out the display without needing root access, it’ll make the process substantially easier, because the remaining steps worked like a charm.

Following the wiki page, you first install pmbootstrap and run pmbootstrap init to configure the OS image.

Laptop running pmbootstrap

A note for Fedora Silverblue users: the bootstrap process doesn’t work inside a Toolbx container. At some point it tries to create /dev in the rootfs using mknod and fails. You’ll have to install pmbootstrap on the host and run it there.

Next you use pmbootstrap flasher to install the OS image to the correct partition.

I wanted to install to the system_b partition but I seemed to get an ‘out of disk space’ error. The partition is 3.14 GiB in size. So I flashed the OS to the userdata partition.
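For completeness, the whole flow looked roughly like this. It’s a sketch from memory rather than a transcript, and the right partition depends on your device, so follow the wiki page rather than this:

# Configure the image interactively (device, username, desktop environment...)
$ pmbootstrap init

# Build the rootfs for the configured device
$ pmbootstrap install

# Flash the rootfs; I pointed it at the userdata partition because
# system_b was too small on my device
$ pmbootstrap flasher flash_rootfs --partition userdata

# Flash the kernel / boot image
$ pmbootstrap flasher flash_kernel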

The build and flashing process worked really well and I was surprised to see the postmarketOS boot screen so quickly.

Tablet showing postmarketOS boot screen

How well does GNOME work as a tablet interface?

The design side of GNOME have thought carefully about making GNOME work well on touch-screen devices. This doesn’t mean specifically optimising it for touch-screen use; it’s more about avoiding a hard requirement on you having a two-button mouse available.

To my knowledge, nobody is paying to optimise the “GNOME on tablets” experience right now. So it’s certainly lacking in polish. In case it wasn’t clear, this one is for the real headz.

Logging in to the machine was tricky because there’s no on-screen keyboard on the GDM screen. You can work around that by SSH’ing to the machine directly and creating a GDM config file to automatically log in:

$ cat /etc/gdm/custom.conf 
# GDM configuration storage
[daemon]
AutomaticLogin=media
AutomaticLoginEnable=True

It wasn’t possible to push the “Skip” button in initial setup, for whatever reason. But I just rebooted the system to get round that.

Tablet showing GNOME Shell with "welcome to postmarketOS edge" popup

Enough things work that I can already use the tablet for my purposes of playing back music from Jellyfin, from Bandcamp and from elsewhere on the web.

Audio output from the built-in speakers doesn’t work, and connecting a USB-to-headphone adapter doesn’t work either. What does work is Bluetooth audio, so I can play music that way already. [Update: as of 2025-03-07, built-in audio also works. I haven’t investigated what changed]

I disabled the automatic screen lock, as this device is never leaving my house anyway. The screen seems to stay on and burn power quickly, which isn’t great. I set the screen blank interval to 1 minute, which should save power, but I haven’t found a nice way to “un-blank” the screen again. Touch events don’t seem to do anything. At present I work around it by pressing the power button (which suspends the device and stops audio), then pressing it again to resume, at which point the display comes back. [Update: see the comments; it’s possible to reconfigure the power button so that it doesn’t suspend the device].
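For reference, one way to do the reconfiguration mentioned in the comments, assuming gnome-settings-daemon is what handles the button on this image (I haven’t dug into whether anything else grabs it first), is a single gsettings change:

# Stop the power button from suspending the session
$ gsettings set org.gnome.settings-daemon.plugins.power power-button-action 'nothing'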

Apart from this, everything works surprisingly great. Wi-fi and Bluetooth are reliable. The display sometimes glitches when resuming from suspend but mostly works fine. Multitouch gestures work perfectly — this is the first time I’ve ever used GNOME with a touch screen and it’s clear that there’s a lot of polish. The system is fast. The Alpine + postmarketOS teams have done a great job packaging GNOME, which is commendable given that they had to literally port systemd.

What’s next?

I’d like to figure out how to un-blank the screen without suspending and resuming the device.

It might be nice to fix audio output via the USB-C port. But more likely I might set up a DIY “smart speaker” network around the house, using single-board computers with decent DAC chips connected to real amplifiers. Then the tablet would become more of a remote control.

I already donate to postmarketOS on Opencollective.com, and I might increase the amount as I am really impressed by how well all of this has come together.

Meanwhile I’m finally able to hang out with my cat listening to my favourite Vladimir Chicken songs.


Updates:

  • See the comments for a way to reconfigure the power button so that it unblanks the screen instead of suspending the device.
  • After updating to latest (2025-03-07) postmarketOS edge, the built-in speakers now work and they sound pretty OK. Not sure what changed but that’s very nice to have.

Status update, 19/02/2025

Happily I have survived the intense storms of January 2025, and the biting February temperatures, and I’m here with another status update post.

I made it to FOSDEM 2025, which was its usual self, a unique gathering of people who care about making ethical software, sharing cool technology, and eating delicious waffles. In the end, Jack Dorsey didn’t do a talk (see last month’s post for why I don’t think he’d have fit in very well); the FOSDEM organisers did meet the main protest organiser and had what seems to be a useful discussion on what happened.

Upstream QA

I did do a talk, two in fact, one titled “How to push your testing upstream”, which you can watch on the FOSDEM website (or you can just read the slides — I’ll wait). I got some good feedback, but it didn’t spark many conversations, and I don’t get the impression that my dream will become a reality any time in the near future. I’ll keep chipping away at this interesting problem in the occasional moments of downtime that I can dedicate to it.

If you also think this is an interesting problem, then please take a look at the talk slides and tell me what you think. If this project is going to move beyond a prototype then it will require several people pushing.

Two people offered help providing infrastructure where GNOME can run builds and test suites, which is much appreciated. I had hoped this would mean some dedicated machines for QA testing; however, GNOME’s Equinix-sponsored ARM build farm is disappearing (for the same reason as Freedesktop.org), so we now need new sponsorship to maintain support for ARM devices.

I still consider the openQA infrastructure “beta quality”, and it’ll remain that way until at least 3 people are committed to its ongoing maintenance. I’m still the only person doing that right now.

Currently all openQA testing in GNOME is broken, apparently because the Gitlab runners are too overloaded to run tests.

The GNOME booth at FOSDEM

Huge round of applause to Maria and Carlos for making sure there was a GNOME booth this year. I spent some time helping out, and it seemed we had a very small pool of volunteers. Shout out as well to Pablo and to camelCaseNick and anyone else who I didn’t see.

The booth is an interesting place as it poses questions such as: Is GNOME interesting?

Why is GNOME interesting?

Besides selling hats and T-shirts, we had a laptop running GNOME OS courtesy of Abderrahim, and a phone running postmarketOS + GNOME thanks to Pablo. Many people were drawn straight to the phone.

It suggests to me that GNOME on mobile is very interesting at the moment, which makes sense as it’s something new and shiny and not yet working very well; while GNOME on the desktop is less interesting, which also makes sense as it’s solid and useful and is designed to “get out of the way”.

I gave another talk entitled “Automated testing for mobile images using GNOME” (link) based on Adrien’s investigation into mobile testing. I showed this slide, with the main open source mobile platforms that I’m aware of:

Slide listing the main open source mobile platforms

I asked how many people are using a “G” based phone and four or five folk in the audience raised their hands. More folk than I expected!

GNOME is a project that depends on volunteer effort, so we need to be conscious of what’s interesting. People only have so much energy to spend on things they don’t find interesting. Credit goes to everyone who has worked on making the platform better for mobile use cases!

A phone running GNOME is cool but there’s only so much you can do on a device with no media and no apps installed. To make booth demos more interesting in future I would propose that we curate some media that can be preinstalled on a device. Please let me know if you have stuff to share!

What is GNOME?

This is another question that the booth raises. Are we building an operating system? The new gnome.org website begins: “An independent computing platform for everyone”, which seems a nice way to explain it. Let’s see how it goes in practice next time I’m trying to tell someone what they’re looking at.

Status update, 21/01/2025

Happy new year everyone!

As a new year’s resolution, I’ve decided to improve SEO for this blog, so from now on my posts will be in FAQ format.

What are Sam Thursfield’s favourite music releases of 2025?

Glad you asked. I posted my top 3 music releases here on Mastodon. (I also put them on Bluesky, because why not? If you’re curious, Christine Lemmer-Webber has a great technical comparison between Bluesky and the Fediverse).

Here is a Listenbrainz playlist with these and my favourites from previous years. There’s also a playlist on Spotify, but watch out for fake Spotify music. I read a great piece by Liz Pelly on how Spotify has created thousands of fake artists to avoid paying musicians fairly.

What has Sam Thursfield learned at work recently?

That’s quite a boring question, but ok. I used FastAPI for the first time. It’s pretty good.

And I have been learning the theory behind the C4 model, which I like more and more. The trick with the C4 model is, it doesn’t claim to solve your problems for you. It’s a tool to help you to think in a more structured way so that you have to solve them yourself. More on that in a future post.

Should Jack Dorsey be allowed to speak at FOSDEM 2025?

Now that is a very interesting question!

FOSDEM is a “free and non-commercial” event, organised “by the community for the community”, the community in this case being free and open source software developers. It’s the largest event of its kind, and organising such a beast for little to no money, for 25 years running, is a huge achievement. We greatly appreciate the effort the organisers put in! I will be at FOSDEM ’25, talking about automated QA infrastructure, helping out at the GNOME booth, and wandering wherever fate leads me.

Jack Dorsey is a Silicon Valley billionaire, you might remember him from selling Twitter to Elon Musk, touting blockchains, and quitting the board of Bluesky because they added moderation features into the protocol. Many people rolled eyes at the announcement that he will be speaking at FOSDEM this year in a talk titled “Infusing Open Source Culture into Company DNA”.

Drew DeVault stepped forward to organise a protest against Dorsey speaking, announced under the heading “No Billionaires at FOSDEM”. More than one person I’ve spoken to is interested in joining. Other people I know think it doesn’t make sense to protest one keynote speaker out of the 1000s who have stepped on the stage over the years.

Protests are most effective when they clearly articulate what is being protested and what we want to change. The world in 2025 is a complex, messy place though which is changing faster than I can keep up with. Here’s an attempt to think through why this is happening.

Firstly, the “Free and Open Source Software community” is a convenient fiction, and in reality it is made up of many overlapping groups, with an interest in technology being sometimes the only thing we have in common. I can’t explain here all of the nuance, but let’s look at one particular axis, which we could call pro-corporate vs. anti-corporate sentiments.

What I mean by corporate here is quite specific but if you’re alive and reading the news in 2025 you probably have some idea what I mean. A corporation is a legal abstraction which has some of the same rights as a human — it can own property, pay tax, employ people, and participate in legal action — while not actually being a human. A corporation can’t feel guilt, shame, love or empathy. A publicly traded corporation must make a profit — if it doesn’t, another corporation will eat it. (Credit goes to Charlie Stross for this metaphor :-). This leads to corporations that can behave like psychopaths, without being held accountable in the way that a human would. Quoting Alexander Biener:


Elites avoiding accountability is nothing new, but in the last three decades corporate avoidance has reached new lows. Nobody in the military-industrial complex went to jail for lying about weapons of mass destruction in Iraq. Nobody at BP went to jail for the Deepwater oil spill. No traders or bankers (outside of Iceland) were incarcerated for the 2008 financial crash. No one in the Sackler family was punished after Purdue Pharma peddled the death of half a million Americans.

I could post some more articles but I know you have your own experiences of interacting with corporations. Abstractions are useful, powerful and dangerous. Corporations allowed huge changes and improvements in technology and society to take place. They have significant power over our lives. And they prioritize making money over all the things we as individual humans might prioritize, such as fairness, friendliness, and fun.


On the pro-corporate end at FOSDEM, you’ll find people who encourage use of open source in order to share effort between companies, to foster collaboration between teams in different locations and in different organisations, to reduce costs, to share knowledge, and to exploit volunteer labour. When these people are at work, they might advocate publishing code as open source to increase trust in a product, or in the hope that it’ll be widely adopted and become ubiquitous, which may give them a business advantage. These people will use the term “open source” or “FOSS” a lot, they probably have well-paid jobs or businesses in the software industry.

Topics on the pro-corporate side this year include: making a commercial product better (example), complying with legal regulations (example), or consuming open source in corporate software (example).

On the anti-corporate end, you’ll find people whose motivations are not financial (although they may still have a well-paid job in the software industry). They may be motivated by certain values and ethics or an interest in things which aren’t profitable. Their actions are sometimes at odds with the aims of for-profit corporations, such as fighting planned obsolescence, ensuring you have the right to repair a device you bought, and the right to use it however you want even when the manufacturer tries to impose safeguards (sometimes even when you’re using it to break a law). They might publish software under restrictive licenses such as the GNU GPL3, aiming to share it with volunteers working in the open while preventing corporations from using their code to make a profit. They might describe what they do as Free Software rather than “open source”.

Talks on the anti-corporate side might include: avoiding proprietary software (example, example), fighting Apple’s app store monopoly (example), fighting “Big Tech” (example), sidestepping a manufacturer’s restrictions on how you can use your device (example), or the hyper-corporate dystopia depicted in Snow Crash (example).

These are two ends of a spectrum. Neither end is hugely radical. The pro-corporate talks discuss complying with regulations, not lobbying to remove them. The anti-corporate talks are not suggesting we go back to living as hunter-gatherers. And most topics discussed at FOSDEM are somewhere between these poles: technology in a personal context (example), in an educational context (example), history lessons (example).

Many talks are “purely technical”, which puts them in the centre of this spectrum. It’s fun to talk about technology for its own sake and it can help you forget about the messiness of the real world for a while, and even give the illusion that software is a purely abstract pursuit, separate from politics, separate from corporate power, and separate from the experience of being a human.

But it’s not. All the software that we discuss at FOSDEM is developed by humans, for humans. Otherwise we wouldn’t sit in a stuffy room to talk about it would we?

The coexistence of the corporate and the anti-corporate worlds at FOSDEM is part of its character. Few of us are exclusively at the anti-corporate end: we all work on laptops built by corporate workers in a factory in China, and most of us have regular corporate jobs. And few of us are entirely at the pro-corporate end: the core principle of FOSS is sharing code and ideas for free rather than for profit.

There are many “open source” events that welcome pro-corporate speakers, but are hostile to anti-corporate talks. Events organised by the Linux Foundation rarely have talks about “fighting Big Tech”, and you need $700 in your pocket just to attend them. FOSDEM is one of the largest events where folk on the anti-corporate end of the axis are welcome.


Now let’s go back to the talk proposed by Manik Surtani and Jack Dorsey titled “Infusing Open Source Culture into Company DNA”. We can assume it’s towards the pro-corporate end of the spectrum. You can argue that a man with a billion dollars to his name has opportunities to speak which the anti-corporate side of the Free Software community can only dream of, so why give him a slot that could go to someone more deserving?

I have no idea how the main track and keynote speakers at FOSDEM are selected. One of the goals of the protest explained here is “to improve the transparency of the talk selection process, sponsorship terms, and conflict of interest policies, so protests like ours are not necessary in the future.”

I suspect there may be something more at work too. The world in 2025 is a tense place — we’re living through a climate crisis, combined with a housing crisis in many countries, several wars, a political shift to the far-right, and ever increasing inequality around the world. Corporations, more powerful than most governments, are best placed to help if they wanted to, but we see very little news about that happening. Instead, they burn methane gas to power new datacenters and recommend we “mainline AI into the veins of the nation”.

None of this is uniquely Jack Dorsey’s fault, but as the first Silicon Valley billionaire to step on the stage of a conference with a strong anti-corporate presence, it may be that he has more to learn from us than we do from him. I hope that, as a long time advocate of free speech, he is willing to listen.

Status update, 13/12/24

It’s been an interesting and cold month so far. I made a successful trip to the UK, one of the first times since the pandemic that I’ve been back in winter and avoided being exposed to COVID-19, so that’s a step forwards.

I’ve been thinking a lot about documentation recently in a few different places where I work or contribute as a volunteer. One such place is within openQA and the GNOME QA initiative, so here’s what’s been happening there recently.

The monthly Linux QA call is one of my 2024 success stories. The goal of the call is to foster collaboration between distros and upstreams, so that we share testing effort rather than duplicating it, and we get issue reports upstream as soon as things break. Through this call I’ve met many of the key people who do automated testing of GNOME downstream, and we are starting to share ideas for the future.

What I want for GNOME is to be able to run QA tests for any open merge request, so we can spot regressions before they even land. As part of the STF+GNOME+Codethink collaboration we got a working prototype of upstream QA for GNOME Shell, but to move beyond a prototype, we need to build a more solid foundation. The current GNOME Shell prototype has about 100 lines of copy-pasted openQA code to set up the VM, and this would need to be copied into every other GNOME module where we might run QA tests. I very much do not want so many copies of one piece of code.

Screenshot of openQA web UI showing GNOME Tour

I mentioned this in the QA call and Oli Kurz, who is the openQA product owner at openSUSE, proposed that we put the setup logic directly into os-autoinst, which is openQA’s test runner. The os-autoinst code has a bare ‘basetest’ module which must be customised for the OS under test. Each distro maintains their own infrastructure on top of that to wait for the desktop to start, log in as a user, and so on.

Since most of us test Linux, we can reasonably add a base class specific to Linux, and some further helpers for systemd-based OSes. I love this idea, as we could then share improvements between all the different QA teams.

So the base test class can be extended, but how do we document its capabilities? I find openQA’s existing documentation pretty overwhelming as a single 50,000 word document. It’s not feasible for me to totally rework the documentation, but if we’re going to collaborate upstream then we need to have some way to document the new base classes.

Of course I also wrote some GNOME specific documentation for QA; but hidden docs like this are doomed to become obsolete. I began adding a section on testing to the GNOME developer guide, but I’ve had no feedback at all on the merge request, so this effort seems like a dead end.

So what should we do to make the QA infrastructure easier to understand? Let me know your ideas below.

Swans on a canal at sunset

Looking at the problem from another angle, we still lack a collective understanding of what openQA is and why you might use it. As a small step towards making this clearer, I wrote a comparison of four testing tools which you can read here. And at Oli’s suggestion I proposed a new Wikipedia page for openQA.

Screenshot of Draft:OpenQA page from Wikipedia

Please suggest changes here or in the openQA matrix channel. If you’re reading this and are a Wikipedia reviewer, then I would greatly appreciate a review so we can publish the new page. We could then also add openQA to the Wikipedia “Comparison of GUI testing tools”. Through small efforts like this we can hopefully reduce how much documentation is needed on the GNOME side, as we won’t need to start at “what even is openQA”.

I have a lot more to say about documentation but that will have to wait for next month. Enjoy the festive season and I hope your 2025 gets off to a good start!

Status update, 21/11/2024

A month of low energy here. My work day is spent on a corporate project which I can’t talk about due to NDAs, although I can say it’s largely grunt work at the moment.

I’m not much of a gamer but sometimes you need a dose of solid fun. I finally started playing through Age of Empires II: The Conquerors, which has aged 24 years at this point, like a fine whisky. I played the original AoE when it came out, and I loved the graphics and the scenarios but got burned out by how dumb the units would be. All that was greatly improved in the second edition; although your guys will still sometimes wander off to chip away single-handedly at an enemy castle with their tiny sword, the new keyboard shortcuts make it less frustrating to micro-manage an army. I guess this is old news for everyone except me but, what a game.

Screenshot of Age of Empires II

I’m preparing some QA related talk submissions for FOSDEM 2025. I haven’t had time or energy to work on QA testing in GNOME, but I still have a clear idea of how we can move forwards, and I’ll keep talking about it a little longer to see if we can really go somewhere. There is still a small community joining our monthly call which gives a certain momentum to the project.

In terms of search, unfortunately I don’t feel much momentum around this at the moment. Besides the heroic contributions from Carlos, we did get a new IDE from Demigod’s and Rachel’s summer of code project. Somehow that hasn’t even made its way into Fedora 41, despite being part of the latest GNOME 47 release, so it’s still tricky to use it in demos. I looked at submitting some desktop search related talk to FOSDEM but there’s not a single dev room I can see where a proposal would even fit. There we are.

One final thought. This was a pretty engaging 20 minute talk by Cabel Sasser on the topic of a bizarre mural he saw at a McDonalds burger restaurant in Washington, USA. I recommend watching it if you haven’t, but you can also jump straight to the website about the mural’s artist, Wes Cook.

Photo of Wes Cook mural in McDonalds, Centralia

Something jumped out at me in the middle of the talk when he said “We all want to be seen.” Most folk want some recognition of the work we do and some understanding. In the software industry it’s very difficult because what we do is so specialised. But we’re now at a point where graphical desktops have been mainstream for nearly 30 years. Everyone uses one in their job. Smartphones are 15 years old and the tech isn’t hugely evolving.

A lot of this stuff is based on open source projects with 15+ year histories, and the people who can tell the stories of how they were made and how they work are all still around and still active.

It could be worth spending some effort recognising what we have, and talking about it, and the people who make it happen.

(If only we could have a desktop room at FOSDEM again…)