January 16, 2026
Filed under: tech»llm
Eating Paste
Every time I open up a new spreadsheet in Google Sheets, a task that I perform for my job two or three times a day, it shoves a toolbar into the right side of the screen that cannot be disabled with a large blue button. "Help me create a table," the button says, and underneath it lists some of the incredible "AI enhanced" tables that I can insert, such as:
- "Feedback collection," which contains fake comments from a Google Form that doesn't exist, and then provides prompts for an LLM to classify them as positive or negative.
- "Ad creation," in which I can type a product type and an audience, and then Gemini will generate insipid ad copy and social blurbs for them.
- "Call summary," which includes notes for calls that never happened and summaries of those notes. Importantly, the notes themselves are not very long, although they've been written with an excess of verbiage to make "summarizing a single paragraph" seem more reasonable.
This persists even after I've pasted data into the workbook, except then it will apparently replace my actual numbers. So I get to close this panel every single time. It is deeply infuriating.
In the blank tab, meanwhile, a prompt lets me know that I can "type =AI to insert a Gemini prompt into any cell." I've literally never done this, but the docs for the function let me know that I can use this to search the web for information, generate a slogan, or categorize addresses. They also warn me that I shouldn't rely on these features for professional advice, and that they may be inaccurate, which strikes me as a fundamental misunderstanding of the two most important goals for a spreadsheet (i.e., that it is a professional tool for achieving accurate results).
A lot of ink has been spilled on how wasteful LLMs are, how they're based on training data taken without consent and remixed without attribution, their cost to labor, their biases and harmful effects on mental health, and of course the grotesque and grating smoothness of their prose. But it also cannot be stressed enough how stupid their integration into tools like Sheets or Docs has been.
In so many of these cases, the LLM integration is what Jürgen "tante" Geuter refers to as "tool-like" or "makeshifts at best": the =AI formula is a kind of paste being applied to the interface in lieu of having actual purpose-built mechanisms for a given task. For example, sentiment analysis is a well-known problem that can be solved with natural language processing relatively cheaply, but instead of offering =SENTIMENT as a well-scoped and deterministic formula, Google has chosen just to pipe cells into a chatbot.
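For a sense of what a purpose-built version could look like, here is a rough sketch of a custom Apps Script function wrapping the Cloud Natural Language API. The =SENTIMENT name and the NL_API_KEY property are my inventions, not something Google ships:

```js
// Hypothetical =SENTIMENT custom function for Sheets, written in Apps Script.
// Assumes a Cloud Natural Language API key stored in script properties.
function SENTIMENT(text) {
  const key = PropertiesService.getScriptProperties().getProperty("NL_API_KEY");
  const response = UrlFetchApp.fetch(
    "https://language.googleapis.com/v1/documents:analyzeSentiment?key=" + key,
    {
      method: "post",
      contentType: "application/json",
      payload: JSON.stringify({
        document: { type: "PLAIN_TEXT", content: text },
      }),
    }
  );
  // documentSentiment.score runs from -1 (negative) to 1 (positive)
  return JSON.parse(response.getContentText()).documentSentiment.score;
}
```

Scoped, auditable, and cheap enough to run down an entire column.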
Similarly, when the docs describe categorizing a list of pizza places by NYC borough using their address and neighborhood, there are tools that would be useful for that — and which are already available, in some form, in Sheets! Google Apps Script provides programmatic access to the Maps geocoder to turn address strings into lists of locations at decreasing levels of granularity, which could include sub-city geography. Adding =GEOCODE could have both a high level of accuracy and other information that could be used to verify the result. There is no reason to get a chatbot involved in this! Just expose the functionality that already exists!
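The same goes here: a thin wrapper over the existing geocoder would do it. A sketch of a hypothetical =GEOCODE custom function (Maps.newGeocoder() is real Apps Script; the formula name is not):

```js
// Hypothetical =GEOCODE custom function, leaning on the geocoder that
// Apps Script already exposes through the Maps service.
function GEOCODE(address) {
  const response = Maps.newGeocoder().geocode(address);
  if (response.status != "OK" || !response.results.length) {
    return "NOT FOUND";
  }
  const result = response.results[0];
  // Sub-city geography (like an NYC borough) shows up as a sublocality component
  const sublocality = result.address_components.find(
    (component) => component.types.includes("sublocality")
  );
  return sublocality ? sublocality.long_name : result.formatted_address;
}
```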
The reason I find the intrusion of Gemini and LLM assistance into Sheets so particularly frustrating is not just that I'm a giant tabular data nerd. It's also that prior to this year, Sheets and Excel had started quietly expanding the underpinnings of their formula language in exciting new ways. =LET, =LAMBDA, =MAP, and =REDUCE build on the work that had started with =FILTER to create a more consistent model for how formulas handle array values, with impressive results. As the formula language became more accommodating to functional programming, it wasn't hard to imagine how other services could be exposed directly in the sheet, without requiring Apps Script or Visual Basic.
Instead, what Google has done is smear new features into a "prompt" formula, a completely opaque tool that takes in and returns arbitrary text in unpredictable ways. This fits well with the pseudo-mystical belief system of AI proponents, who treat LLMs as a "miracle machine" that can be applied to any problem if you just let it churn long enough. But it does nothing to make spreadsheets a better tool, an object that has been shaped intentionally for accomplishing discrete and useful tasks, and with which a person can gain expertise. It is as though someone wished that Excel's notoriously fickle date handling behavior could be moved into a function, and the monkey's paw curled.
January 6, 2026
Filed under: gaming»software
2025 in Games
For the first half of the year, I didn't have a ton of time to play or watch much of anything — Spanish lessons for our student visas kept me busy, and between that and work my concentration was shot. So that meant I spent a lot of time in stuff that could be played in bursts without continuity: Game Boy pinball and fighting games.
Street Fighter 6 had really grabbed me in our last year in Chicago. But when I switched over to Linux at the start of 2025, SF6 didn't always play nicely with my older Nvidia card, and I was a little tired of its metagame anyway (there's only so many times you can watch the same Ken combo), so I started learning Guilty Gear Strive in March.
Strive is a very different game, with its own quirks and frustrations: where SF6 has strong system mechanics that tend to flatten out variations in play style, Strive has thirty-odd characters who each break the rules in some way. It also has a lot more complicated options for interrupting offense and spending meter. The result is a much more dynamic game, with the caveat that some matchups (Happy Chaos, Faust, I-no) resemble trolling more than actual competition. I like it, and the changes for the upcoming 2.0 version sound promising, but I'd love something (Marvel Tokon?) that's somewhat of a midpoint between the two philosophies.
I did fit in a few smaller single-player titles between classes and work. Blade Chimera is a pretty good metroidvania from Team Ladybug, although if you haven't played Deedlit in Wonder Labyrinth, start with that instead. UFO 50 was a great value for the money: fifty NES-style games from a fictional developer, so even if they're not all good, you're guaranteed a few hits that match your taste (for me, that's Party House, Elfazar's Hat, and Overbold).
I also surprised myself by completing New Game++ in Armored Core 6 after bouncing off it pretty hard in 2024 — I think it does an astonishingly bad job of providing guidance on its own mechanics, which I know is FromSoft's whole deal but there's still a reason that the only game of theirs that I really connect with is Sekiro. AC6 is fine: I like the parts that remind me of Virtual On, mostly.
Three indie games stand out on PC once I had time and energy for them. In May, Blendo Games finally released Skin Deep, their immersive sim in which you rescue cats from space pirates in a series of Die Hard-inspired slapstick scenarios. I love pretty much everything Brendon Chung has ever made, and this is no exception: it's funny, surprisingly touching, and just a little bit janky in a way that adds texture, not frustration. Well worth the eight year wait since Quadrilateral Cowboy.
Metro Gravity is two Rush games in one: mixing the "any direction can be down" mechanic from Gravity Rush with the beat-matching combat of Hi-Fi Rush, all coated with a strong PS2 aesthetic. I enjoyed this quite a bit, although not quite enough to do all the fiddly challenges for extra costumes, and I do think the story gets a little over its skis at the end. Still a pretty incredible first game from a solo developer, and I'm excited to see what he does next.
Third, I tried Wanderstop toward the end of the year, although I don't know if I'll finish it. People love this for its funny dialog and exploration of burnout, and if I'd been playing it in March, that probably would have spoken to me more deeply. But in December, now mostly recovered from chronic sleep deprivation and still happily employed at the best job I've ever had, I just wasn't in that psychological space anymore.
In August I bought a Switch 2, which was its own little minigame since Amazon Spain was convinced I was committing credit card fraud and kept cancelling my order. This was partly a reward for finishing my classes, and partly a way to get out of my office chair. The vast majority of my time on it was spent in Silksong, which is a masterpiece that I never want to look at again. I also played through Donkey Kong Bananza (fun, but disposable), Mario Kart World (same), and Star Wars: Outlaws (genuinely far better than it has any right to be).
However, two lesser-known Switch titles probably deserve special notice. One of these is Absolum, a roguelike brawler from the team that made Streets of Rage 4 a few years back. It's not particularly well-balanced, but it feels phenomenal to play, and it's beautiful to look at. People don't make titles in this genre much any more, and there are good reasons for that, but I'm glad Guard Crush is keeping it alive.
The other is Demonschool, a tactical puzzler with a self-explanatory name. This reminds me of Into the Breach: it's a little more forgiving with the rewind, but each turn asks you to set up a chain of actions with deterministic results, and then you're graded on how quickly you dispatched the required number of enemies and how many people on your team survived. It's good, and while the writing at the macro level is not particularly great (and the game itself is just a little bit too long), on a line-by-line basis it's one of the funnier things I played this year.
Finally, thirty-two years after it was initially released, I've beaten Final Fantasy VI, at which point Belle immediately confiscated the Analogue Pocket so she could play it again herself. I can see why a lot of people love this game, but it's not going to supplant FFXIII as the one I'm irrationally attached to. Still, happy to take that one off the bucket list.
My hope for 2026 is that it'll be a chance to catch up similarly in the PC space, having now updated my GPU to something a little more modern, if not quite cutting-edge. As someone who is trying very hard to keep LLM slop out of my life, a strong rule of thumb is going to be to stick to games that were in development prior to 2023 — the equivalent of building with low-background steel — or were developed by teams with clear disclosure policies around AI usage. I don't like that this is something I have to think about, but unfortunately we live in a nightmare run by oligarchs obsessed with turning fossil fuels into LinkedIn posts. On the other hand, that gives me roughly one-and-a-half console generations of entertainment to keep me busy until the bubble finally pops.
August 14, 2025
Filed under: tech»web
Modern Solutions
Twelve (!) years ago, when I started writing what would turn into the interactive template for the Seattle Times, NPR, and Civic News (not to mention a few others), I needed a way to cue up its various functions from the command line. As this was pre-Webpack, this was an interesting space with a lot of innovation going on. I went with Grunt, instead of Gulp, Broccoli, or Brunch.
Since at this point there are probably readers who have never used (or even heard of) these tools, it's worth talking about the design of Grunt a bit. The basic concept is that your build process can be organized into tasks, and those tasks can be composed into larger pipelines. So I can transpile client-side scripts with grunt bundle, or run the CSS preprocessor with grunt less, but I can also run both with grunt bundle less. More importantly, I can define a new meta-task that lumps these together with other build steps into a single pass: grunt static in the rig loads data, applies it to HTML templating, builds JavaScript, assembles CSS, and copies assets to the build folder for publishing.
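In practice, a Gruntfile for that kind of setup is mostly a list of small task registrations plus the meta-tasks that sequence them. A rough sketch, with illustrative task names rather than the template's real ones:

```js
// Gruntfile.js -- small tasks, composed into bigger pipelines
module.exports = function (grunt) {
  grunt.initConfig({
    // per-task configuration (paths, options) lives here
  });

  grunt.registerTask("bundle", "Transpile client-side JS", function () {
    // build scripts
  });

  grunt.registerTask("less", "Run the CSS preprocessor", function () {
    // build styles
  });

  grunt.registerTask("copy", "Move static assets to the build folder", function () {
    // copy assets
  });

  // Meta-tasks are just lists of other tasks, run in order
  grunt.registerTask("static", ["bundle", "less", "copy"]);
  grunt.registerTask("default", ["static"]);
};
```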
Grunt was originally introduced and pitched to many developers based on its plugin ecosystem, but I never used those very much. Instead, as the interactive template grew and adapted to new projects (e.g., scraping title data for the NPR Book Concierge, baking out election results, connecting to Google office apps), the real value became the way that it imagined the build process as a vocabulary. Grunt ended up being almost Forth-like: a system in which you create very small "words" of functionality (which are easy to create and maintain, due to their limited scope) and then use the tool to sequence and combine them toward more complex goals.
In the wider dev culture, Grunt was eclipsed by Webpack, which was A) boosted by the popularity of React and B) provided an all-in-one solution for front-end build tooling (as long as you didn't mind debugging a truly incomprehensible configuration file). Once out of fashion, Grunt lost a lot of energy. The last significant update in the source repo was more than three years ago, and the last real feature release was about a decade back. It shipped a copy of Coffeescript all the way up through 2020! I am personally very content to use tools that are tried and true, and Grunt has continued to work well without complaints for all this time, but there's a looming sense that at some point either Apple or Node (or both) will ship an update that breaks it, and that'll be an awkward week for me.
Party like it's 2014
Unfortunately, none of the tools that replaced it — Webpack, Vite, Parcel, npm scripts, etc. — really do what Grunt was doing. They're very good if you want to build out a single-page application on a few static routes, especially if you really only care about the JS code path. But they're not designed to be a general-purpose task runner for a static site generator, and they certainly don't offer the same kind of composability. So I've started writing Heist, a kind of minimal subset of Grunt that takes advantage of all the advances in the Node runtime over the intervening decade.
The JavaScript community loves to come up with new approaches to problem solving — new philosophies and theories of architecture — which it deploys via increasingly complex runtimes and compilation processes. These tendencies are fractal: they show up at the broad level, as with JSX or "component models," and at the micro level, as with the endless parade of state management solutions. I don't think this innovation is necessarily bad, but it perpetuates the pattern of what happened with Grunt: not only are old tools abandoned when a replacement is introduced, but their approaches are discarded as well.
But what if, as with Heist, we retained the design and just modernized the code? Today's built-in JavaScript environment is monumentally more capable both on the browser and the server than it was a decade ago. Instead of building around an entirely new fundamental conceit, forcing developers to abandon trusted paradigms in favor of blowing bong hits in the cat's face, we start from a familiar concept with a radically smaller and more maintainable codebase. This turns out to be surprisingly compact: the core of Grunt task management, plus file system searches, ends up being about 160 lines of code when we take advantage of modern Node.
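To give a sense of the shape (this is a sketch, not Heist's actual source): a task is either an async function or a list of other task names, and running a pipeline is just walking that structure in order.

```js
// run.js -- the essential Grunt idea in modern Node: a registry of named
// tasks, where a "task" is either an async function or a list of task names.
const registry = new Map();

export function task(name, runner) {
  registry.set(name, runner);
}

export async function run(names) {
  for (const name of names) {
    const entry = registry.get(name);
    if (!entry) throw new Error(`No task registered as "${name}"`);
    if (Array.isArray(entry)) {
      // meta-task: recurse through its members in order
      await run(entry);
    } else {
      await entry();
    }
  }
}

// usage:
// task("bundle", async () => { /* build JS */ });
// task("less", async () => { /* build CSS */ });
// task("static", ["bundle", "less"]);
// await run(process.argv.slice(2));
```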
I catch myself doing this fairly often. Heist is Grunt built on a modern foundation. Skelethon is Backbone-style MVC and Conspiracy is HTML-based templating, again on a modern foundation. The interactive template itself (as well as my textbook on interactive graphics) builds out a piecemeal jQuery from browser primitives. I've also explored building new patterns (like signals) using classic OO techniques. Some of this is no doubt nostalgia — the same way the best music was always released your senior year of high school, the best code patterns are the ones you learned at your first professional development gig — but I also have a strong feeling that those paradigms weren't broken, and don't deserve to be tossed aside.
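The signals experiment is a good example of how little machinery some of these "new" patterns actually need. A minimal sketch, assuming nothing fancier than classes and EventTarget (this is not the actual library code):

```js
// A signal as a plain class: explicit subscription, no compiler magic.
class Signal extends EventTarget {
  #value;

  constructor(initial) {
    super();
    this.#value = initial;
  }

  get value() {
    return this.#value;
  }

  set value(next) {
    this.#value = next;
    this.dispatchEvent(new Event("change"));
  }

  // calls back immediately with the current value, returns an unsubscribe function
  watch(callback) {
    const handler = () => callback(this.#value);
    this.addEventListener("change", handler);
    handler();
    return () => this.removeEventListener("change", handler);
  }
}

const count = new Signal(0);
const stop = count.watch((v) => console.log("count is now", v));
count.value = 1; // logs "count is now 1"
stop();
```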
A little friction, as a treat
The challenge is distinguishing which traditional patterns were merely a "best worst choice" for the time of their design (think of Hyperscript-style h("div") DOM construction) and which ones are still useful. I wish I had hard and fast rules for this, but the one I come back to most often is "good boilerplate." Isn't all boilerplate bad? I would argue no, it's actually deeply important.
Take my favorite web development punching bag: React hooks, which are violently anti-boilerplate. Lots of developers love hooks. They provide a way to manage persistent state in a UI made up only of nested functions (if you're thinking about explaining this to a student, alarm bells should already be going off). And they seem kind of magical when you use them, because somehow they're managing to track a value consistently across function invocations using only a local variable, which (if you know anything about JavaScript scope) seems like it should be impossible.
The reason for that, under the hood, is that hooks are tracked using a linear list of value slots that is populated and accessed whenever React renders. That means that the order and frequency of those calls must stay constant for it to work, and they can only be called from inside a function chain initiated by the render process. You can't use hooks in regular code, and you cannot put them inside any dynamic code path, such as loops or conditionals. There's a whole set of rules for this, plus lint plugins to help flag misuse. All of which seems self-evidently insane to me, but it does eliminate the need to type out the class definition.
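A toy version makes the trick (and the fragility) visible. This is not React's implementation, just the slot-array idea in about twenty lines:

```js
// Toy useState: values live in a slot array indexed by call order,
// and the cursor resets at the start of every render.
let slots = [];
let cursor = 0;

function useState(initial) {
  const index = cursor++;
  if (slots.length <= index) slots[index] = initial;
  const setState = (value) => {
    slots[index] = value;
    render(); // re-run the component against the same slots
  };
  return [slots[index], setState];
}

function render() {
  cursor = 0; // replay from the top; hooks must fire in the same order every time
  return Counter();
}

function Counter() {
  const [count, setCount] = useState(0); // always lands in slot 0
  const [label] = useState("clicks");    // always lands in slot 1
  console.log(label, count);
  return () => setCount(count + 1);
}

const increment = render(); // "clicks 0"
increment();                // re-renders: "clicks 1"
```

Put one of those calls inside an if statement and slot 0 suddenly means something different on the next render, which is the entire reason the "rules of hooks" exist.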
Now, I don't like writing boilerplate code any more than anyone else does, and I generally think it's a sign that you need to think about your abstractions more clearly. But I would argue that a little boilerplate is good for you. Similar to the "framework vs. library" distinction, boilerplate often implies that you remain in control of execution, and it provides a strong example for structure if it's well-designed. Arguably, the biggest problem with React's class components wasn't the classes themselves, it was that the lifecycle methods were garbage — everything they've done since has been downstream of those early bad decisions.
Not to mention, when we think about younger developers, it's good for them not only to have that structural guidance, but also to have more opportunities to engage with the language on a practical basis. This is, incidentally, another problem with learning from AI (another "boilerplate avoidance" tool): if you argue, as many have, that LLMs can handle the boring things like "writing a loop" or "defining a function," it means that junior developers stop engaging with the syntax routinely, which means they likely fail to develop an intuitive sense of how the language actually works, in much the same way that Google Translate does not help you develop actual fluency in Spanish.
There's a rich seam of developer experience that's available to us here, locked behind that little bit of boilerplate. If we think of friction not as a thing to be absolutely avoided, but as an interesting part of the rhetorical and design space — something that shapes the directions that users will take when they use the code in anger — then choosing where to deploy it becomes more interesting. In reality, all code has this friction, but older code patterns tend to make it explicit (subclassing, events and other engagement with the runtime, multiple syntax constructs) and a lot of newer code is implicit (rules for when constructs can be used, use only of syntax that can fit into a single expression or function).
Notably, JavaScript as a language offers us much greater tools for managing productive boilerplate. We have classes with access control modifiers now, proxies (albeit with performance caveats), native modules, Maps and Sets, custom element lifecycles... If you want to implement patterns that were originally designed in other (more full-featured) languages, like MVC UI, we are in much better shape to do that now than we were in 2014. We're not inventing things from first principles anymore (Alex Russell is fond of noting that React is a legacy technology built for a much older, less reliable browser environment, which explains why it ships so much redundant code to this day).
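As a concrete illustration of what that modern, productive boilerplate can look like, here is the skeleton of a custom element using private fields and lifecycle callbacks. The element name is made up, but every construct in it is standard platform JavaScript:

```js
// A bare-bones custom element: the boilerplate doubles as documentation
// for where setup, teardown, and attribute handling belong.
class DetailsToggle extends HTMLElement {
  #open = false; // real private state, enforced by the language

  static observedAttributes = ["open"];

  connectedCallback() {
    this.addEventListener("click", this.#toggle);
  }

  disconnectedCallback() {
    this.removeEventListener("click", this.#toggle);
  }

  attributeChangedCallback(name, oldValue, newValue) {
    if (name == "open") this.#open = newValue != null;
  }

  #toggle = () => {
    this.#open = !this.#open;
    this.toggleAttribute("open", this.#open);
  };
}

customElements.define("details-toggle", DetailsToggle);
```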
So by evaluating the boilerplate of older code patterns, we can start to distinguish between the ones that are using it to offer guidance or direction, and those that simply considered it a cost you paid for (at the time) higher performance or integration with their framework. We can let the latter die off without regret. But in the case of the former, whether the original was web-related or from native development, it's not just nostalgia at work. There's value that's worth reconsidering, now that our tools and platform give us greater capabilities for their implementation.
August 7, 2025
Filed under: tech»education
Tweak sauce
It's intern season, which means I spend a lot of time explaining "this is how a shell works" and "this is how you save time by piping a CSV through grep." That also turns into lessons about how zsh and Bash are different, and how to install newer versions of all the tools that Apple leaves stranded in 2005 because they don't like the GPL. It's frustrating, but educational, for all concerned.
One thing that we don't cover, in these pairing sessions, is any kind of shell customization. This is partly because it's already hard enough getting young journalists on board with the command line, without introducing variations into the experience. But it's also a long-standing philosophy of mine, which is: assume the system could experience catastrophic failure at any time, so optimize for garbage tools you'll always have, instead of adapting to a curated ecological niche. Be a trash panda, not a panda-panda.
Among other things, this means I try to use unmodified configurations for:
- The system's editors (including vim and nano)
- The shell itself (Apple switching to zsh has muddled this some)
- Core utilities like grep, find, sed, and awk
- Special terminals or access utilities (including tmux)
- General keyboard remapping, like putting Esc on the caps lock key
But over the course of my career, I've spent a lot of time logged into new computers: fresh virtual machines, debugging for coworkers and interns, replacements for broken laptops, shells on devices that are secretly running Linux under the hood, and so on. Some of these are the result of disaster and some are just how infrastructure works now. Either way, it makes sense to learn to be immediately productive in an unpredictable environment.
And for all its frustrations, there's also something to be said for learning a set of tools with a lifespan measured in decades. These tools are often not as good as you'd like them to be, but they're also often very efficient, versatile, and so deeply ingrained in the culture that they're unlikely to ever go away. Sed is always going to be there, a thought that is equally comforting and depressing. And if they are broken, it will be in reliable ways that persist over time — you do not have to worry about someone pushing an update that suddenly alters how grep searches, which is a relief in an age of auto-updated everything else.
Of course, this is a lot to unload on an intern who's still trying to figure out why there are at least four different "quit" commands that they need to learn. Trust me, I tell them. Either it'll all make sense eventually or you'll realize that your manager cannot be trusted—either way, it's good preparation for a data journalism career ahead.
May 8, 2025
Filed under: tech»os
M'Linux
From The Verge:
Windows 11 is also getting a variety of new AI features, including an AI agent baked into the Windows settings menu; more Click to Do text and image actions; AI editing features for Paint, Photos, and the Snipping Tool; Copilot Vision visual search; improved Windows Search; rich image descriptions for Narrator; AI writing functions in Notepad; and AI actions from within File Explorer. In its detailed blog post Microsoft says the AI features are designed to “make our experiences more intuitive, more accessible, and ultimately more useful.”

I have been using Windows (or MS-DOS, even) for my entire life. It hasn't always been a pleasant experience, but it has generally worked and (as someone who does a fair amount of gaming) ran the specific software that I wanted to run, with a high level of backward compatibility. But over the last few years, it has become clear that Microsoft and I are no longer seeing eye to eye on what my computer should be doing.
I think it should be doing the tasks that I ask for predictably and reliably, and Microsoft thinks that it should be inserting semi-randomized chatbots into every nook and cranny of the system, when it's not taking screenshots of everything I do and running them through OCR. This is in addition to a series of UI tweaks that have made Windows increasingly unusable, like the weirdly-centered taskbar or jamming ads into the start menu.
So during my holiday break this January, I started taking steps to make 2025 my own personal Year of Linux on the Desktop. I'd already been using Xubuntu to keep an old 2009-era Thinkpad viable, so I knew it could work for my professional tools and workflows, but I'd resisted making the shift on my tower PC until the end of Windows 10 support gave me a deadline to meet.
To make the switch easier, I bought a second hard drive solely for a Fedora installation, keeping the original drive in the machine unchanged. With this setup, I could switch between the two operating systems at boot, gradually moving over to Linux for longer and longer periods, and pulling files off the old drive as necessary. As of this week, I haven't booted back into Windows for a couple of months, and I thought it might be useful to write about what the experience has been like, for anyone considering the same migration.
Getting started
I picked Fedora since it was often recommended as the "no-nonsense" distribution. I actually tried a few distros, and went through a few reinstalls, before everything was functional at the basic hardware level. In particular, the Nvidia drivers for my GPU (a well-loved GTX 1070) were obnoxious to install and upgrade reliably. Also, partway through the process, my motherboard blew out (I'm assuming for unrelated reasons, probably due to being carted across the Atlantic in a badly-padded suitcase) and had to be replaced, including a new CPU (AMD this time around).
Finally, I needed to manually disable the USB wake functionality for my mouse, which is apparently chattier than Linux likes when it's trying to sleep. This fits my general expectations from prior experience, which was that 95% of my hardware would be fine and 5% would have some screwy but generally surmountable problems (it was certainly miles easier than debugging sleep issues on Windows has been for me).
At the software level, Fedora generally does feel more cohesive, in ways both big and small, compared to the Ubuntu systems that I've used in the past. For example, the logo and progress graphics shown during initial boot or upgrade are more polished, which seems minor but contributes to confidence that corners are not being cut on larger issues either. I prefer Flatpak to Snap, which was another factor in its favor. And of course, they're not trying to sell me a "Pro" service subscription, which I appreciate.
It does have some quirks, mostly around its software sources: by default Fedora only comes with "free" (read: non-patent encumbered) repositories enabled. You need to turn on "non-free" in order to install Steam or good video drivers, and in some cases you'll want to reinstall applications like FFmpeg to use the non-free version, unless you really like having choppy, broken video playback. You also need non-free repos for Blu-ray support, which is important to me.
With the system in a solid working state, I disabled automatic updates. I'll still run upgrades, of course, but I can do it on my own schedule. This is part of my general philosophy with computing going forward, which is that I'm through with software that doesn't respect the user's right to informed consent. Almost everything I do is either on the web platform (which can handle a little lag) or offline; I don't need to be updating every time a UI gets revamped so that a product manager can get a raise.
WIMP
My personal opinion is that user interface design pretty much peaked with Windows 7 and it's all been downhill from there. I want to be able to snap windows to the screen edge and tile them, search and run applications from the OS menu, and see the names of the programs I'm running in the task bar. I do not want to have big media popups whenever I change the volume. I do not want a "notification center" that serves as the junk drawer for old chat messages. I do not want recommendations or ads anywhere on a computer that I paid for with my own money.
I do not want "AI" anywhere on the machine, at all.
Keeping all that in mind, I went with KDE for the default window manager, since it seemed like the best modern "Windows-ish" option (I like XFCE, but it's always felt clunky in terms of keyboard shortcuts and settings, and Gnome has a real case of MacOS envy that I've never cared for). A few tweaks have put everything pretty much the way I like it — mostly.
The primary catch, which will be unsurprising to any Linux user, has been multi-monitor support. KDE handles rendering to my second screen just fine, especially once I got the Nvidia driver running to support Displayport daisy-chaining. But it seems clear that testing on multi-monitor setups is not something that Linux devs do very much. For example, the taskbar on each monitor is a separate "panel" with its own configuration and application order — if I drag Firefox to the leftmost position on the first screen, I have to repeat this on the second if I want them to be consistent. The result has been that I've largely stopped re-ordering items in the task bar so that I don't obsess over it, which is not ideal but ultimately doesn't actually have any impact on my workflow.
Window positioning also sometimes requires intervention. For example, I typically keep the picture-in-picture video window for Firefox on my second monitor, so that it's basically a "watch in background" button. But KDE initially insisted on automatically placing the pop-up player directly over the Firefox tab, until I specifically told it to remember the last position and size of a browser window with a specific title. I don't know why that's not the default. Of course, some applications do remember where they were last located, which I think they're doing for themselves instead of letting the OS handle it, because Linux UI has a legendary "no gods, no masters" approach to window management that I think only got worse with Wayland.
My favorite thing about the GUI is actually not graphical at all — it's the ability to run an SSH daemon on our local network. Every now and then something (usually a game running full-screen) crashes in a way that captures all input and prevents closing the misbehaving application. I used to fix these crashes by using some weird kernel-level keyboard shortcuts that bypass KDE entirely, causing their own oddities along the way. But then I realized I can just open a terminal on my phone and kill the process from there. This is funny, and stupid, and incredibly useful, all at the same time.
Multi-monitor gripes aside, window management has pretty much been a non-issue. It stays out of my way and most of my GUI muscle memory still works. I suspect that in part this is because I've always been a person who didn't really customize the defaults very much, whether on Windows, Linux, or Chrome OS (and on MacOS I mostly only installed tweaks to get it to the standards of the others). So I've never developed any really esoteric habits that I needed to unlearn.
Applications
At the end of the day, software is what actually matters. As long as I can actually run the programs I need — and I am not, in this regard, a person with particularly esoteric tastes — my experience will probably be fine.
I spend roughly 90% of my time in Firefox. Unsurprisingly, it works exactly as I would expect, with the exception of an annoying keyboard shortcut change that I wrote an add-on to fix. Both Firefox and Chrome have been able to see the camera and microphone for video chats without any issues, although there were issues with WebUSB, so I've been running Via from its older AppImage package.
Sublime also worked out of the box. For backups, I'm using Deja Dup instead of Acronis. Bitwarden came from Flatpak. Mozilla VPN is only officially supported on Ubuntu, but you can compile it or (and this is what I did) you can grab the RPM file from the releases on the GitHub repo. I will have to update this manually, but it hasn't been an issue so far.
For e-mail, I had been using a copy of Outlook 2007 for the last two decades. Obviously, it wouldn't be directly compatible and this was a good time to upgrade anyway. It took a little while to figure out the tools needed to convert my old .PST files into something that Thunderbird could import, but I only needed to do that once, and then it's been pretty smooth sailing. For the rest of my office suite, LibreOffice works, but at this point I'm much more comfortable in Google Sheets, so there wasn't much migration cost there.
The truly impressive thing has been running Steam. Of course I knew that Wine had existed for doing Windows emulation, and that Valve had put in a lot of effort to make applications run on their Linux-based handheld. But it's one thing to know that in theory, and another to see pretty much everything in my library run pretty much flawlessly under Proton. The one exception — literally the one I've found so far — is Street Fighter 6, which starts out in good shape and then at some point the shaders lose coherence and turn the screen into one giant chaotic polygon soup. As a result, I've been playing less SF6, which is probably not a bad thing for my sleep habits, and does mean that I'm finally getting around to games I've neglected, like UFO 50 and the just-released Skin Deep.
Sadly, my ancient copy of Photoshop 6.0 has issues under the current versions of Wine. Since I refuse to use either a newer Adobe product or the badly-named open-source image editor, this may become a longer-term project.
Of course, as a web developer, the truly nice thing has been getting access to Linux's tooling support without having to run WSL or see what would function under Git's MSYS shell. Being able to run Poppler, or FFmpeg, or Python, without jumping through any of those hoops is not a revolution, since working on Windows for such a long time has made me pretty good at hoop-jumping. But it's very much appreciated.
All of which is to say
Would I recommend this to an ordinary person, like my dad? Probably not. Once the system is running, it's been largely stable, but getting it there was still not frictionless. If you have closed-source devices that you're plugging in, or you need a specific proprietary application, I wouldn't want to take it on faith that those things will work (e.g., my much-loved Zune HD can be viewed in the file explorer but I can't add music to it). And when things break I'm still sometimes digging into a text file from the terminal to fix them.
On the other hand, that kind of transparency — being able to deeply configure the system from a text file — is exactly what I want from my computer these days. Linux has gotten good enough that day-to-day I'm not spending a lot of time recompiling or manually tweaking (i.e., I'm not doing sysadmin work as a hobby), but if I need to change something, I have that option.
Meanwhile, nothing is being installed without my permission. Copilot is not lurking on the horizon, and I don't have to cringe whenever Windows Update pops up a notification or pesters me to update to Windows 11. People complain about systemd or Wayland, but they feel like things I can conceptualize by comparison, and that I can access on my own terms. It's not a perfect system, but for the first time in a long time, it feels like mine, and that's well worth the occasional inconvenience.
July 18, 2024
Filed under: journalism»data
A Letter to Fellow Data Journalists about "AI"
We need to talk, friends. Things have gotten weird out there, and you're not dealing with it well at all.
I'm in a lot of data journalist social spaces, and a couple of years ago I started to notice a lot of people starting to use large language models for things that, bluntly, didn't make any sense. For example, in response to a question about converting between JSON and CSV, someone would inevitably pipe up and say "I always just ask ChatGPT to do this," meaning that instead of performing an actual transfer between two fully machine-readable and well-supported formats, they would just paste the whole thing into a prompt window and hope that the statistics were on their side.
I thought this was a joke the first time I saw it. But it happened again and again, and gradually I realized that there's an entire group of people — particularly younger reporters — who seem to genuinely think this is a normal thing to do, not to mention all the people relying on LLM-powered code completion. Amid the hype, there's been a gradual abdication of responsibility to "ChatGPT said" as an answer.
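For the record, the boring, deterministic version of that JSON-to-CSV transfer is a few lines of Node, and it produces the same output every time. The file names here are placeholders:

```js
// json2csv.js -- flatten an array of uniform objects into a CSV file
import { readFile, writeFile } from "node:fs/promises";

const records = JSON.parse(await readFile("records.json", "utf8"));
const columns = Object.keys(records[0]);

// quote every cell and double any embedded quotes, per RFC 4180
const escape = (value) => `"${String(value ?? "").replaceAll(`"`, `""`)}"`;

const csv = [
  columns.map(escape).join(","),
  ...records.map((row) => columns.map((key) => escape(row[key])).join(",")),
].join("\n");

await writeFile("records.csv", csv);
```

No statistics required.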
The prototypical example of this tendency is Simon Willison, a long-time wanderer across the line between tech and journalism. Willison has produced a significant amount of public output since 2020 "just asking questions" about LLMs, and wrote a post in the context of data journalism earlier this year that epitomizes both the trend of adoption and the dangers that it holds:
- He demonstrates a plugin for his Datasette exploration tool that uses an LLM to translate a question in English into a SQL query. "It deliberately makes the query visible, in the hope that technical users might be able to spot if the SQL looks like it's doing the right thing," he says. This strikes me as wildly optimistic: since joining Chalkbeat, I write SQL on a weekly basis, collaborating with a team member who has extensive database experience, and we still skip over mistakes in our own handwritten queries about a third of the time.
- Generally, the queries that he's asking the chatbot to formulate are... really simple? It's all SELECT x, y FROM table GROUP BY z in terms of complexity. These kinds of examples are seductive in the same way that front-end framework samples are: it's easy to make something look good on the database equivalent of a to-do app. They don't address the kind of architectural questions involved in real-world problems, which (coincidentally) language models are really bad at answering.
- To his credit, Simon points out a case in which he tried to use an LLM to do OCR on a scanned document, and notes that it hallucinates some details. But I don't think he's anywhere near critical enough. The chatbot not only invents an entirely new plaintiff in a medical disciplinary order, it changes the name from "Laurie Beth Krueger" to "Latoya Jackson" in what seems to me like a pretty clear case of implicit bias that's built into these tools. Someone's being punished? Better generate a Black-sounding name!
- He uses text from a web page as an example of "unstructured" data that an LLM can extract. But... it's not unstructured! It's in HTML, which is the definition of structured! And it even has meaningful markup with descriptive class names! Just scrape the page!
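That kind of scrape is a ten-line script, not a research problem. A sketch using cheerio (my choice of parser; the URL and class names are stand-ins for whatever the actual page uses):

```js
// scrape.js -- pull "unstructured" data out of markup that is, in fact,
// extremely structured
import * as cheerio from "cheerio";

const html = await (await fetch("https://example.com/listings")).text();
const $ = cheerio.load(html);

const rows = $(".listing")
  .map((i, el) => ({
    name: $(el).find(".name").text().trim(),
    address: $(el).find(".address").text().trim(),
  }))
  .get();

console.log(rows);
```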
I really started to think I was losing my mind near the end of the post, when he uploads a dataset and asks it to tell him "something interesting about this data." If you're not caught up in the AI bubble, the idea that any of these models are going to say "something interesting" is laughable. They're basically the warm, beige gunk that you have to eat when you get out of the Matrix.
More importantly, LLMs can't reason. They don't actually have opinions, or even a mental model of anything, because they're just random word generators. How is it supposed to know what is "interesting?" I know that Willison knows this, but our tendency to anthropomorphize these interactions is so strong that I think he can't help it. The ELIZA effect is a hell of a drug.
I don't really want to pick on Willison here — I think he's a much more moderate voice than this makes him sound. But the post is emblematic of countless pitch emails and conversations that I have in which these tools are presumed to be useful or interesting in a journalism context. And as someone who prides themself on producing work that is accurate, reliable, and accountable, the idea of adding a black box containing a bunch of randomized matrix operations to my process is ridiculous. That's to say nothing of the ecological impact that they have in aggregate, or the fact that they're trained on stolen data (including the work of fellow journalists).
I know what the responses to this will be, particularly for people who are using Copilot and other coding assistants, because I've heard from them when I push back on the hype: what's wrong with using the LLM to get things done? Do I really think that the answer to these kinds of problems should be "write code yourself" if a chatbot can do it for us? Does everyone really need to learn to scrape a website, or understand a file format, or use a programming language at a reasonable level of competency?
And I say: well, yes. That's the job.
But also, I think we need to be reframing the entire question. If the problem is that the pace and management of your newsroom do not give you the time to explore your options, build new skills, and produce data analysis on a reasonable schedule, the answer is not to offload your work to OpenAI and shortchange the quality of journalism in the process. The answer is to fix the broken system that is forcing you to cut corners. Comrades, you don't need a code assistant — you need a union and a better manager.
Of course your boss is thrilled that you're using an LLM to solve problems: that's easier than fixing the mismanagement that plagues newsrooms and data journalism teams, keeping us overworked and undertrained. Solving problems and learning new things is the actual fun part of this job, and it's mind-boggling to me that colleagues would rather give that up to the robots than to push back on their leadership.
Of course many managers are fine with output that's average at best (and dangerous at worst)! But why are people so eager to reduce themselves to that level? The most depressing tic that LLM users have is answering a question with "well, here's what the chatbot said in response to that" (followed closely by "I couldn't think of how to end this, so I asked the chatbot"). Have some self-respect! Speak (and code) for yourself!
Of course CEOs and CEO-wannabes are excited about LLMs being able to take over work. Their jobs are answering e-mails and trying not to make statements that will freak anyone out. Most of them could be replaced by a chatbot and nobody would even notice, and they think that's true of everyone else as well. But what we do is not so simple (Google search and Facebook content initiatives notwithstanding).
If you are a data journalist, your job is to be as correct and as precise as possible, and no more, in a world where human society is rarely correct or precise. We have spent forty years, as an industry niche, developing what Philip Meyer referred to as "precision journalism," in which we adapt the techniques of science and math to the process of reporting. I am begging you, my fellow practitioners, not to throw it away for a random token selection process. Organize, advocate for yourself, and be better than the warm oatmeal machine. Because if you act like you can be replaced by the chatbot, in this industry, I can almost guarantee that you will be.
April 4, 2024
Filed under: tech
Spam, Scam, Scale
I got my first cell phone roughly 20 years ago, a Nokia candybar with a color screen that rivaled the original GBA for illegibility. At the time, I was one of the last people my age I knew who had relied entirely on a landline. Even for someone like me, who resisted the tech as long as I could (I still didn't really text for years afterward), it was clear that this was a complete paradigm shift. You could call anyone from anywhere — well, as long as you were in one of the (mostly urban) coverage areas. It was like science fiction.
Today I almost never answer the phone if I can help it, since the only voice calls I actually get are from con artists looking to buy houses that I don't actually own, political cold-callers, or recorded messages in languages I don't speak. The waste of this infuriates me: we built, as a civilization, a work of communication infrastructure that was completely mind-boggling, and then abandoned it to rot apart in only a few short years.
If you think that can't happen to the Internet — that it's not in danger of happening now — you need to think again. Shrimp Jesus is coming for us.
Welcome to the scam economy
According to a report from 404 Media, the hot social media trend is a scam based around a series of ludicrous computer-generated images, including the following subjects:
...AI-deformed women breastfeeding, tiny cows, celebrities with amputations that they do not have in real life, Jesus as a shrimp, Jesus as a collection of Fanta bottles, Jesus as sand sculpture, Jesus as a series of ramen noodles, Jesus as a shrimp mixed with Sprite bottles and ramen noodles, Jesus made of plastic bottles and posing with large-breasted AI-generated female soldiers, Jesus on a plane with AI-generated sexy flight attendants, giant golden Jesus being excavated from a river, golden helicopter Jesus, banana Jesus, coffee Jesus, goldfish Jesus, rice Jesus, any number of AI-generated female soldiers on a page called “Beautiful Military,” a page called Everything Skull, which is exactly what it sounds like, malnourished dogs, Indigenous identity pages, beautiful landscapes, flower arrangements, weird cakes, etc.
These "photos," bizarre as they may be, aren't just getting organic engagement from people who don't seem particularly discerning about their provenance or subject matter. They're also being boosted by Facebook's algorithmic feeds: if you comment on or react to one of these images, more are recommended to you. People who click on the link under the image are then sent to a content mill site full of fraudulent ads provided through Google's platform, meaning that at least two major tech companies are effectively complicit.
Shrimp Jesus is an obvious and deeply stupid scam, but it's also a completely predictable one. It's exactly what experts and bystanders said would happen as soon as generative tools started rolling out: people would start using it to run petty scams by producing mass amounts of garbage in order to trawl for the tiny percentage of people foolish enough to engage.
This was predictable precisely because we live in a scam economy now, and that fact is inextricable from the size and connectivity of the networked world. There's a fundamental difference between a con artist who has to target an individual over a sustained period of time and a spammer who can spray-and-pray millions of e-mails in the hopes that they find a few gullible marks. Spam has become the business model: venture capitalists strip-mine useful infrastructure (taxis and public transit, housing, electrical power grids, communication networks) with artificial cash infusions until the result is too big to fail.
Big Trouble
It's not particularly original to argue that modern capitalism eats itself, or that the VC obsession with growth distorts everything it touches. But there's an implicit assumption by a lot of people that it's the money that's the problem — that big networks and systems on their own are fine, or are actually good. I'm increasingly convinced that's wrong, and that in fact scale itself is the problem.
Dan Luu has a post on the "diseconomies of scale" where he makes a strong argument along the same lines, essentially stating that (counter to the conventional wisdom) big companies are worse than small companies at fighting abuse, for a variety of reasons:
- At a certain size they automate anti-fraud efforts, and the automation is worse at it than humans are.
- Moderation is expensive, and it's underfunded to maintain the profits expected from a multinational tech company.
- The systems used by these companies are so big and complicated that they actually can't effectively debug their processes or fully understand how abuse is occurring.
The last is particularly notable in the context of Our Lord of Perpetual Crayfish, given that large language models and other forms of ML in use now are notoriously chaotic, opaque, unknowably complicated math equations.
As we've watched company after company this year, having reached either market saturation or some perceived level of user lock-in, pivot to exploitation (jacking up prices, reducing perks, shoveling in ads, or all three), you have to wonder: maybe it's not that these services are hosts for scams. Maybe at a certain size, a corporation is functionally indistinguishable from a scam.
The conventional wisdom for a long time, at least in the US, was that big companies were able to find efficiencies that smaller companies couldn't manage. But Luu's research seems to indicate that in software, that's not the case, and it's probably not true elsewhere. Instead, what a certain size actually does is hide externalities by putting distance — physical, emotional, and organizational — between people making decisions (both in management and at the consumer level) and the negative consequences.
Corporate AI is basically a speedrun of this process: it depends on vast repositories of structured training data, meaning that its own output will eventually poison it, like a prion disease from cannibalism. But the fear of endless AI-generated content is itself a scam: running something like ChatGPT isn't cheap or physically safe. It guzzles down vast quantities of water, power, and human misery (that AI "alignment" that people talk about so often is just sparkling sweatshop labor). It can still do a tremendous amount of harm while the investors are willing to burn cash on it, but in ways that are concrete and contemporary, not "paperclip optimizer" scaremongering.
What if we made scale illegal?
I know, that sounds completely deranged. But hear me out.
A few years ago, writer/cartoonist Ryan North said something that's stuck with me for a while:
Sometimes I feel like my most extreme belief is that if a website is too big to moderate, then it shouldn't be that big. If your website is SO BIG that you can't control it, then stop growing the website until you can.
A common throughline of Silicon Valley ideology is a kind of blinkered free speech libertarianism. Some of this is probably legitimately ideological, but I suspect much of it also comes from the fact that moderation is expensive to build out compared to technical systems, and thus almost all tech companies have automated it. This leads to the kind of sleight of hand that we see regularly from Facebook, which Erin Kissane noted in her series of posts on Myanmar. Facebook regularly states that their automated systems "detect more than 95% of the hate speech they remove." Kissane writes (emphasis in the original):
At a glance, this looks good. Ninety-five percent is a lot! But since we know from the disclosed material that based on internal estimates the takedown rates for hate speech are at or below 5%, what’s going on here?
Here’s what Meta is actually saying: Sure, they might identify and remove only a tiny fraction of dangerous and hateful speech on Facebook, but of that tiny fraction, their AI classifiers catch about 95–98% before users report it. That’s literally the whole game, here.
So…the most generous number from the disclosed memos has Meta removing 5% of hate speech on Facebook. That would mean that for every 2,000 hateful posts or comments, Meta removes about 100: 95 automatically and 5 via user reports. In this example, 1,900 of the original 2,000 messages remain up and circulating. So based on the generous 5% removal rate, their AI systems nailed…4.75% of hate speech. That’s the level of performance they’re bragging about.
The claim that these companies are making is that automation is the only way to handle a service for millions or billions of users. But of course, the automation isn't handling it. For all intents and purposes, especially outside of OECD member nations, Facebook is basically unmoderated. That's why it got so big, not the other way around.
More knowledgeable people than me have written about the complicated debate over Section 230, the law that provides (again, in the US) a safe harbor for companies around user-generated content. I'm vaguely convinced that it would be a bad idea to repeal it entirely. But I think, as North argues, that a stronger argument is not to legislate the content directly, but to require companies to meet specific thresholds for human moderation (and while we're at it, to pay those moderators a premium wage). If you can't afford to have people in the loop to support your product, shut it down.
We probably can't make "being a big company" illegal. But we can prosecute for large-scale pollution and climate damage. We can regulate bait-and-switch pay models and worker exploitation. We can require companies to pay for moderation when they launch services in new markets. Stronger data privacy governance can make a business model like advertising, which depends on lots of eyeballs, more costly to run. We can't make scale illegal, but we could make it pay its actual bills, and that might be enough.
In the meantime, I'd just like to be able to answer my phone again.
February 12, 2024
Filed under: gaming»portable
Pocket Change
The lower-right corner of my desk, where I keep my retro console hardware, contains:
- A Dreamcast, yellowed
- An Nvidia Shield Portable, barely charges these days
- A GameBoy Pocket Color, lime green
- Two Nintendo DS systems, in stereotypical colors (pink for Belle, blue for me)
- A Nintendo 3DS, black
- Various controllers and power cables
- A GBA SP, red, with the original screen
- A GBA with an aftermarket screen, speaker, and USB-C battery pack, black
- An enormous Hori fight stick for the XBox 360, largely untouched
I wouldn't say I'm a collector so much as I just stopped getting rid of anything at some point. I was lucky enough to have held onto the systems that I bought in college, long before the COVID speculative bubble drove all the prices up. And most of these have sentimental value: I wasn't allowed to have anything that hooked up to the TV as a kid, but I saved up and bought an original GameBoy, and it got me through a lot of long car trips back in the day.
Still, this is a lot of stuff that I barely use, and most of which is redundant. So of course, I gave into my worst impulses, and ordered an Analogue Pocket.
A Brief Review of the Analogue Pocket
If you don't have original cartridges, the Pocket is hard to justify: emulation has reached the point where it may not be flawless, but it's certainly good enough, and there are much cheaper handhelds that can imitate more powerful consoles. But I do own about 20 GB/GBA carts that I go back to fairly frequently, and although I did rip them to ROM files last year just in case, I like playing them on actual hardware. About half of the systems listed above were purchased with that in mind.
A modded GBA will actually cost more than the Pocket in 2024, which is wild, and the result is an uneven experience. Retrofitted hardware is still old, meaning that the buttons on mine can be sticky, the d-pad is a little stiff, and I had to re-solder the power switch this winter. Obviously the Pocket might age poorly too, but if you're examining your options today and you don't actually want to do console repair as a hobby, it's probably the more reliable choice.
But the big draw on the Pocket is the screen, a high-end panel that's sized specifically to display classic GameBoy games with integer scaling (at a 10:1 ratio), along with a number of uncanny display filters to mimic different original hardware color aberrations, refresh rates, and quirks. It's very good, and even on systems that don't round evenly to the 1600x1440 resolution, it's so sharp that you'd be hard-pressed to see any scaling errors.
You can, of course, also run ROMs on the Pocket through OpenFPGA plugins, including cores for non-portable systems, as long as they don't exceed the complexity that the internal FPGA chips can model (topping out at around the 16-bit era, including some arcade machines like CPS-2). It does this quickly and accurately, and with relatively little fuss. I'm more surprised that it borrows some traditional emulation features for actual cartridges: since the Pocket runs its "virtual hardware" on an FPGA, it actually offers save states for physical media, which is frankly unhinged (but entirely welcome).
Permacomputing and modern ROMs
Uxn is a virtual machine designed by and for the Hundred Rabbits artist collective, a two-person team that lives on a boat. It's a stack-based graphical runtime with four colors, so more like a simplified assembly with SDL bindings than, say, a Forth. Like a lot of fantasy consoles, it runs "ROM" files even though there are obviously no actual read-only memory chips involved. In other words, there are a lot of aesthetic choices here.
This may seem like an unconnected topic, but Uxn was designed in conversation with gaming and its preservation. Hundred Rabbits had worked on iOS games and seen how they had a lifetime of about three years, between Apple's aggressive backwards incompatibility efforts and the complexity of the tech stack. They were inspired by the NES, as well as the history of game-specific virtual machines. Out of This World and the Z-machine, for example, are artifacts of an era where computing was so heterogeneous that it made sense (even on limited, slow hardware) to run on a VM. This works: we have access to a vast library of text-based gaming history on modern platforms, because those games were built from the start to be emulated.
There are two conceptual threads running through the design and community of Uxn. The first is permacomputing, shading into collapse computing: the idea that when we revert to an agrarian society, we'll still want to build and use computers based on leftover Z-80 or 6502 chips or something. This is, generously, nonsense. It's a prepper daydream for nerds, who imagine themselves as tech-priests of their local village.
The other thread is implementation-first computing, which comes out of the nautical experience of living with extremely limited connectivity. Devine Lu Linvega, the developer of Uxn, has a very good talk about the inspirations and thought process behind this. Living at sea, they can't rely on Stack Overflow to answer questions, and they certainly can't spare gigabytes of bandwidth to update a compiler or install dependencies. By contrast, it takes about a week to write a Uxn interpreter, and from that point a person is basically self-sufficient and future-proof.
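For a sense of why that's plausible, here's a toy sketch of the general shape of a stack-machine dispatch loop, in Python. To be clear, this is not Uxn's actual instruction set, memory model, or device layer (Uxntal has its own opcodes and two stacks); it's just an illustration of how little machinery the core of such an interpreter needs.

```python
# Toy stack machine: not Uxn's real opcode set, just the general shape of the loop.
def run(program: list) -> list:
    stack, pc = [], 0
    while pc < len(program):
        op = program[pc]
        if op == "push":        # next item in the program is a literal value
            pc += 1
            stack.append(program[pc])
        elif op == "add":       # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "dup":       # duplicate the top of the stack
            stack.append(stack[-1])
        elif op == "print":     # a stand-in for a real system's device I/O
            print(stack[-1])
        elif op == "halt":
            break
        pc += 1
    return stack

run(["push", 3, "push", 4, "add", "print", "halt"])  # prints 7
```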
Most of us do not live on boats, or in a place where we can't get to MDN, so the emphasis on minimalism and self-implementation comes across as a little overdramatic. At the same time, I don't think it's entirely naive to see the appeal of Uxn as a contrast to the quicksand foundations of contemporary software design. I'm always tempted to be very smug about building for the web and browser compatibility, until I remind myself that every six months Safari breaks a significant feature like IndexedDB or media playback for millions of users.
In a very real sense, regardless of the abstract threads underpinning the philosophy of Uxn, what it really means is choosing a baseline where things are "good enough" and sticking with it — both in terms of the platform itself and the software it runs. It trades efficiency for resiliency, which is something you maybe can't fully appreciate until you've had cloud software fall over while transferring data between applications or generating backups.
The end of history
In addition to old GBA carts, this month I also started replaying Halo Infinite, a game that I think is generally underrated. It was panned by hardcore fans for a number of botched rollout decisions, but none of those matter much to me because the only thing I really wanted out of Halo is "a whole game that's made out of Silent Cartographer" and that's largely what Infinite delivers.
Unfortunately, sometime between launch and today, Microsoft decided that single-player Halo was not a corporate priority. So now the game starts in a dedicated multiplayer mode, and you have to wait for all that to load in before you can click a button and have the executable literally restart with different data. There's some trickery that it does to retain some shared memory, so the delay isn't as bad the second time, but I haven't been able to discover a flag or environment variable that will cause it to just start in single-player directly. It's a real pain.
I think about this a lot in the context of the modern software lifecycle, and I hate it. I don't think this is just me getting older, either. Every time my phone gets an OS upgrade, I know something is going to break or get moved around — or worse, it's going to have AI crammed into it somewhere, which will be A) annoying and B) a huge privacy violation waiting to happen. Eventually I just know I'm going to end up on Linux solely because it's the only place where a venture capitalist can't force an LLM to monitor all my keystrokes.
In other words, the read-only nature of old hardware isn't just a charming artifact. It ends up being what makes the retro experience possible at all. The cartridge (or ROM) is the bits that shipped, nothing more and nothing less. I'm never going to plug in Link's Awakening and find that it's now running a time-limited cross-promotion with a movie franchise, or that it's no longer compatible with the updated OS on my device, or that it won't start because it can't talk to a central server. It'll never get better, or worse. That's nostalgia, but it's also sadly the best I can hope for in tech these days.
January 17, 2024
Filed under: journalism»dataAdd It Up
A common misconception among my coworkers and other journalists is that people like me — data journalists, who help aggregate accountability metrics, find trends, and visualize the results — are good at math. I can't speak for everyone, but I'm not. My math background taps out around mid-level algebra. I disliked Calculus and loathed Geometry in high school. I took one math class in college, my senior year, when I found out I hadn't satisfied my degree requirements after all.
I do work with numbers a lot, or more specifically, I make computers work with numbers for me, which I suspect is where the confusion starts. Most journalists don't really distinguish between the two, thanks in part to the frustrating stereotype that being good at words means you have to be bad at math. Personally, I think the split is overrated: if you can go to dinner and split a check between five people, you can do the numerical part of my job.
(I do know journalists who can't split a check, but they're relatively few and far between.)
I've been thinking lately about ways to teach basic newsroom numeracy, or at least encourage people to think of their abilities more charitably. Certainly one perennial option is to do trainings on common topics: percentages versus percentage points, averages versus medians, or risk ratios. In my experience, this helps lay the groundwork for conversations about what we can and can't say, but it doesn't tend to inspire a lot of enthusiasm for the craft.
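For what it's worth, the distinctions themselves are small enough to fit in a few lines. A quick hypothetical (all numbers invented for illustration) showing percentage points versus percent change, and average versus median:

```python
# Hypothetical graduation rate goes from 80% to 84%.
old_rate, new_rate = 0.80, 0.84
point_change = (new_rate - old_rate) * 100               # 4 percentage points
percent_change = (new_rate - old_rate) / old_rate * 100  # a 5% increase

# Hypothetical salaries: one outlier drags the average up,
# while the median still describes the typical person.
salaries = [48_000, 52_000, 55_000, 61_000, 250_000]
average = sum(salaries) / len(salaries)                  # 93,200
median = sorted(salaries)[len(salaries) // 2]            # 55,000
```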
The thing is, I'm not good at math, but I do actually enjoy that part of my job. It's an interesting puzzle, it generally provides a finite challenge (as opposed to a story that you can edit and re-edit forever), and I regularly find ways to make the process better or faster, so I feel a sense of growth. I sometimes wonder if I can find equivalents for journalists, so that instead of being afraid of math, they might actually anticipate it a little bit.
Unfortunately, my particular inroads are unlikely to work very well for other people. Take trigonometry, for example: in A Mathematician's Lament, teacher Paul Lockhart describes trig as "two weeks of content [...] stretched to semester length," and he's not entirely wrong. But it had one thing going for it when I learned about sine and cosine, which was that they're foundational to projecting a unit vector through space — exactly what you need if you're trying to write a Wolf3D clone on your TI-82 during class.
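(Hedging heavily: what follows is my own bare-bones reconstruction in Python, not actual TI-BASIC and not a real Wolf3D engine, but it shows the core idea: sine and cosine turn an angle into a direction you can step along until you hit something.)

```python
import math

# March a unit vector out from (x, y) at a given angle until it hits a wall.
# The map, step size, and starting point are all made up for illustration.
GRID = [
    "#####",
    "#   #",
    "#   #",
    "#####",
]

def cast_ray(x: float, y: float, angle: float, step: float = 0.05) -> float:
    dx, dy = math.cos(angle), math.sin(angle)  # the unit vector for this angle
    dist = 0.0
    while GRID[int(y)][int(x)] != "#":
        x += dx * step
        y += dy * step
        dist += step
    return dist  # a Wolf3D-style renderer draws each wall slice scaled by 1 / dist

print(cast_ray(2.5, 1.5, 0.0))  # distance to the wall due "east" of the start
```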
Or take pixel shader art, which has captivated me for years. Writing code from The Book of Shaders inverts the way we normally think about math. Instead of solving a problem once with a single set of inputs, you're defining an equation that — across millions of input variations — will somehow resolve into art. I love this, but imagine pointing a reporter at Inigo Quilez's very cool "Painting a Character with Maths." It's impressive, and fun to watch, and utterly intimidating.
(One fun thing is to look at Quilez's channel and find that he's also got a video on "painting in Google Sheets." This is funny to me, because I find that spreadsheets and shaders both tend to use the same mental muscles.)
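Stripped of the GPU, a fragment shader is just a function from pixel coordinates to a color, run once for every pixel. Here's a small CPU-side sketch of that "one equation, millions of inputs" idea, in Python rather than GLSL; the canvas size, the pattern, and the PGM filename are arbitrary choices of mine.

```python
import math

WIDTH, HEIGHT = 400, 300  # arbitrary canvas size

def shade(u: float, v: float) -> int:
    """Map normalized coordinates (0..1) to a grayscale value (0..255)."""
    value = 0.5 + 0.5 * math.sin(20 * u) * math.cos(20 * v)
    return int(value * 255)

# Evaluate the same equation at every pixel and write a plain-text PGM image.
with open("pattern.pgm", "w") as f:
    f.write(f"P2\n{WIDTH} {HEIGHT}\n255\n")
    for y in range(HEIGHT):
        row = [shade(x / WIDTH, y / HEIGHT) for x in range(WIDTH)]
        f.write(" ".join(map(str, row)) + "\n")
```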
What these challenges have in common is that they appeal directly to my strengths as a thinker: they're largely spatial challenges, or can be visualized in a straightforward way. Indeed, the math that I have the most trouble with is when it becomes abstract and conceptual, like imaginary numbers or statistical significance. Since I'm a professional data visualization expert, this ends up mostly working out well for me. But is there a way to think about math that would have the same kinds of resonance for verbal thinkers?
So that's the challenge I'm percolating on now, although I'm not optimistic: the research I have been able to do indicates that math aptitude is tied pretty closely to spatial imagination. But obviously I'm not the only person in history to ask this question, and I'm hopeful that it's possible to find scenarios (even if only on a personal level) that either relate math concepts to verbal brains or get them to start thinking about the problems in a visual way.
December 31, 2023
Filed under: random»personal2023 in Review
I mean, it wasn't an altogether terrible year.
Work life
This was my second full year at Chalkbeat, and it remains one of the best career decisions I've ever made. I don't think we tell young people in this industry nearly often enough that you will be much happier working closer to a local level, in an organization with good values that treats people sustainably, than you ever will in the largest newsrooms in the country.
I did not have a background in education reporting, so the last two years have been a learning experience, but I feel like I'm on more solid ground now. It's also been an interesting change: the high-profile visual and interactive storytelling that I did most often at NPR or the Seattle Times is the exception at Chalkbeat, and more often I'm doing data analysis and processing. I miss the flashier work, but I try to keep my hand in via personal projects, and there is a certain satisfaction in really embracing my inner spreadsheet pervert.
You can read more about the work we did this year over in our retrospective post as well as our list of data crimes.
Blogging
Blogging's back, baby! I love that it feels like this is being revitalized as Twitter collapses. I really enjoyed writing more on technical topics in the latter half of the year, and I still have a series I'd like to do on my experiences writing templating libraries. Technically, I never really stopped, but in recent years it's been more likely to be on work outlets than here on Mile Zero.
Next year this blog will be twenty years old, if I've done my math right. That's a long time. A little while back, I cleared out a bunch of the really old posts, since I was a little nervous about the attack surface from things I wrote in my twenties, especially post-Gamergate. But the underlying tech has mostly stayed the same, and if I'm going to be writing here more often, I've been wondering if I should upgrade.
When I first converted this from a band site to a blog, I went with a publishing tool called Blosxom, which basically reads files in chronological order to generate the feed. I rewrote it in PHP a few years later, and that's still what it's using now. The good news is that I know it scales — I got linked by Boing Boing a few times back in the day, and never had a reliability problem — but it's still a pretty primitive approach. I'm basically writing HTML by hand, there's no support for things like syntax highlighting, and I haven't run a backup in a while.
That said, if it's not broke, why fix it? I don't actually mind the authoring experience — my pickiness about markup means that using something like Pandoc to generate post markup makes me a little queasy. I may instead aim for some low-effort improvements, like building a process for generating a post index file that the template can use instead of recursing through the folder hierarchy on every load.
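Sketched out (in Python rather than the blog's actual PHP, and with the directory layout, filenames, and date convention entirely guessed at), that index build could be a one-time walk over the posts folder that writes a JSON file for the template to read:

```python
import json
from pathlib import Path

# Hypothetical layout: posts live somewhere under ./posts as .html files.
POSTS_DIR = Path("posts")

entries = [
    {
        "path": str(p.relative_to(POSTS_DIR)),
        "mtime": p.stat().st_mtime,  # Blosxom-style: the file date is the post date
    }
    for p in POSTS_DIR.rglob("*.html")
]
entries.sort(key=lambda e: e["mtime"], reverse=True)

# The template reads this file instead of recursing through the folders on every load.
Path("post-index.json").write_text(json.dumps(entries, indent=2))
```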
Games
Splatoon 3 ate up a huge amount of time this year, but I burned out on it pretty hard over the summer. The networking code is bad, and the matchmaking is wildly unpredictable, so it felt like I was often either getting steamrolled or cruising to victory, and never getting the latter when I really needed it to rank up. I still have a preorder for the single-player DLC, and I'm looking forward to that: Nintendo isn't much for multiplayer, but the bones of the game are still great.
Starting in September (more on that in a bit), I picked up Street Fighter 6 and now have almost 400 hours logged in it, almost all of it in the ranked mode. I'd never been very good at fighting games, and I'm still not particularly skilled, but I've gotten to the point where I'd almost like to try a local tournament at some point. SF6 strikes a great balance between a fairly minimal set of mechanics and a surprisingly deep mental stack during play. It also has an incredibly polished and well-crafted training mode and solid networking code — it's really easy to "one more round" until early in the morning. I've tried a few other fighting games, but this is the only one that's really stuck so far.
The big release of the year was Tears of the Kingdom, which was... fine. It's a technical marvel, but I didn't enjoy it as a game nearly as much, and for all its systemic freedom it's still very narratively constrained — I ended up several times in places where I wasn't supposed to be yet, and had to go back to resume the intended path instead of being able to sequence break. TotK mainly just made me want to replay Dragon's Dogma, which gets better every time I go through it, including beating the Bitterblack Isle DLC for the first time this year.
Movies
What did I read in 2023? I barely remember, and I didn't keep a spreadsheet this time around. I did record my Shocktober, as usual, so at least I have a record of that. My theme was "the VHS racks at the front of the Food Lion in Lexington, Kentucky," meaning all the box art six-year-old me stared at when my parents were being rung up.
Some of these were actually pretty good: Critters is surprisingly funny and well-made, Monkey Shines is not at all what was promised, and The Stuff holds up despite its bizarre insistence that Michael Moriarty is a leading man. On the other hand, Nightmare on Elm Street 3 doesn't really survive Heather Langenkamp's acting, and C.H.U.D. has actually gotten worse since the last time I watched it.
Outside of the theme, the strongest recommendation I can make is for When Evil Lurks, a little post-pandemic gem from Argentina about a plague of demon possession. Eschewing the traditional trappings of exorcism movies (no priests, no crosses, and no projectile vomiting), it alternates between pitch-black comedy and gruesome violence. I love it, and really hope it sees a wider release (I think it's currently only on Shudder).
Touring Spain
Belle's been studying Spanish for a few years now, and headed to Spain in September to work on her Catalan and get certified as an English teacher there. I joined her in November, and we took a grand tour of the southeast side of the country. We saw Barcelona, Sevilla, Granada, Valencia, and Madrid. My own Spanish is serviceable at best, but I skated by.
They say that when you travel, mostly what you learn are the things you've taken for granted in your own culture. On this trip, the thing that really stood out was the degree to which Spanish cities prioritize people over cars. This varies, of course — the older cities are obviously much more pedestrian friendly, because they were never planned around automobile travel — but even in Madrid and Barcelona, it still feels so much safer and less aggressive than the car-first culture of Chicago and other American metro areas.
Given the experience, we've started thinking about whether Spain might be a good place to relocate, at least for a little while. While I'm cautiously optimistic about the 2024 election cycle, I wouldn't mind watching it on European time, just in case.