Journal tags: development


Simplify

I was messing about with some images on a website recently, and while I was happy enough with the arrangement on large screens, I thought it would be better to have the images in a kind of carousel on smaller screens—a swipeable gallery.

My old brain immediately thought this would be fairly complicated to do, but actually it’s ludicrously straightforward. Just stick this bit of CSS on the containing element inside a media query (or better yet, a container query):

display: flex;
overflow-x: auto;

That’s it.

Oh, and you can swap out overflow-x for overflow-inline if, like me, you’re a fan of logical properties. But support for that only just landed in Safari, so I’d probably wait a little while before removing the old syntax.
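If you want to hedge your bets, declare the logical property after the physical one. Browsers that don’t understand overflow-inline will simply ignore it and stick with the overflow-x fallback:

display: flex;
overflow-x: auto;
overflow-inline: auto;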

Here’s an example using pictures of some of the lovely people who will be speaking at Web Day Out:

Jemima Abu, Rachel Andrew, Lola Odelola, Richard Rutter, Harry Roberts

While you’re at it, add this:

overscroll-behavior-inline: contain;

That prevents the user from accidentally triggering a backwards/forwards navigation when they’re swiping.

You could add some more little niceties like this, but you don’t have to:

scroll-snap-type: inline mandatory;
scroll-behavior: smooth;

And maybe this on the individual items:

scroll-snap-align: center;

You could progressively enhance even more with the new pseudo-elements like ::scroll-button() and ::scroll-marker for Chromium browsers.
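Here’s a rough sketch of what that might look like, assuming a gallery class on the scrolling container (and bear in mind this only works in Chromium browsers right now):

.gallery {
  scroll-marker-group: after;
}
.gallery::scroll-button(inline-start) {
  content: "←" / "Previous";
}
.gallery::scroll-button(inline-end) {
  content: "→" / "Next";
}
.gallery > *::scroll-marker {
  content: "";
}
.gallery > *::scroll-marker:target-current {
  background: currentColor;
}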

Apart from that last bit, none of this is particularly new or groundbreaking. But it was a pleasant reminder for me that interactions that used to be complicated to implement are now very straightforward indeed.

Here’s another example that Ana Tudor brought up yesterday:

You have a section with a p on the left & an img on the right. How do you make the img height always be determined by the p with the tiniest bit of CSS? 😼

No changing the HTML structure in any way, no pseudos, no background declarations, no JS. Just a tiny bit of #CSS.

Old me would’ve said it can’t be done. But with a little bit of investigating, I found a nice straightforward solution:

section > img {
  contain: size;
  place-self: stretch;
  object-fit: cover;
}

That’ll work whether the section has its display set to flex or grid.

There’s something very, very satisfying in finding a simple solution to something you thought would be complicated.

Honestly, I feel like web developers are constantly being gaslit into thinking that complex over-engineered solutions are the only option. When the discourse is being dominated by people invested in frameworks and libraries, all our default thinking will involve frameworks and libraries. That’s not good for users, and I don’t think it’s good for us either.

Of course, the trick is knowing that the simpler solution exists. The information probably isn’t going to fall in your lap—especially when the discourse is dominated by overly-complex JavaScript.

So get yourself a ticket for Web Day Out. It’s on Thursday, March 12th, 2026 right here in Brighton.

I guarantee you’ll hear about some magnificent techniques that will allow you to rip out plenty of complex code in favour of letting the browser do the work.

Harry Roberts is speaking at Web Day Out

I was going to save this announcement for later, but I’m just too excited: Harry Roberts will be speaking at Web Day Out!

Goddamn, that’s one fine line-up, and it isn’t even complete yet! Get your ticket if you haven’t already.

There’s a bit of a story behind the talk that Harry is going to give…

Earlier this year, Harry posted a most excellent screed in which he said:

The web as a platform is a safe bet. It’s un-versioned by design. That’s the commitment the web makes to you—take advantage of it.

  • Opt into web platform features incrementally;
  • Embrace progressive enhancement to build fast, reliable applications that adapt to your customers’ context;
  • Write code that leans into the browser, not away from it.

Yes! Exactly!

Thing is, Harry posted this on LinkedIn. My indieweb sensibilities were affronted. So I harangued him:

You should blog this, Harry

My pestering paid off with an excellent blog post on Harry’s own site called Build for the Web, Build on the Web, Build with the Web:

The beauty of opting into web platform features as they become available is that your site becomes contextual. The same codebase adapts into its environment, playing to its strengths, rather than trying to build and ship your own environment from the ground up. Meet your users where they are.

That’s a pretty neat summation of the agenda for Web Day Out. So I thought, “Hmm …if I was able to pester Harry to turn a LinkedIn post into a really good blog post, I wonder if I could pester him to turn that blog post into a talk?”

I threw down the gauntlet. Harry accepted the challenge.

I’m sure you’re already familiar with Harry’s excellent work, but if you’re not, he’s basically Mr. Web Performance. That’s why I’m so excited to have him speak at Web Day Out—I want to hear the business case for leaning into what web browsers can do today, and he is most certainly the best person to bring receipts.

You won’t want to miss this, so be sure to get your ticket now; it’s only £225+VAT.

If you’re not ready to commit just yet, but you want to hear about more speaker announcements like this, you can sign up to the mailing list.

Speaking at Web Day Out

Half of the line-up of speakers for Web Day Out is already on the site. One more is already confirmed.

I’m ridiculously excited about the way the line-up is taking shape, and judging by the zippiness of ticket sales, so are lots of my peers. Seriously, don’t wait to get your ticket or you might end up missing out completely.

I’ve already got a shortlist of other people I could imagine on the line-up, but I’m open to more suggestions. If you’d like to speak at Web Day Out—or you know someone you think would be great—send an email to jeremy@clearleft.com.

I won’t be checking my work email while I’m away on holiday next week but it would be lovely to come back to an inbox of exciting suggestions.

A couple of pointers…

I’d rather not have too many people like me on the line-up. White dudes are already over-represented in this industry, especially at conferences.

If you’ve never given a talk before, don’t worry. I’d love to help you put your talk together and coach you in presenting it. I have some experience in this area.

No product pitches. That includes JavaScript frameworks and CSS libraries.

If I get even a whiff of “AI”, your proposal doesn’t stand a chance. There are many, many, many other events that are only too happy to have wall-to-wall talks about …that sort of thing.

If you end up speaking at Web Day Out you will, of course, be paid. We will, of course, cover travel and accommodation too. We can’t afford the travel costs of bringing anyone in from outside Europe though (and we’d like to keep the carbon footprint of the event as small as possible).

Web Day Out has an opinionated agenda all about showing what’s possible in web browsers today. Some potential topics include:

The emphasis should be on using stuff in production rather than theoretical demos.

If you’ve got a case study about using the web platform—perhaps migrating away from a framework-driven approach—that would fit the bill perfectly.

How’s all that sounding? Know someone who could deliver the goods? Let me know!

Announcing Web Day Out

I’m going to cut right to the chase: Clearleft is putting on a brand new conference in 2026. It’s called Web Day Out. It’ll be on Thursday, March 12th right here in Brighton. Tickets are just £225+VAT. You should be there!

If you’ve ever been to Responsive Day Out or Patterns Day, the format will be familiar to you. There are going to be eight 30-minute talks. Bam! Bam! Bam!

Like those other one-day conferences, this one has a laser-sharp focus.

Web Day Out is all about what you can do in web browsers today. You can expect talks that showcase hands-on practical uses for the latest advances in HTML, CSS, and JavaScript APIs. There will be no talks about libraries, frameworks or build tools, and I can guarantee there will be absolutely no so-called “AI”.

As you might have gathered, this is an opinionated conference.

If you care about performance, accessibility, and progressive enhancement, Web Day Out is the event for you.

Or if you’ve been living in React-land but you’re starting to feel that maybe you’re missing out on what’s been shipping in web browsers, Web Day Out is the event for you too. And I’m not talking about cute demos here. This is very much about shipping to production.

I’ve got half of the line-up assembled already:

  • Jemima Abu
  • Rachel Andrew
  • Lola Odelola
  • Richard Rutter

Jemima’s talk gives you a flavour of what to expect at Web Day Out:

In this talk, we’ll take a look at how to use HTML and CSS to build simpler alternatives to popular JavaScript components such as accordions, modals, scroll transitions, carousels etc. We’ll also take a look at the performance and accessibility benefits and real-life applications and use-cases of these components.

Web Day Out will be in The Studio Theatre of the Brighton Dome, which is a fantastic intimate venue. That means that places are limited, so get your ticket now!

The Invisibles

When I was talking about monitoring web performance yesterday, I linked to the CrUX data for The Session.

CrUX is a contraction of Chrome User Experience Report. CrUX just sounds better than CEAR.

It’s data gathered from actual Chrome users worldwide. It can be handy as part of a balanced performance-monitoring diet, but it’s always worth remembering that it only shows a subset of your users; those on Chrome.

The actual CrUX data is imprisoned in some hellish Google interface so some kindly people have put more humane interfaces on it. I like Calibre’s CrUX tool as well as Treo’s.

What’s nice is that you can look at the numbers for any reasonably popular website, not just your own. Lest I get too smug about the performance metrics for The Session, I can compare them to the numbers for Wikipedia or the BBC. Both of those sites are made by people who prioritise speed, and it shows.

If you scroll down to the numbers on navigation types, you’ll see something interesting. Across the board, whether it’s The Session, Wikipedia, or the BBC, the BFcache—back/forward cache—is used around 16% to 17% of the time. This is when users use the back button (or forward button).

Unless you do something to stop them, browsers will make sure that those navigations are super speedy. You might inadvertently be sabotaging the BFcache if you’re sending a Cache-Control: no-store header or if you’re using an unload event handler in JavaScript.
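If you’ve got clean-up work to do when someone leaves a page, the pagehide event does the same job as unload without making the page ineligible for the BFcache. A quick sketch (the function being called here is hypothetical):

window.addEventListener('pagehide', (event) => {
  // event.persisted is true if the page may be entering the BFcache
  saveDraft(); // hypothetical clean-up function
});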

I guess it’s unsurprising the BFcache numbers are relatively consistent across three different websites. People are people, whatever website they’re browsing.

Where it gets interesting is in the differences. Take a look at pre-rendering. It’s 4% for the BBC and just 0.4% for Wikipedia. But on The Session it’s a whopping 35%!

That’s because I’m using speculation rules. They’re quite straightforward to implement and they pair beautifully with full-page view transitions for a slick, speedy user experience.
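To give you a flavour of how straightforward they are, here’s a minimal example of a speculation rules script; the URL pattern is purely illustrative:

<script type="speculationrules">
{
  "prerender": [{
    "where": { "href_matches": "/tunes/*" },
    "eagerness": "moderate"
  }]
}
</script>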

It doesn’t look like Wikipedia or the BBC are using speculation rules at all, which kind of surprises me.

Then again, because they’re a hidden technology I can understand why they’d slip through the cracks.

On any web project, I think it’s worth having a checklist of The Invisibles—things that aren’t displayed directly in the browser, but that can make a big difference to the user experience.

Some examples:

  • Caching headers like Cache-Control
  • Eligibility for the back/forward cache
  • Speculation rules

If you’ve got a checklist like that in place, you can at least ask “Whose job is this?” All too often, these things are missing because there’s no clarity on who’s responsible for them. They’re sorta back-end and sorta front-end.

Underlines and line height

I was thinking about something I wrote yesterday when I was talking about styling underlines on links:

For a start, you can adjust the distance of the underline from the text using text-underline-offset. If you’re using a generous line-height, use a generous distance here too.

For some reason, I completely forgot that we’ve got a line-height unit in CSS now: lh. So if you want to make the distance of your underline proportional to the line height of the text that the link is part of, it’s easy-peasy:

text-underline-offset: 0.15lh;

The greater the line height, the greater the distance between the link text and its underline.

I think this one is going into my collection of CSS snippets I use on almost every project.

Style your underlines

We shouldn’t rely on colour alone to indicate that something is interactive.

Take links, for example. Sure, you can give them a different colour to the surrounding text, but you shouldn’t stop there. Make sure there’s something else that distinguishes them. You could make them bold. Or you could stick with the well-understood convention of underlining links.

This is where some designers bristle. If there are a lot of links on a page, it could look awfully cluttered with underlines. That’s why some designers would rather remove the underline completely.

As Manu observed:

I’ve done a lot of audits in the first half of this year and at this point I believe that designing links without underlines is a kink. The idea that users don’t understand that links are clickable arouses some designers. I can’t explain it any other way.

But underlining links isn’t the binary decision it once was. You can use CSS to style those underlines just as you’d style any other part of your design language.

Here’s a regular underlined link.

For a start, you can adjust the distance of the underline from the text using text-underline-offset. If you’re using a generous line-height, use a generous distance here too.

Here’s a link with an offset underline.

If you’d rather have a thinner or thicker underline, use text-decoration-thickness.

Here’s a link with a thin underline.

The colour of the underline and the colour of the link don’t need to be the same. Use text-decoration-color to make them completely different colours or make the underline a lighter shade of the link colour.

Here’s a link with a translucent underline.

That’s quite a difference with just a few CSS declarations:

text-underline-offset: 0.2em;
text-decoration-thickness: 1px;
text-decoration-color: oklch(from currentColor l c h / 50%);

If that still isn’t subtle enough for you, you could even use text-decoration-style to make the underline dotted or dashed, but that might be a step too far.

Here’s a link with a dotted underline.

Whatever you decide, I hope you’ll see that underlines aren’t the enemy of good design. They’re an opportunity.

You should use underlines to keep your links accessible. But you should also use CSS to make those underlines beautiful.

Streamlining HTML web components

If you’re a front-end developer and you don’t read Chris Ferdinandi’s blog, you should change that right now. Add that RSS feed to your feed reader of choice!

Lately he’s been posting about some of the thinking behind his Kelp UI library. That includes some great nuggets of wisdom around HTML web components.

First of all, he pointed out that web components don’t need a constructor(). This was news to me. I thought custom elements had to include this incantation at the start:

constructor () {
  super();
}

But it turns out that if all you’re doing is calling super(), you can omit the whole thing and it’ll be done for you.

I immediately refactored all the web components I’m using on The Session. While I was at it, I implemented Chris’s bulletproof web component loading.

Now technically, I don’t need to do this. I’m linking to my JavaScript at the bottom of every page so I know it’s going to load after the HTML. But I don’t like having that assumption baked into my code.

For any of my custom elements that reference other elements in the DOM—using, say, document.querySelector()—I updated the connectedCallback() method to use Chris’s technique.
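I won’t reproduce Chris’s code here, but the gist of the technique is to hold off initialising until the DOM is ready. Something like this sketch (the element name and init method are placeholders):

customElements.define('my-component', class extends HTMLElement {
  connectedCallback () {
    // If the document has already been parsed, initialise straight away
    if (document.readyState !== 'loading') {
      this.init();
      return;
    }
    // Otherwise wait for the DOM to be ready
    document.addEventListener('DOMContentLoaded', () => this.init());
  }
  init () {
    // now it's safe to use document.querySelector() for other elements
  }
});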

It turned out that there weren’t that many of my custom elements that were doing that. Because HTML web components are wrapped around existing markup, the contents of the custom element are usually what matters (rather than other elements on the same page).

I guess that’s another unexpected benefit to HTML web components. Because they’ve already got their own bit of DOM inside them, you don’t need to worry about when you load your markup and when you load your JavaScript.

And no faffing about with the dark arts of the Shadow DOM either.

Awareness

Today is Global Accessibility Awareness Day:

The purpose of GAAD is to get everyone talking, thinking and learning about digital access and inclusion, and the more than One Billion people with disabilities/impairments.

Awareness is good. It’s necessary. But it’s not sufficient.

Accessibility, like sustainability and equality, is the kind of thing that most businesses will put at the end of sentences that begin “We are committed to…”

It’s what happens next that matters. How does that declared commitment—that awareness—turn into action?

In the worst-case scenario, an organisation might reach for an accessibility overlay. Who can blame them? They care about accessibility. They want to do something. This is something.

Good intentions alone can result in an inaccessible website. That’s why I think there’s another level of awareness that’s equally important. Designers and developers need to be aware of what they can actually do in service of accessibility.

Fortunately that’s not an onerous expectation. It doesn’t take long to grasp the importance of having good colour contrast or using the right HTML elements.

An awareness of HTML is like a superpower when it comes to accessibility. Like I wrote in the foreword to the Web Accessibility Cookbook by O’Reilly:

It’s supposed to be an accessibility cookbook but it’s also one of the best HTML tutorials you’ll ever find. Come for the accessibility recipe; stay for the deep understanding of markup.

The challenge is that HTML is hidden. Like Cassie said in the accessibility episode of The Clearleft Podcast:

You get JavaScript errors if you do that wrong and you can see if your CSS is broken, but you don’t really have that with accessibility. It’s not as obvious when you’ve got something wrong.

We are biased towards what we can see—hierarchy, layout, imagery, widgets. Those are the outputs. When it comes to accessibility, what matters is how those outputs are generated. Is that button actually a button element or is it a div? Is that heading actually an h1 or is it another div?

This isn’t about the semantics of HTML. This is about the UX of HTML:

Instead of explaining the meaning of a certain element, I show them what it does.

That’s the kind of awareness I’m talking about.

One way of gaining this awareness is to get a feel for using a screen reader.

The name is a bit of a misnomer. Reading the text on screen is the least important thing that the software does. The really important thing that a screen reader does is convey the structure of what’s on screen.

Friend of Clearleft, Jamie Knight, very generously spent an hour of his time this week showing everyone the basics of using VoiceOver on a Mac (there’s a great short video by Ethan that also covers this).

Using the rotor, everyone was able to explore what’s under the hood of a web page; all the headings, the text of all the links, the different regions of the page.

That’s not going to turn anyone into an accessibility expert overnight, but it gave everyone an awareness of how much the HTML matters.

Mind you, accessibility is a much bigger field than just screen readers.

Fred recently hosted a terrific panel called Is neurodiversity the next frontier of accessibility in UX design?—well worth a watch!

One of those panelists—Craig Abbott—is speaking on day two of UX London next month. His talk has the magnificent title, Accessibility is a design problem:

I spend a bit of time covering some misconceptions about accessibility, who is responsible for it, and why it’s important that we design for it up front. It also includes real-world examples where design has impacted accessibility, before moving onto lots of practical guidance on what to be aware of and how to design for many different accessibility issues.

Get yourself a ticket and get ready for some practical accessibility awareness.

CSS snippets

I’ve been thinking about the kind of CSS I write by default when I start a new project.

Some of it is habitual. I now use logical properties automatically. It took me a while to rewire my brain, but now seeing left or top in a style sheet looks wrong to me.

When I mentioned this recently, I had some pushback from people wondering why you’d bother using logical properties if you never planned to translate the website into a language with a different writing system. I pointed out that even if you don’t plan to translate a web page, a user may still choose to. Using logical properties helps them. From that perspective, it’s kind of like using user preference queries.

That’s something else I use by default now. If I’ve got any animations or transitions in my CSS, I wrap them in a prefers-reduced-motion: no-preference query.

For instance, I’m a huge fan of view transitions and I enable them by default on every new project, but I do it like this:

@media (prefers-reduced-motion: no-preference) {
  @view-transition {
    navigation: auto;
  }
}

I’ll usually have a prefers-color-scheme query for dark mode too. This is often quite straightforward if I’m using custom properties for colours, something else I’m doing habitually. And now I’m starting to use OKLCH for those colours, even if they start as hexadecimal values.
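Because the colours live in custom properties, the dark mode query often amounts to nothing more than redefining a handful of values. The property names and colours here are placeholders:

@media (prefers-color-scheme: dark) {
  :root {
    --background-colour: oklch(20% 0.02 270);
    --text-colour: oklch(95% 0.01 270);
  }
}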

Custom properties are something else I reach for a lot, though I try to avoid premature optimisation. Generally I wait until I spot a value I’m using more than two or three times in a stylesheet; then I convert it to a custom property.

I make full use of clamp() for text sizing. Sometimes I’ll just set a fluid font-size on the html element and then size everything else with ems or rems. More often, I’ll use Utopia to flow between different type scales.
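The single-declaration version looks something like this; the values are just an example:

html {
  font-size: clamp(1rem, 0.875rem + 0.5vw, 1.25rem);
}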

Okay, those are all features of CSS—logical properties, preference queries, view transitions, custom properties, fluid type—but what about actual snippets of CSS that I re-use from project to project?

I’m not talking about a CSS reset, which usually involves zeroing out the initial values provided by the browser. I’m talking about tiny little enhancements just one level up from those user-agent styles.

Here’s one I picked up from Eric that I apply to the figcaption element:

figcaption {
  max-inline-size: max-content;
  margin-inline: auto;
}

That will centre-align the text until it wraps onto more than one line, at which point it’s no longer centred. Neat!

Here’s another one I start with on every project:

a:focus-visible {
  outline-offset: 0.25em;
  outline-width: 0.25em;
  outline-color: currentColor;
}

That puts a nice chunky focus ring on links when they’re tabbed to. Personally, I like having the focus ring relative to the font size of the link but I know other people prefer to use a pixel size. You do you. Using the currentColor of the focused link is usually a good starting point, though I might end up overriding this with a different highlight colour.

Then there’s typography. Rich has a veritable cornucopia of starting styles you can use to improve typography in CSS.

Something I’m reaching for now is the text-wrap property with its new values of pretty and balance:

ul,ol,dl,dt,dd,p,figure,blockquote {
  hanging-punctuation: first last;
  text-wrap: pretty;
}

And maybe this for headings, if they’re being centred:

h1,h2,h3,h4,h5,h6 {
  text-align: center;
  text-wrap: balance;
}

All of these little snippets should be easily over-writable so I tend to wrap them in a :where() selector to reduce their specificity:

:where(figcaption) {
  max-inline-size: max-content;
  margin-inline: auto;
}
:where(a:focus-visible) {
  outline-offset: 0.25em;
  outline-width: 0.25em;
  outline-color: currentColor;
}
:where(ul,ol,dl,dt,dd,p,figure,blockquote) {
  hanging-punctuation: first last;
  text-wrap: pretty;
}

But if I really want them to be easily over-writable, then the galaxy-brain move would be to put them in their own cascade layer. That’s what Manu does with his CSS boilerplate:

@layer core, third-party, components, utility;

Then I could put those snippets in the core layer, making sure they could be overwritten by the CSS in any of the other layers:

@layer core {
  figcaption {
    max-inline-size: max-content;
    margin-inline: auto;
  }
  a:focus-visible {
    outline-offset: 0.25em;
    outline-width: 0.25em;
    outline-color: currentColor;
  }
  ul,ol,dl,dt,dd,p,figure,blockquote {
    hanging-punctuation: first last;
    text-wrap: pretty;
  }
}

For now I’m just using :where() but I think I should start using cascade layers.

I also want to start training myself to use the lh value (line-height) for block spacing.
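Something along these lines, so that the gap between blocks is always one line of text (a sketch, not a battle-tested snippet):

p {
  margin-block: 1lh;
}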

And although I’m using the :has() selector, I don’t think I’ve yet trained my brain to reach for it by default.

CSS has sooooo much to offer today—I want to make sure I’m taking full advantage of it.

Codewashing

I have little understanding for people using large language models to generate slop; words and images that nobody asked for.

I have more understanding for people using large language models to generate code. Code isn’t the thing in the same way that words or images are; code is the thing that gets you to the thing.

And if a large language model hallucinates some code, you’ll find out soon enough:

With code you get a powerful form of fact checking for free. Run the code, see if it works.

But I want to push back on one justification I see repeatedly about using large language models to write code. Here’s Craig:

There are many moral and ethical issues with using LLMs, but building software feels like one of the few truly ethically “clean”(er) uses (trained on open source code, etc.)

That’s not how this works. Yes, the large language models are trained on lots of code (most of it open source), but they’re not only trained on that. That’s on top of everything else; all the stolen books, all the unpaid creative work of others.

Even Robin Sloan, who first says:

I think the case of code is especially clear, and, for me, basically settled.

…goes on to acknowledge:

But, again, it’s important to say: the code only works because of Everything. Take that data away, train a model using GitHub alone, and you’ll get a far less useful tool.

When large language models are trained on domain-specific data, it’s always in addition to the mahoosive amount of content they’ve already stolen. It’s that mahoosive amount of content—not the domain-specific data—that enables them to parse your instructions.

(Note that I’m being very deliberate in saying “parse”, not “understand.” Though make no mistake, I’m astonished at how good these tools are at parsing instructions. I say that as someone who tried to write natural language parsers for text-only adventure games back in the 1980s.)

So, sure, go ahead and use large language models to write code. But don’t fool yourself into thinking that it’s somehow ethical.

What I said here applies to code too:

If you’re going to use generative tools powered by large language models, don’t pretend you don’t know how your sausage is made.

OKLCH()

I was at the State Of The Browser event recently, which was great as always.

Manu gave a great talk about colour in CSS. A lot of it focused on OKLCH. I was already convinced of the benefits of this colour space after seeing a terrific talk by Anton Lovchikov a while back.

After Manu’s talk, someone mentioned that even though OKLCH is well supported in browsers now, it’s a shame that it isn’t (yet) in design tools like Figma. So designers are still handing over mock-ups with hex values.

I get the frustration, but in my experience it’s not that big a deal in practice. Here’s why: oklch() isn’t just a way of defining colours with lightness, chroma, and hue in CSS. It’s also a function. You can use the magical from keyword in this function to convert hex colours to l, c, and h:

--page-colour: oklch(from #49498D l c h);

So even if you’re being handed hex colour values, you can still use OKLCH in your CSS. Once you’re doing that, you can use all of the good stuff that comes with having those three values separated out—something that was theoretically possible with hsl, but problematic in practice.

Let’s say you want to encode something into your CSS like this: “if the user has specified that they prefer higher contrast, the background colour should be four times darker.”

@media (prefers-contrast: more) {
  --page-colour: oklch(from #49498D calc(l / 4) c h);
}

It’s really handy that you can use calc() within oklch()—functions within functions. And I haven’t even touched on the color-mix() function.
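As a taster, here’s color-mix() producing a darker variant of that same page colour; the custom property name is made up:

--page-colour-darker: color-mix(in oklch, var(--page-colour), black 25%);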

Hmmm …y’know, this is starting to sound an awful lot like functional programming:

Functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values.

Command and control

I’ve been banging on for a while now about how much I’d like a declarative option for the Web Share API. I was thinking that the type attribute on the button element would be a good candidate for this (there’s prior art in the way we extended the type attribute on the input element for HTML5).

I wrote about the reason for a share button type as well as creating a polyfill. I also wrote about how this idea would work for other button types: fullscreen, print, copy to clipboard, that sort of thing.

Since then, I’ve been very interested in the idea of “invokers” being pursued by the Open UI group. Rather than extending the type attribute, they’ve been looking at adding a new attribute. Initially it was called invoketarget (so something like button invoketarget="share").

Things have been rolling along and invoketarget has now become the command attribute (there’s also a new commandfor attribute that you can point to an element with an ID). Here’s a list of potential values for the command attribute on a button element.

Right now they’re focusing on providing declarative options for launching dialogs and other popovers. That’s already shipping.
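For example, a modal dialog no longer needs any JavaScript to open and close; the ID here is arbitrary:

<button commandfor="my-dialog" command="show-modal">Open dialog</button>
<dialog id="my-dialog">
  <button commandfor="my-dialog" command="close">Close</button>
</dialog>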

The next step is to use command and commandfor for controlling audio and video, as well as some form controls. I very much approve! I love the idea of being able to build and style a fully-featured media player without any JavaScript.

I’m hoping that after that we’ll see the command attribute get expanded to cover JavaScript APIs that require a user interaction. These seem like the ideal candidates:

  • button command="share"
  • button command="fullscreen"
  • button command="copy"

There’s also scope for declarative options for navigating the browser’s history stack:

  • button command="back"
  • button command="forward"
  • button command="refresh"

Whatever happens next, I’m very glad to see that so much thinking is being applied to declarative solutions for common interface patterns.

The web on mobile

Here’s a post outlining all the great things you can do in mobile web browsers today: Your App Should Have Been A Website (And Probably Your Game Too):

Today’s browsers are powerhouses. Notifications? Check. Offline mode? Check. Secure payments? Yep, they’ve got that too. And with technologies like WebAssembly and WebGPU, web games are catching up to native-level performance. In some cases, they’re already there.

This is all true. But this post from John Gruber is equally true: One Bit of Anecdata That the Web Is Languishing Vis-à-Vis Native Mobile Apps:

I won’t hold up this one experience as a sign that the web is dying, but it sure seems to be languishing, especially for mobile devices.

As John points out, the problems aren’t technical:

There’s absolutely no reason the mobile web experience shouldn’t be fast, reliable, well-designed, and keep you logged in. If one of the two should suck, it should be the app that sucks and the website that works well. You shouldn’t be expected to carry around a bundle of software from your utility company in your pocket. But it’s the other way around.

He’s right. It makes no sense, but this is the reality.

Ten or fifteen years ago, the gap between the web and native apps on mobile was entirely technical. There were certain things that you just couldn’t do in web browsers. That’s no longer the case now. The web caught up quite a while back.

But the experience of using websites on a mobile device is awful. Never mind the terrible performance penalties incurred by unnecessary frameworks and libraries like React and its ilk, there’s the constant game of whack-a-mole with banners and overlays. What’s just about bearable in a large desktop viewport becomes intolerable on a small screen.

This is not a technical problem. This doesn’t get solved by web standards. This is a cultural problem.

First of all, there’s the business culture. If your business model depends on tracking people or pushing newsletter sign-ups, then it’s inevitable that your website will be shite on mobile.

Mind you, if your business model depends on tracking people, you’re more likely to try to push people to download your native app. Like Cory Doctorow says:

50% of web users are running ad-blockers. 0% of app users are running ad-blockers, because adding a blocker to an app requires that you first remove its encryption, and that’s a felony (Jay Freeman calls this ‘felony contempt of business-model’).

Matt May brings up the same point in his guide, How to grey-rock Meta:

Remove Meta apps from your devices and use only the mobile web versions. Mobile apps have greater access to your personal data, provided the app requests those privileges, and Facebook and Instagram in particular (more so than WhatsApp, another Meta property) request the vast majority of those privileges. This includes precise GPS data on where you are, whether or not you are using the app.

Ironically, it’s the strength of the web—and web browsers—that has led to such shitty mobile web experiences. The pretty decent security model on the web means that sites have to pester you.

Part of the reason why you don’t see the same egregious over-use of pop-ups and overlays in native apps is that they aren’t needed. If you’ve installed the app, you’re already being tracked.

But when I describe the dreadful UX of most websites on mobile as a cultural problem, I don’t just mean business culture.

Us, the people who make websites, designers and developers, we’re responsible for this too.

For all our talk of mobile-first design for the last fifteen years, we never really meant it, did we? Sure, we use media queries and other responsive techniques, but all we’ve really done is make sure that a terrible experience fits on the screen.

As developers, I’m sure we can tell ourselves all sorts of fairy tales about why it’s perfectly justified to make users on mobile networks download React, Tailwind, and megabytes more of third-party code.

As designers, I’m sure we can tell ourselves all sorts of fairy tales about why intrusive pop-ups and overlays are the responsibility of some other department (as though users make any sort of distinction).

Worst of all, we’ve spent the last fifteen years teaching users that if they want a good experience on their mobile device, they should look in an app store, not on the web.

Ask anyone about their experience of using websites on their mobile device. They’ll tell you plenty of stories of how badly it sucks.

It doesn’t matter that the web is the perfect medium for just-in-time delivery of information. It doesn’t matter that web browsers can now do just about everything that native apps can do.

In many ways, I wish this were a technical problem. At least then we could lobby for some technical advancement that would fix this situation.

But this is not a technical problem. This is a people problem. Specifically, the people who make websites.

We fucked up. Badly. And I don’t see any signs that things are going to change anytime soon.

But hey, websites on desktop are just great!

Making the new Salter Cane website

With the release of a new Salter Cane album I figured it was high time to update the design of the band’s website.

Here’s the old version for reference. As you can see, there’s a connection there in some of the design language. Even so, I decided to start completely from scratch.

I opened up a text editor and started writing HTML by hand. Same for the CSS. No templates. No build tools. No pipeline. Nothing. It was a blast!

And lest you think that sounds like a wasteful way of working, I pretty much had the website done in half a day.

Partly that’s because you can do so much with so little in CSS these days. Custom properties for colours, spacing, and fluid typography (thanks to Utopia). Logical properties. View transitions. None of this takes much time at all.

Because I was using custom properties, it was a breeze to add a dark mode with prefers-color-scheme. I think I might like the dark version more than the default.

The final stylesheet is pretty short. I didn’t bother with any resets. Browsers are pretty consistent with their default styles nowadays. As long as you’ve got some sensible settings on your body element, the cascade will take care of a lot.

There’s one little CSS trick I think is pretty clever…

The background image is this image. As you can see, it’s a rectangle that’s wider than it is tall. But the web pages are rectangles that are taller than they are wide.

So how should I position the background image? Centred? Anchored to the top? Anchored to the bottom?

If you open up the website in Chrome (or Safari Technology Preview), you’ll see that the background image is anchored to the top. But if you scroll down you’ll see that the background image is now anchored to the bottom. The background position has changed somehow.

This isn’t just on the home page. On any page, no matter how tall it is, the background image is anchored to the top when the top of the document is in the viewport, and it’s anchored to the bottom when you reach the bottom of the document.

In the past, this kind of thing might’ve been possible with some clever JavaScript that measured the height of the document and updated the background position every time a scroll event is triggered.

But I didn’t need any JavaScript. This is a scroll-driven animation made with just a few lines of CSS.

@keyframes parallax {
    from {
        background-position: top center;
    }
    to {
        background-position: bottom center;
    }
}
@media (prefers-reduced-motion: no-preference) {
    html {
        animation: parallax auto ease;
        animation-timeline: scroll();
    }
}

This works as a nice bit of progressive enhancement: by default the background image stays anchored to the top of the viewport, which is fine.

Once the site was ready, I spent a bit more time sweating some details, like the responsive images on the home page.

But the biggest performance challenge wasn’t something I had direct control over. There’s a Spotify embed on the home page. Ain’t no party like a third party.

I could put loading="lazy" on the iframe but in this case, it’s pretty close to the top of the document so it’s still going to start loading at the same time as some of my first-party assets.

I decided to try a little JavaScript library called “lazysizes”. Normally this would ring alarm bells for me: solving a problem with third-party code by adding …more third-party code. But in this case, it really did the trick. The library is loading asynchronously (so it doesn’t interfere with the more important assets) and only then does it start populating the iframe.
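If I recall the lazysizes API correctly, all it takes is swapping src for data-src and adding a class (do check the library’s documentation rather than taking my word for it):

<iframe data-src="https://open.spotify.com/embed/…" class="lazyload"></iframe>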

This made a huge difference. The core web vitals went from being abysmal to being perfect.

I’m pretty pleased with how the new website turned out.

Progressively enhancing maps

The Session has been online for over 20 years. When you maintain a site for that long, you don’t want to be relying on third parties—it’s only a matter of time until they’re no longer around.

Some third party APIs are unavoidable. The Session has maps for sessions and other events. When people add a new entry, they provide the address but then I need to get the latitude and longitude. So I have to use a third-party geocoding API.

My code is like a lesson in paranoia: I’ve built in the option to switch between multiple geocoding providers. When one of them inevitably starts enshittifying their service, I can quickly move on to another. It’s like having a “go bag” for geocoding.

Things are better on the client side. I’m using other people’s JavaScript libraries—like the brilliant abcjs—but at least I can self-host them.

I’m using Leaflet for embedding maps. It’s a great little library built on top of OpenStreetMap data.

A little while back I linked to a new project called OpenFreeMap. It’s a mapping provider where you even have the option of hosting the tiles yourself!

For now, I’m not self-hosting my map tiles (yet!), but I did want to switch to OpenFreeMap’s tiles. They’re vector-based rather than bitmap, so they’re lovely and crisp.

But there’s an issue.

I can use OpenFreeMap with Leaflet, but to do that I also have to use the MapLibre GL library. But whereas Leaflet is 148K of JavaScript, MapLibre GL is 800K! Yowzers!

That’s mahoosive by the standards of The Session’s performance budget. I’m not sure the loveliness of the vector maps is worth increasing the JavaScript payload by so much.

But this doesn’t have to be an either/or decision. I can use progressive enhancement to get the best of both worlds.

If you land straight on a map page on The Session for the first time, you’ll get the old-fashioned bitmap map tiles. There’s no MapLibre code.

But if you browse around The Session and then arrive on a map page, you’ll get the lovely vector maps.

Here’s what’s happening…

The maps are embedded using an HTML web component called embed-map. The fallback is a static image between the opening and closing tags. The web component then loads up Leaflet.

Here’s where the enhancement comes in. When the web component is initiated (in its connectedCallback method), it uses the Cache API to see if MapLibre has been stored in a cache. If it has, it loads that library:

caches.match('/path/to/maplibre-gl.js')
.then( responseFromCache => {
    if (responseFromCache) {
        // load maplibre-gl.js
    }
});

Then when it comes to drawing the map, I can check for the existence of the maplibreGL object. If it exists, I can use OpenFreeMap tiles. Otherwise I use the old Leaflet tiles.

But how does the MapLibre library end up in a cache? That’s thanks to the service worker script.

During the service worker’s install event, I give it a list of static files to cache: CSS, JavaScript, and so on. That includes third-party libraries like abcjs, Leaflet, and now MapLibre GL.
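That part of the service worker script looks something like this; the cache name and file paths are invented for illustration:

const staticCacheName = 'static-files';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(staticCacheName).then((cache) => {
      return cache.addAll([
        '/css/global.css',
        '/js/global.js',
        '/js/abcjs.js',
        '/js/leaflet.js',
        '/js/maplibre-gl.js'
      ]);
    })
  );
});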

Crucially this caching happens off the main thread. It happens in the background and it won’t slow down the loading of whatever page is currently being displayed.

That’s it. If the service worker installation works as planned, you’ll get the nice new vector maps. If anything goes wrong, you’ll get the older version.

By the way, it’s always a good idea to use a service worker and the Cache API to store your JavaScript files. As you know, JavaScript is unduly expensive for performance: not only does the JavaScript file have to be downloaded, it then has to be parsed and compiled. But JavaScript stored in a cache during a service worker’s install event is already parsed and compiled.

Going Offline is online …for free

I wrote a book about service workers. It’s called Going Offline. It was first published by A Book Apart in 2018. Now it’s available to read for free online.

If you want you can read the book as a PDF, an ePub, or .mobi, but I recommend reading it in your browser.

Needless to say the web book works offline. Once you go to goingoffline.adactio.com you can add it to the homescreen of your mobile device or add it to the dock on your Mac. After that, you won’t need a network connection.

The book is free to read. Properly free. Not the kind of “free” where you have to supply an email address first. Why would I make you go to the trouble of generating a burner email account?

The site has no analytics. No tracking. No third-party scripts of any kind whatsoever. By complete coincidence, the site is fast. Funny that.

For the styling of this web book, I tweaked the stylesheet I used for HTML5 For Web Designers. I updated it a little bit to use logical properties, some fluid typography and view transitions.

In the process of converting the book to HTML, I got reacquainted with what I had written almost seven years ago. It was kind of fun to approach it afresh. I think it stands up pretty darn well.

Ethan wrote about his feelings when he put two of his books online, illustrated by that amazing photo that always gives me the feels:

I’ll miss those days, but I’m just glad these books are still here. They’re just different than they used to be. I suppose I am too.

Anyway, if you’re interested in making your website work offline, have a read of Going Offline. Enjoy!


Making the website for Research By The Sea

UX London isn’t the only event from Clearleft coming your way in 2025. There’s a brand new spin-off event dedicated to user research happening in February. It’s called Research By The Sea.

I’m not curating this one, though I will be hosting it. The curation is being carried out most excellently by Benjamin, who has written more about how he’s doing it:

We’ve invited some of the best thinkers and doers from the research space to explore how researchers might respond to today’s most gnarly and pressing problems. They’ll challenge current perspectives, tools, practices and thinking styles, and provide practical steps for getting started today to shape a better tomorrow.

If that sounds like your cup of tea, you should put February 27th 2025 in your calendar and grab yourself a ticket.

Although I’m not involved in curating the line-up for the event, I offered Benjamin my swor… my web dev skillz. I made the website for Research By The Sea and I really enjoyed doing it!

These one-day events are a great chance to have a bit of fun with the website. I wrote about how enjoyable it was making the website for this year’s Patterns Day:

I felt like I was truly designing in the browser. Adjusting spacing, playing around with layout, and all that squishy stuff. Some of the best results came from happy accidents—the way that certain elements behaved at certain screen sizes would lead me into little experiments that yielded interesting results.

I took the same approach with Research By The Sea. I had a design language to work with, based on UX London, but with more of a playful, brighter feel. The idea was that the website (and the event) should feel connected to UX London, while also being its own thing.

I kept the typography of the UX London site more or less intact. The page structure is also very similar. That was my foundation. From there I was free to explore some other directions.

I took the opportunity to explore some new features of CSS. But before I talk about the newer stuff, I want to mention the bits of CSS that I don’t consider new. These are the things that are just the way things are done ‘round here.

Custom properties. They’ve been around for years now, and they’re such a life-saver, especially on a project like this where I’m messing around with type, colour, and spacing. Even on a small site like this, it’s still worth having a section at the start where you define your custom properties.

Logical properties. Again, they’ve been around for years. At this point I’ve trained my brain to use them by default. Now when I see a left, right, width or height in a style sheet, it looks like a bug to me.

Fluid type. It’s kind of a natural extension of responsive design to me. If a website’s typography doesn’t adjust to my viewport, it feels slightly broken. On this project I used Utopia because I wanted different type scales as the viewport increased. On other projects I’ve just used one clamp declaration on the body element, which can also get the job done.

Okay, so those are the things that feel standard to me. So what could I play around with that was new?

View transitions. So easy! Just point to an element on two different pages and say “Hey, do a magic move!” You can see this in action with the logo as you move from the homepage to, say, the venue page. I’ve also added view transitions to the speaker headshots on the homepage so that when you click through to their full page, you get a nice swoosh.
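Under the hood it really is that terse: opt in to cross-document view transitions, then give the element the same view-transition-name on both pages. The logo selector here is hypothetical:

@view-transition {
  navigation: auto;
}
.logo {
  view-transition-name: logo;
}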

Unless, like me, you’re using Firefox. In that case, you won’t see any view transitions. That’s okay. They are very much an enhancement. Speaking of which…

Scroll-driven animations. You’ll only get these in Chromium browsers right now, but again, they’re an enhancement. I’ve got multiple background images—a bunch of cute SVG shapes. I’m using scroll-driven animations to change the background positions and sizes as you scroll. It’s a bit silly, but hopefully kind of cute.

You might be wondering how I calculated the movements of each background image. Good question. I basically just messed around with the values. I had fun! But imagine what an actually-skilled interaction designer could do.

That brings up an interesting observation about both view transitions and scroll-driven animations: Figma will not help you here. You need to be in a web browser with dev tools popped open. You’ve got to roll up your sleeves and get your hands into the machine. I know that sounds intimidating, but it’s also surprisingly enjoyable and empowering.

Oh, and I made sure to wrap both the view transitions and the scroll-driven animations in a prefers-reduced-motion: no-preference @media query.

I’m pleased with how the website turned out. It feels fun. More importantly, it feels fast. There is zero JavaScript. That’s the main reason why it’s very, very performant (and accessible).

Smooth transitions across pages; smooth animations as you scroll: it’s great what you can do with just HTML and CSS.

content-visibility in Safari

Earlier this year I wrote about some performance improvements to The Session using the content-visibility property in CSS.

If you say content-visibility: auto you’re telling the browser not to bother calculating the layout and paint for an element until it needs to. But you need to combine it with the contain-intrinsic-block-size property so that the browser knows how much space to leave for the element.
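In other words, something along these lines; the selector and placeholder size are illustrative:

.result {
  content-visibility: auto;
  contain-intrinsic-block-size: auto 20rem;
}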

I mentioned the browser support:

Right now content-visibility is only supported in Chrome and Edge. But that’s okay. This is a progressive enhancement. Adding this CSS has no detrimental effect on the browsers that don’t understand it (and when they do ship support for it, it’ll just start working).

Well, that’s happened! Safari 18 supports content-visibility. I didn’t have to do a thing and it just started working.

But …I think I’ve discovered a little bug in Safari’s implementation.

(I say I think it’s a bug with the browser because, like Jim, I’ve made the mistake in the past of thinking I had discovered a browser bug when in fact it was something caused by a browser extension. And when I say “in the past”, I mean yesterday.)

So here’s the issue: if you apply content-visibility: auto to an element that contains an SVG, and that SVG contains a text element, then Safari never paints that text to the screen.

To see an example, take a look at the fourth setting of Cooley’s reel on The Session archive. There’s a text element with the word “slide” (actually the text is inside a tspan element inside a text element). On Safari, that text never shows up.

I’m using a link to the archive of The Session I created recently rather than the live site because on the live site I’ve removed the content-visibility declaration for Safari until this bug gets resolved.

I’ve also created a reduced test case on Codepen. The only HTML is the element containing the SVGs. The only CSS—apart from the content-visibility stuff—is just a little declaration to push the content below the viewport so you have to scroll it into view (which is when the bug happens).

I’ve filed a bug report. I know it’s a fairly niche situation, but there are some other issues with Safari’s implementation of content-visibility so it’s possible that they’re all related.

Train coding

When I went up to London for the State of the Browser conference last month, I shared the train journey with Remy.

I always like getting together with Remy. We usually end up discussing sci-fi books we’re reading, commiserating with one another about conference-organising, discussing the minutiae of browser APIs, or talking about the big-picture vision of the World Wide Web.

On this train ride we ended up talking about the march of time and how death comes for us all …and our websites.

Take The Session, for example. It’s been running for two and a half decades in one form or another. I plan to keep it running for many more decades to come. But I’m the weak link in that plan.

If I get hit by a bus tomorrow, The Session will keep running. The hosting is paid up for a while. The domain name is registered for as long as possible. But inevitably things will need to be updated. Even if no new features get added to the site, someone’s got to install updates to keep the underlying software safe and secure.

Remy and I discussed the long-term prospects for widening out the admin work to more people. But we also discussed smaller steps I could take in the meantime.

Like, there’s the actual content of the website. Now, I currently share exports from the database every week in JSON, CSV, and SQLite. That’s good. But you need to be a tech nerd to do anything useful with that data.

The more I talked about it with Remy, the more I realised that HTML would be the most useful format for the most people.

There’s a cute acronym in the world of digital preservation: LOCKSS. Lots Of Copies Keep Stuff Safe. If there were multiple copies of The Session’s content out there in the world, then I’d have a nice little insurance policy against some future catastrophe befalling the live site.

With the seed of the idea planted in my head, I waited until I had some time to dive in and see if this was doable.

Fortunately I had plenty of opportunity to do just that on some other train rides. When I was in Spain and France recently, I spent hours and hours on trains. For some reason, I find train journeys very conducive to coding, especially if you don’t need an internet connection.

By the time I was back home, the code was done. Here’s the result:

The Session archive: a static copy of the content on thesession.org.

If you want to grab a copy for yourself, go ahead and download this .zip file. Be warned that it’s quite large! The .zip file is over two gigabytes in size and the unzipped collection of web pages is almost ten gigabytes. I plan to update the content every week or so.

I’ve put a copy up on Netlify and I’m serving it from the subdomain archive.thesession.org if you want to check out the results without downloading the whole thing.

Because this is a collection of static files, there’s no search. But you can use your browser’s “Find in Page” feature to search within the (very long) index pages of each section of the site.

You don’t need a web server to click around between the pages: they should all work straight from your file system. Double-clicking any HTML file should give you a starting point.

I wanted to reduce the dependencies on each page to as close to zero as I could. All the CSS is embedded in the page. Likewise with most of the JavaScript (you’ll still need an internet connection to get audio playback and dynamic maps). This keeps the individual pages nice and self-contained. That means they can be shared around (as an email attachment, for example).

I’ve shared this project with the community on The Session and people are into it. If nothing else, it could be handy to have an offline copy of the site’s content on your hard drive for those situations when you can’t access the site itself.