mark nottingham
Hi, I’m Mark Nottingham. I write about the Web, protocol design, HTTP, Internet governance, and more. This is a personal blog, it does not represent anyone else. Find out more.
On RFC8674, the safe preference for HTTP
Thursday, 5 December 2019
It’s become common for Web sites – particularly those that host third-party or user-generated content – to make a “safe” mode available, where content that might be objectionable is hidden. For example, a parent who wants to steer their child away from the rougher corners of the Internet might go to their search engine and put it in “safe” mode.
There are, of course, other ways to prevent access to content; for example, DNS filtering, or installing a root CA in the browser. However, these techniques are much more intrusive, and less granular; if a site hosts undesirable content, a DNS filter has to block the whole site. A root CA gives its controller access to view and change everything you do on the Web.
So, a safe mode for sites is generally a positive development, in that it allows you to control your experience of the Internet. However, it’s frustrating, because you have to find that setting and change it on each site – and there are many. And, if you clear cookies or use a different browser, you have to go through the whole process again.
However, if the Web browser can be configured to tell sites that the user is requesting safe content, it doesn’t require setting all of those cookies; it’s a single choice when you configure your browser (or operating system). It gives users more honest control of their experience of the Internet, rather than requiring them to jump through hoops on an unbounded number of Web sites.
This was roughly my thinking in 2013, when I wrote a draft for a safe preference in HTTP. In a nutshell, it’s a one-bit request header that indicates a preference for content that is “safe” – where the site defines the meaning of that word.
By its nature this requires the cooperation of sites; it can’t guarantee a safe experience, but it can make it easier to include Web sites in the set of tools you use to get there.
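Concretely, RFC 8674 builds on the Prefer request header from RFC 7240: a client that wants “safe” content sends `Prefer: safe` on its requests. A minimal sketch of what a cooperating server’s check might look like follows – the function name and the parsing shortcuts are mine, not from the RFC:

```python
from typing import Optional


def prefers_safe(prefer_header: Optional[str]) -> bool:
    """Return True if an RFC 7240 Prefer header includes the 'safe' token.

    RFC 8674 defines 'safe' as a preference that takes no value, so a
    client requesting safe content simply sends:  Prefer: safe
    """
    if not prefer_header:
        return False
    # Prefer is a comma-separated list of preferences; each preference
    # may carry parameters after ';'. We only need the leading token.
    for pref in prefer_header.split(","):
        token = pref.split(";", 1)[0].strip()
        if token.lower() == "safe":
            return True
    return False
```

A server that honours the preference can also echo `Preference-Applied: safe` in its response, as RFC 8674 describes, so the client knows the filtered experience is in effect.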
When I wrote the original draft, it was a thought experiment; I didn’t expect it to go much further. However, some folks at Microsoft who worked on their family safety products expressed interest. Eventually it was implemented in Internet Explorer and Bing. Soon afterwards, there was a discussion at Mozilla and they implemented support for it too.
Based upon that interest (as well as support elsewhere), I asked for it to be standardised as an Individual Submission to the IETF. Individual Submissions are standardised without a Working Group; there’s an IETF-wide Last Call, and then the IESG makes a decision.
That’s where things fell apart, and why I’m writing about this more than six years later. While the IETF Last Call went reasonably smoothly, some in the IESG pushed back on the draft; when I tried to address their concerns, I got silence in return. I could speculate as to why, but I don’t have enough information to be sure. However, the mechanism is deployed on the Internet, and it needs to be documented and entered into the registry.
So, after waiting for a while with no further progress, I requested that the document be published on the Independent Stream – which doesn’t require IETF consensus or IESG approval, so it isn’t a “real” standard – and the result is RFC 8674, many years later.
The Safe preference is not a perfect mechanism, by any measure; there are several tradeoffs in its design.
For example, it broadcasts one bit of identifying information (whether you’re requesting safe or not) to every HTTPS Web site you visit. That could, in a small way, contribute to fingerprinting you – although one bit is only useful in concert with many other bits.
The desire to limit fingerprinting led to that one-bit design, but it precludes any granularity in what “safe” means. Some sites interpret it as content appropriate for young children (a very wide net); for others, it might mean only excluding nudity, or some other more specific definition.
That, in turn, leads to some user frustration, as people find that they can’t access content that fits their own definition of “safe”. As I understand it, YouTube implemented the safe preference for a while, but backed out because of the support load from people complaining when they couldn’t access videos they wanted. I’d speculate that with a better definition of “safe” for their site, and better user support, they could make it work, but it’s hard to say for sure unless they’re willing to put more effort into it.
If more Web sites honour the safe preference, it will increase in utility. Even if it doesn’t get broader adoption, I think it’s a step in the right direction – we need to find more ways to let users control their experience of the Internet, without resorting to sledgehammers like man-in-the-middle attacks.
I also learned a lot in the process, and the world has changed considerably in the intervening time; in particular, the online safety community has become much more visible (and political). I think there’s value in finding common ground, improving dialogue and looking for mutually acceptable solutions for both that community and the Internet technical community; otherwise, we’re going to have mismatched expectations and friction, like we’ve seen with DNS over HTTPS recently.