There are a lot of things involved in making performance more predictable, but one immediate thing you can do is do the work up front to set expectations about which metrics you expect the optimization to move, and how you’ll measure the impact.
For example, I have a client who is experimenting with the new Speculation Rules API, specifically using it to prefetch product detail pages (PDPs) when someone is on a category or search page.
In addition to the technical details for how to implement it, our ticket includes a “Measuring the Impact” section that looks something like this:
How are we going to measure it?
We will measure the impact using our RUM data. We’ll include an inline JavaScript snippet to determine if the page was prefetched or not, and add that as custom data so we have two very clear buckets for comparison.
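The detection itself can be as simple as something like this. This is a sketch for the prerender case; `reportCustomData` is a stand-in for whatever custom-data hook your RUM product actually provides.

```js
// Stand-in for the RUM product's custom-data API.
function reportCustomData(key, value) {
  console.log('RUM custom data:', key, value);
}

// activationStart > 0 means the page was prerendered and then activated;
// document.prerendering means we're still in the prerendering phase.
const nav = performance.getEntriesByType('navigation')[0];
const speculated = document.prerendering || (nav && nav.activationStart > 0);

// Two clean buckets for comparison in the RUM data.
reportCustomData('speculation-rules', speculated ? 'hit' : 'miss');
```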
We’ll use the RUM product’s compare functionality to compare visits to PDPs where the pages were prerendered to visits where the PDPs were not prerendered.
Speculation Rules are only supported in Chromium-based browsers, so we’ll zero in on Chrome and Edge sessions only.
What do we expect to change?
We expect to see a direct impact on TTFB and that will be the primary metric we’ll look at.
Paint metrics such as First Contentful Paint and Largest Contentful Paint should also see an improvement.
It’s nothing too fancy or complex, but including this little section for every optimization accomplishes a few things.
It makes sure the entire team knows which data source will be our source of truth. Often teams have different tools, including locally running tools, with very different results, so we want to make sure everyone is on the same page. In this case, it also helps people to know how to narrow that data down—this isn’t an optimization that will impact Safari traffic, for example, so to get the clearest picture of the impact, we need to exclude that from our analysis.
It makes it clear which metric(s) we expect to move. There is no shortage of performance metrics, so having a clear list of which ones should be impacted here keeps everyone looking at the right things.
It ensures the team has thought about what we expect from this optimization up front. A lot of times, organizations can chase optimizations because they’ve read about them or saw a talk about them. By making sure that any optimization includes information about which metrics to measure and how, it ensures at least some thought has been given to how that optimization may or may not impact their specific situation.
It makes it clear which metric is our primary one to look at, in this case TTFB. Harry has written before about the importance of measuring the metrics you impact, not the ones you influence. That framing of directly impacted and indirectly impacted metrics is incredibly useful for ensuring we’re getting a clear picture of the impact of our optimizations. If we were to focus on Largest Contentful Paint for this optimization, for example, we would allow a lot of noise to interfere with our measurement—prefetching pages will directly impact TTFB, but there are a lot of other potential bottlenecks between that milestone and when LCP fires.
By including a section like this, we’ve provided a lot of clarity about the optimization and reduced the likelihood for any wins to get lost by looking at the wrong metrics or data sources.
Over time, doing this with each optimization opportunity helps to build both the team’s knowledge and the overall confidence in understanding what might move the needle, and what might not.
I get a fair amount of folks asking me if I know of any companies hiring for roles that involve web performance work. I also get a fair amount of companies who contact me asking me if I know of anyone they can bring on board for roles that require some level of performance knowledge.
So, I figured, why not make it a bit easier for those companies and people to connect.
In the spirit of “what is the least bit you can do to validate there’s a need” (while also holding myself accountable), I shared on LinkedIn in April that I was going to build it, and then launched a one-page form for folks to sign up with early interest.
I wasn’t expecting it to get a ton of attention yet, but that one share turned into ~600 early-bird signups and a lot more initial (self-inflicted) pressure than I had anticipated!
On the one hand, launching a job board focused on a niche like performance at a time when the headlines are full of layoffs seems like a questionable strategy.
On the other hand, it seems like the perfect time.
There are so many talented folks right now who are available, or are quietly looking around. And there are more companies hiring than the headlines may lead you to believe.
They just need to connect, and hopefully perfwork can help with that. I have no ambitions that this will change the world, but I do hope it will help a few folks a little and that’s good enough for me.
Right now, the site works best for full-time or part-time roles with companies. I absolutely want this to be a useful resource for companies looking for contract and consulting help as well, and while technically those sorts of roles can be posted today, there’s a lot more to do to be truly helpful in that way. The immediate need for a way to find full-time work seemed more pressing.
You don’t need an account to look for jobs (and you never will), though I do plan on adding some features in the future that will work best if you have an account.
There’s an RSS feed to help you keep tabs because RSS is awesome and works wonderfully as a connective tissue—you can use RSS as the plumbing for a ton of other distribution channels.
For the super tech-minded folks reading this, the site is built with web components using Enhance, which I absolutely love—I’ll probably be writing more about some of that stuff at a later point in time.
I plan on being pretty transparent about perfwork and how I’m building it. I’ve really appreciated that level of transparency from others with things they’ve built.
So in that vein, I do have a public “roadmap” of sorts started. It’s using Notion, because I like it and that felt more approachable to non-technical folks who are involved in the hiring. I have comments enabled on it so that folks can leave any feedback they might have. And please do! I’m all ears on ways to make this a better resource for folks.
The bulk of my time is spent with my performance consulting clients, so I’m not expecting to crank out the roadmap features at a blistering pace or anything like that, but I should be able to have some steady progress for sure.
If you’re looking for your next role with a performance twist, or if you’re a quality company looking for your next great hire, I hope perfwork is helpful.
I saw it again the other day as someone shared it, with a cautionary reminder that decisions about technological products cannot be a democracy. The image is comical, and I think all of us can think of products that have gone a little too far in this direction.
But the thing that bothers me each time I see the comic is that it pushes the problematic “single visionary” narrative.
At best, it’s a dangerous and risky mindset. At worst, it can be quite toxic.
The genius myth
We have a tendency to want to look for the single genius—the one individual who is a visionary, who knows what users need better than they do themselves, who knows what a product should or shouldn’t do. They don’t need to talk to users, they don’t need to confer with others. They have a powerful vision in their mind of what needs to be done, and they have their team make it so.
It’s a damaging fairytale that results in a bunch of people who think they’re that person. They think they know best. They don’t need to talk to users. They don’t need to look at the data. They don’t need the people around them to come up with the ideas. They just need people to listen to what they have to say and execute on the decision they already know to be correct.
99% of the time, those people are just flat-out wrong.
Maybe they have a small series of successful decisions that encourages them for a bit, but inevitably things fall off the rails. In their wake, they leave a trail of destruction. The people around them are discouraged and demoralized because they’ve never been empowered to make decisions themselves. There’s no established culture of exploration and learning. So when things start to go amiss, the framework isn’t there to recover.
The idea that product decisions can or should be made by a single visionary mind is the root of so many failed attempts to create the next great thing.
Doing better
So if the visionary is a rarity, how do you make product decisions that will end up with a product that is useful and valuable and not overrun with every random idea?
You do, in fact, start with a vision, but not about execution—it has to start with something more open than that. You start by figuring out what it is that you’re setting out to build, and for whom. What are the principles you want your product to reflect?
These principles not only help you figure out what you want to build, but they help you figure out what not to build as well.
Principles like this are important because it means you don’t have to sit there and put your fingers in your ears to avoid the “overly complex iPhone” scenario. Instead, you actively listen and explore so that you can discover new ways of aligning with them.
With these principles in mind, the next step is to have a mindset of humility and curiosity. In my opinion, these may be the two single most important characteristics for success in developing products.
Never assume you know the situation, do the work to learn.
Talk to your users, regularly. The more regularly the better. I try to have at least one call with a user a week, more if there’s a specific question I’m trying to answer. However frequently you do it, the point is to not wait until you have a question you need answered, but to make it continuous. These calls are for listening. Find out the problems folks are trying to solve, the way they’re using your product, the gaps they still have.
Look at the data. Measure what you can about how your product gets used. When you come up with a new feature, think about what you’re trying to accomplish with it, and how you might measure its effectiveness. Spend time exploring the data to see what you can learn about how people use the things you build.
Always make a point to challenge your assumptions. If you’ve got a hypothesis in mind, that’s great. But then go look for the evidence to either support it, or to prove it wrong.
It can be tempting to only look at data that will build your case, but that’s really just the same problem as the “visionary” narrative. One thing I like to do before diving into the data is to make a list of questions that I want to answer. I think about the questions that would support my hypothesis. But then I also think about the questions that someone might explore that could disprove my hypothesis. My goal is to have an equal number of both.
Most importantly, and in sharp contrast to “don’t make decisions like a democracy”, involve those around you. Presumably, you’ve got smart people in your organization who care in some way about what they’re working on. Giving them marching orders and expecting them to just follow is such a waste of that talent.
Instead, arm them with the vision and guiding principles, and then solicit their feedback constantly. The idea is not to push a solution on them, but to instead give them the problem and see what sort of solutions they come up with. Make sure you’re giving them ample time to take a step back and think big picture.
If you can, involve them in the user conversations and data analysis. If you can’t, at least share those with them on a regular cadence. They’ll feel more involved and in no time, you’ll have a robust set of great ideas flowing.
The ultimate goal is to create an environment where everyone feels like they can provide input and ideas on the future direction of what you’re all building. Simply giving people the space and respect to contribute will do wonders for your product.
As Jeena James—one of my favorite people I’ve ever worked with (and a great example of humility and curiosity in leadership)—likes to say: “It’s incredible what can be accomplished when no one cares who gets the credit.”
That’s what you’re after. That’s how products and companies succeed. Not by a single, all-knowing visionary, but by the collective input of a group of people working on the product, informed by data and continuous user conversations, and framed by a series of principles that you want your product to reflect.
We should be much more afraid of the “single visionary” narrative than we should be of making our product decisions more democratic.
Their Time to First Byte, First Contentful Paint and Largest Contentful Paint are all overwhelmingly in the green. Their Cumulative Layout Shift and Interaction to Next Paint have a little room for improvement, but even those are really solid. All the usual suspects look great.
Looking at the metrics alone, you would think their performance was rock-solid and there wasn’t much to do. But in reality, they get frequent user complaints about it.
When we rolled up our sleeves and dug into those complaints a bit, a theme emerged.
These pages have a lot of content. As you scroll down into the page, that content often contains images and other forms of media and those are very large and unoptimized, resulting in them loading very slowly. (There’s a little more involved, but that’s the basic gist.)
Last week, at Smashing Conference, Carie Fisher gave a wonderful talk about accessibility. There was one phrase that she repeated a few times that I absolutely loved: “Accessibility isn’t about conformance”.
Neither is performance.
There’s a tendency at times for organizations to treat performance as a checklist of sorts, particularly as we’ve seen the Core Web Vitals metrics bring more attention to performance than ever before. You try to tick the box on those metrics to get them green, then call it a day. (This organization, to their great credit, did not do that.)
But none of that matters if those metrics aren’t painting a complete picture of how users interact with our sites.
Performance, like accessibility, is not about conformance.
It’s not about a checklist.
It’s not even about simply making things fast.
It’s about providing a better experience for the people using our sites and applications to make sure they can efficiently accomplish their goals. Doing that requires that we pay close attention to what those goals are, how they are trying to achieve them, and then making sure that the way we measure performance matches up.
What you might not realize is that the same logic doesn’t apply yet to First Contentful Paint.
I put together a super simple demo based on a real-world approach to lazy-loading images, but with a couple small changes to make it easier to see what’s happening.
The page first loads a placeholder image—in the real-world, I’ve usually seen this as plain white, but to make it more obvious, it’s an unmissable bright pink. Then the actual image is loaded to replace it (I added a 1 second delay to make it more obvious).
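Stripped down, the demo amounts to something like this (file names here are made up for illustration):

```html
<!-- Bright pink placeholder, swapped for the real image after a 1 second delay -->
<img id="hero" src="placeholder-pink.png" width="800" height="500" alt="">
<script>
  setTimeout(function () {
    document.getElementById('hero').src = 'actual-image.jpg';
  }, 1000);
</script>
```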
In this test result, the First Contentful Paint (FCP) fires at 99ms, and the Largest Contentful Paint (LCP) fires at 1.2s.
In Chrome DevTools, we can see the FCP metric (triggered by the placeholder image) firing much earlier than the LCP metric (triggered by the actual image that is lazy-loaded).
Here’s what the page looks like at both stages. On the left, is what it looks like at FCP—our pink placeholder is visible. On the right, is what the page looks like at LCP—the actual image has replaced the placeholder.
On the left, the placeholder image triggers the FCP metric. But because the image has a low bpp (bits per pixel), the LCP metric isn’t triggered until the actual image loads (right).
Dropping the placeholder image onto the bpp-calculator I threw together, we can see that the bpp is…well it’s tiny. I round it, but the actual number looks like it’s about .003.
The bpp for the placeholder image is tiny, well below the 0.05 threshold.
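The calculation itself is simple: the image’s encoded size in bits divided by the number of pixels it’s displayed at. Here’s a rough sketch, with illustrative numbers in the same ballpark as the placeholder:

```js
// bits per pixel = (encoded file size in bits) / (displayed width * height)
const fileSizeBytes = 300; // a tiny, heavily-compressed placeholder
const width = 800;
const height = 500;

const bpp = (fileSizeBytes * 8) / (width * height);
console.log(bpp.toFixed(3)); // "0.006", well under the 0.05 threshold Chrome uses
```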
Since that’s well below the 0.05 bpp threshold that LCP now looks at, it doesn’t count as an LCP element. So even though it’s the same size as the actual image that comes in later, Chrome ignores it for its LCP measurement. (Funny enough—I actually had to compress the image to get it below the threshold. When I exported directly from Sketch, the size of the file was large enough that the bpp ended up exceeding the threshold, so the placeholder image was triggering the LCP metric).
FCP, however, doesn’t care. The definition of “contentful” there doesn’t factor in bpp, so FCP fires when that placeholder image loads.
SVG gaps
That’s not the only situation where we have a gap between what “contentful” means in one scenario versus another.
Andy Davies shared an example where two pages that look absolutely identical report two different LCP elements. In the first one, the chart is an svg element. When it comes to svg elements, LCP only considers a nested <image> as content. So in this case, it counts the h1 element as the LCP.
The chart is loaded as an svg element, which LCP does not report on. So instead, it reports the much smaller h1 element.
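I don’t have Andy’s exact markup, but the structure of that first page boils down to something like this (illustrative only):

```html
<!-- The chart is an inline svg with no nested <image>, so LCP ignores it
     entirely and falls back to the much smaller h1. -->
<h1>Chart example</h1>
<svg viewBox="0 0 830 446" width="830" height="446" role="img">
  <rect x="40" y="120" width="60" height="300" fill="#0b7285"></rect>
  <rect x="140" y="60" width="60" height="360" fill="#0b7285"></rect>
  <!-- ...more chart shapes... -->
</svg>
```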
On his other page, it’s the same chart, but now it’s loaded using an img element.
<img src="images/chart.svg" width="830" height="446">
As a result, Chrome reports the LCP element as the chart—not the h1 element.
Using an img element to load the chart results in the LCP metric being attributed to the image now, not the much smaller h1 element.
Again, it’s worth re-emphasizing that in both situations FCP will consider the chart—whether it’s embedded or linked to externally.
We can see this more clearly if we remove the h1 altogether from the first example. Chrome reports the chart as the FCP element, but doesn’t report an LCP at all—since the embedded SVG is not considered “contentful” in the context of LCP, nothing ever triggers the metric.
With the h1 element removed, LCP won’t fire at all because it doesn’t count the SVG element as content.
Opacity too….
Another situation where there’s a gap comes into play when an element has an initial opacity of 0 and then gets animated into place.
On the Praesens site, all the text animates into place. As a result, FCP gets reported, but not LCP.
All the text starts with an opacity of zero and then animates into place, so while it triggers FCP, it never triggers LCP.
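The pattern that triggers this looks roughly like the following (not their actual code, just the general shape of it):

```html
<style>
  /* Text starts fully transparent and fades in. The paint still counts for
     FCP, but content rendered at opacity: 0 is ignored for LCP. */
  .headline {
    opacity: 0;
    animation: fade-in 0.6s ease-out forwards;
  }
  @keyframes fade-in {
    to { opacity: 1; }
  }
</style>
<h1 class="headline">Some animated headline</h1>
```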
To “solve” the issue and get LCP to report, they actually ended up adding a div to the page that has some text that matches the background color (so it is never seen).
<div class="lcp" aria-hidden="true">This site performs!</div>
Ok. What gives?
The reason why “contentful” has very different meaning in the context of LCP vs FCP is because while the two metrics sound very similar, they’re actually built on two different underlying specifications.
First Contentful Paint is built on top of the Paint Timing API, which has one definition of “contentful”. Largest Contentful Paint, however, is built on top of the Element Timing API, which actually has no definition of “contentful”, but does have a list of elements it will expose timing for.
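You can see the two different plumbing layers if you observe the metrics yourself: FCP arrives as a generic paint entry, while LCP has its own entry type that leans on Element Timing’s notion of which elements get reported.

```js
// FCP comes through the Paint Timing API...
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP:', entry.startTime);
    }
  }
}).observe({ type: 'paint', buffered: true });

// ...while LCP comes through its own entry type, with each new candidate
// reported as it takes over as the largest element.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate:', entry.element, entry.startTime);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```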
That feels…not ideal. It’s certainly a bit confusing and leads to situations where folks are going to be scratching their heads trying to sort out why their measurements may look off—or be missing entirely.
I assume solving this requires either making both metrics use the same underlying API, or abstracting the definition of “contentful” so that both the Paint Timing API and Element Timing API have a definition, and that definition matches across both.
Another potential solution could be to rename Largest Contentful Paint. Ironically, after all the fine-tuning Chrome has done, Largest Meaningful Paint feels most accurate based on what they’re trying to accomplish, but of course that would bring confusion with First Meaningful Paint (may it rest in peace).
I’m also really curious to see what happens when other browsers start to support LCP. Currently, it’s Chromium-based browsers only, though I do know that Firefox is working on it (hopefully that means Element Timing support is coming to Firefox too!). It’ll be interesting to see how much Firefox decides to match Chrome’s heuristics around “contentful” in the LCP metric. It almost feels like they’d have to in order to avoid confusion when folks compare across both browsers, but that’s just speculation on my part.
Until and unless something happens to align the definition, it will be important for anyone measuring both FCP and LCP to remember that, in this case, contentful in one doesn’t equal contentful in the other, and that may lead to some odd disconnects.
In the discussion that followed, the Golden Rule of Performance (popularized by Steve Souders) was brought up:
80-90% of the end-user response time is spent on the frontend.
In the thread, Steve Souders suggested that someone should revisit the Golden Rule to see if it still holds for today.
I was curious, so I figured I would oblige.
Revisiting the golden rule
Way back in 2006, Tenni Theurer first wrote about the 80 / 20 rule as it applied to web performance. The Yahoo! team did some digging on 8 popular sites (remember MySpace?) to see what percentage of the time it took for a page to load was comprised of backend work (Time to First Byte) versus frontend work (pretty much everything else).
What they found was that the vast majority of the time was spent on the frontend for all 8 sites:
| Site | Time Retrieving HTML | Time Elsewhere |
|---|---|---|
| Yahoo! | 10% | 90% |
| Google | 25% | 75% |
| MySpace | 9% | 91% |
| MSN | 5% | 95% |
| ebay | 5% | 95% |
| Amazon | 38% | 62% |
| YouTube | 9% | 91% |
| CNN | 15% | 85% |
When Steve Souders repeated it in 2012, he found much the same. Among 50,000 websites the HTTP Archive was monitoring at the time, 87% of the time was spent on the frontend and 13% on the backend.
I ran a few queries against the HTTP Archive to see how well the same logic applies today. In keeping with the original analysis, I’m comparing Time to First Byte (how long it takes for the first bytes of the first request to start arriving from the server) to the total load time (when onLoad fires). I broke the percentages down by page rank (based on traffic to the site). The percentages are the median percentages for that group of sites.
First up, the mobile results.
| Site Rank | backend | frontend |
|---|---|---|
| Top 1,000 | 12.7% | 87.3% |
| 1,001 - 10,000 | 12.5% | 87.5% |
| 10,001 - 100,000 | 13.8% | 86.2% |
| 100,001 - 1,000,000 | 14.5% | 85.5% |
Next up, the desktop results.
| Site Rank | backend | frontend |
|---|---|---|
| Top 1,000 | 9.9% | 90.1% |
| 1,001 - 10,000 | 10.8% | 89.2% |
| 10,001 - 100,000 | 12.6% | 87.4% |
| 100,001 - 1,000,000 | 13.3% | 86.7% |
For both desktop and mobile data, the 80 / 20 rule still holds strong. In fact, it’s more like the 85 / 15 rule threatening to become the 90 / 10 rule for the top 1,000 URLs, where they likely have more resources to throw at high-quality CDNs and backend infrastructure.
But….does it really matter?
So…now what?
While it’s interesting to see that the rule holds up (and that, at least at the aggregate level, perf is getting even more frontend dominant), I’m not sure how much it really matters anymore.
When the rule was first brought up, frontend performance wasn’t really a thing that people did. Performance conversations were dominated by discussions around the server-side aspects of performance. That rule itself was part of an appeal to get folks to focus on the frontend aspects as well.
Fast forward to today, and that’s less of a problem. To Rafael’s point, there’s a lot of chatter nowadays around the frontend aspects of performance, particularly after the rise in popularity of Lighthouse and Core Web Vitals.
The split between “backend” and “frontend” performance is increasingly murky as we see popular JavaScript frameworks focusing more on server-side rendering with client-side enhancements after the fact. We also see how server configurations can enable features like early hints, or mess with HTTP/2 prioritization, or impact the level of compression applied to resources, and on and on. It isn’t immediately obvious where backend performance ends and frontend performance starts.
Nor should we be thinking about it that way.
As we increasingly focus our performance conversations on metrics that relate to the user experience, the divide between backend and frontend goes past murky and starts becoming problematic.
Want to improve your Largest Contentful Paint? If you’re focusing solely on the “frontend” aspects of performance, you’re likely missing out. Measuring and optimizing your Time to First Byte is frequently a critical component of improving LCP. I’ve seen a lot of sites suffering from extremely volatile TTFB metrics that vary dramatically based on geography or whether or not there’s a cache hit or miss. Diving deeper into what that TTFB is comprised of and how to optimize it can unlock big performance wins and even the playing field for your users.
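The Navigation Timing API gives you a first cut at that breakdown in your RUM data. A rough sketch (how you actually report the numbers will depend on your tooling):

```js
// Break TTFB down into its component phases for the main document.
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  console.log({
    redirect: nav.redirectEnd - nav.redirectStart,
    dns: nav.domainLookupEnd - nav.domainLookupStart,
    connect: nav.connectEnd - nav.connectStart, // includes TLS negotiation
    request: nav.responseStart - nav.requestStart, // server think time + first byte
    ttfb: nav.responseStart, // relative to the start of the navigation
  });
}
```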
Another great example is the new Early Hints header. That’s technically a server-side optimization (probably…) but one that Shopify and Cloudflare saw have a big impact on frontend, user-focused metrics like First Contentful Paint and Largest Contentful Paint.
Point being: if you’re focusing on one or the other at this point, you’re going to find yourself coming to the limits of what you’re able to do with regards to optimization pretty quickly.
The Golden Rule of performance still applies, and if you’re finding yourself in a situation where your organization still treats performance as primarily a server-side consideration, then there’s definitely value in doing the comparison for your own sites to help folks start to shift their thinking.
Otherwise? The rule probably doesn’t matter much.
Take a holistic approach to performance and pay attention to how it all contributes to the user experience—that’s how you’ll find your biggest, most meaningful performance wins.
There have been a ton of interesting changes in the performance industry the past few years—new metrics, new challenges, new opportunities. It’s been fun to work on them from a tooling perspective, and I’m eager to start helping folks make sense of them.
I enjoy working with organizations who want to not just make their sites faster, but who want to build up the internal culture, knowledge, tooling and processes that will help them stay that way. Regressing on performance is a situation that many companies are all too familiar with. Combatting that regression requires a more holistic look at the culture of performance in an organization—the processes, the tools, the knowledge and more.
I recently heard from a past client who told me that the work I did with them a few years back “…literally changed the mindset of our engineers as a whole.”
I’ve got another past client who still messages me regularly about their (very impressive) continued performance improvements.
Those are pretty much the ideal outcomes for all the folks I work with.
As I’ve only just announced this, it does mean that I’ve got some immediate availability (at the moment, at least). So if that sounds like you, or an organization you know, let’s chat.
It’s not always obvious to folks, but the versions of Chrome or Firefox or any other browser you can download on iOS today still use WebKit, Safari’s underlying engine, under the hood. So you’re not actually getting browser choice at all. It’s more than a little surprising they’ve been able to make that a requirement as long as they have.
Opening it up so that you can use actual Firefox or actual Chrome or actual Edge? That’s a huge win for browser diversity and the web at large.
I’ve seen a few arguments that this potentially only further cements Chrome’s position as the dominant browser engine, but I think that’s only true if Apple decides to let it happen.
Apple’s got some supremely talented folks working on Safari and WebKit—talent has never been the problem. The problem is prioritization and resources. Safari/WebKit are understaffed in comparison to Chrome/Edge/Chromium, and, at least in my experience, historically not as incentivized as Firefox to move forward as quickly with feedback from the community. There have already been big improvements this past year on that front, and there’s plenty of room for more.
The latest release of Safari is an incredibly encouraging sign that this momentum will continue. Not only are there a lot of very valuable additions (hello home screen web apps!), but the sheer size of the release is staggering compared to what we’ve seen in the past.
Here, for example, are the official release notes for the 16.4 beta alongside the release notes for 16.3 (which is a typical length for Safari’s release notes):
A comparison of the length of the release notes for Safari 16.4 (left) and 16.3 (right). The 16.4 notes are substantially longer.
I’m not saying that the comprehensiveness here is entirely prompted by the fact that things might be about to get a lot more competitive, but the timing sure is highly coincidental if it’s not.
Apple makes plenty of money. If they want to start investing more heavily into their browser, they absolutely can. If they want to put a higher focus on making Safari awesome, getting the word out about it, and focusing on making it as ergonomic for developer tooling integrations as Chrome has, they can do that.
Heck, if they wanted to make it possible for folks to run Safari on other operating systems, they could do that too. It would be non-trivial—Safari does tend to rely heavily on the underlying MacOS functionality—but there’s no reason why they couldn’t get there.
So yeah, if they continue to operate with a business as usual approach, with decisions that made sense when they didn’t have to compete for attention on iOS, then sure—other browsers being able to be run on iOS probably does strengthen Chrome’s market share (though let’s not forget how powerful defaults are).
But if they sense the rules of engagement have changed and decide to adjust their strategy as a result? Then this is an unequivocal win for the web, and you could even envision a scenario where they start to creep into Chrome’s market share a bit.
I mean…could you imagine if they could get even 60-70% as many engineers working on WebKit as Chrome and Edge have working on Chromium?
Could you imagine if they made it as easy to programmatically drive WebKit as it is to drive Chromium with Puppeteer and the like?
A pivot in strategy from Apple could open up so many doors to a healthier web.
So I would. I would grab a coffee, sit down, and start reading. Often we’d share posts back and forth, or chat about some of the more interesting ones we had read. It was something we did that was never by accident…it was intentional, deliberate. It was a way, I think, of investing in ourselves while also acknowledging how much we still could learn from others.
Nowadays, of course, a few things have changed.
Obviously, I’m no longer using Google Reader—I’m a big fan of Feedbin and the annual payment I make is perhaps the easiest payment to justify all year.
I don’t subscribe to as many tech news outlets as I used to. In fact, there are just a handful of publications in my RSS feeds.
Personal blogs have always been my favorite, and continue to be my favorite, though they’re a lot quieter than they used to be (mine too).
Some of those have been replaced by newsletters. Historically, newsletters were a format I could just never quite get into (gosh email is rough enough without having those in there), but Feedbin’s ability to subscribe to email newsletters that then show up in my reader has been a massive game-changer there.
Some things are also the same as they’ve always been.
Opening up my RSS reader, a cup of coffee in hand, still feels calm and peaceful in a way that trying to keep up with happenings in other ways just never has. There’s more room for nuance and thoughtfulness, and I feel more in control of what I choose to read, and what I don’t.
The act of spending that time in those feeds still feels like a very deliberate, intentional act. Curating a set of feeds I find interesting and making the time to read them feels like an investment in myself.
I still make a point to spend some time each day, reading through my feeds, learning from others, and it still feels like one of the most important and enjoyable parts of each workday.
Their initial reaction was to get frustrated at the measurement process and tool in question—why was it messing up so frequently?
But in reality, when we looked closer, what we found was that they were doing some server pre-rendering and then reloading a bunch of content using client-side JavaScript.
Turns out, they had a race condition.
If a certain chunk of CSS was applied before a particular JavaScript resource arrived, then for a brief moment, their hero element was larger, causing their Largest Contentful Paint to fire much earlier. When things loaded as they intended, the hero element was a bit smaller and the Largest Contentful Paint didn’t fire until a later image was loaded.
The tests weren’t flawed—they were exposing a very real issue with their website.
Variability in performance data is a frequent industry complaint, and as a result, we see a lot of tools going out of their way to iron out that variability—to ensure “consistent” results. And sometimes, yeah, the tool is at fault.
But just as often, going out of our way to smooth over variability in data is counterproductive. Variability is natural and glossing over it hides very real issues.
Variability itself isn’t the issue. Variability without providing a clear way to explore that variability and understand why it occurs? That’s the real problem.
- HTTP/3 work started in 2012 with Google working on QUIC, adopted by the IETF in 2017, with the RFCs published in June 2022
- All major browsers now support it, thanks to iOS adding support starting with version 16
What and Why?
- HTTP/3 builds on top of UDP, not TCP
- UDP is often blocked as it is frequently used in attacks
- The TLS + QUIC layer makes UDP safer to use, but a lot of networks will likely still block it for a while
- HTTP/3 brings faster connection setup, better header compression, stream prioritization, packet loss handling, tunable congestion control, and connection migration
- Establishing a connection in HTTP/2 requires 3 RTT (round-trip times)
- Establishing a connection in HTTP/3 only requires 2 RTT
- HTTP/3 introduces 0 RTT mode which can reduce the total round trip to 1
- In general, HTTP/3 is 1 to 2 RTTs faster than HTTP/2
Adapting pages to HTTP/3
- Fewer domains: consolidate down to 1-3 connections
- Less bundling: 10-40 files is sufficient
- Help the browser: async, defer, preload, preconnect, priority hints, lazy loading
- No server push! Use 103 Early Hints instead
Setting up HTTP/3
- Easy way to set up HTTP/3: all major CDNs provide HTTP/3, often with the flip of a switch.
- Scalemates.com flipped the switch to enable HTTP/3 and almost immediately HTTP/3 traffic rose to ~13% of all requests on the landing page, ~49% for all pages in total
- The RSVP problem: HTTP/3 might be blocked on the network or may not be enabled on the server. So the browser will only try HTTP/3 if it’s certain the server supports it.
- For a new domain, the browser connects using H/1 or H/2. The server sends back an “alternative services” header indicating H/3 support (an example is shown after this list). The browser stores the alt-svc info in its alt-svc cache. From then on, the browser tries HTTP/3 in parallel with HTTP/1 and 2 so that there’s an immediate fallback if the network blocks it.
- Setting up HTTP/3 on your own is much harder and server support is still improving
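For reference, the alternative services header looks like this (advertising HTTP/3 on port 443, cacheable for 24 hours):

```http
Alt-Svc: h3=":443"; ma=86400
```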
Testing the Performance of HTTP/3
- To check if HTTP/3 is supported, use either HTTP3Check.net or curl.
- Browser devtools will also show you which protocol is being used for a given request.
- Lighthouse and browser devtools are excellent tools, but not for network protocol throttling.
- WebPageTest uses a built-in network throttling suite which provides much more realistic network throttling.
- Because of the alt-svc process, testing to compare HTTP/2 and HTTP/3 is tricky.
- A lot of browsers will start switching to the new HTTP/3 connection during page load
- In WebPageTest, you can pass custom command-line commands to enable or disable QUIC for Chromium-based browsers
- Connection view is still a little buggy in HTTP/3, with SSL sometimes missing.
- However, WebPageTest provides low-level packet trace and network logs that can surface exactly what is happening.
Unexpected Results
- Protocol performance is rarely low-hanging fruit. You need a relatively optimized page to see impact, and even then it’s mainly going to benefit your 90th percentile and up.
- Google Search with QUIC: p90 went down 6% on desktop, 5% on mobile. p99 went down 16% on desktop, 14% on mobile.
- Wix saw TTFB (Time to First Byte) in India get 47.5% faster at p75, and 55% faster in the Philippines.
- Digging in though, the TTFB metric was heavily influenced by the HTTP/2 connections including a DNS lookup and the HTTP/3 connections not including the DNS lookup.
- Better than TTFB is to look at connection time (“honest TTFB”). Updating the measurement still showed improvements of ~30% at p75.
- Wix also saw a 21% improvement in LCP (Largest Contentful Paint) at p75 on mobile
- Microsoft found similar results with Outlook: p90 saw a 30% faster “honest TTFB” and up to 64% faster at the p99.99
- LoveHolidays saw an 18% faster TTFB at p75 when using HTTP/3 versus HTTP/2, but only a 2% faster LCP.
Summary
- Know what the feature actually does. A deeper understanding is always helpful.
- Use a CDN
- Understand the limitations of your tools
- Hypothesize first, and always confirm with data
What we found was that the median site using Ember.js spent 21.9s dealing with JavaScript when loaded on an emulated mobile device. That’s a whopping 14.4s longer than the next closest detected library or framework.
It would be one thing to see a number like that late in the long-tail; if, for example, we saw that at the 99th percentile it would represent an anomaly where something probably went very wrong. But to see it at the median is really startling.
I wanted to follow up, to satisfy my own curiosity, but honestly forgot about it during all the holiday season shuffling until someone asked about it on Twitter.
So, let’s dig in and see what the heck is going on.
First things first, I ran a query against BigQuery to return all of the sites that perform worse than the median. There were a lot of pages that were subdomains at Fandom.com, so I ran another query and it turns out a whopping 99% of all URLs performing worse than the median were Fandom sites.
Running against the December data (the latest run at the time of digging in) confirmed the same situation was still in place.
In the December run of HTTP Archive, Ember was detected on 17,733 URLs. Of those, 13,388 (75%) are subdomains at fandom.com. And, again, those subdomains comprise the bulk of the poor performers.
It’s worth noting: this is only true on mobile. Fandom sites serve different versions of the site to mobile browsers. The desktop version doesn’t use Ember; the mobile version does. That’s why, if we query desktop sites in the December run of HTTP Archive, we only see 4,869 sites using Ember compared to 17,733 mobile sites.
Now that we know Fandom sites are particularly common, and problematic, we can compare the aggregate processing times for all pages with Ember detected to A) all Fandom pages with Ember detected and B) all pages with Ember detected that aren’t Fandom pages.
| URL Subset | 10th | 25th | 50th | 75th | 90th |
|---|---|---|---|---|---|
| All Ember URLs | 3516.5ms | 11474.9ms | 19064ms | 25782.6ms | 32636.8ms |
| Fandom Sites | 14867.9ms | 17515.1ms | 21790.8ms | 28172.8ms | 34309.9ms |
| Fandom Sites Excluded | 2111.5ms | 2985.1ms | 3968.3ms | 5741.4ms | 8362.9ms |
So, this left me with a few follow-up questions.
How does HTTP Archive figure out which URLs to track?
First off, where does HTTP Archive get their list of URLs? That laundry list of *.fandom.com sites seems suspicious.
I pinged Rick Viscomi and Paul Calvano and it turns out that HTTP Archive generates its list of URLs by querying the most recent month’s Chrome User Experience Report (CrUX) data. Given the timing, there’s essentially a two-month gap between CrUX data and HTTP Archive URLs. In other words, December’s HTTP Archive run would use the URLs found in October’s Chrome User Experience Report.
Whether a site shows up in CrUX is up to the traffic level—sites with enough traffic during any given month are reported, sites without enough traffic during the month are not. So the bias in URLs here comes from reality—subdomains at Fandom.com really do comprise the majority of popular Ember use (that we can detect).
Since Ember’s overall sample size is relatively small (contrast that 17,733 URLs with Ember to the 337,737 URLs with React detected, for example), one popular use of the library is enough to significantly mess with the results.
Ok. So why do these sites perform so poorly?
Which brings us to question #2: what the heck are those sites doing that is so bad?
I tested https://voice-us.fandom.com/, the median site for the run the Web Almanac was based on, using a combination of WebPageTest (you can check out the full results, if you’re keen) on a Moto G4 over a 3G network, and Chrome DevTools with a 4x CPU throttle. (When I could. I’d estimate DevTools froze maybe 80% of the time when trying to load the profile.)
The results weren’t pretty.
The WebPageTest run shows 13,254ms of script related work during the initial page load.
WebPageTest’s processing breakdown shows that JavaScript related work accounted for 13,254ms, or 84.7% of all work during the page load.
WebPageTest also shows us the total CPU time associated with each request. If we sort by CPU time, we’ll see that while there are plenty of third-party scripts also costing us precious CPU time, the top two offenders are first-party scripts, and first-party scripts account for five of the top 20 overall. Between those five scripts, we have 6,968ms of CPU activity.
Looking at the request details in WebPageTest, and sorting by CPU Time, shows that first-party scripts make up five of the top 20 offenders.
The most significant long task is the initial execution of the mobile-wiki script, which on this test resulted in a 2s long task.
Digging into the performance timeline shows a massive 2s long task early on as Ember gets the app ready to go.
I’m far from an Ember expert, so I talked to Melanie Sumner and Kris Selden from the Ember JS Core Framework Team to help me better understand what was going on in those massive long tasks. Turns out, there are a few different things that are all working together to create the perfect environment for poor performance.
First up, the Fandom sites use server-side rendering but rehydration appears to be failing here, if it’s used at all.
For rehydration to work, the client rendered DOM must match what was served via the server. When Ember boots up on the client, it’s going to compare the existing DOM structure with the DOM structure the client-side app generates—if that DOM structure is mismatched (the HTML provided by the server is invalid, third-party scripts alter the DOM before hydration occurs, etc) the rehydration process breaks. This is massively expensive as now the DOM has to be tossed out and rebuilt.
The second major issue here that Kris pointed out was that all the work triggered by _boundAutorunEnd in the flame chart, as well as the forced layouts and style recalculation, indicates that the app is relying heavily on component hooks and/or computed properties. This is a frequent issue seen in Ember apps, often leading to multiple render passes which, as you might expect, can get very expensive. With the new Glimmer component, Ember greatly reduced the number of lifecycle hooks (to just two) to help avoid this issue altogether.
Finally, there’s just a lot of code involved in initializing the app. It’s likely that much of what is being built here doesn’t even need to be in that initial rendering process. Trying to do too much during the initial render phase is a very common issue with any site built with a single-page-architecture. The more we can lazy-load individual components to break up that initial render cost, the better.
So…..what about Ember?
To me, there are a couple of things worth noting about this whole thing.
First, it’s a cautionary tale about not digging deep enough into data. We had an outlier—not in terms of comparing Ember to Ember, but Ember to other frameworks—which is always something worth exploring.
Looking closer paints a different picture than we originally saw. It’s not that Ember itself is so much worse on mobile than other frameworks (in fact, if we exclude this one example, the numbers for Ember look pretty good when compared with many of its counterparts). Instead the results are exaggerated by a combination of the sample size being relatively small compared to more popular choices and that sample set being dominated by one particularly egregious example.
While we’ve seen that Ember’s results are not as bad as they seem at first blush, what this example also shows us is how easy it is for things to get out of hand.
Digging deeper into the troublesome examples reveals a few patterns folks should look to avoid when using Ember for their own projects. Though, honestly, the patterns here aren’t specific to Ember: too much work during initialization, broken rehydration, forced layouts and style recalculation—these are all common issues found in many sites that rely on a lot of client-side JavaScript, regardless of the framework in use.
The way it works is you load in some JavaScript (typically from a third-party domain), and that JavaScript runs, applying the experiments you’re running for any given situation. Since those experiments usually involve changing the display of the page in some way, these scripts are typically either loaded as blocking JavaScript (meaning nothing happens until the JavaScript arrives and gets executed) or they’re loaded asynchronously. If they are loaded asynchronously, they usually come with some sort of anti-flicker functionality (hiding the page until the experiments run by setting the opacity to 0, for example).
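The anti-flicker pattern generally looks something like this simplified sketch (not any particular vendor’s snippet; real ones also remove the class as soon as the experiments have been applied):

```html
<style>.async-hide { opacity: 0 !important; }</style>
<script>
  // Hide the whole page while the A/B testing script loads and runs.
  document.documentElement.className += ' async-hide';

  // Failsafe: reveal the page after a timeout even if the testing script
  // never shows up (the Google Optimize snippet mentioned later in this
  // post, for example, hides the page for up to 4 seconds).
  setTimeout(function () {
    document.documentElement.className =
      document.documentElement.className.replace(' async-hide', '');
  }, 4000);
</script>
```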
With either approach, you’re putting a pause button in the middle of your page load process, and that can wreak havoc on a site’s performance.
People use them, though, partly because of convenience and partly because of cost.
From a convenience perspective, running client-side tests is significantly easier to do than server-side testing. With server-side testing, you need developer resources to create different experiments. With client-side testing, that’s often handled in the form of a WYSIWYG editor, which means marketing can try out new experiments quickly without that developer resources bottleneck.
I’m optimistic about edge computing as a way to solve this. Edge computing introduces a programmable layer between your server or CDN and the folks using your site. It’s simple in concept but very powerful. Moving our testing to the CDN layer lets us run manipulations on the content that the server is providing before they hit the browser.
From an A/B testing perspective, this means A/B testing services can offer the ability to still use a WYSIWYG to set up experiments, but now, instead of having to run all those experiments in the browser, they can use edge computing to apply the experiments at the CDN layer, before the HTML is ever provided to the user. In other words, we get the best of both worlds: we have the convenience provided by WYSIWYG type editing, but the performance benefit that server-side testing has in terms of shifting work out of the browser.
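As a rough illustration, applying an experiment at the edge might look something like this in a Cloudflare Workers-style runtime. The selector, variant copy, and 50/50 bucketing are all made up for the example; real tools would use sticky assignment via a cookie.

```js
// Rewrite the origin's HTML at the edge, so the experiment is already
// applied before the browser ever sees the page.
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const response = await fetch(request);

  // Hypothetical 50/50 split; a real tool would keep users in a bucket.
  if (Math.random() < 0.5) return response;

  return new HTMLRewriter()
    .on('h1.hero-title', {
      element(el) {
        el.setInnerContent('Variant B headline');
      },
    })
    .transform(response);
}
```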
The other reason that I mentioned for why folks use client-side A/B testing is cost. It’s typically much more affordable to use a client-side service than a server-side solution for this. In some cases, like Google Optimize, it’s even free.
But just because the monthly amount you’re making your checks out for (some people still write checks, right?) is low, that doesn’t mean the actual cost to the business isn’t much higher.
Let’s do a cost-benefit analysis to show what I mean, focusing on an unnamed but real-world site using Optimizely for client-side A/B testing. We’re not picking on Optimizely because of anything they do that’s any worse than any other client-side A/B testing solution (all in all, they do pretty well compared to some of the other options I’ve tested), but because their popularity in the space means this example is likely very relevant to many folks reading this. The tests are all going to be run using WebPageTest, on a Moto G4, over a fast 3G network.
First, let’s look at the impact this Optimizely script has on performance when it’s in place.
The results of the test showed the site having a First Contentful Paint time of 4.4s and a Largest Contentful Paint time of 5.5s.
The Optimizely script is 133.2kb, and it’s loaded as a blocking script (meaning, the browser won’t parse any more HTML until it gets downloaded and executed). The total time for the request to complete, including the initial connection to the Optimizely domain, is ~1.7 seconds.
Looking at the request for the Optimizely script in WebPageTest, we see that the file is 133.2kb and it takes around 1.7s to download.
Once it’s downloaded, we see some execution of the script (the pink block following the request).
The pink bars shown after the script has loaded tell us that there’s a large script execution period right after the script has been downloaded, blocking the main thread.
Opening up the timeline that WebPageTest captured we see that immediately after download, the script takes 648ms to execute—continuing to block the main thread of the browser.
Looking at the performance timeline, we can see that the browser spends 648ms evaluating the client-side A/B testing script.
So, between it all, the browser is blocked from parsing more HTML for around 2.4 seconds.
That doesn’t mean that the actual impact is 2.4 seconds…there’s a lot going on here including some other blocking scripts that certainly contribute to the delay, so the direct impact of the client-side testing may be less significant.
We can test the direct impact using WebPageTest’s blocking feature to block the Optimizely script from loading.
WebPageTest’s blocking feature lets us block all calls to optimizely.com so we can test the impact.
When I did that, the result was a significant improvement in the paint metrics. First Contentful Paint dropped from 4.4s to 2.5s. Largest Contentful Paint dropped from 5.5s to 4.6s.
Ok. So a 900ms delay in our Largest Contentful Paint. Let’s try to put some value to that.
In practice, were you doing this to your own site, you would hopefully have some data about how performance impacts your own business metrics. You would also have access to your actual conversion rate, value per order, monthly traffic, and stuff like that.
We don’t have that, so we’re going to put together a hypothetical. Let’s say this site gets 100,000 monthly visitors, has a conversion rate of 2%, and an average order value of $60. That puts annual revenue at $1.44 million dollars.
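Spelled out, the math behind that revenue figure is simply:

```js
const monthlyVisitors = 100000;
const conversionRate = 0.02;
const averageOrderValue = 60;

const monthlyRevenue = monthlyVisitors * conversionRate * averageOrderValue; // $120,000
const annualRevenue = monthlyRevenue * 12; // $1,440,000
```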
While not perfect, Google’s Impact Calculator is a decent way to guess-timate potential improvements in revenue based on performance. It’s based on Largest Contentful Paint (perfect for our situation) and anonymized data from sites using Google Analytics.
If we plug those numbers in, we see removing that 900ms would garner us an additional $37,318. (Now imagine how big of an impact something like the anti-flicker snippet for Google Optimize, which hides the page for up to 4 seconds, could have on revenue!)
Google’s Impact Calculator tries to estimate how much additional revenue would be earned by improving a site’s Largest Contentful Paint. In our case, eliminating the delay caused by client-side A/B testing would result in an increase of $37,318 over the course of the year.
Now we’re starting to get a picture of the actual cost of that client-side A/B testing solution. The full cost has to factor in both the price we’re paying for the service, as well as the impact it has on the business.
It’s possible the A/B testing tool could still end up benefiting the business in the end. But we’re starting from a bit of a hole. That $37,318 is roughly 2.6% of our annual revenue. Just to get back to zero, we need to find an experiment that is going to at least make up that 2.6% deficit. To come out ahead, we need to be able to find experiments that will have even greater returns. And that doesn’t even cover the monthly cost of the service, which would make our initial hole even larger.
Now that we know the monthly cost as well as the impact on the business, we can start asking some important questions:
- How much does proxying the request to Optimizely through a CDN (such as Akamai) reduce our total cost?
- Would switching to Optimizely’s Performance Edge or their full-stack solution help reduce the performance impact enough to justify paying a potentially higher cost for the service?
- How confident are we that we can create experiments with an impact significant enough to offset our initial deficits?
- Knowing how much of an impact it has, should we disable client-side A/B testing altogether during particularly busy seasons?
A lot of this is hypothetical, I know. This is very much back of a napkin kind of math. In reality, we should be looking at actual traffic. We should be running a test to see the difference on real-user traffic with and without client-side A/B testing. We should also be using our own data to figure out a reasonable expectation for the impact of that service on our conversions and revenue.
But even as just an approximation, it does make a pretty clear case that while client-side A/B testing may be cheaper than a server-side solution, that doesn’t mean it isn’t expensive. Ultimately, folks may decide that for their given situation, client-side A/B testing is worth the cost, but at least by doing this sort of analysis we ensure that decision is one that has been made with a full understanding of what we’re trading off in the process.
I hand-optimize my images (ok fine my computer does it but whatever) before I add them to my site, so they’re usually pretty small already, but there are limitations to how far I can take it myself without making things incredibly complex. I can try to automate things like generating the right sizes, creating several formats, serving those formats up to different browsers, as well as the actual optimization itself. That’s all fine, but that’s a lot to manage.
Cloudinary can handle that for me. Their service is capable of automatically compressing images to the highest degree of compression while still maintaining acceptable levels of quality. They can also determine the best file format for each image based on that image’s characteristics and whatever a particular browser supports. There’s much more they can do, but those two things alone are super appealing.
What isn’t appealing, and the reason I haven’t used Cloudinary for my own personal stuff yet, is the separate domain the browser would need to connect to in order to serve those images. Images from Cloudinary are served from https://res.cloudinary.com. When the browser sees this domain, it has to open up another HTTP connection, going through the process of resolving the DNS, opening the TCP connection and handling SSL negotiation. The separate domain also messes with HTTP/2’s snazzy prioritization schemes. (Though, on my own site, that’s not a big deal because the number of resources on a given page is pretty small).
So my plan was to take Netlify Edge and proxy any requests to Cloudinary through my own domain. That would let Netlify do the connection on their end, saving the browser from having to do it.
Turns out, I didn’t need Edge for this—I could’ve been doing this all along with Netlify’s redirects.
Phil posted a short demo of how you can use Netlify’s redirects to proxy requests to another service automagically. Netlify handles the connection to the service at the CDN level, which means the browser only ever sees the one domain.
So, naturally, I had to give it a shot (I try to keep it a secret from him, but I do find listening to Phil to be a smart decision…sometimes).
First up was adding the redirect rule to my netlify.toml configuration file, just like Phil said to do:
[[redirects]]
from = "/optim/:image"
to = "https://res.cloudinary.com/tkadlec/image/fetch/q_auto,f_auto/https://timkadlec.com/images/:image"
status = 200
Breaking this down:
- The from directive tells Netlify which requests to apply the redirect to. In this case, we’re saying all requests made to the /optim directory. We’re capturing the image name as :image so we can reference it later.
- The to directive tells Netlify where to route that request. Here we’re using Cloudinary’s fetch functionality (note the “fetch” segment of the URL) to fetch the original image from my site and load it into their service so they can optimize it. This is also where we use :image to refer back to the image name that we captured in the from directive.
- Finally, we set the status to “200”, indicating the request completed successfully. This is the key bit from Netlify’s perspective: by setting the status to 200, we officially make this redirect a rewrite instead.
Confession time: I messed this up at first. I originally had the from directive set up like this:
from = "/images/:image"
I was being lazy, and I thought this way I wouldn’t have to update any image references in my site—they’d just work. A bunch of you (maybe all of you) already see the problem here.
By using the same directory name as the one the images are actually in, I create some circular logic. Netlify sees a request to something in my /images directory, so it routes that to Cloudinary, passing along the full path. Cloudinary needs to request that image, so it makes a request to the image, again in the /images/ directory. Netlify has to serve that image to Cloudinary, but it sees that it should be redirecting any request to /images/ to Cloudinary, so it does. And on and on we go.
It was a silly mistake on my part, but I can imagine others making it too so I figure it’s worth noting.
With the configuration file updated, I then have to update the reference to any images I want to route to Cloudinary to use the /optim route. That means changing from this:
<img src="/images/headshot-transparent.png">To this:
<img src="/optim/headshot-transparent.png">And voilà—we’ve got automatic image optimization via Cloudinary, without having to do anything to my build process or even touch Cloudinary directly.
I mentioned the two appealing parts of this to me were the advanced image optimization Cloudinary can provide and the ability to avoid that separate domain connection, and I’m pretty happy with the results from both.
Even with the fact that I pre-optimize images before putting them on my site, the additional compression and format conversions that Cloudinary provides gave a nice little boost. Here’s the size of the images on my home page before and after I used Cloudinary, as measured on a Moto G6 via WebPageTest:
| Without Cloudinary | With Cloudinary | Savings |
|---|---|---|
| 100.2kb | 73.1kb | 27.1kb |
Now, 27kb may or may not seem like much—depending on how much of a performance nut you are—but my home page is pretty light so that represents a 23% reduction in bytes from only a couple minutes of work.
Next let’s look at the second part of this: eliminating the separate connection.
Here’s a run from WebPageTest (3G network, Moto G6) showing my home page using Cloudinary without proxying through Netlify.
Request #2 shows the cost of connecting to res.cloudinary.com (the green, orange and purple bars). Because of that delay, the request doesn’t complete until ~3.8s and the final image (request #18) doesn’t arrive until ~5.1s.
Notice how, before the image can be downloaded (request #2), the browser has to go through DNS resolution (green bar), the TCP connection (orange bar), and SSL negotiation (purple bar), delaying the images.
In this case, it took 1.3 seconds for that connection to be opened, and as a result, my headshot image (request #2) doesn’t arrive until about 3.8 seconds into the page load. Our final image doesn’t arrive until 5.1 seconds into the page load process.
By proxying through Netlify, we avoid that overhead altogether. Here’s what that looks like, using the same test conditions in WebPageTest:
By proxying the requests through Netlify, we no longer see any connection costs for the images (requests #2-17). Without the delay, the headshot image (request #2) arrives around 2.5 seconds and the final image (request #17) arrives around ~3.7s.
In this waterfall, you can see that the separate connection is gone altogether, meaning we get to start downloading those images very quickly. The result is that same headshot image (request #2, again) arrives around 2.5 seconds—about 1.3 seconds faster than without the proxy. The final image arrives 3.7 seconds into the page load process, around 1.4 seconds faster than before.
So we get reduced data cost, with no extra connection from the browser, and what appears to be pretty negligible cost at the CDN (the difference in response time for the final proxied images versus loading them without Cloudinary in place is barely noticeable in my tests)—and it all took just a few minutes to put into place. It’s also a great way to make the right thing easy by making all these optimizations happen with virtually no effort on my part.
Not too shabby, Netlify.
]]>For those unfamiliar, a skeleton screen is a method of displaying the outline (skeleton) of the content to come, typically using gray boxes and lines instead of a progress bar while content is loading. It’s a pretty creative approach to handling wait—undoubtedly more creative and helpful than a perpetually spinning circle.
But even good ideas can become bad ones if we stray too much from the original intent.
The other day on Twitter, Jeremy Wagner lamented the use of skeleton screens:
Am I the only person who thinks skeleton UIs are incredibly bad UX? “Here comes some client-rendered content—oops, wait you get rectangles for now!”
It got me thinking about how we’ve taken a carefully applied optimization and started applying it haphazardly without giving too much thought as to why and how to use it effectively.
It’s helpful to revisit Luke Wroblewski’s original application of the approach.
At the time, Luke was working on a startup called Polar. Polar was a mobile application that was built around the idea of micro-interactions. Users were presented with a simple “either-or” poll. They’d tap on an answer and then move on to the next poll.
At several locations in the application, such as when new polls were loaded, it would take some time for those elements to download and be displayed. Polar used, at first, the same thing many applications use: a generic spinner.
When they looked at user feedback, they saw people frequently complained about the amount of time they spent waiting for the content to refresh. Given the constraints the app faced with the network and web view performance at the time, they were a bit limited on what they could do from a technical perspective. Instead, they adjusted the design, switching from spinners to a skeleton screen that incrementally filled in as content arrived back from the server. People stopped complaining about wait times, and a new perceived performance “best practice” was born.
But the thing about best practices is that they’re only best practices when used in the right context. Used the wrong way, even best practices can be detrimental. That’s why when I do performance training, I always start with how the browser and network work. You need to know why something works to know when it makes sense and when it doesn’t.
So going back to skeleton screens, why do they (at least in theory) work?
It has to do with active waiting versus passive waiting. With active waiting, we’re doing something that feels like progress while we’re waiting. With passive waiting, we’re, well, passively sitting there, with nothing to do but wait for whatever it is we’re waiting for to happen. According to research on time perception, active waiting periods are perceived as shorter than passive waiting periods.
With a progress bar or spinner, our entire waiting period is passive: there’s nothing for us to do but watch this spinner that has absolutely nothing to do with the content we’re about to see.
With Polar’s skeleton screen, some of that wait time gets flipped into an active state. If we look at the screenshots of how the page progressed, we see we went from a few gray boxes and primary text headings to some filled in textual content and, finally, to displayed images and icons.

Each time that state changes, we have a brief moment of active waiting as we start to process the information presented to us, giving us context about what is eventually going to arrive.
That’s why, in theory, skeleton screens are useful. They can provide instantaneous feedback about what is to come, and, as that content arrives and gets filled in, they keep pulling us back into brief periods of active waiting, helping the time to fly by a bit more quickly.
To me, that seems sound, but there are a few things we have to keep in mind when implementing skeleton screens in our interface.
First up, it’s a workaround. If you can display content right away, by all means, do that instead.
If you notice, Jeremy’s specific example was client-side applications taking too long to load. That’s unfortunately very common. In my experience profiling sites and applications that rely on a lot of client-side JavaScript to display content, those delays are usually measured in seconds, not milliseconds.
Let’s pick on an example Scott Jehl presented: YouTube in a desktop browser. We’ll pick on it not because it’s a particularly egregious example but because it’s an excellent example of many of the common issues across skeleton screen implementations today.
In that example, as tested on a Cable connection, we’re staring at gray boxes for 6.9 seconds before we get to see the content.

That’s far too long of a delay to expect a few gray boxes to hold us over…we’re only in that active waiting state for a moment before we’ve digested the context provided to us and are ready to move on. All that research about how time perception speeds up during an active waiting state? There’s also research that suggests that as the total wait time gets longer, the benefits of being in an active state of waiting wane a bit.
Which brings us to the next consideration—that active state isn’t going to last forever, so we need to make quick progress. If we can’t for whatever reason, then we need to make incremental progress.
Remember, in the Polar example, we saw three basic stages:
- Gray boxes, borders, and headings
- Text content
- Imagery
If the loading period is very fast, maybe that’s enough to keep us in an active state the entire time. If the loading period is a bit slower, we’re going to toggle between active and passive waiting states, and that’s where that incremental approach helps. At each stage, we’re given additional context to occupy us for a few moments.
Looking back at the YouTube example, we have two basic stages (pretty typical for many skeleton UI’s we see implemented today):
- Gray boxes and borders
- All the things!
There are no initial headings to provide any additional context, and there’s no incremental stage where we have more information to process.
After the initial moment or two that it’s going to take us to process that original display of gray boxes, we’re going to switch right back into a passive waiting state for the rest of that duration. Without that incremental progress (and without the additional context provided by early headings), we’re going to be spending the majority of that time in a passive waiting state and, understandably, we’re going to be annoyed by the delay—even if it weren’t as long as it is.
Another thing to consider: the expectations should match reality.
In the Polar app, the original boxes don’t shift around. Instead, they accurately depict the content that is coming and where it’s going to be displayed.
Once again, the YouTube page is a good example of the opposite effect.
The first screenshot below shows the initial state of the page. We have a grid of boxes where the videos are going to display, and some circles for thumbnail images. But the second screenshot shows that the entire skeleton screen shifts significantly as an ad is displayed.

In this case, we were given expectations of what content would be displayed and where, and those expectations ended up being misleading. We now have to re-orient ourselves to where the content ends up being displayed.
When the skeleton screen doesn’t match the outcome, we’ve created confusion and frustration that will overcome any benefit you might have gotten from trying to handle that delay in a better way.
Generally, I think there’s also some conditioning to be aware of here. As the pattern of skeleton screens becomes increasingly familiar, the ability it has to switch us into that active state of waiting is going to decrease. If we’re trying to use this approach as a band-aid for long loading times that users have to encounter every time they use our site, it’s going to fall on its face pretty quickly.
So are skeleton screens a bad user experience, or are they a nice way to improve perceived performance?
Like anything, they’re likely a bit of both. If you’re weighing whether or not to add one to your site or application, keep in mind:
- Are they necessary? Can you avoid the delay altogether with a different approach?
- Skeleton screens only distract for so long before they become frustrating. If you have a long delay at a critical stage in your site or application, shortening that delay should still be a primary focus.
- When you use skeleton screens, make sure people see incremental progress and that they’re given as much context as possible at each stage in the progression.
- Make the expectations match reality. Skeleton screens that falsely represent what’s coming or how it will be positioned will be disorienting and cause confusion.
Finally, test.
Skeleton screens worked for Polar. They knew it worked because they did the necessary research to see how it impacted people’s perception of the application as a whole. That doesn’t mean it’s going to work with your audience, or for all of your audience in the same way.
One of the healthiest and most important things we can do is continuously challenge ourselves to question why a “best practice” is labeled as such. When we understand why and how it works, we put ourselves in a much better position to know how to properly apply it in our own situations, and when it makes sense to do so.
]]>I ran a profile on my MacBook Pro (a maxed out 2018 model) in Chrome Developer Tools to see how much work it was doing on the main thread.
It wasn’t pretty.
It turns out, the widget triggers four long-running tasks, one right after the other. The first task came in at 887ms, followed by a 1.34s task, followed by another at 92ms and, finally, a 151ms task. That’s around 2.5 seconds where the main thread of the browser is so occupied with trying to get this widget up and running that it has no breathing room for anything else: painting, handling user interaction, etc.

2.5 seconds of consecutive long tasks is bad by any measure, but especially on such a high end desktop machine. If a souped up laptop is struggling with this code, how much worse is the experience on something with less power?
I decided to find out, so I fired up my Alcatel 1x. It’s one of my favorite Android testing devices—it’s a great stress test for sites and it lets me see what happens when circumstances are less than ideal.
I knew the numbers would be worse, but I admit the magnitude was a bit more dramatic than I anticipated. That 2.5 seconds of consecutive long tasks? That ballooned into a whopping 48.6 seconds.

48.6.
That’s not a typo.
Eventually, Chrome fires a “Too much memory used” notification and gives up.
Here’s the kicker: among the things being blocked for nearly 50 seconds are all the various analytics beacons.
As Andy Davies puts it, “Site analytics only show the behaviour of those visitors who’ll tolerate the experience being delivered.” Sessions like this are extremely unlikely to ever show up in your data. Far more often than not, folks are going to jump away before analytics ever has a chance to note that they existed.
There are plenty of stories floating around about how some organization improved performance and suddenly saw an influx of traffic from places they hadn’t expected. This is why. We build an experience that is completely unusable for them, and completely invisible to our data. We create what Kat Holmes calls a “mismatch.” So we look at the data and think, “Well, we don’t get any of those low-end Android devices so I guess we don’t have to worry about that.” A self-fulfilling prophecy.
I’m a big advocate for ensuring you have robust performance monitoring in place. But just as important as analyzing what’s in the data, is considering what’s not in the data, and why that might be.
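One way to get at least part of that missing signal is to record long tasks in your RUM data. Sessions that do stick around will at least show how often the main thread gets blocked. Here’s a minimal sketch using the Long Tasks API; in practice you’d beacon these values to whatever RUM endpoint you already use rather than logging them:

// A minimal sketch: log any task that blocks the main thread for more than 50ms.
// In a real setup you'd send these to your RUM endpoint instead of the console.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)}ms, starting at ${Math.round(entry.startTime)}ms`);
  }
});

// buffered: true also reports long tasks that happened before this code ran
longTaskObserver.observe({ type: 'longtask', buffered: true });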
]]>While the bulk of the post is about the A/B testing setup (which I am very happy with), I did note at the end that I was seeing some small improvements from Instant.Page, though the results were far from conclusive yet.
Alexandre, the creator of Instant.Page, suggested on Twitter that the gains I was seeing were small because Netlify passes an Age header that messes with prefetching.
Tim, Netlify sends a Age header that conflicts with prefetching. A prefetched page will get fetched again on navigation if its Age header is over 300. The small gain you are seeing are due to the navigation request being a 304 and not a 200.
This led down an interesting little rabbit hole and, eventually, to a bug. I learned a few new things as I dug in, so I figured it was worth sharing for others as well (and for me to come back to when I inevitably forget the details).
First, before we dive in, let’s zero in on the critical components of what’s happening on my site specifically.
For all HTML responses, I pass a Cache-control: max-age=900, must-revalidate header. This tells the browser to cache the response for 15 minutes (900 seconds ÷ 60). After that, it has to revalidate—basically, talk to the server again to make sure the asset is still valid and a newer version isn’t available. As soon as the resource is revalidated, the 15 minutes starts over.
Netlify also passes along an ‘Age’ header, indicating how long they’ve been caching the resource themselves. (More on that in a bit.) So, for example, if they’ve had the resource on their servers for 14 minutes, that would look like this:
age: 840
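If you want to check those headers for yourself, a quick snippet in the browser console is enough (the no-store option forces a trip to the network so you see what the CDN is sending rather than what’s already in the browser cache):

// Log the caching headers being sent for the current page.
fetch(location.href, { cache: 'no-store' }).then((response) => {
  console.log('cache-control:', response.headers.get('cache-control'));
  console.log('age:', response.headers.get('age'));
});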
And finally, as a recap, Instant.Page works by using the prefetch resource hint to fetch links early, when someone hovers over the link instead of waiting for the next navigation to start.
Now let’s dive into each part of that and how they fit together.
The Age Header
The ‘Age’ header is used by upstream caching layers (Varnish, CDNs, other proxies, etc.) to indicate how long it’s been since a response was either generated or validated at the origin server. In other words, how long has that resource been sitting in that upstream cache.
It’s not just something that Netlify does—open just about any site and you’ll find resources with the ‘Age’ header set. That’s because if you’ve got something sitting between your origin and the browser caching your content, setting the ‘Age’ header is exactly what you’re supposed to be doing. It’s important information.
Let’s say you’re using a CDN to cache content on their edge servers instead of making visitors wait while assets are requested from wherever your origin server resides. The first time a resource is requested, the CDN is going to have to go out and make a connection to your origin server to get it. At that point, since the CDN just got the resource from the origin server, the age is ‘0’.
Then, depending on what you’ve set up at the CDN level and assuming the CDN can cache the resource, the CDN will start serving that resource as it’s requested without talking to the origin again. As it does this, the age of the resource gets older and older.
Eventually, the CDN needs to talk to the origin server again.
Let’s say your CDN is set to cache a resource for 15 minutes before it needs to validate that the resource is still fresh. After 15 minutes, the CDN talks to the origin and will either get a new version of that resource or verification that the resource is still valid. At that point, the age of the resource resets to ‘0’—we’ve got a fresh start since we know what we have on the CDN is the latest version.
The browser’s primary mechanisms for determining what to cache and for how long are headers like Expires (which provides an expiration date for the resource being served), Cache-control (a ton of stuff here, but specifically for duration is max-age), Last-Modified (the date at which the resource was last modified), and Etag (a unique version identifier for the object). (For more detail on all of those, Paul Calvano’s post on Heuristic Caching and Harry’s post about Cache-Control are both top-notch resources.)
Age, too, factors in.
Let’s say that your CDN is set to cache a resource for 15 minutes, and you’ve also told the browser to cache that resource for 15 minutes using the Cache-control header (Cache-control: max-age=900, must-revalidate). With two layers of caching, each at 15 minutes, that means we have a potential Time to Live (the time a resource is stored in a cache before it’s deleted or updated) of up to 30 minutes—if the browser requests the resource just before the CDN’s version expires, then it’s been sitting in cache for 15 minutes on the CDN and will sit in the browser cache for another 15 minutes—so 30 minutes total.
For any sort of remotely dynamic content, this could be problematic. If the content changes in that upstream cache, we could still be serving an old version of the resource for 15 more minutes until the browser cache expires.
The Age header helps to prevent against this.
Let’s go back to our example, where the browser requests the resource just before the CDN’s version expires. Only this time, let’s say the CDN communicates how long it’s had the asset in cache by providing an Age header of 840 (14 minutes). The browser knows from the max-age directive that it’s ok to serve an asset that is 15 minutes old, and it knows that the asset has been sitting on the CDN for 14 minutes. So, the browser adjusts the TTL to 1 minute (15 minutes of browser TTL minus 14 minutes it’s already been on the CDN), protecting against this problem of cache layers stacking on top of each other.
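In other words, the effective TTL in the browser works out to roughly the max-age minus the Age it was handed. A simplified sketch of that math (the actual freshness calculation in the spec has a few more inputs, but this captures the idea):

// Roughly how long the browser will reuse a response without revalidating.
function effectiveBrowserTtl(maxAgeSeconds, ageSeconds) {
  return Math.max(maxAgeSeconds - ageSeconds, 0);
}

effectiveBrowserTtl(900, 0);   // 900: freshly validated upstream, full 15 minutes in the browser
effectiveBrowserTtl(900, 840); // 60: only one more minute before the browser revalidates
effectiveBrowserTtl(900, 960); // 0: already stale on arrival, revalidate right away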
This can all get a bit funky if the max-age directive you’re passing to the browser doesn’t align with how long you’re caching the resource upstream.
For example, if you’re telling your CDN to cache a file for a week, but you’re only telling the browser to cache that resource for 15 minutes, then as soon as the Age of that resource exceeds 900 (15*60) the browser will no longer consider that resource safe to cache. Every time it sees the request, it will note that the age is past the maximum TTL it’s been told to pay attention to, so it goes back out to the servers to try to find a new version.
There are times where having mismatched TTL's at a caching layer and at the browser may make sense. It's pretty quick to purge the cache (basically, empty it out) for most CDNs. So sometimes what you'll see is folks set a long TTL at the CDN layer and a short one at the browser level. Then, if the content does need to change, they can purge the CDN cache quickly and all they have to wait for is the browser to get past whatever short TTL they've set there. In those cases, it makes sense from a performance standpoint not to pass the Age header so that the browser can keep caching.
How prefetch works
When you use the prefetch resource hint (which is what Instant.Page does), you’re telling the browser to go grab that resource even though it hasn’t been requested by the current page, and put it into cache.
So, for example, the following example tells the browser to grab the about page and store it.
<link rel="prefetch" href="/about" as="document" />
The browser will request the resource at a very low priority during idle time so that the resource doesn’t compete with anything needed for the current navigation.
As with any request that gets cached, how long it’s cached depends on the caching headers. But with prefetch, there’s an added wrinkle.
The entire point of prefetch is to have something stored for the next navigation: making a prefetch for a resource that expires before that next navigation is wasted work and wasted bytes.
For this reason, Chromium-based browsers have a period of five minutes where they’ll cache any prefetched resources regardless of any other caching indicators (unless no-store has explicitly been set in the Cache-control header). After that window has expired, the normal Cache-control directives kick in, minus that initial window.
In my case, I serve HTML documents with a max-age of 15 minutes. That means Chrome will save that prefetched resource for 15 minutes so this 5 minute window doesn’t really do anything special.
But if you served an asset with a max-age of 0, then Chrome is still going to hold that resource for 5 minutes before having to revalidate it. The main takeaway here is that to avoid wasted work, the browser ignores the usual indicators of freshness for a period of time.
Firefox, on the other hand, does not have this little extra window for prefetched resources—it treats them like any other cached object, paying attention to the caching headers as normal. So, if (for example) the max-age is 0 for a prefetched resource, Firefox will make the request as directed using prefetch and then make the request again once it discovers it on the next navigation.
Bringing it all together
Phew. Ok. So we know what the Age header does, we know how the browser uses it to determine caching, and we know that Chromium-based browsers ignore all the usual freshness indicators when it comes to prefetch, at least for a short period of time, and Firefox does not.
All of this means that in Firefox, if the Age exceeds the max-age directive, then the prefetched resource is going to result in two requests: once for the actual prefetch and, because the asset is older than the TTL, once again on the next navigation.
In Chromium-based browsers, it seems like Age shouldn’t impact prefetch behavior at all—if Chromium ignores other caching directives, why is Age any different? It seems like a bug.
Which is exactly the conclusion Yoav came to:
To clarify, sounds like a Chromium bug. Sending Age headers for cached resources is what caches are supposed to do And indeed, the 5 minutes calculation includes the Age header, which IMO makes little sense https://source.chromium.org/chromium/chromium/src/+/master:net/http/http_cache_transaction.cc;l=2716;drc=2f11470d7ad8963a9add116df64d2edd1b85d3a4;bpv=1;bpt=1?originalUrl=https:%2F%2Fcs.chromium.org%2F
The bug is the source of what Alexandre was noting. Since Age is being included in the prefetch caching considerations, any prefetched resource in Chrome with an Age higher than either that 5 minute window or the max-age (whichever is longer) can’t be cached, so the request happens twice: once on prefetch and once on the next navigation.
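Put as pseudo-logic, the behavior described above looks roughly like this (a sketch of the behavior as described, not Chromium’s actual code):

// Whether Chrome will reuse a prefetched resource on the next navigation,
// given the bug where the Age header gets factored into the calculation.
function prefetchIsReusable(maxAgeSeconds, ageSeconds) {
  const windowSeconds = Math.max(5 * 60, maxAgeSeconds); // 5 minutes or max-age, whichever is longer
  return ageSeconds < windowSeconds;
}

prefetchIsReusable(900, 840);  // true: the prefetched copy gets used
prefetchIsReusable(900, 1200); // false: prefetched, then fetched again on navigation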
In my specific case, while the bug’s behavior is definitely not ideal, it also doesn’t jump out in the metrics on the aggregate because of my service worker. When the request gets prefetched, the service worker caches it. On that next navigation, the request gets made again, but the service worker has it at the ready, which accounts for why I’m seeing some performance improvements even with the bug.
Now, if we ignore the prefetch specific issues here, we do still have an issue with the way Netlify handles the Age header. Netlify is, interestingly, both the CDN and the origin here. Typically, whenever the CDN has to revalidate that a resource is still fresh with the origin, it will reset the Age header back to 0.
In this case, because Netlify essentially is our origin, there’s no other layer somewhere for Netlify to revalidate with. The buck stops here, or something like that.
By passing the Age header along, and only updating it when the content is changed or cache is explicitly cleared, Netlify creates a situation where the browser will always have to go back to the server (Netlify) to see if the resource is fresh, regardless of that max-age window. The only way around this is to set a very long max-age or make sure to clear your Netlify cache on a semi-regular basis.
I suspect Netlify shouldn’t be passing the Age header down at all. Or, if that header is being applied at their edge layer (I’m not 100% clear on their architecture), then whenever their edge layer has to revalidate with the original source, they should be updating that Age at that point to avoid the issue of an ever-increasing Age.
Where do we go from here?
So, how do we make sure that our prefetched resources are as performant as possible?
First things first: measure. I tried to emphasize this in my last post, but the data about the impact of this approach on my site says nothing about the impact on other sites. In my situation, I’m seeing a small improvement in most situations even with the bug in place. Your mileage may vary. Testing performance changes is good.
From the Chromium side of things, don’t worry about it. Yoav was all over it, and a fix has already landed.
Firefox, however, is another story. It seems they’ve been contemplating making this change for awhile now, so it’s a matter of prioritizing the work. In the meantime, there are a few things to keep in mind.
One, if you have a service worker in place and you’re using an approach where the service worker serves from the cached version first, that helps to offset the double request penalty you might otherwise pay. The first request puts it in the service worker cache, the second gets pulled from there before it has to go any further.
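For reference, “serves from the cached version first” means something along these lines. It’s a bare-bones sketch; the cache name is a placeholder, and a real service worker would be more selective about what it stores and for how long.

// Bare-bones cache-first fetch handler for a service worker.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) {
        // The copy stored when the prefetch happened gets served here,
        // so the second request never has to leave the device.
        return cached;
      }
      return fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open('pages-v1').then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});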
If you don’t have a service worker in place, then you’re going to have to make a decision regarding the Age header.
If you don’t pass the Age header, then Firefox can cache the resource according to your cache headers regardless of whether the age of the resource on the CDN (or proxy) is longer than the max-age communicated to the browser, but it does introduce the risk of extending the total TTL as we saw above. If your max-age directive is set to a short duration and you can quickly purge the upstream cache, you reduce the pain here a little.
If you do pass the Age header along, you avoid longer total TTL issues, but you now risk issuing double requests for every prefetched resource as the age of the cached resource gets older. If the resource changes frequently in the upstream cache, or if you are passing a long max-age directive to the browser, the severity of this risk is reduced a little.
In the end, this comes down to a combination of what services and tools you’re using for those upstream caches, and how frequently your prefetched resources may change.
]]>This post ended up leading to the discovery of a bug in the way Chrome handles prefetched resources. I’ve written a follow-up post about it, and how it impacted the results of this test.
Instant.page is a script that uses the prefetch resource hint to fetch a page early, when someone hovers over a link to it, before the click event even occurs. I like the idea, in theory, quite a bit. I also like the implementation. The script is tiny (1.1kb with Brotli) and not overly aggressive by default—you can tell it to prefetch all visible links in the viewport, but that’s not the default behavior.
I wanted to know the actual impact, though. How well would the approach work when put out there in the real-world?
It was a great excuse for running a split test. By serving one version of the site with instant.page in place to some traffic, and a site without it to another, I could compare the performance of them both over the same timespan and see how it shakes out.
It turns out, between Netlify and SpeedCurve, this didn’t require much work.
Netlify supports branch-based split testing, so first up was implementing instant.page on a separate branch. I downloaded the latest version so that I could self-host (there’s no reason to incur the separate connection cost) and dropped it into my page on a separate branch (very creatively called “instant-page”) and pushed to GitHub.
With the separate branch ready, I was able to set up a split test in Netlify by selecting the master branch and the instant-page branch, and allocating the traffic that should go to each. I went with 50% each because I’m boring.

I still needed a way to distinguish between sessions with instant.page and sessions without it. That’s where SpeedCurve’s addData method comes into play. With it, I can add a custom data variable (again, creatively called “instantpage”) that either equals “yes” if you were on the version with instant.page or “no” if you weren’t.
<script>LUX.addData('instantpage', 'yes');</script>
I could have added the snippet to both branches, but it felt a bit sloppy to update my master branch to track the lack of something that only existed in a different branch. Once again, Netlify has a nice answer for that.
Netlify has a feature called Snippet Injection that lets you inject a snippet of code either just before the closing body tag or just before the closing head tag. Their snippet injection feature supports Liquid templating and also exposes any environmental variables, including one that indicates which branch you happen to be on. During the build process, that snippet (and any associated Liquid syntax) gets generated and added to the resulting code.
This let me check the branch being built and inject the appropriate addData without having to touch either branch’s source:
{% if BRANCH == "instant-page" %}
<script>LUX.addData('instantpage', 'yes');</script>
{% else %}
<script>LUX.addData('instantpage', 'no');</script>
{% endif %}
Then, in SpeedCurve, I had to set up the new data variable (using type “Other”) so that I could filter my performance data based on its value.

All that was left was to see if the split testing was actually working. It would have only taken moments in SpeedCurve to see live traffic come through, but I’m an impatient person.
Netlify sets a cookie for split tests (nf_ab) to ensure that all sessions that land on a version of the test stay with that version as long as that cookie persists. The cookie is a random floating point between 0 and 1. Since I have a 50% split, that means that a value between 0.0 and 0.5 is going to result in one version, and a value between 0.5 and 1.0 is going to get the other.
I loaded the page, checked to see if instant.page was loading—it wasn’t which meant I was on the master branch. Then I toggled the cookie’s value in Chrome’s Dev Tools (under the Application Panel > Cookies) and reloaded. Sure enough, there was instant.page—the split test was working.

And that was it. Without spending much time at all, I was able to get a split test up and running so I could see the impact instant.page was having.
It’s early, so the results aren’t exactly conclusive. It looks like at the median most metrics have been improving a little. At the 95th percentile, a few have gotten a hair slower.
The charts in SpeedCurve so far show promise, though nothing conclusive. First CPU Idle has been 100ms faster with instant page at the median, and 200ms faster at the 95th percentile. First Contentful Paint has been 300ms faster at the median, but 100ms slower at the 95th percentile.
It’s not enough yet to really make a concrete decision—the test hasn’t been running very long at all so there hasn’t been much time to iron out anomalies and all that.
It’s also worth noting that even if the results do look good, just because it does or doesn’t make an impact on my site doesn’t mean it won’t have a different impact elsewhere. My site has a short session length, typically, and very lightweight pages: putting this on a larger commercial site would inevitably yield much different results. That’s one of the reasons why it’s so critical to test potential improvements as you roll them out so you can gauge the impact in your own situations.
There are other potential adjustments I could make to try to squeeze a bit more of a boost out of the approach—instant.page provides several options to fine-tune when exactly the next page gets prefetched and I’m pretty keen to play around with those. What gets me excited, though, is knowing how quickly I could get those experiments set up and start collecting data.
]]>The thing about JavaScript is you end up paying a performance tax no less than four times:
- The cost of downloading the file on the network
- The cost of parsing and compiling the uncompressed file once downloaded
- The cost of executing the JavaScript
- The memory cost
The combination is very expensive.
And we are shipping an increasing amount of it. We’re making the core functionality of our sites increasingly dependent on JavaScript as organizations move towards sites driven by frameworks like React, Vue.js, and friends.
I see a lot of very heavy sites using them, but then, my perspective is very biased as the companies that I work with work with me precisely because they are facing performance challenges. I was curious just how common the situation is and just how much of a penalty we’re paying when we make these frameworks the default starting point.
Thanks to HTTP Archive, we can figure that out.
The data
In total, HTTP Archive tracks 4,308,655 desktop URLs, and 5,484,239 mobile URLs. Among the many data points HTTP Archive reports for those URLs is a list of the detected technologies for a given site. That means we can pick out the thousands of sites that use various frameworks and see how much code they’re shipping, and what that costs the CPU.
I ran all the queries against March of 2020, the most recent run at the time.
I decided to compare the aggregate HTTP Archive data for all sites recorded against sites with React, Vue.js, and Angular detected¹.
For fun, I also added jQuery—it’s still massively popular, and it also represents a bit of a different approach to building with JavaScript than the single-page application (SPA) approach provided by React, Vue.js and Angular.
| Framework | Mobile URLs | Desktop URLs |
|---|---|---|
| jQuery | 4,615,474 | 3,714,643 |
| React | 489,827 | 241,023 |
| Vue.js | 85,649 | 43,691 |
| Angular | 19,423 | 18,088 |
Hopes and dreams
Before we dig in, here’s what I would hope.
In an ideal world, I believe a framework should go beyond developer experience value and provide concrete value for the people using our sites. Performance is just one part of that—accessibility and security both come to mind as well—but it’s an essential part.
So in an ideal world, a framework makes it easier to perform well by either providing a better starting point or providing constraints and characteristics that make it hard to build something that doesn’t perform well.
The best of frameworks would do both: provide a better starting point and help to restrict how out of hands things can get.
Looking at the median for our data isn’t going to tell us that, and in fact leaves a ton of information out. Instead, for each stat, I pulled the following percentiles: the 10th, 25th, 50th (the median), 75th, and 90th.
The 10th and 90th percentiles are particularly interesting to me. The 10th percentile represents the best of class (or at least, reasonably close to the best of class) for a given framework. In other words, only 10% of all sites using a given framework reach that mark or better. The 90th percentile, on the other hand, is the opposite end of the spectrum—it shows us how bad things can get. The 90th percentile represents the long-tail—that last 10% of sites with the highest number of bytes or largest amount of main thread time.
JavaScript Bytes
For the starting point, it makes sense to look at the amount of JavaScript passed over the network.
Mobile:
| | 10th | 25th | 50th | 75th | 90th |
|---|---|---|---|---|---|
| All Sites | 93.4kb | 196.6kb | 413.5kb | 746.8kb | 1,201.6kb |
| Sites with jQuery | 110.3kb | 219.8kb | 430.4kb | 748.6kb | 1,162.3kb |
| Sites with Vue.js | 244.7kb | 409.3kb | 692.1kb | 1,065.5kb | 1,570.7kb |
| Sites with Angular | 445.1kb | 675.6kb | 1,066.4kb | 1,761.5kb | 2,893.2kb |
| Sites with React | 345.8kb | 441.6kb | 690.3kb | 1,238.5kb | 1,893.6kb |
Desktop:
| | 10th | 25th | 50th | 75th | 90th |
|---|---|---|---|---|---|
| All Sites | 105.5kb | 226.6kb | 450.4kb | 808.8kb | 1,267.3kb |
| Sites with jQuery | 121.7kb | 242.2kb | 458.3kb | 803.4kb | 1,235.3kb |
| Sites with Vue.js | 248.0kb | 420.1kb | 718.0kb | 1,122.5kb | 1,643.1kb |
| Sites with Angular | 468.8kb | 716.9kb | 1,144.2kb | 1,930.0kb | 3,283.1kb |
| Sites with React | 308.6kb | 469.0kb | 841.9kb | 1,472.2kb | 2,197.8kb |
For the sheer payload size, the 10th percentile turns out pretty much as you would expect: if one of these frameworks are in use, there’s more JavaScript even in the most ideal of situations. That’s not surprising—you can’t add a JavaScript framework as a default starting point and expect to ship less JavaScript out of the box.
What is notable is that some frameworks correlate to better starting points than others. Sites with jQuery are the best of the bunch, starting with about 15% more JavaScript on desktop devices and about 18% more on mobile. (There’s admittedly a little bit of bias here. jQuery is found on a lot of sites, so naturally, it’s going to have a tighter relationship to the overall numbers than others. Still, that doesn’t change the way the raw numbers appear for each framework.)
While even a 15-18% increase is notable, comparing that to the opposite end of the spectrum makes the jQuery tax feel very low. Sites with Angular ship 344% more JavaScript on desktop at the 10th percentile, and 377% more on mobile. Sites with React, the next heaviest, ship 193% more JavaScript on desktop and 270% more on mobile devices.
I mentioned earlier that even if the starting point is a little off, I would hope that a framework could still provide value by limiting the upper bound in some way.
Interestingly, jQuery driven sites follow this pattern. While they’re a bit heftier (15-18%) at the 10th percentile, they’re slightly smaller than the aggregate at the 90th percentile—about 3% on both desktop and mobile. Neither of those numbers is super significant, but at least sites with jQuery don’t seem to have a dramatically worse long-tail in terms of JavaScript bytes shipped.
The same can’t be said of the other frameworks.
Just as with the 10th percentile, Angular and React driven sites tend to distance themselves from others at the 90th percentile, and not in a very flattering way.
At the 90th percentile, Angular sites ship 141% more bytes on mobile and 159% more bytes on desktop. Sites with React ship 73% more bytes on desktop and 58% more on mobile. With a 90th percentile weight of 1,893.6kb, React sites ship 322.9kb more bytes of JavaScript to mobile users than Vue.js, the next closest. The desktop gap between Angular and React and the rest of the crowd is even higher—React-driven sites ship 554.7kb more JavaScript than Vue.js-driven sites.
JavaScript Main Thread Time
It’s clear from the data that sites with these frameworks in place tend to pay a large penalty in terms of bytes. But of course, that’s just one part of the equation.
Once that JavaScript arrives, it has to get to work. Any work that occurs on the main thread of the browser is particularly troubling. The main thread is responsible for handling user input, during style calculation, layout and painting. If we’re clogging it up with a lot of JavaScript work, the main thread has no chance to do those things in a timely manner, leading to lag and jank.
HTTP Archive records V8 main thread time, so we can query to see just how much time that main thread is working on all that JavaScript.
Mobile:
| | 10th | 25th | 50th | 75th | 90th |
|---|---|---|---|---|---|
| All Sites | 356.4ms | 959.7ms | 2,372.1ms | 5,367.3ms | 10,485.8ms |
| Sites with jQuery | 575.3ms | 1,147.4ms | 2,555.9ms | 5,511.0ms | 10,349.4ms |
| Sites with Vue.js | 1,130.0ms | 2,087.9ms | 4,100.4ms | 7,676.1ms | 12,849.4ms |
| Sites with Angular | 1,471.3ms | 2,380.1ms | 4,118.6ms | 7,450.8ms | 13,296.4ms |
| Sites with React | 2,700.1ms | 5,090.3ms | 9,287.6ms | 14,509.6ms | 20,813.3ms |
Desktop:
| | 10th | 25th | 50th | 75th | 90th |
|---|---|---|---|---|---|
| All Sites | 146.0ms | 351.8ms | 831.0ms | 1,739.8ms | 3,236.8ms |
| Sites with jQuery | 199.6ms | 399.2ms | 877.5ms | 1,779.9ms | 3,215.5ms |
| Sites with Vue.js | 350.4ms | 650.8ms | 1,280.7ms | 2,388.5ms | 4,010.8ms |
| Sites with Angular | 482.2ms | 777.9ms | 1,365.5ms | 2,400.6ms | 4,171.8ms |
| Sites with React | 508.0ms | 1,045.6ms | 2,121.1ms | 4,235.1ms | 7,444.3ms |
There are some very familiar themes here.
First, sites with jQuery detected spend much less time on JavaScript work on the main thread than the other three analyzed. At the 10th percentile, there’s a 61% increase in JavaScript main thread work being done on mobile devices and 37% more on desktop. At the 90th percentile, jQuery sites are again pretty darn close to the aggregate, spending 1.3% less time on the main thread for mobile devices and 0.7% less time on desktop machines.
The opposite end—the frameworks that correlate to the most time spent on the main thread—is once again made up of Angular and React. The only difference is that while Angular sites shipped more JavaScript than React sites, they actually spend less time on the CPU—much less time.
At the 10th percentile, Angular sites spend 230% more time on the CPU for JavaScript related work on desktop devices, and 313% more on mobile devices. React sites bring up the tail end, spending 248% more time on desktop devices and 658% more time on mobile devices. No, 658% is not a typo. At the 10th percentile, sites with React spend 2.7s on the main thread dealing with all the JavaScript sent down.
Compared to those big numbers, the situation at the 90th percentile at least looks a little better. The main thread of Angular sites spends 29% more time on JavaScript for desktop devices and 27% more time on mobile devices. React sites spend 130% more time on desktop and 98% more time on mobile devices.
Those percentages look much better than at the 10th percentile, but keep in mind that the bulk numbers are pretty scary: that’s 20.8s of main thread work for sites built with React at the 90th percentile on mobile devices. (What, exactly, is happening during that time is a topic for a follow-up post, I think.)
There’s one potential gotcha (thanks Jeremy for making sure I double-checked the stats from this angle)—many sites will pull in multiple libraries. In particular, I see a lot of sites pulling jQuery in alongside React or Vue.js as they’re migrating to that architecture. So, I re-ran the queries, only this time I included just the URLs where a single one of React, jQuery, Angular, or Vue.js was detected, not some combination of them.
Mobile:
| | 10th | 25th | 50th | 75th | 90th |
|---|---|---|---|---|---|
| Sites with only jQuery | 542.9ms | 1,062.2ms | 2,297.4ms | 4,769.7ms | 8,718.2ms |
| Sites with only Vue.js | 944.0ms | 1,716.3ms | 3,194.7ms | 5,959.6ms | 9,843.8ms |
| Sites with only Angular | 1,328.9ms | 2,151.9ms | 3,695.3ms | 6,629.3ms | 11,607.7ms |
| Sites with only React | 2,443.2ms | 4,620.5ms | 10,061.4ms | 17,074.3ms | 24,956.3ms |
First, the unsurprising bit: when only one framework is used, performance improves far more often than not. The numbers for every framework look better at the 10th and 25th percentile. That makes sense. A site that is built well with one framework should perform better than a site that is built well with two or more.
In fact, the numbers for every framework look better at each percentile with one curious exception. What surprised me, and the reason I ended up including this data, is that at the 50th percentile and beyond, sites using React perform worse when React is the only framework in use.
It’s a bit odd, but here’s my best guess.
If you have React and jQuery running alongside each other, you’re more likely to be in the midst of a migration to React, or a mixed codebase. Since we have already seen that sites with jQuery spend less time on the main thread than sites with React, it makes sense that having some functionality still driven by jQuery would bring the numbers down a bit.
As you move away from jQuery and focus more on React exclusively, though, that changes. If the site is built really well and you’re using React sparingly, you’re fine. But for the average site, more work inside of React means the main thread receives an increasing amount of strain.
The mobile/desktop gap
Another angle that’s worth looking at is just how large the gap is between that mobile experience and the desktop experience. Looking at it from a bytes perspective, nothing too scary jumps out. Sure, I’d love to see fewer bytes passed along, but neither mobile nor desktop devices receive significantly more bytes than the other.
But once you look at the processing time, the gap is significant.
| | 10th | 25th | 50th | 75th | 90th |
|---|---|---|---|---|---|
| All Sites | 144.1% | 172.8% | 185.5% | 208.5% | 224.0% |
| Sites with jQuery | 188.2% | 187.4% | 191.3% | 209.6% | 221.9% |
| Sites with Vue.js | 222.5% | 220.8% | 220.2% | 221.4% | 220.4% |
| Sites with Angular | 205.1% | 206.0% | 201.6% | 210.4% | 218.7% |
| Sites with React | 431.5% | 386.8% | 337.9% | 242.6% | 179.6% |
While some variance is expected between a phone and a laptop, seeing numbers this high tells me that the current crop of frameworks isn’t doing enough to prioritize less powerful devices and help to close that gap. Even at the 10th percentile, React sites spend 431.5% more time on the main thread on mobile devices than they do on desktop devices. jQuery has the lowest gap of all frameworks, but even there, it equates to 188.2% more time on mobile devices. When we make the CPU work harder—and increasingly we are—folks with less powerful devices end up holding the bill.
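To be explicit about what those percentages mean: they’re the extra main thread time spent on mobile relative to desktop. Using the React numbers from the two main thread tables above at the 10th percentile:

// How the gap figures are derived: extra mobile main thread time relative to desktop.
const reactMobileMs = 2700.1; // React, 10th percentile, mobile
const reactDesktopMs = 508.0; // React, 10th percentile, desktop

const gapPercent = ((reactMobileMs - reactDesktopMs) / reactDesktopMs) * 100;
console.log(`${gapPercent.toFixed(1)}% more main thread time on mobile`); // 431.5%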
The big picture
Good frameworks should provide a better starting point on the essentials (security, accessibility, performance) or have built-in constraints that make it harder to ship something that violates those.
That doesn’t appear to be happening with performance (nor with accessibility, apparently).
It’s worth noting that just because sites with React or Angular spend more time on the CPU than others, that isn’t necessarily the same as saying React is more expensive on the CPU than Vue.js. In fact, it says very little about the performance of the core frameworks in play and much more about the approach to development these frameworks may encourage (whether intentionally or not) through documentation, ecosystem, and general coding practices.
It’s also worth noting what we don’t have here: data about how much time the device spends on this JavaScript for subsequent views. The argument for the SPA architecture is that, once the SPA is in place, you theoretically get faster subsequent page loads. My own experience tells me that’s far from a given, but we don’t have concrete data here to make that case in either direction.
What is clear: right now, if you’re using a framework to build your site, you’re making a trade-off in terms of initial performance—even in the best of scenarios.
Some trade-off may be acceptable in the right situations, but it’s important that we make that exchange consciously.
There’s reason for optimism. I’m encouraged by how closely the folks at Chrome have been working with some of these frameworks to help improve their performance.
But I’m also pragmatic. New architectures tend to create performance problems just as often as they solve them, and it takes time to right the ship. Just as we shouldn’t expect new networks to solve all our performance woes, we shouldn’t expect that the next version of our favorite framework is going to solve them all either.
If you are going to use one of these frameworks, then you have to take extra steps to make sure you don’t negatively impact performance in the meantime. Here are a few great starting considerations:
- Do a sanity check: do you really need to use it? Vanilla JavaScript can do a lot today.
- Is there a lighter alternative (Preact, Svelte, etc.) that gets you 90% of the way there?
- If you’re going with a framework, does anything exist that provides better, more opinionated defaults (ex: Nuxt.js instead of Vue.js, Next.js instead of React, etc.)?²
- What’s your performance budget going to be for your JavaScript?
- What friction can you introduce into the workflow that makes it harder to add any more JavaScript than absolutely necessary?
- If you’re using a framework for the developer ergonomics, do you need to ship it down to the client, or can you handle that all on the server?
These are generally good things to consider regardless of your technology choice, but they’re particularly important if you’re starting with a performance deficit from the beginning.
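On the budget and friction questions in particular, even a blunt check wired into your build can be a useful start. A rough sketch follows; the dist/main.js path and the 170 KB figure are placeholders, not recommendations:

// check-js-budget.js: fail the build if the bundle exceeds a byte budget.
// Both the path and the budget are placeholders; adjust them for your project.
const { statSync } = require('fs');

const BUDGET_BYTES = 170 * 1024;
const BUNDLE_PATH = 'dist/main.js';

const { size } = statSync(BUNDLE_PATH);

if (size > BUDGET_BYTES) {
  console.error(`${BUNDLE_PATH} is ${size} bytes, over the ${BUDGET_BYTES} byte budget.`);
  process.exit(1);
}

console.log(`${BUNDLE_PATH} is ${size} bytes, within budget.`);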
1. I considered using other baselines.
For example, we could use all sites that didn't have any of the mentioned frameworks (jQuery, React, Vue.js, Angular) detected.
Here are the numbers for the JavaScript main thread work on mobile devices for those URLs:
| | 10th | 25th | 50th | 75th | 90th |
|---|---|---|---|---|---|
| All Sites | 6.3ms | 75.3ms | 382.2ms | 2,316.6ms | 5,504.7ms |
Going even further, we could use all sites with no JavaScript framework or library detected at all. Here's what the main thread time looks like for those:
| | 10th | 25th | 50th | 75th | 90th |
|---|---|---|---|---|---|
| All Sites | 3.6ms | 29.9ms | 193.2ms | 1,399.6ms | 3,714.5ms |
With either baseline, the data would have looked less favorable and more dramatic. Ultimately, I ended up using the aggregate data as the baseline because:
- It's what is used broadly in the community whenever this stuff is discussed.
- It avoids derailing the conversation with debates about whether or not sites without frameworks are complex enough to be an accurate comparison. You can build large, complex sites without using a framework—I've worked with companies who have—but it's an argument that would distract from the otherwise pretty clear conclusions from the data.
Still, as a few folks pointed out to me when I was showing them the data, it's interesting to see these alternate baselines as they do make it very clear how much these tools are messing with the averages.
2. I suspect folks could start with just the framework and build something that outperforms what Next.js or Nuxt.js gives you. The long-tail of data here, though, shows that likely isn't the case for many sites. It takes time to build out all the plumbing they provide, and build it well. Still, it is worth considering how many layers of abstraction we're willing to pull in before making the decision.
In Chrome, for example, once a snippet is saved, you can quickly execute it within developer tools by hitting Command + P (Control + P on Windows). This brings up a list of all the sources (files requested by the page). If you then type the exclamation mark (!), it will only show you the snippets you have saved.
Hitting Command + P followed by ! brings up a list of any custom snippets you’ve defined in Chrome Developer Tools.
From there, it’s a matter of selecting the one you want and pressing Enter. That will open up the console and execute the snippet.
I use this for quite a few tasks, but the example I tweeted was a little snippet that grabs all the script elements on a page and then outputs their src, async and defer values to the console using console.table. It’s a pretty handy way to quickly zero in on the various scripts in a page and see how they’re loaded.
let scripts = document.querySelectorAll('script');
let scriptsLoading = [...scripts].map(obj => {
let newObj = {};
newObj = {
"src": obj.src,
"async": obj.async,
"defer": obj.defer
}
return newObj;
});
console.table(scriptsLoading);
One thing to keep in mind is that the results include all scripts: those that were included in the initial HTML as well as those that were later injected. Andy mentioned that he wished there was a way to see how many scripts were included in the initial HTML versus injected later on, specifically as part of a WebPageTest run.
WebPageTest supports custom metrics, which are pretty much what they sound like: they’re metrics that you define when you run a test. You provide the logic for the metrics using a snippet of JavaScript.
For example, you could return a custom metric called “num-scripts” by dropping the following in the “Custom” tab on WebPageTest.
[num-scripts]
return document.querySelectorAll('script').length;
The challenge that Andy noted is the same here as in-browser tooling—the document is going to include both scripts loaded by default and scripts that are later injected. If you want to see only the scripts included in that initial HTML, then it’s much trickier.
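In the browser, the closest workaround I’ve come up with (a rough sketch, not something from the original discussion) is to re-request the page’s HTML and count the scripts in that copy—with the caveat that a second fetch isn’t guaranteed to return exactly what the browser originally received:
// Re-fetch the current page and compare script counts (approximation only)
fetch(window.location.href)
    .then(response => response.text())
    .then(html => {
        let initialDoc = new DOMParser().parseFromString(html, 'text/html');
        let initial = initialDoc.querySelectorAll('script').length;
        let total = document.querySelectorAll('script').length;
        console.table({ initial, total, injected: total - initial });
    });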
Pat saw the discussion, and being the wonderful human being he is, he quickly exposed a way to get ahold of an array of all request data for a page (for any tests run on Chrome) using string substitution. Now, a custom metric can refer to either $WPT_REQUESTS, which will be substituted with an array of all request data except for the response bodies, or $WPT_BODIES which will be substituted with the same array, only with the addition of the response bodies for each request.
Having access to raw request data opens up a ton of possibilities, but in this particular situation, we’re interested in the response data. With the response data of the initial HTML request in hand, we can distinguish between what scripts get included in the original markup and which scripts are dynamically inserted.
The following custom metric snippet sets up two custom metrics: num-initial-scripts and num-total-scripts.
[num-initial-scripts]
let html = $WPT_BODIES[0].response_body;
let wrapper = document.createElement('div');
wrapper.innerHTML = html;
return wrapper.querySelectorAll('script').length;
[num-total-scripts]
return document.querySelectorAll('script').length;
The second custom metric, num-total-scripts, should look familiar—that grabs how many script elements appear in the final version of the document, after the page has been loaded and all JavaScript has run.
The first custom metric, num-initial-scripts, counts the number of script elements in the initial HTML. First it grabs the body of the first response using the $WPT_BODIES placeholder. Since that returns a string, we then convert it to HTML so we can parse it more easily. Finally, once we have HTML, we query it as we did the original document.
These metrics would now be available through WebPageTest’s JSON results, as well as being displayed on each test run on WebPageTest itself.
Our two custom metrics are now displayed in the metric summary table for each run of our test on WebPageTest
It’s pretty darn nifty! (That’s what the kids are saying these days. Right?)
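Since the metrics also land in the JSON results, you can pull them out programmatically. Here’s a rough sketch—the test ID is hypothetical, and I’m assuming the custom metrics show up alongside the other firstView metrics, so double-check the shape of the response against a real test:
// Hypothetical test ID—swap in one of your own test results
let testId = '200101_AB_1234';
fetch('https://www.webpagetest.org/jsonResult.php?test=' + testId)
    .then(response => response.json())
    .then(({ data }) => {
        // Assumption: custom metrics are surfaced with the rest of the firstView metrics
        let metrics = data.median.firstView;
        console.log('Initial scripts:', metrics['num-initial-scripts']);
        console.log('Total scripts:', metrics['num-total-scripts']);
    });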
We could take this a step further and find out how many external scripts are included in the initial HTML without using async or defer, by counting only the scripts where neither of those values is true:
[num-blocking-external-scripts]
let html = $WPT_BODIES[0].response_body;
let wrapper = document.createElement('div');
wrapper.innerHTML = html;
let scripts = wrapper.querySelectorAll('script[src]');
return [...scripts].filter(obj => obj.async == false && obj.defer == false).length;
Or check how many external stylesheets included in the initial HTML are likely render blocking, by finding all stylesheets that aren’t marked as print-only:
[num-blocking-css]
let html = $WPT_BODIES[0].response_body;
let wrapper = document.createElement('div');
wrapper.innerHTML = html;
return wrapper.querySelectorAll('link[rel=stylesheet]:not([media=print])').length;
Or, breaking away from the HTML for a minute, we could check to see if there were any stylesheets being included using @import from within another stylesheet (a performance no-no):
[css-imports]
let requests = $WPT_BODIES;
let cssBodies = requests.filter(request => request.type == "Stylesheet");
let re = /@import/g;
let importCount = 0;
cssBodies.forEach((file) => {
    importCount += ((file.response_body || '').match(re) || []).length;
});
return importCount;
Those are a few quick ideas of data that would have been difficult (if possible at all) to gather before. It’s a pretty handy little bit of functionality that I’m really looking forward to playing with more.
]]>Here’s an example.
A couple of months ago, I was helping one company boost its performance online. We addressed plenty of front-end optimizations, but one of the most troubling issues was their Time to First Byte (TTFB).
The cache hit rate was very low (a topic for another day), and whenever the server had to provide a fresh version of the page, it was taking an unacceptably long time more often than not. This particular company was using Shopify, so it wasn’t a matter of tuning servers or CDN settings somewhere, but of figuring out what was taking so long for Shopify to render the Liquid templates necessary to serve the page.
Interestingly, the synthetic data was pretty much smoothing over the TTFB issues entirely. Synthetic tests, even properly tuned to match device and network characteristics, only occasionally surfaced the issue. The RUM data made it clear as day (and it was easily reproduced in individual testing).
Looking at the page itself, we noticed a block of JSON data on product pages. This JSON data was very large, providing detailed product information for all the products in the catalog. It added a lot of size to the HTML (anywhere from 50-100kb depending on the page), and we suspected it was a big part of the server delay as well. Just how much of the delay it accounted for, and whether or not it was the primary culprit, we weren’t sure. Trial and error is fine, and often the only way forward, but it’s always nice to have some definitive evidence to guide decisions.
I pinged someone I know at Shopify around this time, and they hooked me up with a beta version of a new profiler they built for analyzing Shopify Liquid rendering times. (The profiler has since been released as a Chrome extension, so it’s a bit easier to get up and running now.) The profiler takes a JSON object of data about how long Shopify spent rendering the Liquid template and then presents that data in the form of a flame graph.
Sure enough, running the profiler showed that the creation of this JSON object was a significant bottleneck—the primary bottleneck, in fact. The image below is from a profile where the template rendering took 3.8 seconds.
The Shopify profiler shows how much time Shopify spends on each part of a given template. Here, we see the include that creates a JSON object takes 2.8s.
If you don’t yet speak flame graph, here’s what it’s telling you.
First, on the bottom, is all the work for the Page itself. Everything above that is a task that had to complete as part of the work to render that page.
The part highlighted, which is showing 2.8 samples (seconds), is the time it took to handle a particular include in the theme. As you move further up the stack, you can see there’s an if statement (if template.name == "collection") that triggers the creation of our JSON object (collection.products | json).
The width of that final chunk of work to create the JSON object is nearly the same as the width for all work associated with that include, indicating that’s where the bulk of that time is coming from.
We could have put a cap on how many products would be returned in the JSON object, or maybe stripped out some of the data. Either would have made that process faster (and reduced the weight passed over the network as well). We didn’t have to go through the trouble, though. As it turns out, it was for an analytics service the company was no longer using. We removed the creation of the JSON object altogether and watched the TTFB decrease dramatically—to the tune of 50-60% at the median.
From a technical perspective, the optimization isn’t altogether that interesting—there’s nothing super exciting about commenting out a block of JSON. But, to me, the fact that the fix was so boring is precisely what makes it interesting.
The optimization was a huge savings just quietly waiting for someone to find it. If we only had synthetic testing to look at, we would have missed it entirely. As I noted earlier, synthetic tests only rarely showed the long TTFBs. It took RUM data to both surface the problem and make it clear to us just how frequently the issue was occurring.
Even after the problem was discovered, identifying the fix required better visibility into the work happening under the hood. By digging through the completed HTML, we were able to come up with a reasonable guess as to what was causing the problem. But a tool with a bit more precision was able to tell us exactly what we were dealing with (and would no doubt have saved us some time if we had it from the beginning).
The combination of quality monitoring and quality tooling is one that tends to have powerful ripple effects. Browsing around Shopify sites, I see a lot of long TTFB issues, and I’m confident that profiling the templates for a few of these pages would surface plenty of areas for quick improvement. If Shopify were also able to find a way to better surface cache hit ratios and what exactly is triggering cache invalidation, I suspect a bunch of common patterns would emerge. These patterns would then lead to both individual companies using the platform, and likely Shopify itself, identifying changes that would reap huge rewards for performance (and revenue).
It’s not always clever tricks and browser intricacies (though those are always fun) that lead to performance improvements. Often, it’s far more mundane tasks—cleaning up here, adjusting there—just waiting for the right combination of tooling and monitoring to make them apparent.
]]>Continuing on the health-related analogies, friction is a big part of how I manage my sweet tooth. I work by myself in a small office. Nothing is preventing me from constantly snacking on a bunch of sweets, and wow would I ever love to. I’m a sucker for just about anything with sugar.
But I discovered something else about myself: I’m also kinda lazy. So I take a two-part approach. The first is to make the right thing easy. I have apples, oranges, almonds, dried cranberries, and all sorts of healthier snacking options right next to my desk. If I’m hungry, I don’t even have to move. I reach out my arm, and there they are.
But the second part of that process is just as important. I make the wrong thing harder. I do have some sweets, but they’re tucked away in an adjacent room. It’s not difficult to get to them, but it does require more effort than the healthy alternatives right next to me. It doesn’t stop me from having sweets, but it means that reaching for that chocolate involves a conscious decision to put in more work than if I decide to have an apple. It’s just enough friction to change the way I snack.
A lot of modern workflow improvements have been around removing friction. We want to make it easier to deploy rapidly. Tools like npm make it very easy to gain access to any and all modules we could think of. Tag management enables folks to very quickly add another third-party service.
All of these things, on the surface, provide some value, but the consequences are tremendous. Because these processes remove friction, they don’t ever really give us a chance to pause and consider what we’re doing.
Re-introducing some healthy friction, some moments of pause, in our processes is critical to ensuring a higher level of quality overall.
For example, let’s tackle the trouble with npm.
npm transformed the way we build, but I don’t think anyone can argue that it hasn’t wreaked some serious havoc in the process. The ready availability of a JavaScript module for pretty much anything you can imagine has led to security issues, accessibility concerns and overall bloat. It’s made it too easy to add more code to our sites without ever considering the trade-offs.
I’m with Alex on this one. Adding more code should be a very intentional decision:
JavaScript should be a *deeply* intentional choice on the client. Tools that remove intentionality, whatever else they may have done for your team, probably sunk your perf battleship.
Here’s an example of how we could introduce some friction into the process to help with the performance challenges by focusing on two critical points in our workflow: install and build/deploy.
During install
The first thing we can do is introduce some friction when we first install a script. After all, the easiest issues to fix are the ones that haven’t happened yet.
I like bundle-phobia-install for this. bundle-phobia-install is a wrapper around npm install that uses information from Bundlephobia to conditionally install npm modules. It does this by comparing the size of the package against some predetermined limits. It defaults to a size limit of 100kB overall (as in, the total of all dependencies), but you can configure that however you would like.
You can also set up limits on individual packages.
For example, the following settings (configured in a package.json file) would ensure that no individual package with a size of over 20kB could be installed, and that the total size of all dependencies can be no more than 100kB.
...
"bundle-phobia": {
"max-size": "20kB",
"max-overall-size": "100kB"
},
...
Now, if we were to try to install, say, lodash, the install would fail because lodash exceeds our individual package size limit.
Running bundle-phobia-install instead of npm install lets us enforce size limits on npm modules, preventing us from adding significantly heavy dependencies to our site.
You could still install lodash, but that now requires you to run bundle-phobia-install with the interactive flag (-i) and manually approve the install despite the fact that it exceeds your limits. It turns an unconscious decision into a conscious one.
During build/deploy
By having some friction on the install process, we help to provide a better baseline for the size of our JavaScript. It’s still critical to put some friction on the build and deploy process, though. For one, our install approach is only limiting npm modules, not really our own code. We also don’t really know the exact shape of our bundles at install—that comes later.
For webpack-driven projects, you can take advantage of webpack’s performance hints. There are two hints available to us: performance.maxEntrypointSize and performance.maxAssetSize. performance.maxEntrypointSize lets us set a limit for all webpack produced assets for a given route. performance.maxAssetSize lets us set a limit for any individual webpack produced assets.
By default, the hints are just that—hints. They show up as warnings but don’t do anything concrete. You can change that by setting the performance.hints property to error.
So, given the following configuration, webpack would throw errors whenever an individual asset exceeds 100kB or all total assets for a given route exceed 150kB.
module.exports = {
    //...
    performance: {
        hints: 'error',
        maxEntrypointSize: 150000, // all assets for a given route, in bytes
        maxAssetSize: 100000 // any individual asset, in bytes
    }
};
webpack’s performance hints let us throw errors if individual assets are too large, or if all assets for a given route get too heavy.
If you’re not using webpack, or if you are and still want to augment these hints, we can also introduce some bundle size checking at the pull request or deploy levels. Bundlesize is a common choice here.
With Bundlesize, we set up maximum sizes for each bundle we want to track. Then we can run Bundlesize against those limits on every pull request or during our continuous integration process to stop us from deploying if any of those bundle sizes have been exceeded.
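If I remember the format correctly, the configuration lives in package.json and looks something like this—the paths here are just placeholders for whatever your build actually outputs:
...
"bundlesize": [
  {
    "path": "./dist/app.*.js",
    "maxSize": "100 kB"
  },
  {
    "path": "./dist/vendor.*.js",
    "maxSize": "150 kB"
  }
],
...
Running Bundlesize locally or as part of CI then passes or fails based on those limits.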
Bundlesize will check each bundle against the limits we set so that we can break the build if any of those limits are exceeded.
Building with friction
Healthy friction in our processes, paired with automation and reporting where appropriate, can have a substantial impact on what we ship. When we force ourselves to take these moments to consider the implications of what we’re about to add to our codebase, when we make it hard to add more bloat to our applications by default, we not only change the way we build, but we change the way we think about building. It’s the observer effect applied to the way we code.
When we have to consider the weight of every module we add to our project (or which vulnerabilities are included or what accessibility concerns they bring along), we start to inherently pay a little more attention to at least a part of performance every single day. It won’t magically fix all our performance woes by itself, but it certainly gets us pointed in the right direction.
]]>Feature-Policy is a relatively new feature that lets you opt in to or out of certain browser features on your site.
For example, you could tell the browser not to allow the use of the Geolocation API by providing the following header:
Feature-Policy: geolocation 'none'
There are a lot of benefits from a security and performance standpoint to Feature-Policy, but what I’m excited about at the moment are the ways you can use Feature-Policy to help make easy-to-overlook performance issues more visible. It essentially provides in-browser performance linting.
Oversized-images
By default, if you provide the browser an image in a format it supports, it will display it. It even helpfully scales those images so they look great, even if you’ve provided a massive file. Because of this, it’s not immediately obvious when you’ve provided an image that is larger than the site needs.
The oversized-images policy tells the browser not to allow any images that are more than some predefined factor of their container size. The recommended default threshold is 2x, but you are able to override that if you would like.
So, given the following header, the browser will not allow any origins (that’s the ‘none’ part) to display an image that is more than 2x its container size (either width or height).
Feature-Policy: oversized-images 'none';
If you wanted to be more lenient, you could tell the browser not to display any images more than 3x their container size:
Feature-Policy: oversized-images *(3) 'none';
In either case, if an image exceeds the threshold, a placeholder is displayed instead, and an error is logged to the console.
With the oversized-images policy in place, large images are still downloaded, but placeholders are shown instead, and an error is logged to the console.
Unoptimized Images
Another common image-related performance problem is unoptimized images. It’s all too common to find images that may be appropriately sized, but haven’t been adequately compressed. A lot of unnecessary metadata gets added to images when they’re taken and created, and that often gets passed along. One particularly annoying example is images that have thumbnails of themselves embedded in their metadata. I’ve seen plenty of instances where the embedded thumbnail (that the designers and developers didn’t even know was there) weighed more than the image itself!
On top of that, there’s also just general compression that many formats provide to get the ideal balance of quality and file size.
Using both the unoptimized-lossy-images and unoptimized-lossless-images policies, we can tell the browser to compare the file size to the dimensions of the image.
Feature-Policy: unoptimized-lossy-images 'none';
Feature-Policy: unoptimized-lossless-images 'none';
If the byte-per-pixel ratio is too high, the browser will display a placeholder image and log an error to the console.
The unoptimized-* policies result in a placeholder image being displayed, just as with the oversized-images policy.
The recommended byte-per-pixel ratio for lossy images is 0.5 and the recommended ratio for lossless images is 1. There’s a little wiggle room here. Right now, there’s an overhead allowance of 1kb for lossy images and 10kb for lossless images.
For example, let’s say we have a 200px by 200px JPEG. JPEG is a lossy format, so the recommended byte-per-pixel ratio is .5, and the overhead allowance is only 1kb. To figure out what image size would be acceptable, we would multiply the dimensions by the accepted ratio and then add in the overhead allowance.
(200 x 200 x .5) + 1024 = 21,024 bytes or 20.5kb
If the image were a lossless format, then our allowance would be 10kb, and the accepted byte-per-pixel ratio would be 1. Other than that, the calculation would look the same.
(200 x 200 x 1) + 10,240 = 50,240 bytes or 49.1kb
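If it’s easier to see as code, here’s that same math as a tiny helper—just a sketch of the calculation described above, not an official API:
// Byte-per-pixel ratios and overhead allowances as described above
function maxAllowedBytes(width, height, lossy) {
    let bytesPerPixel = lossy ? 0.5 : 1;
    let allowance = lossy ? 1024 : 10240;
    return (width * height * bytesPerPixel) + allowance;
}

maxAllowedBytes(200, 200, true);  // 21,024 bytes (~20.5kb) for our lossy JPEG
maxAllowedBytes(200, 200, false); // 50,240 bytes (~49.1kb) for a lossless version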
That allowance is likely to change in the future. In fact, while Blink defaults to a 10kb allowance for lossless images, they’re already experimenting with an unoptimized-lossless-images-strict policy that changes that allowance to only 1kb.
Unsized Media
What’s old is new and all that.
For a long time, putting height and width attributes on your image was more or less a best practice. Without those in place, the browser has no idea how much space the image should take up until the image has actually been downloaded. This leads to layout shifting. The page will be displayed, and then the content will shift once the image has arrived, and the browser does another layout pass to make room for it.
When we started wanting images to scale fluidly with the help of CSS, we more or less recreated the issue regardless of whether those attributes existed or not. As a result, a lot of folks stopped using them altogether.
But, thanks to recent work spearheaded by Jen Simmons, Firefox and Chrome can compute the aspect ratio of an image from its height and width attributes. When paired with any applied CSS, this means they can preserve space for those images during the initial layout phase.
The unsized-media policy tells the browser that all media elements should have size attributes, and if they don’t, the browser should choose a default. It’s a little more complicated than that, but the gist is that if you don’t have size attributes, the browser will use 300px by 150px.
Feature-Policy: unsized-media 'none';
With this policy in place, media will still be displayed, but if the size isn’t defined in the HTML, you’ll notice very quickly, as all the images will be sized at the default dimensions. And, as always, an error will be reported in the console.
With the unsized-media policy, the original image or video is still displayed, but the browser defaults to a 300px x 150px size for each image without the height and width attributes.
It’s probably worth noting, because it tripped me up at first: if you’re using the unsized-media policy in conjunction with the oversized-images policy, don’t be surprised if you suddenly see a bunch more violations from oversized images. Because the unsized-media policy now changes your unsized images to 300px by 150px, the browser will use that size as its starting point when determining if an image is oversized.
Surfacing Less-Visible Policy Violations
What I love about the image-related policies is that they take something that isn’t usually noticeable and make it jump out at us as we’re building. We know if we’ve neglected to optimize an image or provide sizing attributes because the display of the page is impacted. In fact, reporting is their primary benefit. While unsized-media would potentially reduce layout shifting, the other policies still result in the images being downloaded, so the sole benefit is this increased visibility.
There are a few other potentially helpful policies from a performance linting perspective. Policies like sync-script (which blocks synchronous script execution), sync-xhr (which blocks synchronous AJAX requests) and document-write (which blocks any document.write calls) all come to mind.
These other policies are great from a performance and control standpoint, but out of the box they’re a little less exciting from a linting perspective. Unless you have a synchronous script that is necessary for your page to display (which, ok, is not that hard to find), most of the visibility benefits these policies provide come in the form of console errors and, frankly, I suspect most developers don’t pay super close attention to those (though we all probably should).
That being said, we can make them more visible by using the ReportingObserver API to watch for violations and display them prominently on the page:
let reportingAlerts = document.createElement('ul');
reportingAlerts.setAttribute('id','reportingAlerts');
document.body.appendChild(reportingAlerts);
const alertBox = document.getElementById('reportingAlerts');
new ReportingObserver((reports, observer) => {
let fragment = document.createDocumentFragment();
Object.keys(reports).forEach(function(item) {
let li = document.createElement('li');
li.textContent = reports[item].body.message + ': ' + reports[item].body.featureId;
fragment.appendChild(li);
});
alertBox.appendChild(fragment)
}, {types: ['feature-policy-violation'], buffered: true}).observe();
I set up a quick and dirty CodePen to show how it might look.
An example of how you could display feature policy violations in your local development or staging environments.
The Catch
The big catch: browser support. Only Blink-based browsers (Opera, Edge, Chrome, Samsung) seem to support the header right now. (Firefox and Safari support the allow attribute intended for iFrames.) Even there, you have to enable “Experimental Web Platform features” (found in about:flags) for many of these to work.
How I’m Using Them
That’s not a huge issue for me personally. Since I like to use these policies as in-browser linting, I don’t need to try to ship any of these headers to production or have them work for everyone—they need to be there for me and anyone actively building the site. I use Chrome as my primary development browser anyway, so it’s just a matter of turning the flag on once and forgetting about it.
The simplest way I’ve found for doing this is through the ModHeader extension. The extension lets you define custom headers to be passed along as you’re browsing the web.
The ModHeader extension lets me set up a Feature-Policy header that I can easily toggle on and off as I browse around the web.
I have three different Feature-Policy headers that I primarily toggle between:
- oversized-images 'none'; unoptimized-lossy-images 'none'; unoptimized-lossless-images 'none';
- unsized-media 'none'; oversized-images 'none'; unoptimized-lossy-images 'none'; unoptimized-lossless-images 'none';
- sync-script 'none'; unsized-media 'none'; oversized-images 'none'; unoptimized-lossy-images 'none'; unoptimized-lossless-images 'none';
I keep the first one on a lot—it’s fascinating to browse the web with these policies applied. It’s scary how massive some images are. There’s a lot of room for improvement.
There is a LOT of unsized-media out there (I’m guilty too!) so that one gets annoying if it’s on for general browsing, which is why I have it in a separate policy I can toggle on. The same thing goes for sync-script—it breaks a lot of sites.
A few teams I’ve worked with have started using a similar flow to have those policies running so that when they’re working on the local development and staging environments, they can quickly see if something is amiss. Of course, in those situations, I recommend turning on any and all performance related policies so that they’re able to catch issues right away.
I’m hopeful that we’ll see a few other browsers add support eventually—while Chrome is my primary development browser, I do bounce between browsers, and it would be helpful for this to be available across them all. This is one of those rare times, however, where experimental support is enough to make a feature like this instantly useful.
A lot of performance issues stem from the fact that they simply aren’t very noticeable to those of us doing the building. Changing that wherever we can is one of the best ways to make sure that all that low-hanging fruit doesn’t go overlooked.
]]>preload as well as a useful, real-world demonstration of how the order of your document can have a significant impact on performance (something Harry Roberts has done an outstanding job of detailing).
I’m a big fan of the Filament Group—they churn out an absurd amount of high-quality work, and they are constantly creating invaluable resources and giving them away for the betterment of the web. One of those great resources is their loadCSS project, which for the longest time, was the way I recommended folks load their non-critical CSS.
While that’s changed (and Filament Group wrote up a great post about what they prefer to do nowadays), I still find it often used in production on sites I audit.
One particular pattern I’ve seen is the preload/polyfill pattern. With this approach, you load any stylesheets as preloads instead, and then use their onload events to change them back to a stylesheet once the browser has them ready. It looks something like this:
<link rel="preload" href="path/to/mystylesheet.css" as="style" onload="this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="path/to/mystylesheet.css"></noscript>Since not every browser supports preload, the loadCSS project provides a helpful polyfill for you to add after you’ve declared your links, like so:
<link rel="preload" href="path/to/mystylesheet.css" as="style" onload="this.rel='stylesheet'">
<noscript>
<link rel="stylesheet" href="path/to/mystylesheet.css">
</noscript>
<script>
/*! loadCSS rel=preload polyfill. [c]2017 Filament Group, Inc. MIT License */
(function(){ ... }());
</script>
Network Priorities Out of Whack
I’ve never been super excited about this pattern. Preload is a bit of a blunt instrument—whatever you apply to it is gonna jump way up in line to be downloaded. The use of preload means that these stylesheets, which you’re presumably making asynchronous because they aren’t very critical to page display, are given a very high priority by browsers.
The following image from a WebPageTest run shows the issue pretty well. Lines 3-6 are CSS files that are being loaded asynchronously using the preload pattern. But, while developers have flagged them as not important enough to block rendering, the use of preload means they are arriving before the remaining resources.
Lines 3-6 are CSS files being loaded asynchronously using the preload pattern. While they aren’t critical to initial render, the use of preload means they arrive before anything else in this case.
Blocking the HTML parser
The network priority issues are enough of a reason to avoid this pattern in most situations. But in this case, the issues were compounded by the presence of another stylesheet being loaded externally.
<link rel="stylesheet" href="path/to/main.css" />
<link rel="preload" href="path/to/mystylesheet.css" as="style" onload="this.rel='stylesheet'">
<noscript>
<link rel="stylesheet" href="path/to/mystylesheet.css">
</noscript>
<script>
/*! loadCSS rel=preload polyfill. [c]2017 Filament Group, Inc. MIT License */
(function(){ ... }());
</script>
You still have the same issues with the preload making these non-critical stylesheets have a high priority, but just as critical, and perhaps a bit less obvious, is the impact this has on the browser’s ability to parse the page.
Again, Harry’s already written about what happens here in great detail, so I recommend reading through that to better understand what’s happening. But here’s the short version.
Typically, a stylesheet blocks the page from rendering. The browser has to request and parse it to be able to display the page. It does not, however, stop the browser from parsing the rest of the HTML.
Scripts, on the other hand, do block the parser unless they are marked as defer or async.
Since the browser has to assume that a script could potentially manipulate either the page itself or the styles that apply to the page, it has to be careful about when that script executes. If it knows that it’s still requesting some CSS, it will wait until that CSS has arrived before the script itself gets run. And, since it can’t continue parsing the document until the script has run, that means that stylesheet is no longer just blocking rendering—it’s preventing the browser from parsing the HTML.
This blocking behavior is true for external scripts, but also inline script elements. If CSS is still being downloaded, inline scripts won’t run until that CSS arrives.
Seeing the problem
The clearest way I’ve found to visualize this is to look at Chrome’s developer tools (gosh, I love how great our tools have gotten).
In Chrome, you can use the Performance panel to capture a profile of the page load. (I recommend using a throttled network setting to help make the issue even more apparent.)
For this test, I used a Fast 3G setting. Zooming in on the main thread activity, you can see that the request for the CSS file occurs during the first chunk of HTML parsing (around 1.7s into the page load process).
That tiny sliver of activity directly below the Parse HTML block is when the CSS is first requested, 1.7 seconds into the page load process.
For the next second or so, the main thread goes quiet. There are some tiny bits of activity—load events firing on the preloaded stylesheets, more requests being sent by the browser’s preloader—but the browser has stopped parsing the HTML entirely.
When you zoom back out in Chrome’s Performance panel, you can see the main thread goes quiet after the CSS is requested for over 1.1 seconds.
Around 2.8s, the stylesheet arrives, and the browser parses it. Only then do we see the inline script get evaluated, followed by the browser finally moving on with parsing the HTML.
Finally, the CSS arrives around 2.8 seconds into the load process, and we see the browser starts parsing the HTML again.
The Firefox Exception
This blocking behavior is true of Chrome, Edge, and Safari. The one exception of note is Firefox.
Every other browser pauses HTML parsing but uses a lookahead parser (preloader) to scan for external resources and make requests for them. Firefox, however, takes it one step further: they’ll speculatively build the DOM tree even though they’re waiting on script execution.
As long as the script doesn’t manipulate the DOM and cause them to throw that speculative parsing work out, it lets Firefox get a head start. Of course, if they do have to throw it out, then that speculative work accomplishes nothing.
It’s an interesting approach, and I’m super curious about how effective it is. Right now, however, there’s no visibility into it in Firefox’s performance profiler: you can’t see the speculative parsing work, whether it had to be redone and, if so, what the performance cost was.
I chatted with the fine folks working on their developer tools, though, and they had some exciting ideas for how they might be able to surface that information in the future—fingers crossed!
Fixing the issue
In this client’s case, the first step to fixing this issue was pretty straightforward: ditch the preload/polyfill pattern. Preloading non-critical CSS kind of defeats the purpose and switching to using a print stylesheet instead of a preload, as Filament Group themselves now recommend, allows us to remove the polyfill entirely.
<link rel="stylesheet" href="/path/to/my.css" media="print" onload="this.media='all'">That already puts us in a better state: the network priorities now line up much better with the actual importance of the assets being downloaded, and we’ve eliminated that inline script block.
In this case, there was still one more inline script in the head of the document after the CSS was requested. Moving that script ahead of the stylesheet in the DOM eliminated the parser blocking behavior. Looking at the Chrome Performance panel again, the difference is clear.
Before the changes, the browser stopped parsing at line 1939 of the HTML, when it encountered the inline script and stayed there for over a second. After the change, it was able to parse through line 5281.
Whereas before it was stopped at line 1939 waiting for the CSS to load, it now parses through line 5281, where another inline script occurs at the end of the page, once again stopping the parser.
This is a quick fix, but it’s also not the one that will be the final solution. Switching the order and ditching the preload/polyfill pattern is just the first step. Our biggest gain here will come from inlining the critical CSS instead of referencing it in an external file (the preload/polyfill pattern is intended to be used alongside inline CSS). That lets us ignore the script related issues altogether and ensures that the browser has all the CSS it needs to render the page in that first network request.
For now, though, we can get a nice performance boost through a minor change to the way we load CSS and the DOM order.
Long story short:
- If you’re using loadCSS with the preload/polyfill pattern, switch to the print stylesheet pattern instead.
- If you have any external stylesheets that you’re loading normally (that is, as a regular stylesheet link), move any and all inline scripts that you can above them in the markup.
- Inline your critical CSS for the fastest possible start render times.
But, wow, do they always sound so… neat and tidy. Like:
- Get up at 4am
- Meditate for an hour
- Drink coffee and plan out my day
- Workout
- Etc, etc, etc
I realized I, too, have a routine. A messy one. So here’s how my daily routine stacks up.
- Somewhere between 5:00am-6:00am (basically, way too early)
- My two-year-old wakes up and calls for me from his room, so I begrudgingly oblige. He hands me his pillow, his blankets (each one by one) and his book (the kid insists on sleeping with his 100 Animal Pictures board book right now), and then I pick him up and put him and all his gear into the recliner and turn on a Big Comfy Couch or Winnie The Pooh.
I don’t like putting him in front of a show to start the day, but he’s a bit grumpy until he eats, and I’m a bit groggy until I’ve had my coffee. It’s in both of our best interests to fix those issues as early as possible and this is the easiest way to get a free moment to make him some breakfast and make myself a cup of coffee.
- 6:00am-7:45am
- On most days, I start by making breakfast for the rest of my kids and myself (my usual: three fried eggs, an avocado, a couple of pieces of toast, and a banana) and coffee for my wife (she doesn’t like to eat breakfast this early).
The rest of the time I’m chasing after the kiddos: making sure they’re getting ready for the day and not getting distracted (my oldest loves to find a corner to hide in so she can get lost in whatever book she’s currently reading), brushing teeth, brushing hair (I am very proficient at ponytails and ok at braids, but that’s my current limit) and helping get their backpacks ready. During this time I’m also getting myself dressed and ready for the day.
I say most days because right now I’m coaching kids basketball on Tuesdays and Thursdays which means I have to leave work early. So today, for example, I was in the office by 6am. I replied to a few emails, finished writing this post and then got to work.
- 8:00am-9:30am
- The kids are off to school by now and I’m heading in to the office I rent, in a building owned by a physical therapist. The best part about renting an office next to a physical therapist? There’s a gym 20 feet away. So the first thing I do is drop my lunch and laptop off in my office, grab my gym bag, and walk over for a workout. Mondays, Tuesdays, Thursdays and Fridays are strength, Wednesday is a long run. That changes. In the warmer months, for example, I’ll switch to two long runs each week on Tuesdays and Thursdays.
On strength training days, I’ll listen to some music. When I run, I’ve found podcasts work better. I’ve always sort of mentally tuned in and out of podcasts as I listen to them, and that’s exactly why it works so well to listen to them when I run. I’ll let my mind wander in whatever direction it wants based on what I’m hearing. It’s, frankly, relaxing.
This is the closest thing I have to anything like those neat and tidy routines I’m always so interested in. I don’t always get to work out during this time, but I would say I’m batting somewhere around 80%. Honestly, what happens in this slot is pretty critical. When I work out, I feel far more focused during the day. It also starts me off on the right foot: I’ve planned something and it happened according to schedule. If I miss this time slot for any reason, the likelihood I will get a workout in that day drops dramatically and so does my general productivity. I am very protective of these hours when scheduling meetings.
I thought that would be a challenge, but it’s been very easy to make happen. When I have customers overseas, I’ll put their meetings here and bump the workouts a bit later. But otherwise I keep this open. I use Calendly for most of my meeting scheduling because it’s just simpler than going back and forth. It also lets me define what hours during the day I’m “available” for meetings, so I block off everything before 10am. It’s never been a problem for anyone.
- 9:30am-12:00pm
- Workout done, I walk back to the office, make a smoothie and get to work. Like…I dunno, nothing super specific. I’ll usually take a few minutes to read through any emails that might need responding, but from there I’ll switch to whatever work I have to get done. I keep a backlog on written notebook paper (I find it works much better for me than putting it somewhere digitally) and I’ll work off of that, usually targeting the highest priority items first. Sometimes, though, I just need to feel like I’ve finished something so occasionally, I’ll grab something that might be a little lower priority but that I know I can finish quickly, just to get the momentum going.
Yesterday was a bit of both. I’m helping one company bolster their performance monitoring, so I spent most of this time digging through the data they currently have to see how the metrics they currently have are being collected and jotting down some ideas of things I think are missing.
Then I switched to wrapping up my JavaScript-related performance findings for an audit I’m finishing for another client.
- 12:00pm-1:00pm
- Lunch. I hear a lot of folks suggest that if you work from home or work for yourself, you should protect your lunch hour at all costs. I mostly agree—I think it’s important that you have some time during the day that you can reliably say belongs to yourself. But for me, since I tend to do that with the start of my day for my workout, I’m less protective here. I still try to avoid scheduling meetings during this time slot, but those west coast folks love to have their meetings around this time, it seems, so I never really push too hard. If I have to slide lunch a little later, that’s not really an issue for me. I figure you have to have some flexibility in your schedule, especially when you’re working for yourself.
Whenever I do end up taking my lunch, I’ll sit down (I use a standing desk the rest of the day) and turn something on to watch while I eat. I’d love to pretend that it’s always some sort of fascinating technical talk, but that’s probably only 20% of the time. The rest of my lunches are filled with whatever TV shows I happen to be enjoying (Mondays are for The Outsider right now, Fridays for The Good Place).
I used to feel guilty about that, but I’ve learned over the last several years that it’s ok if not everything I do is “educational”. Sometimes, you just need to let your brain chill out and watch the latest episode of some cop/mystery thriller thing that has literally no value outside of entertainment and escape.
- 1:00pm-4:45pm
- More work. This is where I tend to be the most productive. I’m not a morning person, and my brain doesn’t really get going until late morning-early afternoon.
Sometimes, I’ll step out around 3:00pm for 15 minutes or so to pick up my kids from school and drop them off at home, but most days my wife is able to do that.
Other days, like yesterday, this afternoon slot gets broken up a bit with meetings. But I still managed to do a deep dive into the wonders contained in a client’s Tealium script in an attempt to help them figure out how to clean it up a bit.
If I were a better planner, I think there could be some real value in keeping these afternoons meeting-free. I don’t think I could do it entirely, but I suspect I could at least get away with stacking the meetings up in the mornings and, say, Monday afternoons, while keeping the rest of the week relatively clear.
I’ve always loved Lara’s suggestion to defrag your calendar, but I haven’t committed to it yet. I have a million excuses coming to mind, but then, I also had a million excuses for why I’d never be able to commit to a regular workout schedule and I was able to get that settled so I should probably just shut up and make it happen.
- 4:45pm-5:00pm
- Ooo…another part of the routine that sounds like it’s planned out properly! I end each day by looking at my backlog and reprioritizing it based on what I accomplished that day. Then I look at tomorrow’s schedule and figure out what I should work on. This came out of my brief, failed attempts at bullet journaling. It’s the one part of that process that stuck, and it’s been wildly beneficial for me.
I’ve been putting the next day’s tasks into blocks on my calendar. It’s not quite as detailed as Brad’s approach, and I tend to leave 10-15 minutes between most tasks, but it’s been working pretty well. I like having the calendar reminders as prompts and I also like knowing I have a plan. Even if a day is fairly chaotic, having a plan makes me at least feel like there is some intent and purpose behind all the chaos. It also helps me to see exactly what I’m giving up if I agree to someone’s last minute meeting request.
- 5:30pm
- Somewhere around here we’re sitting down for dinner with the kids. Again, usually. Kids and set routines don’t exactly mix well. One kid has piano lessons, one has guitar, one has violin. There are constantly playdates and sleepovers. But, we try to do dinner together and most nights that works out.
Also, “sitting” is a loose term. Our two-year-old doesn’t exactly get the concept of sitting down for dinner yet and in retrospect we ditched his high-chair too early. It’s not the kind of peaceful “down to dinner” you see on TV. My wife and I want to use the GoPro to record a few of these dinners just so that when they’re older the kids can see just how loud and chaotic (but fun) these dinners were.
- 6:00pm-7:00pm
- This is my time to play with the kids. It’s a wonderful variety. Sometimes it’s wrestling. Sometimes it’s nerf gun battles or pillow fights. Sometimes we bust out some board games or puzzles or legos. And sometimes it’s whatever crazy imaginative game they’ve dreamed up with their toys. Kids are wonderful at playing, much better than adults. I’m constantly learning from them about what it means to get absorbed in the act of having fun.
- 7:00pm-7:30pm
- The boys go to bed at 7:30pm so about 15 minutes before we switch modes and I help them get their teeth brushed and find their pajamas. Then we sit down and I read to them for 20 minutes or so before I tuck them into bed.
- 7:30pm-8:00pm
- Same thing, only now with the two younger girls. Only they can get themselves ready, so really I’m just reading to them and then tucking them into bed.
- 8:00pm-9:00pm
- Now it’s my oldest daughter’s turn. If she needs help with her math, I’ll sit down and we’ll work through that together. But we prefer to keep it fun if we can.
We mix it up to keep it fresh. Some nights I read to her. She loves graphic novels, so it’s a lot of that sort of thing or some sort of fantasy novel.
Other nights, we’ll start watching the Spurs play (that’s the San Antonio Spurs, not the soccer team). She’s getting really into it, and it gives me an excuse to purchase League Pass and watch more basketball, so I am more than happy to encourage it.
Other nights we’ll fire up YouTube and watch old comedy sketches (she likes Abbott and Costello, Lucille Ball, Carol Burnett, and, especially, Tim Conway). And other nights we’ll play some sort of board game together. Once 9:00pm comes around, I tuck her in and head upstairs.
- 9:00pm-10:00pm
- Finally, some one-on-one time with my wife! This is my favorite time of day, and it’s scary that this window of time is getting smaller and smaller. It won’t be very long before my oldest is staying awake later than my wife.
We usually work together to get the kids’ lunches packed for the next day and then sit down and just chat, or watch a show or something. We’re working our way back through The Office right now.
We sometimes watch a movie, but there are some challenges there. For one, neither of us gets super excited about a lot of the movies coming out anymore—we’re more likely to fall back on our favorites than to grab something new most of the time (I suppose that means we’re getting old now). But more critically, movies are very rarely finished in one sitting anymore. Two or three nights is our norm nowadays (like I said, getting old).
- 10:00pm-11:00pm(-ish)
- Most nights, I’ll read during this time. My wife has the ability to fall asleep at the snap of the fingers, but it takes me a while. So I’ll go to bed and read (right now I’m tearing through the excellent Being Mortal) until I start feeling like I might be able to fall asleep.
So there it is. It’s funny—even reading this sounds more neat and tidy than the reality. With the kids’ activities, doctor appointments, random meetings, the stupid siren call of checking email that I still haven’t 100% ignored, and a multitude of other things that pop-up, the reality is that my day very rarely fits into this little routine.
Still, I suppose if I wanted to clean it up into one of those neat and tidy routines I always enjoy reading, I probably could (with a few embellishments). But I’ve grown pretty comfortable with admitting that while there is a little structure, my daily routine is, more often than not, beautifully chaotic.
]]>For the second year in a row, my reading trended down. In fact, this year marks the fewest books I’ve read since 2012! There are several reasons for that, I suppose. Partly, my brain felt pretty overloaded and distracted much of the year for various reasons and I found it harder to enjoy what I was reading.
I took a look and I started and put down 13 books this year. I don’t keep track of that, but it’s gotta be close to a record. My guess is that if I were to pick up those books sometime in the future, when I’ve got my mojo back a bit, I’d likely enjoy most of them.
I also got behind on writing reviews, again. I’m gonna cut myself a little slack there, but I’m still aiming to do a better job of it this year.
As always, there were some stand-outs.
For fiction, I loved The Winter of the Witch, Katherine Arden’s conclusion to her fantastic Winternight Trilogy. I’ll miss that world—those books have been some of my absolute favorites in recent years. I could also pick all of the Fredrik Backman books I read this year, but I’ll choose just one: And Every Morning the Way Home Gets Longer and Longer. It’s a beautiful little novella and with every book of his I read, he further cements himself as my favorite author currently writing. Rounding out my top three fiction books would be The Book of M by Peng Shepherd. Like And Every Morning the Way Home Gets Longer and Longer, The Book of M revolves a lot around memories, but in more of a science-fiction way.
For non-fiction, Switch, Mismatch and Range stand out from the rest of the group. It wasn’t a particularly strong year for me in non-fiction reads, and I tended to lean on fiction as the year progressed because, again, I was having a hard time getting into many of the books I was picking up.
Not all of the books below have reviews written, which I feel bad about. But the ones that do are linked up.
- Head on by John Scalzi 4⁄5
I wrote a full review for Head On.
- Educated by Tara Westover 5⁄5
I wrote a full review for Educated.
- The Winter of the Witch by Katherine Arden 5⁄5
I wrote a full review for The Winter of the Witch.
- Mismatch by Kat Holmes 5⁄5
I wrote a full review for Mismatch.
- The Business of Expertise by David C. Baker 4⁄5
I wrote a full review for The Business of Expertise.
- And Every Morning the Way Home Gets Longer and Longer by Fredrik Backman 5⁄5
I wrote a full review for And Every Morning the Way Home Gets Longer and Longer.
- The Fall of Io by Wesley Chu 4⁄5
- Good to Go by Christie Aschwanden 4⁄5
I wrote a full review for Good to Go.
- Switch by Chip and Dan Heath 5⁄5
I saw Lara Hogan recommend this to someone, so I grabbed a copy. It didn’t disappoint. Chip and Dan have a very engaging writing style, and I’ve taken a lot of the stuff I read here and incorporated it into the work I do.
- Us Against You by Fredrik Backman 5⁄5
This is a sequel to the exceptionally good Beartown. I think I still like Beartown a bit more, but it was interesting to follow-up with the characters and town as they continued to struggle with the aftermath of the events of the first book. There’s a lot more focus on Benji this time around as well, and once again Backman tells a story that is deep, meaningful, emotional and raw.
- Deep Work by Cal Newport 4⁄5
I’ll be honest. When I finished reading this, I wrote “4 stars” down in my notebook, but in retrospect, I can’t remember much about this book. I went back and looked and I see I have plenty of highlights, but I don’t know that anything in there is necessarily revolutionary. It is, however, well put. I suspect this was a case of a familiar topic resonating with me because of the way it was presented.
- The Trusted Advisor by David H. Maister, Charles H. Green and Robert M. Galford 4⁄5
This book has been recommended to me so many times. The authors do a good job of detailing not just why trust is a foundational aspect of being a good advisor/consultant (which, you know, of course it is) but also providing actionable insight into how little behaviors of ours (typically fostered by some level of fear) can make that trust harder to come by.
- True Grit by Charles Portis 4⁄5
I’ll be honest: after how much I loved Lonesome Dove, I was hoping another western classic would provide the same level of enjoyment. But while True Grit never got me hooked the same way, it was still an enjoyable (and short) story.
- The Real Town Murders by Adam Roberts 3⁄5
There was a lot I wanted to like about this book, but it never really clicked. The storytelling just seemed…disjointed. I never really felt like I connected with any of the characters and it felt like a narrative thread would start only to be dropped unceremoniously pages later. That, and the fact that Roberts reminded us of Alma’s bed-ridden friend’s “enormous” size so many times without any sort of rationale for why that mattered to us in any way.
- Quit Like a Millionaire by Kristy Shen and Bryce Leung 3⁄5
As with any “money” related book, I can’t say I enjoyed a ton of it. Actually, most of my favorite parts of this book came from hearing about Kristy’s background and upbringing and how that influenced how she approached her money.
- My Grandmother Asked Me To Tell you She’s Sorry by Fredrik Backman 5⁄5
More Backman brilliance. This one has some definite similarities to A Man Called Ove, from the grandmother’s character through to the story being used as a way to explore a small community of people. But telling the story from the perspective of a seven year old child provides a fresh point of view, and allows for some hints of magic realism that help to elevate the story.
- Recursion by Blake Crouch 4⁄5
Crouch’s novels don’t push a ton of new ground, but he’s pretty good at spinning a fun tale. They’re popcorn-movie novels, which is not meant to be an insult in any way. Personally, sometimes I need a good popcorn novel: a story that is light, entertaining and fast-paced. If you have enjoyed Crouch’s other books, you’ll like this too. If you haven’t, you probably won’t.
- In An Absent Dream by Seanan McGuire 4⁄5
I don’t know what it is about this Wayward Children series, but each of these little books has been a lot of fun to read. The first book, Every Heart a Doorway, felt like a fairly self-contained story. You met a lot of characters, all kids who have been to various different “worlds” and come back, but the story lived well enough on its own. In reality, it was a jumping point for so many other interesting little fairy tales. Each subsequent book has taken characters from that original story and provided us with their own stories, with their own unique worlds. It’s been a great series and I hope McGuire never runs out of stories to tell here.
- Small Spaces by Katherine Arden 4⁄5
With as much as I loved her Winternight Trilogy there was no way I was going to pass on reading anything else Arden has written. This is a much simpler tale, and it’s labeled as a book for “middle graders”. Though as many a wise author has said, there’s no such thing as stories for kids or adults—there are just stories and they are either good stories or they’re not. This is a good story. It’s eerie and spooky, but it’s not just an empty ghost-story. The main characters all have good depth, and they each have things they’re struggling with. This is apparently the start of a series and while I won’t be dropping everything to read them like I did with the Winternight Trilogy, I’ll definitely be picking up the next one.
- Circe by Madeline Miller 4⁄5
This was a fun re-telling of the story of Circe, from her perspective. It repaints many of the events and assumptions from these myths in a totally different light. I can’t say I loved it quite as much as the friends who recommended it, but I still thought it was great storytelling.
- Summer Frost by Blake Crouch 3⁄5
Similar to my review of Recursion, this was fun without really presenting any sort of mind-blowingly new ideas. Being a shorter read (75 pages), it didn’t have as much time to pull me in as his other books.
- Ark by Veronica Roth 4⁄5
This, like Summer Frost, is a short story from the Forward Series. It’s a bit on the nose perhaps, but it was still an enjoyable short read.
- Redemption in Indigo by Karen Lord 3⁄5
This fable is written to feel like someone telling you a story, and it does that for better or worse. I’ve read that style before and enjoyed it, but it’s tricky to pull off. You have to feel in some way connected to the narrator, but they also have to disappear enough for you to focus on the main tale. This time, it just didn’t quite work for me. The story was fine and interesting enough, but it felt like it dragged on a bit too long.
- Range by David Epstein 4⁄5
- The Book of M by Peng Shepherd 5⁄5
Gosh I loved the way Peng played with memories in this story! The pace is just right: fast enough to keep things moving, slow enough for you to get settled in with the characters and world. People are losing their shadows (for reasons we never learn entirely, but appear to be more magical than scientific) and as they lose their shadows, they start to lose their memories. As they forget things, they end up changing the reality and world around them. I really wish I could dive into how much I love the ending, and how it almost seems to invert some of the core arguments around memory earlier in the story, but that would give too much away. Great stuff!
- Fast 5K by Pete Magill 4⁄5
Past years
]]>This comes after years of saying I wanted to do these things but never sticking with it for very long.
One of the biggest reasons it has stuck is also one of the simplest and least revolutionary: I made it easy to do the healthy thing.
I bought a water bottle that I like and carry it with me pretty much wherever I go. When I’m at the office, I set it right on my desk, next to the computer, so that it’s never more than an arm’s length away.
For the snacks, I went to the grocery store and found healthier options that I still enjoy. I keep them a few feet from my desk so that if I feel the urge to snack on something, they’re right there. I do still have some more “fun” goodies (I have a massive sweet tooth), but I keep them in a separate room. If I’m going to opt for chocolate over an apple, I’m going to have to work a little more to do it.
I keep a gym bag in my office, with gym clothes washed and ready to go. When I walk in, the bag is sitting right there by the door. I don’t have to think—I grab the bag and walk out. I do the same thing when I’m traveling: my gym stuff is always right there, ready to go. All it takes is a few seconds of willpower to get moving. I keep a pair of adjustable dumbbells at home, so even if I can’t get to the gym, there’s never an excuse not to work out.
I changed my default state so that it supports the habits I want to develop.
It’s one of the most important things any organization can do as well.
Our default tooling stack makes it all too easy to build sites and applications that don’t perform well. The next library is just an npm install away. Images, fonts, CSS—the starting point for pretty much every resource you can think of is “just add more here.” It’s easier to do the wrong thing than it is to do the right thing.
So for every company I work with, I try to flip that script.
How do we make it easier to build well? What changes can we make so that building something performant, secure, and accessible is easier than building something that doesn’t live up to those standards?
While most folks have the best of intentions, we all fall back on whatever our default state is from time to time. A combination of seamless automation (for example, automatic image optimization) and carefully placed friction (for example, enforcing performance budgets at build or even install) is necessary to change the default of our processes to promote things like accessibility, security, and performance.
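To make the “carefully placed friction” part concrete, here’s a minimal sketch of the kind of check you could wire into a build step. The file path and budget number are made up for illustration; the point is just that exceeding the budget stops the build instead of quietly shipping.
// A sketch of build-time friction: fail the build if the main bundle
// grows past a budget. The path and budget here are illustrative.
const { statSync } = require('fs');
const BUDGET_BYTES = 170 * 1024;
const size = statSync('dist/main.js').size;
if (size > BUDGET_BYTES) {
  console.error(`main.js is ${size} bytes, over the ${BUDGET_BYTES} byte budget.`);
  process.exit(1); // a non-zero exit code breaks the build
}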
If we want to see lasting improvements, we need to make sure the default state of our workflows makes it easy to build well, not just to build quickly.
]]>Hence, why I’m a big fan of BigQuery.
You can use it for your own private data, and I’ve worked at or with organizations that did, but there is also a wealth of public datasets available for you to dig into. I spend 90% of my time querying either the HTTP Archive or the Chrome User Experience Report (CrUX) data. Typically, I’m looking for performance trends, doing some level of competitive comparison, or digging through data for a company that doesn’t yet have a proper real-user monitoring (RUM) solution in place. But I’ve also dug into data from Libraries.io, GitHub, Stack Overflow, and World Bank, for example. There are more than enough public datasets to keep any curious individual busy for quite a while.
One of the great things about the BigQuery platform is how well it handles massive datasets and queries that can be incredibly computationally intensive. As someone who hasn’t had to write SQL as part of my job for at least 10 years, maybe longer, that raw power comes in handy as it makes up for my lack of efficient queries.
One thing that it doesn’t hide, though, is the cost. If you’re constantly querying BigQuery, you can end up with a pretty hefty bill fairly quickly.
Jeremy Wagner mentioned being concerned about this, and it’s something I was worried about when I first started playing around with it as well.
I’m no expert, but I do have a handful of tips for folks who maybe want to start digging into these datasets on their own but are wary of racking up a big bill.
Don’t set up billing
If you’re just starting fresh, don’t even bother to set up billing yet. The free tier provides you with 1TB of query data each month. While it’s easier to burn through that than you might think, it’s also not a trivial amount.
If you stick with the free tier, when you exhaust your limits, BigQuery will fail to execute your next query and tell you that you need to set up billing instead. It’s a safe way to play around, knowing that you aren’t going to be charged unless you explicitly decide to level up.
Set a budget
If you have moved beyond the free tier and your payment information is already set, then the next best thing you can do is use the different budgeting features BigQuery provides.
For each individual query, you can set a “maximum bytes billed” limit. If your query is going to exceed that limit, the query won’t run, and you won’t be charged. Instead, you’ll be told your limit is going to be exceeded. To run it successfully, you’d have to first up the budget or remove it entirely.
With a maximum bytes billed limit set on a query, the query will fail without charge if it will exceed that data limit.
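If you run your queries from code rather than the console, you can set the same guardrail there. Here’s a minimal sketch using the Node.js client (@google-cloud/bigquery), assuming it passes maximumBytesBilled through to the job configuration; the 10GB cap and the CrUX table name are just placeholders:
// A sketch: cap a single query's billable bytes. Values are illustrative.
const { BigQuery } = require('@google-cloud/bigquery');
async function runCappedQuery() {
  const bigquery = new BigQuery();
  const [job] = await bigquery.createQueryJob({
    query: 'SELECT DISTINCT origin FROM `chrome-ux-report.all.201910` LIMIT 10',
    // If the query would bill more than this, it fails instead of running.
    maximumBytesBilled: String(10 * 1024 ** 3)
  });
  const [rows] = await job.getQueryResults();
  console.log(rows);
}
runCappedQuery().catch(console.error);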
You can also set a budget for the month as a whole. You can then set a few thresholds (BigQuery will default to 50%, 90%, and 100%), each of which will trigger an alert (like an email) warning you that they’ve been reached.
So, let’s say you set a monthly budget of $20. With alerts in place, you would be emailed as soon as you hit $10, again when you hit $18, and then once more when you hit your $20 budget. With these in place, you can rest easy knowing you aren’t going to be surprised with an obnoxiously high bill.
BigQuery lets you set a monthly budget, with different thresholds so you can be alerted as you get closer to using your budget.
Use BigQuery Mate for Query Cost Estimates

If you use Chrome, you can use BigQuery Mate to keep you informed of the anticipated cost of a query before you ever run it. BigQuery already tells you how much data you’re going to use in a given query. This extension adds the cost as well (something BigQuery should probably just do by default).
If you don’t use Chrome or don’t want to install the extension, you can also use Google’s cost calculator. It works, but it’s certainly a more manual and clunky process.
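You can also let BigQuery do the estimating itself with a dry run, which reports how many bytes a query would process without actually running (or billing) it. A rough sketch, assuming on-demand pricing of about $5 per TB, which is what it was the last time I checked:
// A sketch: estimate a query's cost with a dry run (nothing is billed).
const { BigQuery } = require('@google-cloud/bigquery');
async function estimateCost(query) {
  const bigquery = new BigQuery();
  const [job] = await bigquery.createQueryJob({ query, dryRun: true });
  const bytes = Number(job.metadata.statistics.totalBytesProcessed);
  const estimate = (bytes / 1024 ** 4) * 5; // assumes ~$5 per TB on-demand
  console.log(`~${(bytes / 1024 ** 3).toFixed(2)}GB scanned, roughly $${estimate.toFixed(2)}`);
}
estimateCost('SELECT DISTINCT origin FROM `chrome-ux-report.all.201910`').catch(console.error);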
Test against smaller tables first, if possible
Some datasets have numerous tables that represent the data, just sliced differently.
For example, the CrUX data is contained in one massive table, as well as broken up into smaller tables for each country’s traffic. The structure, however, is identical.
When I’m writing a new query against CrUX data, and I know it’s gonna take some tweaking to get it right, I’ll pick a country table to query against instead. That way I’m using less data on all my experiments. When I’ve got the query returning the data I’m after in the format I want, that’s when I’ll go back to the main table to query the aggregate data.
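Concretely, the only thing that changes between the experiments and the final run is the table in the FROM clause. Something like this (the country dataset, month, and metric are all illustrative):
// A sketch: iterate against one country's CrUX dataset, then swap in the
// full dataset once the query does what you want. Names are illustrative.
const table = process.env.FINAL_RUN
  ? 'chrome-ux-report.all.201910'
  : 'chrome-ux-report.country_us.201910';
const query = `
  SELECT origin,
    ROUND(SUM(IF(bin.start < 1000, bin.density, 0)) / SUM(bin.density), 4) AS fast_fcp
  FROM \`${table}\`,
    UNNEST(first_contentful_paint.histogram.bin) AS bin
  GROUP BY origin
  ORDER BY fast_fcp DESC
  LIMIT 10
`;
// run `query` with the client (e.g. bigquery.query({ query })) once you're happy with it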
If there aren’t smaller tables, make them
For other datasets, the smaller tables don’t exist, but you can make your own.
For example, I was recently querying HTTP Archive data to find connections between JavaScript framework usage and performance metrics. Instead of running my queries against the main tables over and over, I ran a query to find all the URL’s that were using one of the frameworks I was interested in. Then I grabbed all the data for those URL’s and dumped it into a separate table.
From there, every query could be run against this table containing only the data that was relevant to what I was investigating. The impact was huge. One query that would have gone through 9.2GB of data had I queried CrUX directly ended up using only 826MB of data when I ran it against the subset I created instead.
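The mechanics are straightforward: run one query to materialize the subset into a table you own, then point everything else at that. A rough sketch (the project, dataset, and table names here are all made up):
// A sketch: save a subset into your own table once, then query that instead.
// Project, dataset, and table names are made up.
const { BigQuery } = require('@google-cloud/bigquery');
const subsetQuery = `
  CREATE TABLE \`my-project.scratch.framework_subset\` AS
  SELECT *
  FROM \`chrome-ux-report.all.201910\`
  WHERE origin IN (SELECT origin FROM \`my-project.scratch.framework_origins\`)
`;
new BigQuery().query({ query: subsetQuery }).catch(console.error);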
Plenty more, I’m sure
This is far from an exhaustive list of tips or advice, and I’m certain someone who spends more time than I do in BigQuery (or who actually knows what they’re doing in SQL) would have plenty more to add. But these have all been enough to make me really comfortable hopping into BigQuery whenever I think there might be something interesting to pull out.
]]>Among the things that she discussed is the care that has to go into new browser features because once shipped, it’s there for good. Most of us don’t have to worry about things to that same level because:
…you can always delete all your code later. You can always say, ‘Oh, this thing I shipped quickly for my project, that was a bad idea. Let’s obliterate it and re-do it better later.’ But with the web, there’s two things. One, we don’t get to change it and ship it again later, almost ever. If we hurry up and we ship subgrid, and subgrid is crappy, there’s no, like, fixing it. We’re stuck with it.
This permanence to the web has always been one of the web’s characteristics that astounds me the most. It’s why you can load up sites today on a Newton, and they’ll just work. That’s in such sharp contrast to, well, everything I can think of. Devices aren’t built like that. Products in general, digital or otherwise, are rarely built like that. Native platforms aren’t built like that. That commitment to not breaking what has been created is simply incredible.
Jen’s other point, too, is an important one to remember:
…And the other thing is that we’re not solving for one website. We’re not solving for facebook.com, or youtube.com, or codepen.io or for…whatever. We’re solving for the entire web and every use case ever all at the same time.
She gives an example, later on, discussing how even something seemingly simple, underlines, becomes so much more intense when you need to solve for everyone:
Well, what about these languages that are typeset vertically? What is the typography in Japan? What’s needed for this kind of script that is completely different than the Latin alphabet? And there’s a long conversation about that and then, ‘Wow, we’re shipping something that actually works for all the languages and all the scripts around the world.’ Or it almost does and there’s a few pieces missing but we’re dedicated to going ahead and finishing those pieces as soon as we can.
There’s a lot of thought and consideration that goes into deciding what makes its way into this incredible platform and what doesn’t.
Another person I have a ton of respect for and who has been doing incredibly important work for a long time is Alex Russell. In particular, he’s put an absurd amount of time and energy into advocating for being careful about the overreliance on JavaScript that is leading to much of the web’s current performance issues.
I thought about Jen’s comments when I saw one person stating that Alex was trying to “sell you on fairy tales of Use the Platform”.
I don’t want to single that person out because I’m not here to encourage a pile-on, but also because they’re hardly the first person to express that general sentiment. But, that statement about the “fairy tale of Use the Platform” has really stuck with me, because it feels…wrong.
So much care and planning has gone into creating the web platform, to ensure that even as new features are added, they’re added in a way that doesn’t break the web for anyone using an older device or browser. Can you say the same for any framework out there? I don’t mean that to be perceived as throwing shade (as the kids say). Building the actual web platform requires a deeper level of commitment to these sorts of things out of necessity.
And as some frameworks are, just now, considering how they scale and grow to different geographies with different constraints and languages, the web platform has been building with that in mind for years. The standards process feels so difficult to many of us because of the incredible amount of minutiae that becomes critical. That security issue that might maybe be a problem? Maybe you feel comfortable taking that risk but when you’re creating something that everyone, everywhere is going to use, it becomes a valid reason for not shipping.
People talk a lot about the web being accessible or performant by default, and while it’s not perfect, it’s also not that far from being true. Creating the platform means you have to prioritize these things.
If you care at all about reaching people outside of the little bubbles we all live in, using the platform can’t be a fairy tale: it has to be the foundation for everything that we build.
That doesn’t mean that foundation is enough, or always right.
Are there limitations? Absolutely! There’s a reason why we still have a standards body, 26 years or so after HTML was first specified: because the work isn’t done and never (knock on wood) will be. (It’s also why I find it very encouraging that folks like Nicole Sullivan are hard at work identifying some of the things we need frameworks for that should probably be in the browser instead.)
The web thrives on a healthy tension between stability and the chaos of experimentation. It’s perfectly fine, and necessary at times, to use tools to work around issues and limitations the web may have. I have no problem with that at all.
But it’s important that we do so very carefully because there are definite trade-offs.
To create the standards that make it into the platform, careful consideration is given to each and every feature to minimize the security risks. Every new feature has to be carefully considered from an accessibility perspective to make sure that not only does it not cause harm, but that assistive technology has all the information it needs to be able to provide people with a usable experience. Performance has to be top of mind for each new standard, to ensure that shipping it won’t cause undue bloat or other performance issues.
And each of these things must not be considered simply in one single context, but for all sites and across geographies, languages, devices, and browsing clients.
Can you say, with confidence, that the same level of care is given in the tools and frameworks we use or build?
Use the platform until you can’t, then augment what’s missing. And when you augment, do so with care because the responsibility of ensuring the security, accessibility, and performance that the platform tries to give you by default now falls entirely on you.
]]>One thing that Netlify announced during the event was their new build plugins functionality. Netlify’s build process now exposes different events during the build and deploy lifecycle that you can use as hooks to attach certain functionality to. The simplest way to do that at any sort of scale is to create a build plugin that you can then install for any site you may want to use it.
It’s not all that different from any other sort of build process, I suppose, but it does give Netlify some continuous integration functionality which is nice.
I really like Netlify, and I really like SpeedCurve so I thought building a SpeedCurve plugin for Netlify would be a fun way to play around with the new feature.
Getting set up
The first step was signing up for the private beta and getting access to build plugins in the first place.
There wasn’t much to do at all to get started playing with the build lifecycle and plugins locally. I had to make sure I had a netlify.yml configuration file for my site and use the latest version of the Netlify CLI, and that was about it.
To enable build plugins remotely, in the live Netlify Account, requires a teeny, tiny bit of setup. I’m afraid I can’t tell you that or the Netlify team will probably lock me out of my account and send Phil Hawksworth over to “tie up the loose ends” or something.
What I will say is that, in addition to the setup they suggested, I had to also change my build image from Ubuntu Trusty 14.04 (their legacy build image) to Ubuntu Xenial 16.04 (their current default). Don’t let the Linux names and versions scare you—these were the only two options and all I had to do was tick a radio button.
Building the plugin
Building the plugin itself turned out to be fairly painless. The build documentation was pretty helpful, though I mostly relied on Sarah’s post and the demo plugins she links to.
As I mentioned, there are a number of different lifecycle events that a plugin can hook into. In this case, my goal was to trigger a round of tests in SpeedCurve for each deploy, so it made sense to hook into the finally event. So, I created a folder for the plugin, made an index.js file, and set up the basic structure:
module.exports = {
  async finally() {
    console.log('Preparing to trigger SpeedCurve tests');
  }
}
That little bit of code is all that’s really necessary to do something during the finally event of the lifecycle. To test it locally, you then add a plugins section to your netlify.yml configuration file:
build:
  # this is basic configuration stuff telling
  # netlify where to publish to and what command
  # to run to build the site
  publish: dist
  lifecycle:
    build:
      - eleventy
plugins:
  # here's where we pull in the plugin
  speedcurveDeploy:
    type: ./plugins/netlify-plugin-speedcurve-deploy
What you call the plugin in the YAML file doesn’t really matter, as long as the path to the plugin is correct.
So with that setup, I was able to use the Netlify CLI to confirm things were going to work alright by running:
netlify build --dry
That command spits out a bunch of information, but what’s relevant here is that it tells you what steps are going to run and what actions are attached to them.

The SpeedCurve API docs are pretty straightforward about how to trigger a deploy. Alongside some helpful options, there are two bits of information that are required: the API key and the Site ID.
The API key, at least, is sensitive information, so it made sense to put that in an environment variable (Netlify makes it easy to add them to your configuration in their admin area). I decided to also place the Site ID in an environment variable. I could’ve used the configuration feature in the YAML file, but it just seemed to be neater to put it all together in one spot for now.
Once again, I followed Sarah’s lead (never a bad idea) on neatly pulling those variables into my plugin’s index.js file:
const {
  env: {
    // Your SpeedCurve API Key (Admin > Teams)
    SPEEDCURVE_API_KEY,
    SPEEDCURVE_SITE_ID
  }
} = require('process')
All that was left was to use that data to fire off the test. I pulled in node-fetch (because I find the ergonomics of the Fetch API very nice) and then used it to trigger the SpeedCurve tests from within the finally event I’d already set up:
fetch('https://api.speedcurve.com/v1/deploys', {
  method: 'POST',
  headers: {
    'Authorization': 'Basic ' + Buffer.from(SPEEDCURVE_API_KEY + ':' + 'x').toString('base64'),
    "Content-type": "application/json",
    "Accept": "application/json",
    "Accept-Charset": "utf-8"
  },
  body: JSON.stringify({
    site_id: SPEEDCURVE_SITE_ID
  })
})
.then(function (data) {
  if (data.status == 200) {
    console.log('SpeedCurve test submitted!');
  } else {
    console.log('SpeedCurve test couldn\'t be submitted. Status: ' + data.statusText);
  }
})
.catch(function (error) {
  console.log('Error: ', error);
});
And that was it. I ran a local build and things worked smoothly and, after tweaking the build image, a remote deploy worked on the first try as well. Now, each time I deploy a new version of the site, SpeedCurve will automatically run a series of tests so I can quickly see if I’ve changed anything from a performance perspective.
Better yet, because it’s built as a plugin, setting this up for any other Netlify sites I have will take only a minute or two.
Imperfections
There are a few things I’d like to tweak. The finally lifecycle event doesn’t technically fire after the deploy right now (something Netlify is working on fixing). That’s not a huge issue here because the Netlify build process is so fast that by the time SpeedCurve actually runs the tests, the new site has been deployed. Still, it should really be run after the deploy occurs just to be safe.
Because it runs a little early, there’s also currently no way to get the deploy ID or anything else to make it a bit easier to track the change in SpeedCurve. I’d love to be able to pass back the deploy ID to the SpeedCurve API once it’s available. For now, SpeedCurve sees no title applied so it just uses the current date to identify the deploy.
None other than Sir Hawk Philsworth himself pointed me in the direction of some docs that I completely whiffed on. Turns out, you can grab the git commit hash using an environment variable, so I’ve updated the plugin to pass that along to SpeedCurve.
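For the curious, the change itself is tiny. This is roughly what it looks like, assuming Netlify’s COMMIT_REF environment variable is available during the build and that SpeedCurve’s deploy endpoint still accepts a note field the way it did when I looked:
// A sketch: pass the commit hash along as the deploy note.
// Assumes Netlify exposes COMMIT_REF during the build and that the
// SpeedCurve deploy endpoint accepts a `note` field.
const { COMMIT_REF } = process.env;
const body = JSON.stringify({
  site_id: SPEEDCURVE_SITE_ID,
  note: COMMIT_REF ? COMMIT_REF.substring(0, 8) : 'netlify deploy'
});
// ...and pass `body` to the fetch() call shown earlier instead of the inline JSON.stringify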
I think it’s still handy in its current form, so if you have access to the beta and want to give it a whirl, I’ve put the code up in a repo and added the plugin to npm as well if that’s your cup of tea (it does make using the plugin a hair easier, I think).
All in all though, Netlify build plugins seem pretty slick and straightforward. I’m a big proponent of baking basic optimizations and checks and balances into your build process, and Netlify’s build lifecycle gives us another place to hook into.
I’ve got a few other plugins in mind already that I think will be pretty helpful. Now to find the time to build them.
]]>It’s far from the first time I’ve heard this concern expressed. A user sees “Save Data” as an option and says, “Yeah, of course!” but they may not want a lesser experience as a result.
It’s a fair concern, I think. We make a lot of decisions on people’s behalf online and certainly deciding to provide a degraded experience in this situation would be a questionable one.
But it’s also an avoidable one. One of the things I think is so great about the Save-Data feature is that it gives companies some sort of control over how their brand is experienced in data-constrained environments. They’re not relying on a proxy service to do a bunch of manipulation on their behalf, hoping it turns out alright. Instead, they have an opportunity to be proactive and carefully consider how they can provide a low-data experience that still reflects their brand in a positive light.
There are endless ways you could do this without causing the experience to feel lesser in any way. Here are a few ideas.
Lower resolution images
Often we serve high-resolution images to high-resolution screens. When the Save-Data header is enabled, we could instead serve up a lower resolution image by default.
That’s exactly what Shopify is doing now. If I open the homepage with Save-Data off on my phone, the site weighs 906kb. 292kb of that is images.
If I pass the Save-Data header and reload the page, the site weighs 791kb, with 176kb of that being images. That’s a 12% drop in page weight. And visually, I frankly can’t tell the difference.
One of these screenshots of the Shopify homepage loads high-resolution images, one doesn’t. Yet the two pages look virtually identical.
This may not be the most significant change you could make, but it’s also one of the least intrusive. After yesterday’s post, Gatsby was super quick to get an issue filed for adding support to their gatsby-image plugin, and it looks like someone already submitted a PR for review to handle it.
This is also something browsers could do by default, though it sounds like that isn’t the case so far.
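If you wanted to wire this up yourself, the server-side check is about as simple as it gets. Here’s a minimal sketch assuming an Express app and that you already generate a low-resolution variant of each image (the -lowres suffix is made up):
// A sketch: serve a smaller image variant when the Save-Data header is "on".
const express = require('express');
const app = express();
app.get('/img/:name', (req, res) => {
  const saveData = (req.get('Save-Data') || '').toLowerCase() === 'on';
  const file = saveData
    ? req.params.name.replace(/(\.\w+)$/, '-lowres$1')
    : req.params.name;
  // Vary on Save-Data so caches keep the two responses separate.
  res.set('Vary', 'Save-Data');
  res.sendFile(file, { root: `${__dirname}/images` });
});
app.listen(3000);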
Fewer images
Taking it a step farther, you could serve fewer images (something Jeremy Wagner recommends in his excellent article on Save-Data).
One of my favorite examples to use here is a news site. I don’t know of one that does this currently, but it’s easy enough to imagine what it would look like.
Take, for example, the BBC site. Like many news sites, you have some major stories with larger hero images, and then you have supporting stories with thumbnails for each. On my desktop, that’s 1.5MB of images.
I went through and removed the thumbnails for the supporting stories and got that total down to 966kb of images. I was pretty conservative too. I didn’t touch any of the thumbnails used as backgrounds and didn’t touch any of the thumbnails related to videos. There’s plenty more that could have been shaved.
As it is, the experience looks different, sure, but I would argue not in the least bit degraded. It’s still very clean and reflects nicely on the BBC brand.
Remove web fonts
Another example from Jeremy’s post (he’s smart, that guy) is to remove web fonts if Save-Data is turned on.
Now, depending on the site and font in question, this may be a more controversial change to make. As always, your mileage may vary; you certainly don’t want to start implementing every optimization here without first considering if it’s right for your situation.
But I would argue that more sites than not could get away with this, provided they took the time to have a solid fallback font stack in place.
Take, for example, the CNN site. On the home page, CNN loads 6 different files for their CNN Sans font, totaling 251kb. (Let’s just ignore for a moment that they could probably cut a few out or do some subsetting to help reduce that a bit.)
Falling back to Helvetica Neue, as they currently do, is a bit too jarring even for me. But falling back to Helvetica or Arial isn’t. There’s a difference, sure, and I do think the CNN font looks a bit better. But 251kb better when I need to save data? Probably not. Again, nothing here looks broken.
The CNN site uses a web font (left) for text display, but falling back to Helvetica (middle) or Arial (right) has only a very minimal visual impact.
You could probably do even better than this in most cases. I’ve had a lot of success using Monica’s Font style matcher tool to tweak different CSS properties to create fallback font stacks that are nearly indistinguishable from the original web font (often with similar results to what Harry has seen).
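The actual gating can be as simple as only adding the font stylesheet when the user hasn’t asked to save data. A minimal sketch (the /fonts.css path is made up; the fallback stack in your CSS does the rest):
// A sketch: skip loading web fonts entirely when Save-Data is on.
const saveData = navigator.connection && navigator.connection.saveData;
if (!saveData) {
  const link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = '/fonts.css';
  document.head.appendChild(link);
}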
Cut back on the ads and tracking
Ok, I’ll be the first to admit this is a broader discussion around the business implications, but I promise you none of your visitors will complain if you decide to scale back a bit on the ads and tracking in a data-constrained situation.
Some companies are already doing something similar when GDPR applies. USA Today is a notable example. Someone profiled their page soon after GDPR and found that while their site originally loads 5.2MB of data, the GDPR version only loads 500kb. That’s a massive data saving right there, and again, the site doesn’t feel broken in any way.
Progressive enhancement without the enhancement
If you build using progressive enhancement, Save-Data is a chance to consider maybe skipping a few of the enhancements. That’s not that far from the “cutting mustard” approach that was popularized by the BBC.
Maybe leave the carousel out in favor of a single image.
Maybe don’t load that extra JavaScript file that turns some of the static content into more dynamic interfaces.
With progressive enhancement in place, not only would the change be pretty simple to implement, but you would also know that the experience you provide would work and look just fine. Because building with progressive enhancement means you’ve already considered what happens without those extra resources showing up.
Less doesn’t mean broken
That’s just a small set of ideas. The possibilities are endless. If you treat data as a constraint in your design and development process, you’ll likely be able to brainstorm a large number of different ways to keep data usage to a minimum while still providing an excellent experience. Doing less doesn’t mean it has to feel broken.
You may even end up finding opportunities to apply those same considerations to your site, no matter if Save-Data is enabled or not.
If you carefully consider the experience and treat data as a valuable resource not to be wasted, I don’t think anyone is going to be complaining.
]]>@tkadlec https://shopify.com is now Save-Data aware. About a 13% reduction in page weight, https://webpagetest.org/video/compare.php?tests=190828_NC_2480ff19108ef9d64a3cbb0a3061b15c,190828_1B_bd8a98e3b3ed3758c5586c4bc371e004
Early data shows 20% of Indian/Brazilian requests contain this header so happy days #webperf 🎉
I love seeing companies paying attention to the Save-Data header. I’m not one to get super excited about headers normally (I leave that to Andrew Betts), but I’m pretty excited about this one.
Save-Data is a header that gets passed along by a browser when the user has turned on some sort of data saving feature in that browser. Companies spend millions of dollars on surveys and research every year trying to figure out what their customers want. The Save-Data header is one of those rare times when the customers are explicitly telling us what they want: to use the site, but without using so much data.
There aren’t a lot of real-world examples of sites optimizing for when the Save-Data header is enabled. That’s a shame because it’s looking like it may be more common than we think.
In that tweet, Brendan noted that:
Early data shows 20% of Indian/Brazilian requests contain this header so happy days #webperf 🎉
20% is a pretty substantial number, but it’s not far off from what I’ve seen and heard from others.
I had someone from a large global travel company tell me that out of their roughly 6 million daily unique visitors, about 20% of them have Save-Data turned on.
Tim Vereecke has talked quite a bit about Save-Data and how, on his site, roughly 50% of end-users are on browsers that support the header, with about 10% of those having it enabled.
I’ve been tracking Save-Data here, on this site, using custom data in SpeedCurve, like so:
if ("connection" in navigator) {
LUX.addData('save-data',navigator.connection.saveData);
}
So far, around ~4% of all sessions have Save-Data enabled. While that’s nowhere near the percentages the companies above have seen (which makes sense, given the more focused audience), it’s still nothing to sneeze at.
And here’s the thing: if my site—a site read overwhelmingly by people in tech who are far more likely to have decent devices and networks—is seeing 4% of folks have Save-Data enabled, I imagine there are a lot of major publishers and e-commerce shops who are seeing a much larger percentage. They probably don’t yet realize it, but I’ll bet it’s there.
I would love to see more data on this. So please, if you have data to share, let me know!
Because while the web keeps getting heavier and we keep moving further away from page weight as a primary performance metric, the data I’ve seen so far indicates folks who want low-data experiences are far more common than we may think.
]]>I sometimes find it’s nice to have a version of their site running locally to make it easier for me to dig deeper and test different optimizations.
I see a lot of similar tech stacks between my clients nowadays. The most common from the last year and a half has involved React/Vue.js, Node.js and webpack (with random tools sprinkled in, of course). Despite the similar stacks, the process for getting things running locally ranges quite a bit. Usually, it involves a bit of back and forth with the team as we get access to the right repo’s and navigate the various different build tools and containers and configurations to get things all set. We get there in the end, but it’s rarely a quick process.
Our tools are more powerful than ever, and I’m grateful that they are! Particularly as someone who cares a lot about performance, accessibility and security—those three critical but often invisible considerations—I love how much testing and low-hanging fruit we can take care of automatically.
It’s fantastic that our web plumbing has gotten more powerful—tooling today is capable of so much. But all too often, that power comes with increased complexity that negatively impacts developer efficiency. Sometimes that’s unavoidable. The simplest approach doesn’t always win. But that should be the goal—to make things as simple as possible while still accomplishing what needs to be done. Like excellent plumbing, these systems should be mostly invisible—chugging along, doing what we need them to do without getting in our way.
A good system not only considers the technical impact of the tools in use but the impact it has on the people who have to use it as well. The best systems don’t just automate a lot of setup, testing, and optimizations—they do it in a way that lets team members get on with their work as quickly and efficiently as possible.
Considering the cognitive overhead can also make your codebase more approachable to new hires. Having a good process in place reduces the time it takes for a new team member to be able to start making meaningful contributions.
When I started working with a recent client, I wanted to get a local version of their site up and running. They’re using Vue.js, Nuxt, webpack—as I mentioned, an architecture I’m pretty familiar with. I expected there would be a little time involved in getting things set up and configured, as there usually is.
But there wasn’t. It just….worked, mostly. I had to tweak the Node.js version and get access to one more repo, but that was it. I had a local version of the site up and running in minutes. It was the most seamless onboarding experience I’ve had in a long time.
It wasn’t by accident.
Their documentation was pretty straightforward. The process and tooling itself were carefully considered to be as frictionless as possible. It was clear the intent here was not just to have a robust, flexible set of tools in place, but to make sure that set of tools was approachable. New team members, or folks like me looking to help out, can start making contributions almost immediately.
Making the approachability of our systems a priority lets us take advantage of their tremendous power without compromising on flexibility and ease of use.
This particular system stood out for its simplicity. It would be even better if it hadn’t.
]]>The first is to blame them.
I mean, you understand it. It’s not that hard. They should be able to figure it out too. Maybe they don’t want to put in the work. Or they’re slow on the uptake. It would be great if they would put a little time and effort into actually understanding it instead of complaining.
The other response is to consider it as feedback.
Is there documentation or other information that is unclear or misleading? Does the documentation assume a certain level of pre-existing knowledge? Is there something that you could communicate differently to make it click for them?
Maybe their context isn’t the same as yours—there could be situations where maybe this tool or technique doesn’t make sense. Or perhaps there are downsides you haven’t considered. It would be great to take some time to understand better where they’re coming from.
Blaming them is easy. It’s an emotional response that lets you off the hook for having to put in any work.
Taking the time to understand their perspective is harder. It requires you to put any initial emotional response aside and think critically about what is different for them.
It takes work to think critically about something and how it is being presented. It takes work to take the time to understand why someone else may not find it particularly intuitive or useful.
Blaming them is a missed opportunity. Critically evaluating why they feel the way they do is how you make progress.
It’s how you become more informed about the tools you use every day.
It’s how you make better decisions about what to use, and when.
It’s how you make tools more robust, and it’s how you make technology more accessible and approachable.
]]>But after that? I didn’t do much at all until I decided to commit to the Couch-to-5k thing last fall. I haven’t stopped working out since, mixing in everything from more running to strength training to HIIT (gosh that’s painful stuff). Naturally, as an athlete now (riiiightt), I started trying to learn more about effective recovery.
You don’t have to look hard to find endless articles and posts online detailing exactly what kinds of recovery you should be doing, and what types of recovery you should be avoiding. But like a lot of health advice, they can be very contradictory. There’s a lot of confusion and misunderstanding about not just which types of recovery are effective, but just when and how to best take advantage of them.
Good to Go is Christie Aschwanden’s attempt to parse through all the cruft to find out what recovery methods actually work. She does so in a very conversational, readable way. But this isn’t just a book for folks who like to exercise. Along the way, Aschwanden helps the reader to learn to think more critically about the research and studies that a lot of health advice is based on.
Many of the studies that these results come from, for example, are based on a very small sample size—10 or so people. Other data is far from conclusive, but the results were “marketable” (like research around sports drinks, for example) so they were promoted as more definitive than the data showed.
Sometimes the studies themselves were designed in a way that adhered to pre-existing biases. Take, again, sports drinks. It turns out, what you use as a placebo ends up dramatically impacting the significance of the benefits of drinking something like Gatorade.
When people volunteer for a study to test a new sports drink, they come to it with an expectation that the product will have some performance benefits. Studies use a placebo group to factor out such effects, but a placebo only controls for these expectations when it’s indistinguishable from the real deal. So it’s telling, Cohen says, that studies using plain water for the control group found positive effects, while the ones that used taste-matched placebos didn’t.
Other times, the results of a study get widely spread, but not the context. Ice baths were a good example. It’s a commonly cited recovery method, but it depends quite a bit on context. Turns out, if you’re in the “building phase” (trying to get faster, stronger, etc) it’s probably best to avoid the ice bath. If, however, you want short-term recovery (say, a long run with another soon to follow) then it can be beneficial.
Over and over, Aschwanden breaks down advice being spread without consideration of the size, biases and overall validity of the underlying studies. It’s a good lesson in critical thinking. It is also, likely, a little discouraging to anyone who was hoping to find a foolproof, silver bullet for recovery.
She also takes on fitness trackers and related apps. To be clear, I think there are definite benefits to using those sorts of tools. They can provide good motivation, prompt you towards making better health decisions, and the social aspects can help you stay accountable. But Aschwanden also points out the negatives. If we’re not careful, we can get too tied up in the numbers even if, ultimately, they may only have a loose connection to our overall health.
Her final conclusion on recovery? Ultimately the only thing we can say definitively helps with recovery is sleep (not a surprise if you’ve read Why We Sleep). Other than that, it’s mostly about the placebo effect. If you find something that feels like it’s making a difference for you, then stick with it.
]]>If I’ve learned anything about recovery, it’s that the subjective sense of how it feels is the most important part.
That’s a large, discouraging number, but it’s not entirely surprising. In my experience, teams want to build fast sites. When given the opportunity, like a large organizational push, they relish it and can make tremendous progress.
However, these concerted efforts often fix the symptoms without dealing with the underlying cause. A detailed, prioritized audit of a site’s performance provides clear direction on which optimizations to tackle, but unless you also deal with the organizational constraints that caused the site to perform slowly in the first place, those seconds will keep coming back over time.
That’s why so many folks over the years have stressed the importance of fostering a culture of performance inside of organizations. I think few organizations would argue against doing just that, but the path forward isn’t always clear, and it’s far from easy. Fixing the symptoms is much easier than fixing the cause.
Characteristics of a strong performance culture
It’s always easier to get somewhere when you know where you’re going. (Says the person with zero navigational skills.) I’ve been lucky enough to work with organizations who have built up that culture, as well as learn from people inside of other organizations that have successfully navigated that journey. As I was thinking back to those conversations, articles, and presentations, I tried to identify the common characteristics of an organization with a good performance culture.
This is far from an exhaustive list, but every organization I’ve worked with and talked to that seems to have a good handle on performance has these traits in common:
- Top-down support
- Data-driven
- Clear targets
- Automation
- Knowledge sharing
- Culture of experimentation
- User focused, not tool focused
Top-down support
This is a big one. A huge one. If you don’t have the top-down support, you are very, very unlikely to get the resources needed to establish a culture of performance for the long haul.
To be clear, if you don’t have this right now, that doesn’t mean you throw up your hands in despair. Few companies start with top-down support. More frequently, it’s something that has to be established through the hard work of others in the organization, taking steps to make improvements and sell their organization on the value of investing in performance.
Data-driven
Companies with good cultures of performance use data to support their efforts. They carefully monitor the impact of their optimizations not just on raw performance metrics, but also on user-focused and business metrics.
They’ve invested (whether financially in an external tool, or with resources for an internal tool) in robust performance monitoring. They use both RUM and synthetic tools where appropriate, and they know what metrics they’re trying to target.
Clear targets
A performance budget by itself won’t solve all your problems, but you’re not going to get very far if you don’t have very clearly defined budgets in place. Whether they call it a budget or not, companies that have a strong performance culture typically have clear performance goals that drive their work.
Automation
Companies with strong cultures of performance understand that performance has to be baked-in for it to stick, and they use automation to help lay the groundwork for their performance efforts.
They have steps in their build processes to take care of a lot of the low-hanging performance fruit by default.
They have tools in place to test for changes in performance automatically. These tools can be third-party or homegrown. In some cases, they break the build; in others, they’re part of the review process. But they’re there, proactively identifying potential performance hiccups.
Knowledge sharing
It doesn’t matter if you have a dedicated performance team or not: if knowledge about optimizations, metrics, and monitoring is locked up within a few individuals or even one team, you’re unlikely to achieve sustainable success. Companies with strong performance cultures find ways to share knowledge across teams through training, lunch-and-learns, performance champions, documentation, and more.
Going a step further, they encourage sharing by finding ways to celebrate and recognize teams and individuals who make meaningful performance improvements.
Culture of experimentation
A lot of performance work relies on experimentation. You think an optimization is going to provide meaningful returns for the business, but that doesn’t always play out in practice.
It’s essential to foster a culture where experimentation is encouraged. Optimizations get applied and tested to see their impact both on performance and on key business metrics.
This means that, frequently, these optimizations get rolled out slowly—first to a fraction of the people visiting a site and then, assuming the impact is positive, to the entire user base. It’s always exciting to make a big performance improvement and, if you’re like me, you can’t wait to get it shipped. Sometimes, though, you have to slow down to win the race.
User focused, not tool focused
You’ll notice that none of the characteristics above prescribe a specific tool or framework. In fact, only automation could be described as tool-centric. Every other characteristic is much more about the people and the processes. That’s because companies with strong cultures of performance put their focus on the user, not on specific tools.
I’ve seen companies achieve success using everything from PHP and jQuery driven sites to Node and the latest JS framework. I’ve seen them use SpeedCurve, Calibre, Sitespeed, WebPageTest and mPulse. I’ve seen them run Webpack, Phing, Grunt, Gulp, npm scripts and any of a number of different build processes.
They’re not afraid of tools, but they’re critical of them. They recognize that tools are there to support the culture, not dominate it.
One step at a time
Few companies carry all of these characteristics, so it’s important not to get discouraged if you feel you’re missing a few of them. It’s a process and not a quick one. When I’ve asked folks at companies with all or most of these characteristics how long it took them to get to that point, the answer is typically in years, rarely months. Making meaningful changes to culture is much slower and far more difficult than making technical changes, but absolutely critical if you want those technical changes to have the impact you’re hoping for.
They also all, unanimously, express that they understand there’s so much more work to be done. Getting that strong web performance culture in place isn’t a destination, but a constant revolving wheel of mistakes and improvements.
As I mentioned before, it’s quite possible I’ve overlooked some traits. But if you’re trying to build up a stronger focus on performance in your organization, focusing on improving on each of these characteristics is going to get you much farther than an audit alone.
]]>I remember how, later on, a common question I would get in after giving performance-focused presentations was: “Is any of this going to matter when 4G is available?”
The fallacy of networks, or new devices for that matter, fixing our performance woes is old and repetitive.
To be fair, each new generation of network connectivity does bring some level of change and transformation to how we interact with the internet. But it does so slowly, unevenly, and in ways that maybe aren’t what we originally envisioned.
It takes time, money and significant other resources to roll out support for a new network. We’re not talking months in most places, but years. Inevitably that means it’s going to hit some of the big market areas first and slowly trickle down to everyone else. I don’t even get to pretend that I’m in a particularly remote area and yet even here, it’s only in the past few months that I’ve been able to connect to a reliable 4G network.
Even if it is 4G (5G, or whatever else) that doesn’t exactly guarantee you’ll be getting the amazing, theoretical speeds promised. For one, carriers like to throw around new networks as marketing, and (shockingly) they’re not always 100% honest about it.
AT&T’s “5G E” is a great example. While they technically do state in the description of the technology that it’s not 5G, they still call it 5G E and display it as such on your phone if you connect to it. Embarrassingly, if you use a “5G E-capable” phone on other providers, who aren’t pretending to have shipped anything related to 5G yet, those other providers 4G networks outperform AT&T’s 5G E according to Open Signal.
This is nothing new. There were all sorts of similar controversies when the first carriers started rolling out supposed 4G networks.
Once a new network does get rolled out, it takes years for carriers to optimize it to try and close in on the promised bandwidth and latency benchmarks.
We’re still nowhere close for 4G. In theory, the maximum downlink speed is 100 Mbps. Compare that, for example, to recent data from Open Signal about actual speeds observed in India. The fastest 4G network clocks in around 10 Mbps, and the slowest around 6.3 Mbps.
And those speeds aren’t constant. A few months earlier, Open Signal reported on the variance of 4G network performance in Indian cities based on the time of day and found that 4G download speeds can be 4.5 times slower during the day than at night.
In other words, new network technologies sound amazing in theory—and certainly do provide substantial benefits—but not for everyone and not at the same pace.
All of this makes me more than a little leery when I read articles like the one the New York Times posted about how they plan on experimenting to see how they can use 5G to push storytelling online further.
Over the past year The Times has honed its ability to tell immersive stories, allowing readers to experience Times journalism in new ways. As 5G devices become more widely adopted, we’ll be able to deliver those experiences in much higher quality — allowing readers to not only view more detailed, lifelike versions of David Bowie’s classic costumes in augmented reality, but also to explore new environments that are captured in 3D.
I’m excited about what this could mean for their readers.
I’m also terrified of what this could mean for their readers.
There’s already a massive, and rapidly growing, divide between the “haves” and the “have nots” online—I worry about us doing things that will only widen that gap.
Experimentation is great. Moving the web forward has always involved a healthy level of friction between those seeking to push its boundaries and those looking for ways to improve its stability and resilience. There’s a bit of yin and yang involved here for sure.
What worries me is how often that experimentation ends up hurting users. It’s one thing to experiment and test limits, it’s another thing to push those experiments onto people who can’t afford, or don’t have access to, the technology required to use them.
I share Jeremy’s concern:
One disturbing constant in web development is that as network connections and devices improve in speed and quality, we will inevitably eat those gains by shipping more crap in our apps people never asked for.
It echoes one of my favorite quotes from Jeff Veen’s episode of Path to Performance.
…as bandwidth grows, and as processing power grows, and as browsers get better we just keep filling everything up. We often lose track of the discipline of now that bandwidth is faster let’s work on making our sites load faster rather than now we can do more with that available bandwidth.
There’s a scientific name for this: Jevons paradox. Personally, I favor the more approachable—and humorous—Andy and Bill’s Law.
In either case, the meaning is the same: as the efficiency of a resource increases, so does our consumption of that resource. It’s why Uber and Lyft have increased traffic congestion, not reduced it, and it’s why, even with the massive improvements to CPU and network performance over the last few decades, performance is still a business critical issue needing to be addressed.
I’m always happy when we see network technology take a leap forward because I do know that, eventually, billions of people stand to benefit from it. But even as I drool over theoretical promises of those new technologies, I think it’s important to remember that those technologies won’t solve our issues for us. It takes a lot of time, and we still have to do the work ourselves.
Whether we choose, as Jeff Veen said, to focus on how we can use those new technologies to provide a more performant experience or to focus on how we can use them to provide more stuff plays a massive role in determining just how effective those new technologies are ultimately going to be.
]]>The announcement post was a fairly typical product announcement post which is to say it was light (no pun intended) on the technical details and leaves a lot of open questions. Sometimes that’s fine. But in this case, the announcement has to deal with Google making changes to HTTPS content which, as you would expect, makes folks a little more nervous. Some more detail would have been nice.
What We Know
Despite the vague announcement, between conversations with Chrome folks over the years, digging around and some general knowledge of how proxy services work, we can put together a decent chunk of the puzzle.
Lite pages aren’t new. Well, not exactly.
Chrome has offered a proxy service through its browser for several years now. In that time it’s undergone a few different rebrands.
The first name I’m familiar with was Flywheel. That’s what it was called back in 2015 when the team working on the service wrote up a detailed paper about the optimizations Flywheel applied, and why.
Flywheel wasn’t used as a public name for long, if at all. (I honestly don’t remember if they ever talked much about it in public as Flywheel.) It’s not exactly intuitive. Data Saver was the more common name and the one that has continued to be used to date.
You can think of Lite pages as an extension of Data Saver for when loading conditions are especially bad. Data Saver still does its optimizations, just like always. But when conditions are especially bad (2G-like connections or longer than 5 seconds until First Contentful Paint), Lite pages will kick in and potentially provide additional interventions (more on those later).
Perhaps the most significant difference between the Lite pages announcement and Data Saver as we knew it is that Lite pages work over HTTPS traffic. Data Saver was always HTTP only. Good for security, not great for anyone who needed data savings on an increasingly HTTPS-driven web.
Data Saver !== Save-Data
One big source of confusion comes from the very similarly named Data Saver mode and the Save-Data header.
Save-Data is a header that can be passed along by any browser or service when a user has explicitly requested an experience that uses less data. Save-Data can, and should, be used by developers to help reduce page weight regardless of what the browser may or may not be doing.
In theory, by itself, the Save-Data header doesn’t necessarily indicate that a proxy service is being used. A browser could ask a user if they want less data, indicate that decision to developers with the Save-Data header and leave all the work up to the developer.
In practice, that is not the case. Right now, to my knowledge, the Save-Data header is passed by Chrome when Data Saver mode is enabled, by Yandex, and by Opera when Turbo mode is on. In other words, at the moment, Save-Data is only ever being seen by developers when a browser is doing some sort of proxy service to optimize the page.
Data Saver is Chrome’s proxy service. Users can opt into the Data Saver service by turning the feature on in the settings on Chrome for Android, or by installing an extension on Chrome for Desktop.
When Data Saver mode is enabled, Chrome will attempt to make optimizations to the page to reduce data usage and improve overall performance. Traditionally, these optimizations have only applied to HTTP traffic—something that has changed now with Lite pages.
If the Data Saver mode is enabled, Chrome passes along the Save-Data header with each request, as a responsible proxy service should. That’s the only relationship between the two.
Lite Pages !== AMP
Lite pages are also in no way related to AMP. AMP is a framework you have to build your site in to reap any benefit from. Lite pages are optimizations and interventions that get applied to your current site. Google's servers are still involved, but only as a proxy service forwarding the initial request along. Your URLs aren't tampered with in any way.
Lite Pages are only applied in specific situations
The release post was vague on details but fairly clear on when the optimizations would be applied. In addition to the requirement that Data Saver mode is enabled, Lite pages will be applied:
…when the network’s effective connection type is “2G” or “slow-2G,” or when Chrome estimates the page load will take more than 5 seconds to reach first contentful paint given current network conditions and device capabilities.
In other words, the optimizations are likely going to be applied to only a subset of a given site’s traffic and only on slow pages. If you’ve done a good job of optimizing your site for less than ideal network scenarios and kept it lightweight, you’re unlikely to see it impact your site at all.
The optimizations are a little unclear, and likely fluid.
The original Flywheel paper detailed some basic optimizations that the proxy service would apply when appropriate, including:
- Transcoding images
- Ensuring text-based resources are compressed
- Minifying JS and CSS
- Providing lightweight error pages when the user is unlikely to see them (e.g., a favicon that 404s)
- Preconnect and prefetching
I haven’t seen any documentation to counter this, nor to add to it. I think it’s safe to say most of these are still in play.
The original paper also showed that optimizing images provided, by far, the most significant data reduction. Nothing has really changed there. You can make the case JavaScript is a bigger deal for overall performance, but not data reduction. Optimizing or removing images is the safest way of ensuring a much lighter experience and that’s the primary optimization Data Saver relies on.
Lite pages take it a step further: they provide more performance gains, but at the same time are also a bit more intrusive with regard to the design of the site. Whereas Data Saver would apply optimizations (making improvements to resources and connections to speed them up), Lite pages apply interventions (eliminating some slow or heavy qualities of the page altogether). For example, Data Saver may transcode your images, but Lite pages will replace them with placeholders.
Lite pages will replace images with placeholders that display the image weight. Pressing on the image with a long tap lets you choose to download the intended image.
Chrome has been playing with various interventions over the past year or two that were all triggered by the user opting into Data Saver mode, and typically the detection of a 2G-like connection. Most of those interventions were very experimental.
Andy Davies posted to the Web Performance Slack group a link to a Chrome Status report describing which interventions are currently a part of Lite pages. At the moment, it looks like there are four that could possibly be applied:
- Disabling scripts
- Replacing images with placeholders
- Stop loading of non-critical resources
- Show offline copies of pages if one is available on the device
I spent the past two days using Chrome on my phone with Data Saver enabled and an effective connection type of 2G (using the #force-effective-connection-type flag).
The image replacement intervention has been applied everywhere. No images are loaded unless you long-press on one and specifically choose to download it, or you opt to show the original version of the page. Alt text is available on long hold as well, though it would be really handy to have that displayed by default over the placeholders to provide some context (I’ve submitted a bug for this).
I’ve seen offline copies of pages a handful of times as well, usually when I have closed the browser and come back to it a bit later.
If any non-critical resources have been stopped from loading, or if any scripts have been disabled, I haven't noticed. So either those interventions are really well done, or they're not commonly applied in my region (Houssein stated on Twitter that the interventions do vary a lot based on overall conditions).
You’ll be able to tell when Lite Pages are applied to your site
When Lite Pages are shown for your site, Chrome will allow you to record these interventions with the Reporting API. If you set a Report-To header telling the browser where to send the reports, you can set up an endpoint somewhere to collect them all for further analysis. Each report will detail exactly which intervention was applied.
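The announcement doesn't spell out the exact configuration, but based on how the Reporting API works elsewhere, the setup would presumably look something like this hedged Express-style sketch (the endpoint URL and group name are placeholders of my own, not values from the post):

```js
// Hedged sketch: register a reporting endpoint so the browser knows where
// to send intervention reports for later analysis.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.set('Report-To', JSON.stringify({
    group: 'default',
    max_age: 86400, // seconds the browser may keep using this endpoint
    endpoints: [{ url: 'https://example.com/reports' }],
  }));
  next();
});

// Reports arrive as JSON POSTs; intervention reports carry a type of "intervention".
app.post('/reports', express.json({ type: 'application/reports+json' }), (req, res) => {
  console.log(req.body); // store these somewhere for further analysis
  res.sendStatus(204);
});

app.listen(3000);
```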
Users and developers can both opt out
If users don’t enable Data Saver mode, they’ll never see a Lite page. If users do have Data Saver mode and opt out of the Lite pages interventions for the same site or URL frequently enough, then Chrome will stop applying interventions to that page for that user.
Developers can also opt out of Lite pages for their sites by applying a Cache-Control: no-transform header to their responses. If Chrome sees this header, they won’t apply optimizations or interventions regardless of whether or not the user has opted into Data Saver mode.
Please, please, please only use Cache-Control: no-transform if you are already checking for the Save-Data header and optimizing the experience accordingly (or if your initial experience is already blazingly fast and lightweight). Users explicitly telling us they want a faster experience is the kind of direct feedback we don’t usually get, and respecting their request is important.
What We Don’t Know
By now it’s clear there’s a lot going on, but we have a few unanswered questions that make me (and from what I can tell, many others) a little uneasy.
When do optimizations and interventions get applied?
We have some inkling of the optimizations and interventions, as I mentioned above, but I would love to see an authoritative source of documentation from Chrome that lists exactly what can be applied and, critically, when.
I understand the heuristics take into consideration network and region, and I would suspect potentially device as well as other factors. So it’s not exactly straightforward when they’re applied. Still, I think particularly when you’re talking about making changes to the intended experience, the more transparency the better.
For more experimental interventions, flag them as such in the documentation. They could even be a little less clear about the heuristics for those, because I imagine that’s a big part of the experiment—figuring out exactly when those interventions make sense, and when they fail.
How does Chrome apply the optimizations over HTTPS?
This is the big one. The one that’s going to bother people until it’s made very clear how this works.
The post mentions that only the URL is passed to Google servers, and no sensitive data. I think everyone would feel a bit more at ease if we could see some clear documentation about how that process works.
In my conversations with folks working on Chrome and Data Saver over the years, they were always very opposed to the Man-in-the-Middle behavior other proxy browsers took to optimize HTTPS traffic. I’m very interested in hearing how they managed to avoid that.
Addy elaborated a little on Twitter but I’m still a bit fuzzy on the mechanics.
Šime Vidas clarified this a bit for me on Twitter, and I chatted a bit with Addy Osmani to make sure I was understanding correctly.
Basically, it sounds like Chrome triggers an internal redirect to Google servers for the URL requested. Google’s servers make the request and apply any optimizations. Then those servers pass the optimized content back to Chrome to provide to the user.
It's pretty close to a MITM on the surface, but with a really important exception: by using a redirect, it ensures that any cookies or sensitive information scoped to the origin do not get sent to Google. So there's no passthrough or manipulation of the HTTPS connection potentially exposing private information. In other words, if you do have a page with session-based information, Lite pages won't be able to do anything with it.
So yes, you still get something different than what you requested, and Google's servers are still intervening, but private information is kept at least a little safer than with other proxy services.
What’s a web loving developer to do?
On the one hand, I’m not foolish enough to trust any company to have my best interest at heart all the time. On the other hand, I do know many of the people working at Google, and on Chrome, and those folks I do trust. It’s enough to make me a little less wary of this announcement than perhaps some others are.
And providing a data reduction service through one of the biggest browsers out there does solve a very real need. I know I use proxy browsers, or things like Firefox Focus, on a daily basis. Not just when I travel, but also at home to help me stay under my monthly roaming data limit. For many folks all over the world, their need for such a service extends far beyond my own.
Still, a healthy skepticism is warranted for any proxy service, let alone one provided by a company that also happens to provide a lot of advertising online. Some clarification and additional details from the Lite pages team would go a long way towards alleviating those concerns.
In the meantime, if you're completely uncomfortable with the idea altogether, the best thing you can do is optimize the heck out of your site and then use Cache-Control: no-transform to opt out of Data Saver and other proxy services.
Just make sure you really are pushing performance to the extreme before you do. Make the default experience as fast as possible and whenever you see the Save-Data header being passed, apply further optimizations to reduce the amount of data being used (replace webfonts, eliminate or reduce images, etc).
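As a rough sketch of what that combination might look like on the server (Express-style and purely illustrative; in a real app you'd merge the Cache-Control directive with whatever caching headers you already send):

```js
// Hedged sketch: honor Save-Data yourself, then opt out of third-party
// transformation with no-transform.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // The Save-Data request header is simply "on" when the user has opted in.
  res.locals.saveData = req.get('Save-Data') === 'on';

  // Because we handle the lighter experience ourselves (fewer images, no
  // webfonts, etc.), it's reasonable to ask proxies not to transform us.
  res.set('Cache-Control', 'no-transform');
  next();
});

app.get('/', (req, res) => {
  // Templates can branch on res.locals.saveData to drop optional assets.
  res.send(res.locals.saveData ? 'lightweight page' : 'full page');
});

app.listen(3000);
```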
Respect your users' desire for a faster, lightweight experience and you'll be able to take control of that experience yourself, without any need for third-party interventions.
]]>It’s not really about the performance budget, though. Or rather, it’s not the idea of performance budgets that doesn’t work in those cases—it’s the execution and reinforcement around the budget.
My definition of a performance budget has evolved over the years, but here’s my current working draft:
A performance budget is a clearly defined limit on one or more performance metrics that the team agrees not to exceed, and that is used to guide design and development.
It’s a bit lengthy, I know, but I think the specifics are important. If it’s not clearly defined, if the team doesn’t all agree not to exceed the limits, and if it doesn’t get used to guide your work, then it’s a number or goal, but not a budget.
On the surface performance budgets sure seem pretty simplistic, but in my experience working with a variety of different organizations at different points in their performance journey, establishing a meaningful budget is one of the most critical components in successfully changing the way that company approaches performance on a day-to-day basis.
But as anyone who has ever set a budget on their spending will tell you, merely setting up a budget doesn’t accomplish anything. To be effective, a performance budget has to be concrete, meaningful, integrated, and enforced.
Concrete
Being concrete means that we have to pick a number and get specific about what we’re after.
Phrases like “fast as possible,” “faster than our competition,” “lightning-quick” are great, but they’re not concrete. They’re too subject to interpretation and leave too much wiggle room.
Performance budgets need to be a specific, clearly-defined metric (or metrics, in some cases). For example: “We want our start render time to be less than 3 seconds at the 90th percentile.”
Or: "When tested over a 3G network on a mid-tier Android device, our Time to Interactive should be no more than 4 seconds."
You can have multiple budgets, but each of them needs to be very clearly defined. Someone who joins the team tomorrow should be able to look at them and know exactly what they mean and how to tell how well they’re stacking up.
Meaningful
That metric (or metrics if you choose to have multiple budgets) needs to be meaningful if it’s going to stick. You can opt for a performance budget on your load time, but if it doesn’t accomplish anything for your business and provides no significant change to the user experience, it won’t be long before people simply don’t care about it. We don’t make sites faster for the sake of making them faster, we make them faster because it’s better for the people using our sites and it’s better for the business.
In the best case scenario, you look at your real user data and find a metric that has a clear tie to your business.
First, identify some business critical metrics that you pay attention to. Maybe that’s conversion rate or bounce rate. Whatever it is, you want to look for a performance metric that has a clear connection (tools like SpeedCurve and mPulse should be able to help with this).
Let’s say bounce rate is critical for your organization, and that you find a clear connection between start render and the bounce rate on your site (it’s a fairly common connection in my experience). It probably makes sense to set a budget on your start render time and work towards that. That way you know that if you make improvements towards your budget, you create a better experience for users and improve your site’s effectiveness as well.
If you don’t have access to real user data, you do the best with what you’ve got (while hopefully working on getting solid real user data in place). You can look around at what similar companies have found and consider your performance in user-focused terms to come up with potential metrics to prioritize. Then you can do a little benchmarking of your organization and some competitors to find a target that puts you at the top of the list and gives you a competitive advantage.
The need for meaningful metrics is also why I always recommend using some sort of timing-based metric (custom or not) as your primary performance budget, rather than a weight or something like a Lighthouse score. Those can be great supporting budgets, but if they’re not connected to a larger goal they’re far less likely to stick for the long-haul.
Integrated
Once you have a meaningful budget chosen, it needs to be integrated into your workflow from start to finish.
Put it into your acceptance criteria for any new feature or page.
Display it in your developer environment so that as developers are making changes, they’re getting immediate feedback on how well they’re adhering to the budget.
Translate the metric into a set of quantity-based metrics, which are more tangible for designers and developers alike during their day-to-day work. It’s an approximation, but it’s a critical step. It enables a designer to have some constraints to play with to help them make decisions between different images, fonts, and features.
Display your budget on dashboards throughout the organization so that everyone is continuously reminded of what they are, and how you’re doing.
The idea is you want to make your budget, and how you stack up to it, as visible as possible to everyone throughout the workflow.
Enforceable
Once the budget is firmly integrated into your workflow, the next step is to make it enforceable. Even the most dedicated teams are going to make mistakes. We need to put the checks and balances in place to make sure the budget doesn’t get ignored.
Most monitoring tools let you establish budgets now and will alert team members via Slack, email or a similar format when that budget is exceeded. That’s a pretty passive way of keeping tabs on the budget, but it can clue you in quickly when something goes wrong.
Even better is being proactive and building checks and balances into your continuous integration environment or build process.
You can set custom budgets in Lighthouse (something that will get more powerful soon), for example, that are checked on every pull request. You can test against WebPageTest automatically using its API.
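As a rough illustration, here's what a WebPageTest-based check might look like in a CI script, using the webpagetest Node wrapper. The test URL, location, connectivity profile, and the 3-second start render budget are placeholders, and the exact result fields may differ depending on your setup and metric:

```js
// Rough sketch of a CI budget check using the WebPageTest Node API wrapper.
const WebPageTest = require('webpagetest');

const wpt = new WebPageTest('www.webpagetest.org', process.env.WPT_API_KEY);
const BUDGET_START_RENDER_MS = 3000; // "start render under 3 seconds"

wpt.runTest('https://example.com/', {
  location: 'Dulles:Chrome',
  connectivity: '3G',
  firstViewOnly: true,
  pollResults: 5, // poll every 5 seconds until the test finishes
}, (err, result) => {
  if (err) {
    console.error(err);
    process.exit(1);
  }

  const startRender = result.data.median.firstView.render; // start render, in ms
  console.log(`Start render: ${startRender}ms (budget: ${BUDGET_START_RENDER_MS}ms)`);

  // Fail the build if the budget is exceeded.
  process.exit(startRender > BUDGET_START_RENDER_MS ? 1 : 0);
});
```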
For anyone building a JavaScript-heavy application, using something like bundlesize, which alerts you to budget issues in your JavaScript bundles, is an absolute must.
Enforcing hard limits on your pull requests can seem intense, but those constraints can turn into a fun challenge and really change the way your entire team approaches their work.
On a recent episode of Shop Talk Show, Jason Miller talked about Preact’s upcoming release, Preact X, and their limits on total library size. When a pull request for a new feature would land that added even a few bytes to the weight of the library, people would start playing “code golf”—finding technical debt that could be optimized to keep the library size under budget. Contributors started aiming to reduce the size of the library with every pull request as they added features, a refreshing inverse of the usual situation.
Supporting Your Budget
The point is not to let the performance budget try to stand on its own, somewhere hidden in company documentation collecting dust. You need to be proactive about making the budget become a part of your everyday work.
It’s not just a number, but a hard line. It’s a unifying target that your entire team can rally around. It provides clarity and constraints that can help guide decisions throughout the entire workflow and enable teams to focus on making meaningful improvements.
And, if you make sure it’s clearly defined, meaningful to your users and business, integrated into your workflow at every available step and enforced in your build process, then a performance budget can be an indispensable part of your performance journey.
This was not the case at all with And Every Morning the Way Home Gets Longer and Longer.
This book is just…it's beautiful. The story, as Backman puts it in his opening letter to the reader, is "…about memories and about letting go. It's a love letter and a slow farewell between a man and his grandson, and between a dad and his boy."
The story takes place both in the grandpa's head and in real life, though it takes a while for you to be able to properly separate the two. The combination of settings is powerful. It magnifies the confusion, giving us a little glimpse of what the man himself is going through. And by having part of the story take place in the man's brain, it makes the memories he is losing more concrete to the reader. We see the people walking past, their faces blurry. We see the rain that comes down, wiping bits of his memories away with it. We see the dark paths and roads that the old man no longer goes down because he can't remember what they hold and is worried he won't find his way back.
Particularly moving are the scenes in the man’s head where he is walking with his wife, who passed away some years ago. She helps him to hold onto what is real, helps to fill in some of his memories, and to assure him it will be alright when he panics about the memories he is losing. His struggle to hold onto his memories of her and his fear of forgetting her and all of the moments that shaped their life together hit me particularly hard.
Backman is an incredible storyteller, and he is able to connect the reader to his characters almost immediately. There is a tenderness and empathy that permeates every word in this story (without ever once being sappy or cheesy). That’s true of everything I’ve read by Backman, and it’s particularly true of this story. Given the personal nature of the story (Backman explains it was written for himself, as he tries to deal with, as he puts it, “saying goodbye to someone who is still here”), I suppose that’s no surprise.
It's a brisk read. It's under 100 pages and can be read in one sitting. If you're like me, it will be read in one sitting. Not because it's a page-turner with some big mystery at the end, but because it's powerful and you will find yourself caring so much about the characters that you can't let go. Just, maybe don't read this one in public unless you're comfortable with folks seeing you cry. I can't fathom how anyone could make it through this book with dry eyes.
]]>The book started off a bit slow. The first several chapters are pretty foundational and while there were a few nuggets there that were interesting, nothing was really blowing me away. Combine that with a few anecdotes that rubbed me the wrong way and I nearly put the book down.
But once Baker gets into the meat of positioning (starting around chapter 6), the book really takes off. There was so much valuable information here, and some of the most actionable and concrete advice I've ever seen on the subject. Baker talks about how to find your positioning, the pros and cons of positioning vertically versus horizontally, and how to test your positioning (now and later) to make sure you're on the right track.
Baker also provides plenty of excellent advice around identifying what it is that you do that provides the most value and whether you're doing a good job (through positioning and the way you interact) of communicating that to prospective clients. Among the tips there, two stood out in particular. One was to stop and think about what part of your process you most often shorten when the client is pressed for time. If it's the research and analysis phase, it's time to rethink your approach a bit. That's a critical step, and the ability to do it well separates the wheat from the chaff (so to speak).
Another rock solid tip that I’m going to start doing immediately is to record your side of a conversation by setting a phone on your desk when you talk to a client. Baker advises listening back, without hearing what the client is saying, to zero in on how you are presenting yourself: Are you doing too much talking? Are you asking enough questions? Are you agreeing with everything the client says or are you pushing back when appropriate?
As I mentioned before, a few of his analogies and anecdotes rubbed me the wrong way, though as I’ve acknowledged before, that’s a frequent occurrence anytime I’m reading anything around “business” so that could just be me. Ultimately the helpful, actionable insights in the latter parts of the book more than made up for the slow start.
]]>Many of the concepts in the book will be familiar if you've already read much about the topic, but Holmes' presentation of those concepts is often unique and made me consider familiar ideas in unfamiliar ways.
I absolutely loved her use of the term “mismatches” as a way to consider when an experience doesn’t align with the reality of how a person needs to interact with that experience. An example she gives is trying to order from a menu written in a language you can’t read. That’s a mismatched experience. I’ve already started experimenting with using the term in my own work when I’m helping clients to identify audiences who are getting a subpar, or even unusable, experience from their sites. So far, it seems to be getting the point across better than terminology I’ve used in the past.
Some mismatches may seem minor (like, perhaps, ordering from the menu) but as Holmes points out, they add up fast and can lead to a significant feeling of not belonging:
Mismatches are the building blocks of exclusion. They can feel like little moments of exasperation when a technology product doesn’t work the way we think it should. Or they can feel like running into a locked door marked with a big sign that says “keep out.” Both hurt.
The response to these mismatches may be emotional on the part of the person experiencing them, but Holmes is quick to point out that viewing “inclusion” as a “nice thing to do” does it a disservice.
Treating inclusion as a benevolent mission increases the separation between people. Believing that it should prevail simply because it’s the right thing to do is the fastest way to undermine its progress. To its own detriment, inclusion is often categorized as a feel-good activity.
So Holmes tries to be more concrete—both about how businesses benefit from building more inclusive experiences and about the first steps we can take to start improving the inclusivity of the things we create.
She does so with a practicality that is refreshing and encouraging. Trying to design more inclusively, or accessibly, can be intimidating. You want to do the right thing, but you're worried about messing up because of what you don't know. Given the nature of what it means to leave people out, when you do mess up, the blowback can be difficult to bear. Holmes' advice for building a more inclusive vocabulary applies just as well to starting to design more inclusively in general:
Building a better vocabulary for inclusion starts with improving on the limited one that exists today. Sometimes we will use words that hurt people. What matters most is what we do next.
What happens next is the right question. Mismatch is an entry point, not a conclusion. If you’re expecting something comprehensive, you will be disappointed—there’s a lot more work ahead of you. Holmes doesn’t set out to solve all the problems or give you some checklist to follow to suddenly be more inclusive (though she does give several tangible “to-do’s” at the end of each chapter).
What she does is more important. She gives us a gentle nudge towards thinking more inclusively about what we design and build. More than any checklist, it’s this way of thinking that stands to provide the most significant change in the way our experiences impact people. We’ll never build a perfectly inclusive experience, but we can make changes to the method we use to create to help us eliminate those mismatched experiences one by one, allowing more people to benefit from what we build, and for us to benefit from their participation in the process.
]]>Content blockers have been a great addition to WebKit-based browsers like Safari. They prevent abuse by ad networks and many people are realizing the benefits of that with increased performance and better battery life.
But there's a downside to this content blocking: it's hurting many smaller sites that rely on advertising to keep the lights on…
The situation I’m envisioning is that a site can show me any advertising they want as long as they keep the overall size under a fixed amount, say one megabyte per page. If they work hard to make their site efficient, I’m happy to provide my eyeballs.
If WebKit pursues the idea further, they wouldn't be alone.
Alex Russell has been working on a Never-Slow Mode for Chrome since October or so. The Never-Slow Mode is much more refined, as you would expect given how long it has been brewing. It doesn’t merely look at JavaScript size, but also CSS, images, fonts and connections. It also disables some features that are harmful to performance, such as document.write and synchronous XHR.
Never-Slow Mode isn't that far removed from two ideas that we had back in 2016 when Yoav Weiss and I met with the AMP team to discuss some standards-based alternatives to AMP. One of the ideas that came out of that discussion was Feature Policy, which lets you disable and modify specific features in the browser. Another was the idea of "Content Sizes," which would enable first-party developers to specify limits on the size of different resource types. This was, primarily, a way for them to keep third-party resources in check. Never-Slow Mode would combine these two concepts to create a set of default policies that would ensure a much more performant experience.
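For a rough sense of what the Feature Policy half of that looks like in practice, here's a hedged sketch of a site opting itself out of synchronous XHR via a response header. The Express wiring and choice of policy are my own illustration; most of the size-related policies Never-Slow Mode would rely on were still experimental at the time:

```js
// Hedged sketch: a site turning off a performance-hostile feature for itself
// with a Feature Policy response header.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // 'sync-xhr' is one of the shipped policy-controlled features.
  res.set('Feature-Policy', "sync-xhr 'none'");
  next();
});

app.listen(3000);
```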
Not only would WebKit not be alone in pursuing some sort of resource limits, but they wouldn’t exactly be breaking new ground either.
Browsers feature similar limits and interventions already today. iOS imposes a memory limit (a high one, but it's still a limit) that folks usually run into when using large, high-resolution images. And Chrome's NOSCRIPT intervention skips right past the idea of limiting JavaScript and disables it altogether.
In other words, the idea itself isn’t as radical as maybe it appears at first blush.
Still, there are a few concerns that were raised that I think are very valid and worth putting some thought into.
Why’s Everybody Always Pickin’ on Me
One common worry I saw voiced was “if JavaScript, why not other resources too?”. It’s true; JavaScript does get picked on a lot though it’s not without reason. Byte for byte, JavaScript is the most significant detriment to performance on the web, so it does make sense to put some focus on reducing the amount we use.
However, the point is valid. JavaScript may be the biggest culprit more often than not, but it’s not the only one. That’s why I like the more granular approach Alex has taken with Chrome’s work. Here are the current types of caps that Never-Slow Mode would enforce, as well as the limits for each:
- Per-image max size: 1MB
- Total image budget: 2MB
- Per-stylesheet max size: 100kB
- Total stylesheet budget: 200kB
- Per-script max size: 50kB
- Total script budget: 500kB
- Per-font max size: 100kB
- Total font budget: 100kB
- Total connection limit: 10
- Long-task limit: 200ms
There's a lot more going on than simply limiting JavaScript. There are limits for individual resources, as well as their collective costs. The limit on connections, which I glossed over the first time I read the description, would be a very effective way to cut back on third-party content (the goal of Craig's suggestion to WebKit). Finally, having a limit on long tasks ensures that the main thread is not overwhelmed and the browser can respond to user input.
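If you want a feel for how often your own pages would bump into that last cap, you can already observe long tasks in the browser today. A minimal sketch, with the 200ms threshold borrowed from the proposed cap rather than from any browser API:

```js
// Illustrative sketch: watch for long tasks with PerformanceObserver.
// 'longtask' entries are reported for main-thread work over 50ms; the 200ms
// check below simply mirrors Never-Slow Mode's proposed long-task limit.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.duration > 200) {
      console.warn(`Long task over the proposed cap: ${Math.round(entry.duration)}ms`);
    }
  }
});

observer.observe({ entryTypes: ['longtask'] });
```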
It does seem to me that if browsers do end up putting a limit on the amount of JavaScript, they should consider following that lead and impose limits on other resources as well where appropriate.
How big is too big?
Another concern is the idea that these size limits are arbitrary. How do we decide how much JavaScript is too much? For reference, the WebKit bug thread hasn’t gone as far as suggesting an actual size yet, though Craig did toss out a 1 MB limit as an example. Chrome’s Never-Slow Mode is operating with a 500kB cap. That 500kB cap, it’s worth noting, is transfer size, not the decoded size the browser has to parse and execute. Regarding the actual code the device has to run, that’s still somewhere around 3-4MB which is…well, it’s a lot of JavaScript.
That being said, the caps currently used in Never-Slow Mode are just guesses and estimates. In other words, the final limits may look very different from what we see here.
Exactly what amount to settle on is a tricky problem. The primary goal here isn't necessarily reducing data used (though that is a nice side-effect), but rather reducing the strain on the browser's main thread. Sizes are being used as a fuzzy proxy here, which makes sense—putting a cap on CPU usage and memory is a lot harder to pull off. Is focusing on size ideal? Probably not. But not that far off base either.
The trick is going to be to find a default limit that provides a benefit to the user without breaking the web in the process. That 500kB JavaScript limit, for example, is right around the 60th percentile of sites according to HTTP Archive, which may end up being too aggressive a target. (Interestingly, when discussing this with Alex, he pointed out that the 50kB limit on individual JavaScript files broke sites far more often than the 500kB restriction. Which, I suppose, makes sense when you consider the size of many frameworks and bundles today.)
One thing that seems to be forgotten when we see browsers suggesting things like resource limits, or selective disabling of JavaScript, is that they aren't going to roll something out to the broader web that is going to break a ton of sites. If they did, developers would riot and users would quickly move to another browser. There's a delicate balance to be had here. You need the limit low enough to actually accomplish something, but high enough that you don't break everything in the process.
Even more than that, you need to be careful about when and where you apply those limits. Currently the idea with Never-Slow Mode would be to selectively roll those restrictions out only for limited situations, according to Alex:
Current debate is on how to roll this out. I am proposing a MOAR-TLS-like approach wherein we try to limit damage by starting in high-value places (Search crawl, PWAs install criteria, Data-Saver mode) and limit to maintained sites (don’t break legacy)
In other words, they would take a very gradual approach, like they did with HTTPS Everywhere: focusing on specific situations in which to apply the restrictions, and giving careful consideration to how to progressively introduce a UI that keeps users informed.
Data-Saver mode (the user opt-in mode that indicates they want to use less data), to me, is so obvious a choice that it should just happen.
Progressive web app (PWA) installs are an interesting one as well. I can definitely see the case for making sure that a PWA doesn't violate these restrictions before allowing it to be added to the homescreen and get all the juicy benefits PWAs provide.
It's also worth noting, while we're on PWAs, that Never-Slow Mode would not apply those restrictions to the service worker cache or web workers. In other words, Never-Slow Mode is focused on the main thread. Keep that clear and performant and you'll be just fine.
How would browsers enable and enforce this?
Still, the risk of broken functionality will always be there which brings us to consideration number three: how do browsers enable these limits and how do they encourage developers to pay attention?
The surface level answer seems relatively straightforward: you give control to the users. If the user can opt into these limits, then we developers have zero right to complain about it. The user has signaled what they want, and if we are going to stubbornly ignore them, they may very well decide to go somewhere else. That’s a risk we take if we ignore these signals.
The issue of control is a bit more nuanced when you start to think about the actual implementation though.
How do we expose these controls to the user without annoying them?
How do we make sure that the value and risk is communicated clearly without overwhelming people with technical lingo?
How do we ensure developers make responding to the user's request for a faster site a priority?
Kyle Simpson’s suggestion of a slider that lets the user choose some level of “fidelity” they prefer is an interesting one, but it would require some care to make sure the wording strikes the right balance of being technically vague, and yet clear to users as to what the impact would be. Would users really have an idea of what level of “fidelity” or “fastness” they would be willing to accept versus not?
Kyle also suggested that these sliders would then ultimately send back a header with each request to the site so that the site itself could determine what it should and should not send down to the user. That idea is a better articulation of a concern that seemed to be underlying much of the negative feedback to the idea: developers are leery of browsers imposing some limit all on their own without letting sites have some say in it themselves.
And I get it, I do. I love the idea of a web that is responsible and considerate of users first and foremost. A web that would look at these user signals and make decisions that benefit the user based on those preferences. I think that’s the ideal scenario, for sure.
But I also think we have to be pragmatic here.
We already have a signal like this in some browsers: the Save-Data header. It’s more coarse than something like Kyle’s suggestion would be—it’s a very straightforward “I want to save data”—but it’s a direct signal from the user. And it’s being ignored. I couldn’t find a single example from the top 200 Alexa sites of anyone optimizing when the Save-Data header was present, despite the fact that it’s being sent more frequently than you might think.
If these requests for less data and less resources being utilized have any chance at all of being seriously considered by developers, there needs to be some sort of incentive in place.
That being said, I like the idea of the developer having some idea of what is happening to their site. So here’s what I’m thinking might work:
- The browser sets a series of restrictions that it can enforce. These limits need to be high enough to reduce breakage while still protecting users. (Sounds so simple, doesn't it? Meanwhile the folks having to implement this are banging their heads against their desks right now. Sorry about that.) They also need to be very carefully applied depending on the situation. The direction Never-Slow Mode is headed, both in terms of granularity and progressive rollout, makes a lot of sense to me.
- These restrictions could, optionally, be reduced further with user input. Whether that's in the form of a coarse "put me in a never slow mode" or a more granular control, I'm not sure. If this step happens, it needs to be clearly communicated to the user what they're getting and giving up. Right now, I'm not sure most everyday people would have a clear understanding of the trade-offs.
- The browser should communicate to the site when those limits apply. If the user does opt into a limit, or the browser is applying limits in a certain situation, communicate that through some sort of request header so developers have the ability to make optimizations on their end.
- The browser should communicate to the site if those limits get enforced. If and when the browser does have to enforce limits that the site violates, provide a way to beacon that to the site for later analysis, perhaps similar to reporting on Content Security Policy violations.
I don’t see this approach as particularly troublesome as long as those defaults are handled with care. Is it applying a band-aid to a gunshot wound? Kind of, yes. There are bigger issues—lack of awareness and training, lack of top-down support, business models and more—that contribute to the current state of performance online. But those issues take time to solve. It doesn’t mean we pretend they don’t exist, but it also doesn’t mean we can’t look for ways to make the current situation a little better in the meantime.
If a limit does get enforced (it’s important to remember this is still a big if right now), as long as it’s handled with care I can see it being an excellent thing for the web that prioritizes users, while still giving developers the ability to take control of the situation themselves.
Katherine Arden is one heck of a storyteller and this was a wonderful conclusion to a fantastic trilogy.
Book three picks up moments after the conclusion of the second book and takes off at a tremendous pace that doesn’t really ever let up.
Vasya continues to be one of my favorite characters of fiction in the last several years. Here we see her finally coming into her own. She’s slowly become more confident and powerful over the course of the series, and here, finally, is the big payoff. As much as she has gone through in the previous two books, it feels like nothing compared to the events of this finale. She’s beaten by a mob, locked in a cage to be burned, beaten again (a few times)—and all of that is in the first third or so of the book.
But all of that terrible chaos is what helps her to finally really accept who she is and what she’s capable of without apology. Her family, for their part, comes to terms with it at last as well.
While Vasya is the center of the story, there's a rich cast of supporting characters. Sasha, Olga, Medved, Dmitrii, Konstantin, Morozko—they're all back along with plenty of new faces mixed in. Few, if any, of them could be said to be shallow or one-dimensional. Even the villains are complex, well-rounded characters that are conflicted in their own ways.
Much of this entire series, really, is devoted to that: characters who are conflicted about who they are and the role they're meant to play. Vasya, of course. But also Sasha, torn between his life as a monk and his love for his sister, a witch. Olga, torn between that same love for Vasya and her position of status. Konstantin, torn between the pious priest he presents to the public and his cravings for power that truly rule his actions. That internal conflict drives them all. It's how they react to it that differentiates them most significantly.
Throughout the series, Arden does such a masterful job of weaving authentic historical events and contexts alongside the more fantastical elements. The epic conclusion (epic is fitting here) takes place at the very real Battle of Kulikovo, a fitting setting that she had planned to use for the ending from the very beginning. Arden’s beautiful writing and knowledge of medieval Russia creates a rich backdrop, and also helps to ground what is a very fantastical story.
The ending is not entirely happy, which in this case is good. A perfectly happy ending would have felt misplaced. Instead the end is bittersweet, and very satisfying.
While the trilogy has come to an end, I would be more than happy to read more of these characters, or, at least, stories with the same rich, mythological setting. Whatever Arden writes next, I’ll be reading it as soon as it comes out.
]]>Tara was one of 7 children, raised in rural Idaho, with a father who was a Mormon survivalist and a mother who, while she occasionally showed signs of not being entirely on board with some of his actions, was more or less willing to go along with it.
They were "homeschooled" (if you were to ask her parents), but it's clear from Tara's telling that there was precious little "schooling". She doesn't hear of the Holocaust until an embarrassing sequence in college. She has to teach herself algebra, geometry, and trigonometry as part of her self-study to prep for the ACT. Much more of her time is spent helping her mother and father with their respective businesses than doing anything resembling school.
While the book does tell the story of how Tara eventually made her way to college, and even to ultimately earning a doctorate from Cambridge University, it really centers around relationships. At the beginning of the book, her relationship with her father dominates the story. For a while her brother Tyler is the focus. Later it's her relationship with her mom, with her brother Shawn, with her sister Audrey.
That they dominate a lot of the narrative isn’t a surprise. Tara was surrounded by strong, frequently abusive, personalities and they shaped her perception of herself and the world around her. It isn’t until much later in her life, while at Brigham Young University, that she slowly starts to find her own voice.
Not knowing for certain, but refusing to give way to those who claim certainty, was a privilege I had never allowed myself. My life was narrated for me by others. Their voices were forceful, emphatic, absolute. It had never occurred to me that my voice might be as strong as theirs.
I mentioned those relationships were often abusive, which is the understatement of the century. Her father set the tone, with his extreme beliefs and temperament (Tara suspects he may be bipolar; her parents of course dispute this). He is frequently angry, irrational, and judgmental of, well, everyone around him. He often complains about the immodest attire of women in church. He despises and distrusts the government. He firmly believes doomsday is coming and so he spends much of his time preparing for it by storing goods, guns, and ammo.
And he despises the healthcare system. To the extent that when his wife is severely injured in a car accident, when his son is severely burnt, when he himself is severely burnt later on—the hospital is never considered an option. His rationale that "God will do all the healing they need" is not so much an expression of his faith as it is a misunderstanding of it.
As jarring as the early stories in the book are, they pale in comparison to what happens as Tara gets older. Her brother (“Shawn” in the book) is abusive physically and mentally. Her interactions with him are truly horrific, and it’s beyond gut-wrenching hearing the impact they had on her self-worth. There was one scene in particular that I don’t really want to detail because it’s incredibly upsetting. But it happens in front of her boyfriend. Tara talked about how she panicked. Not because it was happening to her, but because it was happening in front of her boyfriend. Because, as she put it:
He could not know that for all my pretenses—my makeup, my new clothes, my china place settings—this is who I was.
I don’t know the effect that has here, in my abbreviated review, but in the context of the book it was one of many times where I had to pause, had to set the book down for a minute just to process and deal with the gravity of it all.
She gets very little help from her family. The majority of her family, instead of dealing with it, look the other way and make excuses. Tara takes a while to confront it herself, something far from surprising given the situation.
When she finally does confront everything head-on, when she finds her own voice and stands up to her dad and to Shawn, she pays the price: her relationship with her family. From the sounds of it, only a couple remain in touch. It's easy for us on the outside to feel that perhaps that's the best thing after all given the horrors she went through, but as Tara points out, it's never that easy when it's your family.
Tara seems to be writing this book for herself as much as anyone else and you see the impact abuse had on her, the way she struggles with her confidence and self-worth and the way she grapples with coming to terms with the toxic relationships with people she cared deeply about.
The most common criticism I’ve seen of the book is that it can’t possibly be true. There is so much that happened to Tara. How could that much bad happen? How could someone with no education manage to score well enough on her ACT to get to BYU? How could she go on to get a doctorate? In the context of today, with the wide dissemination of misinformation and examples of other popular memoirs exaggerating facts, I suppose that’s fair to some extent.
Her parents have stated that the book is not accurate, as you would fully expect given the terrible stories the book tells.
But two brothers also are on record as having said that while their exact recollection of some of the stories differs from Tara’s, they felt the book accurately depicted Tara’s upbringing. Doug, an ex of Tara’s who is mentioned quite a bit later in the book, has also written that Tara’s book lines up with his experience with her family as well.
In other words, there’s enough smoke here that there must be something to the fire. Tara is also incredibly transparent about where her telling differs from that of family members. The book is peppered with footnotes where bits and pieces of stories are contradicted by the family members she still has an active relationship with. She even has a whole final section of the book detailing how her accounts of some scenarios differ from what others remember.
And, perhaps, some of the memories aren’t entirely accurate. Who of us could claim to have 100% fidelity of our memories? Or tell stories of the people in our lives without perhaps missing some details? Tara’s very open in pointing out that her memories likely aren’t 100% correct. In an interview after the book’s publication, Tara addressed the criticism:
Everyone knows that human memory is fallible and there are problems with it. In my book, I acknowledge that by consulting other people, by having footnotes when there were major disagreements or things I couldn’t reconcile, and by trying to acknowledge why certain narratives persist and where they come from. But I think sometimes people use the basic fact of the fallibility of human memory to try and undermine other people’s sense of reality and their trust in their own perception, and that has a lot more to do with power than with the limitations of memory. It’s a way to control other people by saying ‘my memory is the truth and yours isn’t valid.’
The criticism leaves me with a similar feeling to what I had after reading The Road of Lost Innocence and finding out that some argued the author was lying. That is to say, I almost want the story to be a fake. I want to find out I was duped. As inspiring as it is that she was able to overcome, what she went through and what she still is going through is awful and I don’t want it to be true. That doesn’t seem to me to be the case here, however.
I had expected, knowing little about the book before picking it up, for something dealing with a more traditional definition of “education”. But what Tara wrote is more powerful and more important. Education, as Tara sees it, is about self-discovery and transformation. It’s about finding your own voice amidst the cacophony of voices that surround you.
]]>It had played out when, for reasons I don’t understand, I was unable to climb through the mirror and send out my sixteen-year-old self in my place.
Until that moment she had always been there. No matter how much I appeared to have changed—how illustrious my education, how altered my appearance—I was still her. At best I was two people, a fractured mind. She was inside and emerged whenever I crossed the threshold of my father’s house.
That night I called on her and she didn’t answer. She left me. She stayed in the mirror. The decisions I made after that moment were not the ones she would have made. They were the choices of a changed person, a new self.
You could call this selfhood many things. Transformation. Metamorphosis. Falsity. Betrayal.
I call it an education.
It’s a fair question, I suppose. Advocates of any technique or technology can be a bit heavy-handed when it suits them if they’re not being careful–myself included. But I’m not sure if that’s the case here. When you stop to consider all the implications of poor performance, it’s hard not to come to the conclusion that poor performance is an ethical issue.
Performance as exclusion
Poor performance can, and does, lead to exclusion. This point is extremely well documented by now, but warrants repeating. Sites that use an excess of resources, whether on the network or on the device, don’t just cause slow experiences, but can leave entire groups of people out.
There is a growing gap between what a high-end device can handle and what a middle to low-end device can handle. When we build sites and applications that include a lot of CPU-bound tasks (hi there JavaScript), at best, those sites and applications become painfully slow for people using those more affordable, more constrained devices. At worst, we ensure that our site will not work for them at all.
Forget about comparing this year’s device to a device a couple of years old. Exclusion can happen on devices that are brand-new as well. The web’s growth is being pushed forward primarily by low-cost, underpowered Android devices that frequently struggle with today’s web.
I recently profiled a page on a Pixel 2 (released in 2017), and an Alcatel 1x (released in 2018). The two devices represent very different ends of the spectrum. The Pixel 2 is a flagship Android device while the Alcatel 1x is a $100 entry-level phone.
On the Pixel 2, it took ~19 seconds for the site to become interactive.
On the Alcatel 1x, it took ~65 seconds.
Similarly, there is a growing gap between what a top of the line network connection can handle and what someone with a poor mobile connection or satellite connection can handle. We frequently position this issue as one around reaching a more global audience (and often, it is), but it can also hit much closer to home.
My home internet connection gives me somewhere around 3 Mbps down. It seems blazingly fast compared to the 0.42 Mbps download speed Jake Archibald mentioned his relative getting or the 0.8 Mbps download speed my in-laws get at their house.
The cost of that data itself can be a barrier, making the web prohibitively expensive to use for many without the assistance of some sort of proxy technology to reduce the data size for them—an increasingly difficult task in a web that has moved to HTTPS everywhere.
This isn’t just hyperbole. When I worked with Radio Free Europe a few years back, it was staggering to consider that many of the visitors of the site were breaking the law, jumping through hurdles and risking their livelihood to access the site. Poor performance was not an option.
The YouTube feather story—where they improved performance and saw an influx of new users from areas with poor connectivity who could, for the first time, actually use the site—is well documented by now. Other companies have had similar stories that they’re unable to tell.
We can point the fingers at the networks and businesses behind them all we want for the high costs associated with connectivity, but the reality is that we play our own part in this with the bloated web we build.
Performance as waste
Less often considered is the sheer waste that poor performance results in.
Let’s say that you’re one of those people with a device a year or two old. The web runs a little slower for you than it does for those with the latest phone off the shelf. It makes sense. Hardware gets better, sure. But just as critically sites are getting heavier and more computationally expensive. It’s not just that an older device will be less equipped to deal with the complexity of today’s sites, but also that it’s far less likely that the folks building those sites will have tested on your device.
This is a point Cennydd Bowles made in his excellent book, Future Ethics.
Software teams can reduce the environmental impact of device manufacturing, even if they don’t make devices themselves. Durable software not only saves the expense of frequent product overhauls; it also reduces the temptation of unnecessary device upgrades. Why buy a new handset when the old one works just fine?
Or, for our specific purposes, why would I need an expensive device with higher-powered CPU if the sites and applications run well on a lower-powered device?
Poor performance can also result in actually reducing the life-span of the devices we do have, even if we are able and willing to suffer through the slowness. Anything that is taxing on the processor (JavaScript, high-resolution images, heavy layout costs) is going to be taxing on the battery as well, causing wear and tear to the device.
It’s not just the short shelf-life of devices that is impacted by poor performance. Cennydd also makes the case that performance also has an impact on energy consumption:
In 2016, video, tracking scripts and sharing buttons caused the average website to swell to the same size as the original version of Doom. Ballooning bandwidth and storage have fostered complacency that we can do without. Performance is conservation. Habits like compressing images, reducing HTTP requests, preferring standards to third-party plugins, and avoiding video unless necessary have well-known benefits to usability, but are also acts of environmental protection.
Again, this isn’t hyperbole.
Just how many kWh are expended per GB of data is up for some debate, depending on your method of analysis. In a 2012 paper, The American Council for an Energy-Efficient Economy estimated the internet uses 5 kWh on average to support every GB of data. Let's run with that for a second.
The median desktop site weighs 1,848 kB (and that 5 kWh estimate wasn't looking at the energy consumption of mobile networks, which is almost certainly significantly higher).
Let's say just 2 billion people (somewhere around 4 billion are connected to the internet) view 5 pages at that weight in a day (surveys from 2010 showed the number to be around 10 sites on any given day, so I'm being overly safe here). That would be about 17.6 million GB of data transferred on a single day. Based on the 5 kWh energy consumption estimate, we're looking at spending roughly 88 million kWh of energy to use the web every single day.
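For anyone who wants to check the math, here's the back-of-the-envelope version, using the rough inputs above and a binary kB-to-GB conversion:

```js
// Back-of-the-envelope math for the figures above; every input is a rough
// estimate from this post, not a measurement.
const medianPageKB = 1848;
const people = 2e9;
const pagesPerPersonPerDay = 5;
const kWhPerGB = 5;

const kbPerGB = 1024 ** 2; // binary conversion; decimal (1e6) lands around 18.5M GB
const gbPerDay = (medianPageKB * pagesPerPersonPerDay * people) / kbPerGB;
const kWhPerDay = gbPerDay * kWhPerGB;

console.log((gbPerDay / 1e6).toFixed(1));  // ≈ 17.6 million GB per day
console.log((kWhPerDay / 1e6).toFixed(1)); // ≈ 88.1 million kWh per day
```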
Performance is an ethical consideration, now what?
When you look at the evidence, it’s hard to see how one could argue that performance doesn’t have ethical ramifications. So clearly, folks who have built a heavy site are bad, unethical people, right?
Here’s the thing. I have never in my career met a single person who set out to make a site perform poorly. Not once.
People want to do good work. But a lot of folks are in situations where that’s very difficult.
The business models that support much of the content on the web don’t favor better performance. Nor does the culture of many organizations who end up prioritizing the next feature over improving things like performance or accessibility or security.
There’s also a general lack of awareness. We’ve come a long way on that front, but the reality is that a lot of folks are still not aware of how important performance is. That’s not a knock on them so much as it’s a knock on the way our industry prioritizes what we introduce to new folks who want to work on the web.
I don’t want it to sound like I’m making excuses for poor performance, but at the same time, I’m pragmatic about it because I’ve been there too. I’ve built sites that were heavier and less performant than I would have liked. It happens.
That it happens doesn’t make any of us bad people. But it does us no good to try to ignore the real repercussions of what we’re building either.
There are consequences to the way we build. Real consequences, felt by real people around the world.
Performance is an ethical issue, and it’s one each and every one of us can work towards improving.
]]>Scalzi is one of those authors who very rarely, if ever, steers me wrong. He’s got a knack for writing gripping, fast-paced novels that are wildly entertaining from start to finish, and Head On fits the bill perfectly.
It’s the second book (well, maybe the third if you include the short story prequel) in a world where a portion of the population has been struck by “Haden’s Syndrome”—a disease that leaves people fully awake, but completely unable to move or react to any outside stimuli. So instead, Hadens (as the victims are called) navigate the world in personal robots called threeps.
That’s the backdrop for another fun “whodunnit” following Chris Shane, the child celebrity who now works as a rookie at the FBI. Honestly, it’s as much (if not more so) a murder mystery as it is a science fiction book.
What separates it from similar books is Scalzi’s willingness to use threeps and Haden’s sufferers as a way to explore how culture responds to disabilities and minorities. Throughout the book, you see that there are very few ways in which Hadens have the upper hand. Society tends to place them a little lower on the totem pole than folks without the disease, through both obvious and subtle biases (like the way folks bump into threeps all the time while walking). In the few areas where Hadens do have the upper hand, people without the disease are starting to use the technology to better themselves. On the surface, it’s not a big issue, but as Scalzi deftly explores, you begin to see the way that impacts Hadens’ sense of being as well as their wallets.
As with everything I’ve ever read by Scalzi, he manages to make you think about these things without you ever really realizing it. He never gets heavy-handed—the conversation and action seamlessly take you through these discussions as part of the action.
As I mentioned, it’s book two in this world, but it stands alone pretty well. You don’t need to read Lock In to enjoy Head On, though I’d argue you’ll enjoy the characters and world a bit more if you do. Head On ends up being everything you’d expect from Scalzi: entertaining and gripping with much more to think over than it first seems.
]]>Neither of those things really panned out.
I started the year strong, but the reading slowed down as did the review writing. I could make up some excuses, but the reality is I just didn’t give myself as much time to read as I had been the past few years. I spent too much time checking email and sipping news through a firehose. I’ll fix that for 2019, starting with curbing my email and Twitter issues (I’ve made good progress on that already and will likely write about that soon).
Still, what I did read was really high-quality stuff. There were a lot of great books I enjoyed this past year and very few duds that I had to put down. For fiction, I’d say my three favorites were Lonesome Dove, Beartown and A Man Called Ove (though The Body Library was really close). For non-fiction, I’d have to go with Why We Sleep, Factfulness and A River of Darkness. All six are highly recommended.
- Why We Sleep by Matthew Walker 5⁄5
I’ve written a full review for this one.
I do blame this book for some of my reduction in reading. I do most of my reading at night and this book really hammered home how important it is to get a full night’s rest. So on some evenings, where I would usually let myself get sucked into a book until the late hours of the night, I would force myself to put it down and get some sleep.
- Gut by Giulia Enders 4⁄5
I’ve written a full review for this one.
- Inclusive Design Patterns by Heydon Pickering 4⁄5
I’ve written a full review for this one.
- A River of Darkness by Masaji Ishikawa 5⁄5
I’ve written a full review for this one.
- Designing Interface Animation by Val Head 5⁄5
I’ve written a full review for this one.
- Lonesome Dove by Larry McMurtry 5⁄5
I’ve written a full review for this one.
- Technically Wrong by Sara Wachter-Boettcher 5⁄5
I’ve written a full review for this one.
- A Man Called Ove by Fredrik Backman 5⁄5
I’ve written a full review for this one.
- Exit West by Mohsin Hamid 3⁄5
I’ve written a full review for this one.
- The Underground Railroad by Colson Whitehead 4⁄5
I’ve written a full review for this one.
- The Real World of Technology by Ursula Franklin 4⁄5
I’ve written a full review for this one.
- Million-Dollar Consulting Proposals by Alan Weiss 4⁄5
- Million-Dollar Consulting by Alan Weiss 4⁄5
I don’t necessarily agree with everything in these books and I really balked at the whole “million-dollar” part of the title, but both of these books had a TON of useful information.
- The Irresistible Consultant’s Guide to Winning Clients by David A. Fields 4⁄5
This is my pick for the best business book I read this year. Practical, with lots of great advice.
- Beneath the Sugar Sky by Seanan McGuire 4⁄5
I didn’t enjoy it quite as much as the other Wayward Children books, but still a fun tale.
- A Win Without Pitching Manifesto by Blair Enns 4⁄5
- Binti by Nnedi Okorafor 4⁄5
- All Systems Red by Martha Wells 4⁄5
- Bored and Brilliant by Manoush Zomorodi 4⁄5
- The Rook by Daniel O’Malley 3⁄5
- Clockwise and Gone by Nathan Van Coops 4⁄5
- How Clients Buy by Tom McMakin and Doug Fletcher 3⁄5
Maybe I would’ve liked this book more had I not read similar books the same year. In comparison, it felt a bit…light.
- Factfulness by Hans Rosling, Anna Rosling Ronnlund and Ola Rosling 5⁄5
I’ve seen this one at the top of several other people’s lists and it absolutely should be. Beautifully optimistic, but never falsely so: everything is backed up by solid data. It gives you a new perspective on current affairs while training you to have a keener eye when you see someone using data to paint a story. I also really admired how respectful the book is towards people who still have different views on some of the topics presented.
- Beartown by Fredrik Backman 5⁄5
This book deals with sexual assault: the way our culture is complicit, and the toxic ways we treat victims of assault afterwards. It’s going to be hard reading for many. I posted a one-word review on Twitter shortly after reading this one: Wow. It’s profound and deep and gut-wrenching. Between this and A Man Called Ove, I will now read anything Backman writes. Anything.
- The Body Library by Jeff Noon 5⁄5
These Nyquist books by Noon are so weird and creepy and I am absolutely there for it. A Man of Shadows dealt with a world where time was a tangible commodity, and The Body Library revolves around the premise that stories are living things that can consume people in very real ways. Both are an absolute pleasure to read.
- Image Performance by Mat Marquis 5⁄5
Here’s the blurb I wrote for A Book Apart. I think it does a good job of explaining how I felt about Mat’s excellent book:
Image Performance is an entertaining, practical introduction to the finer details of image compression and performance online. The topic isn’t simple, but after reading this book, you could be forgiven for thinking it is—Mat’s just that good at explaining it.
- Progressive Web Apps by Jason Grigsby 5⁄5
Look. Jason can never know I gave this a positive review. It’ll go straight to his head.
But this book really is fantastic. I love that it goes beyond the usual “how to build a progressive web app” discussion to explore the business case for progressive web apps and some of the overlooked considerations about when and where to use each associated technology.
- Future Ethics by Cennydd Bowles 5⁄5
Gosh, this book was so great. Cennydd explores the ethical considerations of technology in a way that felt fresh even with all the books I’ve been reading on the topic. His was the first book I’ve read to present ethical frameworks (old news for those of you versed in ethics, newer to me) for considering the ramifications of technology. By doing so, you become so much more aware of just how messy it all is.
What I like about this approach is that you learn to consider how people may be arriving at different conclusions than you. It’s not necessarily that they’re bad people or wrong, but that they are using a different ethical framework. It makes it much easier to have meaningful discussions about complicated topics when you can understand where others are coming from. It’s an important and underutilized skill nowadays, but this book helps you develop it a bit more.
As a bonus, I now feel smarter when Chidi goes into his ethics lessons on The Good Place.
- Word by Word: The Secret Life of Dictionaries by Kory Stamper 4⁄5
Kory’s enthusiasm is infectious and makes this a far more interesting book than you would think a book about dictionaries would be.
- Les Miserables by Victor Hugo 4⁄5
I’ve been meaning to read this for years, and finally got around to it. It’s not the breeziest book to get through. The core story moves along at a good pace and keeps you roped in, but Hugo’s sidebars (if you can call 100 pages or so a sidebar) on topics like Waterloo and convents slow that momentum down. Most of the time, those sidebars pay off, though. It was great to have a better perspective on Waterloo, for instance, as it is a major focal point of the revolution that drives the actions of characters later.
So, not an easy read, but a very powerful one. I was familiar with the story from the musical, but there’s so much more depth here. Fantine’s story is heart-breaking, as is Eponine’s. And Javert’s. And Valjean’s. Ok, fine: just about everyone’s story is a real downer. But that was sort of the point: to look at the everyday people impacted by the revolution that history would overlook, and to cast a critical eye on a society that allows these things to happen.
- Unsubscribe by Jocelyn K. Glei 4⁄5
- The Warp Clock by Nathan Van Coops 5⁄5
Van Coops’ In Times Like These books are essentially popcorn movies in book form. That’s a compliment, not a criticism. They’re fast-paced and loads of fun. This latest entry is one of my favorites so far.
Past years
]]>I spend a lot of time in WebPageTest. A lot of time. A while back, I made myself a little Alfred (like Spotlight but significantly more powerful and useful) workflow to make it easy for me to fire off a test quickly. It’s nothing fancy, but it does make it very convenient.
When I want to run a test on WebPageTest for any URL, I can open Alfred (I use CMD + Space for the hotkey) and then start typing wpt. When I do, a few options come up:
A screenshot of the options (listed below) for the WebPageTest Alfred workflow
wpt
Test on a Moto G over a 3G network, from Dulles, VA (my most common test case, so I made it the default)
wpt:firefox
Test on Firefox over a cable connection, from Dulles, VA
wpt:chrome
Test on Chrome over a cable connection, from Dulles, VA
wpt:safari
Test on Safari over a cable connection, from Dulles, VA
wpt:iphone
Test on an iPhone 8 over a 3G connection, from Dulles, VA
For each option, I type in the URL I want to test, press “Enter” and my default browser opens and fires up a test. Of course, I still need to go to WebPageTest.org directly for a lot of testing, but these commands are great for quick off-the-cuff tests.
I have a couple of default options enabled for each test:
- Number of runs: 3
- Capture Video: true
The workflow is pretty straightforward. For each keyword (like wpt), it passes the query to an “Open URL” action, with all of the options tacked onto the query string.
Because the test gets passed to the public instance of WebPageTest.org by default, you need to use an API key (you can grab one for free courtesy of Akamai).
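To give a rough idea, the default wpt keyword ends up opening a URL along these lines ({query} is Alfred’s placeholder for the URL you typed; the location and connectivity strings here are illustrative guesses rather than the workflow’s literal values, so check WebPageTest’s location list for the exact identifiers):
https://www.webpagetest.org/runtest.php?url={query}&runs=3&video=1&location=Dulles_MotoG:Moto%20G%20-%20Chrome.3G&k=YOUR_API_KEY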
Since I decided to make the workflow available on GitHub, I didn’t want to hardcode the API key in the URL. Thankfully, Alfred has some pretty slick environment variable functionality that works perfectly for this. I also made the default run count and the WebPageTest URL variables, just in case anyone wants to default to a different number of test runs or use a private instance for testing.
A screenshot of the environment variables for the WebPageTest Alfred workflow
As I mentioned, the workflow is up on GitHub for anyone who wants to take it for a spin. There’s a lot more I could do here—like make it possible to pass in the location or run count or some other variable dynamically—but it was meant to be a simple way to fire up WebPageTest quickly and for that, it has been working pretty well so far.
]]>Or a few.
Anyway.
It turns out, the other folks at the table were not familiar. I couldn’t let this injustice continue, so I told them to “Google for quokka selfies”. Christi, who works on Bing, commented with a smile: “Or you could look it up on Bing.”
It was a small thing, all with good humor. But it kind of stuck in my head. Without really ever thinking about it, Google has risen to that status (for me at least, though I doubt I’m the only one) where, like Kleenex, it has become the product itself, not just the brand. Their influence on online search is nothing short of dominant.
Their influence on the broader web is not at that same level, yet, but it’s not all that far off. I’ve mentioned in the past the mixed feelings I have about Google’s influence on the web. There’s a significant risk of any entity having too much say over what happens because no entity is infallible.
I think, for example, that their no-JavaScript intervention is a great thing. Being vocal about including performance in their search rankings has caused many organizations to put more priority on making their sites performant than they would have otherwise. This is also a really great thing. When Google pushes a feature or bit of functionality as necessary, it gets attention. When those features align with the values of the web and fundamentals of building for it, I’m pretty pleased.
But they don’t always.
Consider AMP. I haven’t been shy in stating that I feel the way AMP has been handled is detrimental to the web. They’ve taken some steps towards alleviating some of those concerns—they’ve put an advisory board in place full of fantastic people, they’ve done some work on the URL issues—but they still fundamentally put the focus on the way the site is architected versus the experience that ensues. When’s the last time you can remember a framework being given preferential treatment the way AMP has been? You could argue that it’s a format, like RSS, but no one has ever tried to convince developers to build their entire site in RSS.
And yes, technically, AMP is no longer a “Google” thing, but let’s be honest: would it exist today if Google hadn’t pushed it so hard, visiting publishers and advocating for them to use it? Not a chance. If Google decided tomorrow to pull their contributors from the project and remove their integration, would AMP continue on? Almost certainly not.
The incentives Google put around AMP puts organizations in a hard place. What should be an implementation detail is now a much bigger discussion.
Do companies forgo the enhanced listings and continue building their site without AMP?
Do companies create two versions of their site—one with AMP and one without—and commit to maintaining them both? That can be a hassle as without some level of parity, the experience will become inconsistent, and engagement will suffer (after a talk at Chrome Dev Summit, I’m convinced this is at least part of why AMP case studies have been a mixed bag thus far).
Do companies decide to build their entire site in AMP, limiting themselves to the functionality that AMP incorporates and placing their trust that this framework will withstand the tests of time?
It’s true that we’ve gotten some nice web standards emerging in no small part because AMP exists. Proposals like Web Packaging and Feature Policy both look to be valuable additions to the web. But I often wonder: what would those standards have looked like if AMP wasn’t there to drive them? These are nice features, but they’re driven nearly entirely by AMP’s use cases. If Google had doubled down on incentivizing performance instead, what different features would have emerged?
Of course, my concern around AMP and my general okayness with Chrome’s JavaScript intervention reflect my own biases. There are folks who agree, and folks who don’t. That’s a good thing, a great thing even. We need different perspectives to move the web forward in a positive way. When we lose that, that’s when we’re at risk.
Yesterday we got another example of Google’s rising influence on the web, and the increasing associated risk. Microsoft, it seems, is going to build their next browser around Chromium. Chromium is fantastic, of course, but it feels like a shame after the dramatic turnaround Microsoft has undergone already from Internet Explorer to Edge.
Still, I can understand the logic. Microsoft could, but has chosen not to, put as many folks on Edge (including EdgeHTML for rendering and Chakra for JavaScript) as Google has put on Chromium (using Blink for rendering and V8 for JavaScript), so keeping up was always going to be a challenge. Now they can contribute to the same codebase and concentrate on the user-focused features. Whether or not this gets people to pay more attention to their next browser remains to be seen, but I get the thinking behind the move.
The big concern here is we’ve lost another voice from an engine perspective. For rendering engines, we’re down to Blink (Opera, Chrome, Microsoft presumably), WebKit (Safari) and Gecko (Firefox). For JavaScript engines, we have V8 (Node, Chrome, Opera, Microsoft presumably), Nitro (Safari) and SpiderMonkey (Firefox).
WebKit and Nitro have good folks working on them, but it’s a smaller group—Apple isn’t exactly making as significant an investment in the web. Apps are more their style.
The Gecko/SpiderMonkey teams are likewise excellent, and some of the work they’ve done lately is incredibly exciting. They also play the most critical role of the three: they’re the one engine combination not backed by a massive for-profit corporation, which gives them a unique perspective. The downside is that they don’t have as many resources to make noise and grab developers’ attention.
Which leaves Chromium, primarily driven by Google to this point. Google, unlike Apple, invests in the web quite a bit. They have a massive business interest in doing so. As a result, they have more people working on Chromium and more people whose job it is to discuss that work than any other browser.
Look around at any number of web conferences and count how many speakers are from Microsoft and Apple, and contrast that with folks from Google (factor in folks giving talks about Node, powered by V8, and the results are even more dramatic). This is not at all a knock on the folks from Google doing this work. They do a fantastic job, and we’re better off for them doing it. But you can see the discrepancy in representation from a developer outreach perspective, and it’s hard to imagine a scenario where that doesn’t end up leading to some level of homogeneity.
The same is true in standards-land. Google has aggressively pushed some fantastic standards forward, but we need voices to disagree. A little dissonance is not only valuable—it’s critical. Even the best of us, if not challenged by people with different viewpoints, will end up doubling down on our own viewpoint all too frequently. It takes real work to keep the blinders off, and a considerable part of that is making sure that there are many different voices at the table. It’s true of every aspect of life, and it’s true of the web standards process.
Rachel’s words of caution from 2016 are worth remembering:
If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.
And it’s not likely to be replaced.
I don’t think Microsoft using Chromium is the end of the world, but it is another step down a slippery slope. It’s one more way of bolstering the influence Google currently has on the web.
We need Google to keep pushing the web forward. But it’s critical that we have other voices, with different viewpoints, to maintain some sense of balance. Monocultures don’t benefit anyone.
]]>Specifically, the vast majority of what we know about human psychology and behavior comes from studies conducted with a narrow slice of humanity – college students, middle-class respondents living near universities and highly educated residents of wealthy, industrialized and democratic nations.
To illustrate the extent of this bias, consider that more than 90 percent of studies recently published in psychological science’s flagship journal come from countries representing less than 15 percent of the world’s population….Given that these typical participants are often outliers, many scholars now describe them and the findings associated with them using the acronym WEIRD, for Western, educated, industrialized, rich and democratic.
First, I love the WEIRD acronym, and I’m a little surprised I haven’t heard it before, or, if I have, that it didn’t stick.
More seriously though, focusing only on the WEIRD can have a damaging impact as we use research to guide how we parent, how we teach and how we interact with others. The article gives an excellent example of how even a pretty widely accepted, and simple, pattern test can lead us astray.
Consider an apparently simple pattern recognition test commonly used to assess the cognitive abilities of children. A standard item consists of a sequence of two-dimensional shapes – squares, circles and triangles – with a missing space. A child is asked to complete the sequence by choosing the appropriate shape for the missing space.
When 2,711 Zambian schoolchildren completed this task in one recent study, only 12.5 percent correctly filled in more than half of shape sequences they were shown. But when the same task was given with familiar three-dimensional objects – things like toothpicks, stones, beans and beads – nearly three times as many children achieved this goal (34.9 percent). The task was aimed at recognizing patterns, not the ability to manipulate unfamiliar two-dimensional shapes. The use of a culturally foreign tool dramatically underestimated the abilities of these children.
Naturally, this made me think about the research that we have done thus far to push the web forward. Most of it is significantly less formal than those being conducted by, for example, the psychology community the author was focusing on. So there’s already likely a few more gaps and oversights built in. Now throw in the inherent bias in the results, and it’s a little frightening, isn’t it?
Moving beyond the WEIRD is critical not just in scientific research, but in our own more web-centric research. We’ve known for a while that the worldwide web was becoming increasingly that: worldwide. As we try to reach people in different parts of the globe with very different daily realities, we have to be willing to rethink our assumptions. We have to be willing to revisit our research and findings with fresh eyes so that we can see what holds true, what doesn’t, and where.
Just how much are we overlooking?
We have anecdotal evidence that the way we view forms and shipping is overly simplistic. What other assumptions do we make in the usability of form controls that may be leaving folks out? What does a truly globally accessible form look like?
We know that data costs can be prohibitive in many parts of the globe, leading folks to have to get creative with things like local caching servers to afford to get online. We’ve started to focus less on page weight in WEIRD environments, but is that true of folks in other areas? Do the performance metrics we’re zeroing in on still represent the user experience in different situations?
There’s been some work done on better understanding what people expect from the web; I certainly don’t want to imply that there hasn’t. But the body of research is significantly smaller than the analysis based on the WEIRD. Much of what we do have is survey-based (versus more accurate forms of research) and speculation based on anecdotal evidence. I don’t think anyone could argue that we don’t still have a long way to go.
There always seems to be something new for the web to figure out, something that keeps us on our toes. Robyn Larsen has been talking a lot lately about how internationalization is our next big challenge, and I couldn’t agree more. One thing is certain: we have a lot to relearn.
]]>Recently, I decided to invest in myself a bit more by grabbing a new mouse and keyboard in an attempt to make my work environment a bit more ergonomic. The mouse didn’t make a huge difference (at least, I didn’t notice one) but I loved it immediately. The keyboard, on the other hand, had more of an impact on my comfort than any other purchase I’ve ever made but was far from love at first type.
Upgrading the mouse
The first thing I did was ditch the Magic Mouse. I’ve been using a Magic Mouse since they were first released. I had no strong affinity for it. The multi-touch surface was pretty handy, but mostly I stuck with it because it just worked. That being said, it’s not remotely close to being ergonomic. I rarely had any wrist stress, but I figured I’d get ahead of the game a little bit on this one and grab something a bit more comfortable. I looked at vertical mouses but wasn’t sure I wanted to go quite that far yet. Instead, I picked up an MX Master 2s.
The Logitech MX Master 2s
It took me all of five minutes of using the mouse to be extremely happy with the decision. For one, it’s infinitely more comfortable. But on top of that, the level of customization makes it incredibly useful as well.
There are six different programmable buttons (they’re not all buttons, but close enough). I use the place where my thumb rests for gestures to move between different windows.
The scroll wheel is useful on its own, as it can toggle quickly between a fast scroll for moving down huge pages of text and a slow scroll for more precision; clicking it toggles play/pause for my music. I set the little square button on top of the mouse to handle panning: hold it down and move the mouse left, right, up or down to pan around.
There’s a thumb scroller that I currently use to navigate between my open applications. Next to the thumb scroller are two other buttons, which I use for copy and paste respectively.
It’s all little things, but the mouse is so comfortable, functional and frankly attractive that I wondered why I hadn’t switched sooner.
Switching keyboards
After doing some digging, a split keyboard sounded like the way to go thanks to the reduced tension on your upper back and shoulders. I didn’t really know where to begin though. It turns out, there are a lot of people with very strong opinions about keyboards, and not all of them agree (shocking!), so I soon found myself experiencing choice paralysis.
In the middle of trying to find which one to get, I saw Dave posted a picture of his ErgoDox EZ. It looked pretty slick, so I decided to have a closer look. The reviews were all rock solid, and the ability to customize it with various “layers” of keys sounded interesting, so I ponied up the money and grabbed one. It’s not a cheap device. At all. My wife was the one who had to convince me it was a worthwhile investment in the end given how much of my time is spent typing.
The ErgoDox EZ Shine in all its shiny glory.
The impact of the ErgoDox for me was, by far, the most noticeable out of everything I’ve tried. More than the standing desk, more than the better mouse. It didn’t take long at all for my shoulders and neck to feel significantly less tense. It’s not all gone, but certainly better than it used to be by a sizeable amount.
It’s a good thing too because otherwise, I’m not sure how long I would have stuck with it. I have a very low tolerance for making changes that slow down my productivity, and the first few days of the ErgoDox were brutal. I thought I had solid touch typing skills, but wow, was I wrong. The linear layout of the keys (versus the slightly staggered arrangement you get with most keyboards) really exposed a couple of letters I have apparently been cheating on this entire time.
I tested my typing on my wireless keyboard before plugging in the ErgoDox so I could compare. After accounting for errors, I ended up with 93 words per minute which I thought was pretty decent. I plugged in the ErgoDox and took the same test. After accounting for errors, I tested out at five words per minute.
Yeah. Not a typo. Five.
I went with blank keys, but that wasn’t the issue. I knew where the keys were. My problem was that for some letters, I have apparently been cheating this entire time. Using the wrong finger to get to, for example, “C”. On a staggered layout (like most keyboards) that’s easy to get away with. On a linear keyboard, like the ErgoDox, it’s not. So there were typos galore.
But I stuck with it, partly, again, because I did notice a difference in my neck and shoulders, but also because I had laid down so much money on this thing that, darn it, I was going to give it a proper try.
In a week I was back up to about 30 words per minute. In two, I was around 60. Now, about a month after I first set it up, I’m back up to my original typing speed. (For those curious, my typing on a standard keyboard has regressed, though not as much as I feared. I was at 93 at the start, and I’m now testing around 80 words per minute.)
In addition to the obvious benefit to my neck and shoulders, the ErgoDox has a few other nice features. For one, it’s a mechanical keyboard. I know there are a lot of folks who swear by the superiority of mechanical keyboards. I’m not quite there myself, but I will say there’s something oddly satisfying about the tactile and audial experience of typing on one.
The other feature I really enjoy is the customization of the ErgoDox. The ErgoDox lets you customize literally every key on your keyboard. Want to use Dvorak or some other setup? Not a problem. Want to get rid of the caps lock key? Easy. More than that though, the ErgoDox has this concept of layers. You can set up a key to toggle a layer (or make it persist, up to you) and once that other layer is active, you can have an entirely different set of keys configured.
So, for example, I have four layers. My base layer (Layer 0) is pretty typical. ASDF letters, numbers on top, some symbols, space, tab, delete: pretty plain. The only exceptions are a few keys to help me toggle other layers on, and a couple of keys set up for specific shortcuts I use a lot. One toggles my iTerm window, the other is set to open Chrome DevTools (I spend a lot of time there).
Layer 1 doesn’t use all the keys. I have function keys for the top row, media controls to the right, and then a bunch of kind of fun but far from essential keys for controlling my mouse pointer. I don’t use them a lot.
Layer 2 has my arrows under my right hand (where IJKL would be) and then a bunch of keys for controlling the Ergodox’s lighting (super essential stuff, as you’d imagine).
And my last layer is a numeric keypad, a late addition to my setup but one that I absolutely love.
I certainly didn’t start there. The thing about the customization is that it can take a while to settle on something that works for you. It was pretty apparent early on that the default Mac layout was not going to work well for me. So I dug through reviews and videos and found one layout I didn’t hate and used it as my starting point. I talked to Ally Palanzi, and she was much smarter about finding her starting point than I was. She searched the online configurator for “javascript” to see what some developers were using for theirs. Handy.
From that base layout, I then kept changing and tweaking until things stopped being annoying. I don’t know if this is good advice or not, but what worked well for me was to watch for where my fingers wanted to go anyway. If my fingers kept going to the same spot, for example, to find the delete key, I would change my configuration to put the delete key there. I must have gone through a dozen different layouts before I settled on one that seems to work pretty well. There are still a few keys I haven’t found an ideal spot for, but I’d say I’m at the last 10% of optimization.
It took a bit longer than my transition to a new mouse, but I’m now at the point where I absolutely love the ErgoDox and wish I could type on it all the time. (I’ve seen some people travel with it, but I don’t know how much sense that makes for me personally.) The adjustment period was painful, but the payoff was significant.
]]>The Real World of Technology is based on, and expands on, a series of lectures that Ursula gave in 1989 about technology’s impact on the “real world”—the societal, political and ethical considerations that are so commonly overlooked.
It’s not a book for the faint of heart. The style is fairly academic and as much as I enjoyed it, I had to really slow down to fully absorb this one. In fact, I made it a point to listen to the lectures during my drives while reading the book to get the content from another angle. In retrospect, it was a really good choice. Listening to Ursula’s lectures hammered home what I was reading and, in some cases, registered with me a bit more clearly.
But the work is absolutely worth it—there are so many important concepts discussed that all too frequently go overlooked until it’s too late and we’re stuck dealing with the fallout.
She powerfully makes the case that you cannot consider technology in isolation and that technology itself can never be neutral because it is shaped by the environment around it:
What needs to be emphasized is that technologies are developed and used within a particular social, economic, and political context. They arise out of a social structure, they are grafted on to it, and they may reinforce or destroy it in ways that are neither foreseen nor foreseeable.
She tackles the way we repeatedly (and mistakenly) place our trust in technology over people (a timely word of caution given today’s propensity to solve problems with an algorithm):
Many technological systems, when examined for context and overall design, are basically anti-people. People are seen as sources of problems while technology is seen as a source of solutions… The notion that maybe technology constitutes a source of problems and grievances and people might be looked upon as a source of solutions has very rarely entered public policy or even public consciousness.
All of this, and more, feeds into her primary narrative: that technology has real world ramifications and we have not given enough consideration to what that fallout is.
It’s a powerful warning that we didn’t pay enough attention to when she presented it in 1989, and one that feels critical to pay attention to today.
]]>Max posted the following question:
Given these classes:
.red {
  color: red;
}

.blue {
  color: blue;
}
Which color would these divs be?
<div class="red blue">
<div class="blue red">
The correct answer is that they’re both blue. The order of class names in the HTML has no bearing on the styles. In this case, both selectors have the same specificity (they’re both single class selectors), so there’s a tie. Since .blue comes later in the stylesheet, it overrides the .red selector. Both divs will have blue text.
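A quick way to see this for yourself (a minimal sketch, not part of Max’s original question): swap the order of the rules in the stylesheet and leave the HTML alone.
.blue {
  color: blue;
}

.red {
  color: red;
}
With .red now declared last, both divs render red. Nothing about the markup changed; only the order of the rules did, and the later rule wins the tie.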
Over 14,000 people have responded as of the time I’m writing this. 43% got the answer correct. Most folks got tripped up by the order of the HTML attributes.
It doesn’t bother me too much that people are getting the question wrong. Everyone is at different stages in their career and everyone has different problems they’re facing in their daily tasks, so sure, not everyone is going to know this yet.
I do find it a bit alarming just how many folks got it wrong, though. Understanding how the cascade and specificity work is essential knowledge for being able to use CSS effectively—navigating the whole “cascading” part is going to be a huge mess otherwise. It’s clear from these results that we’re not doing a good enough job discussing and teaching these fundamental concepts. That’s on us—those of us who have learned these topics—for not effectively passing that information along.
But again, what really bothers me is not so much that people are getting the question wrong, but that some are presenting this kind of knowledge as a “quirk” or even as a topic that isn’t worth learning.
I couldn’t disagree more strongly with that and I think it’s doing a disservice to all the talented CSS developers out there (who often, understandably, already feel undervalued). It’s all too easy to disrespect someone’s work when we decide that what they do isn’t worth doing without even taking the time to understand it ourselves.
Take away specificity and you have to take away the cascade. Take away the cascade and…well, you’re simply not playing with all the tools available to you with CSS.
Quite a few folks were commenting about how this seems like a great argument for “CSS-in-JS”, because they can bypass these kinds of things altogether. It makes sense on some level I suppose. If I don’t have a good fundamental working knowledge of HTML, I would want to use a tool or framework that would generate it for me. If I don’t understand how to work directly with the DOM, I’d want to use a framework that would abstract that away from me. So yeah, if I don’t understand the cascading part about CSS, I would want a tool or approach that would let me bypass it.
Thinking about it in those terms puts it in perspective. I’ve definitely used tools, at times, that help me bypass the tricky parts of something I don’t fully understand otherwise. So I can’t begrudge someone using something they’re familiar with to help them avoid something they’re not familiar with, at least for a short while until they have a chance to better understand the technology they’re working with. But we have to be careful not to hide behind these tools either.
I have never once regretted taking the time to learn more about the tools that I use. Never. I always pick something up that makes me a better developer. The cascade is a wildly useful feature, and specificity is critical to being able to effectively use CSS. Learning these concepts is well worth the time spent.
So to anyone that didn’t get the question right, or wouldn’t know how to answer it: don’t feel bad but don’t avoid the topic either. It’s fundamental knowledge, one of those foundational concepts that makes your life as a front-end developer much easier. Yes—even if you are taking a styled components approach. MDN has some great resources about how CSS works. Estelle’s speciFISHity is another fantastic resource for learning how specificity comes into play.
To those of us who understand the topic, let’s consider what we can do to do a better job of explaining these core concepts to people who are less experienced with them.
And to those who find the topic complex or are adamant that this is a “quirk” that doesn’t need to be learned, don’t be so quick to dismiss it. The next time you come across a developer who works with CSS as a primary part of their day-to-day work, recognize that they’ve tackled a topic you find difficult. Sit down and pick their brain. They’ll probably be more than happy to help you learn more about a critical front-end skill, and that’s never time wasted.
]]>But disabling JavaScript is a much more controversial move, it appears. Web fonts fall back very easily to system fonts, so disabling web fonts is not a huge deal to most. JavaScript, however, isn’t always treated as progressive enhancement (as much as I feel it should be), and so when it goes missing, the consequences can be a bit more significant.
As you would expect, then, there’s been a lot of ensuing conversation. However, all the articles I had read were speculating on what the intervention would look like, not what it does. It took a little digging through Blink issues, but I eventually figured out how to reliably fire up the NOSCRIPT preview so that I could test it out.
What exactly does it do?
When the preview is enabled, the browser will download any necessary resources to display the page except for any JavaScript. External JavaScript files will not be requested, and inline JavaScript will not be executed. (Though it does appear that if a service worker has been installed for the domain, it will still execute).
The browser will do all the rest of the work necessary to display the page and present it to the user, with an information bar informing the user that the page has been modified to save data and giving them the option to view the “original”. When they click on the information bar, the original page will be downloaded and displayed—JavaScript included.
When I first read about the intervention, I had thought the preview was some sort of static snapshot, but it’s fully interactive. Provided your site works without JavaScript, I can click from page to page, reading articles or shopping for the product I want to buy.
Taking it for a test drive
To test the intervention, you’ll need to toggle a few flags to make sure you can see the NOSCRIPT preview. Once it’s enabled by default on Android, which presumably will happen in Chrome 69, this won’t be necessary.
To toggle the flags, open Chrome on your Android device and navigate to chrome://flags. #allow-previews and #enable-noscript-previews must each be enabled. #enable-optimization-hints should be disabled (we’ll come back to that later). You’ll also need to set the #force-effective-connection-type flag to ‘2G’ or slower.
When does it kick in?
The intervention kicks in when two criteria are met (it’s a bit more complicated than that, but we’ll get to that in a minute):
- The effective network connection type is 2G or slower
- The Data Saver proxy is enabled
If you want to see the intervention in action, you’ll need to make sure Data Saver is running (Chrome > Settings > Data Saver).
In real use, Chrome will use the Network Information API to determine if the effective connection type (ECT) is 2G or slower and, if it is, use the NOSCRIPT intervention. For testing purposes, you can force Chrome to always view the ECT as a 2G network using the #force-effective-connection-type flag I mentioned earlier.
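If you want to read the same signal from your own pages (to skip non-essential work on slow connections, say), a minimal sketch using the Network Information API looks something like this; note that the API is only available in Chromium-based browsers, and none of this is required for the intervention itself:
// Sketch: adapt the page based on the effective connection type (Chromium-only API)
if ('connection' in navigator) {
  const ect = navigator.connection.effectiveType; // 'slow-2g', '2g', '3g' or '4g'
  if (ect === 'slow-2g' || ect === '2g') {
    // e.g. skip non-essential scripts, heavy images or web fonts
    document.documentElement.classList.add('slow-connection');
  }
}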
On the surface, the decision to apply the intervention seems straightforward. If the network is slow and the user has made the decision to let Chrome help them out in those situations, you’ll get the NOSCRIPT intervention. The reality is it’s a little more complex than that.
For one, there is a whitelist and blacklist that can opt domains in or out of this optimization. It appears that there are lists on the browser side as well as on the user side. I’m not clear on all the ways those lists can be populated, but it does look like if the user opts out of the same host often enough, the host will be added to the preview blacklist. There is also a short period (about 5 seconds, from the looks of it) where Chrome will decide not to use the intervention for any site if a user has recently opted out.
Another wrinkle is that the NOSCRIPT intervention is far from the only option Data Saver has to reduce page bloat. There are other optimizations, and even other previews (like the LOFI preview which will load image placeholders instead of actual images). Again, I’m not 100% certain about the logic they’re using to determine when a given preview is the correct option, but it does appear there’s some thought applied here: they’re not just applying the NOSCRIPT intervention to every page that comes along.
That’s where the #enable-optimization-hints flag I mentioned earlier comes in. Enabled by default, this flag lets Chrome use “hints” to determine when and where certain optimizations should apply. Right now, to apply the NOSCRIPT intervention with optimization hints enabled, the request must be whitelisted. I suspect they may get more aggressive with the optimization after they’ve had it running like this for a while. In the meantime, to consistently see it in effect, we need to disable those hints.
So yes, it does kick in on 2G networks with Data Saver enabled, but as you can see, there are more variables at play.
It works on HTTPS too
Before testing, I made the (mistaken) assumption that since the NOSCRIPT preview intervention was tied to Data Saver, it wouldn’t apply to HTTPS sites. Data Saver, like most proxy browsers and services, tends to leave HTTPS alone. But it looks like I was wrong: the NOSCRIPT intervention appears to work on both HTTP and HTTPS sites.
I guess it makes sense. The reason Data Saver (and other proxy services and browsers) leave HTTPS alone is that applying any transformations to the content would require that they essentially act as a man-in-the-middle.
In this case, however, they aren’t transforming the content in any way. The NOSCRIPT previews simply don’t execute JavaScript, nor make any requests to external JavaScript.
How do developers know the intervention has been applied?
When the intervention kicks in, all requests will have an intervention header applied to them, like so:
<https://www.chromestatus.com/features/4775088607985664>; level="warning"
The presence of the header is enough to indicate that the browser applied some sort of intervention, and the URL in the header will point to more information about the specific intervention applied.
There’s one notable exception: the main document does not appear to get the intervention header currently. Honestly, this may just be a bug as it’s not clear to me why the header wouldn’t be applied to the main document.
All requests (including the main document) will also have the save-data header set to on, but you shouldn’t rely on that as an indication of an intervention. The save-data header will be applied whenever the proxy is enabled (or, really, any proxy service or browser that supports the header), regardless of whether the browser applied any interventions.
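If you want to log the two headers on your own server to see the difference, a minimal sketch might look like this (a Node.js example I put together for illustration; the header names match what’s described above, but verify them against your own traffic):
const http = require('http');

http.createServer((req, res) => {
  // Sent whenever Data Saver (or another proxy that supports it) is enabled,
  // regardless of whether any intervention was actually applied
  const saveData = req.headers['save-data'] === 'on';

  // Only present when the browser applied an intervention;
  // the value links to more information about which one
  const intervention = req.headers['intervention'];

  console.log({ url: req.url, saveData, intervention });
  res.end('ok');
}).listen(3000);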
If you’re actively testing, you can also fire up chrome://interventions-internals/ in Chrome on your device and follow the logs to confirm when the NOSCRIPT intervention has been applied.
What does this mean for users?
For users, the intervention can be very effective for certain sites. I loaded up 10 different sites with the NOSCRIPT intervention enabled and disabled to see the difference.
| URL | NOSCRIPT Weight (KB) | Original Weight (KB) | Change in Weight |
|---|---|---|---|
| https://www.wayfair.com/ | 164 | 3277 | -95.0% |
| https://www.aliexpress.com/ | 72 | 2150 | -96.6% |
| https://www.linkedin.com/ | 151 | 1536 | -90.2% |
| https://www.reddit.com/ | 295 | 1126 | -73.8% |
| https://www.bbc.com/news | 354 | 467 | -24.2% |
| https://www.theatlantic.com/ | 11673 | 2970 | +293.0% |
| https://techcrunch.com/ | 548 | 2867 | -80.9% |
| https://www.theverge.com/ | 68198 | 3174 | +2048.4% |
| https://www.cnn.com/ | 418 | 7784 | -94.6% |
| https://www.nytimes.com/ | 379 | 16650 | -97.7% |
The two results that jump out right away as oddities are The Atlantic and The Verge which managed to get a whopping 293% and 2048% heavier without JavaScript. In case you’re curious (I was), it’s because they are doing a lot of lazy-loading of images with JavaScript. In situations where JavaScript is not available, they wrap a fallback image in a <noscript> element. Unfortunately for visitors to both sites, several fallback images are massive—ranging from 1.6MB to 9.9MB.
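The pattern in question looks roughly like this (an illustrative sketch, not the actual markup from either site):
<img class="lazyload" data-src="photo-320w.jpg" alt="A photo">
<noscript>
  <img src="photo-original.jpg" alt="A photo">
</noscript>
When JavaScript runs, the lazy-loader swaps in data-src and the noscript fallback never loads. When it doesn’t, the browser fetches whatever the fallback points at. If that happens to be the original, full-resolution asset instead of an appropriately sized one, the “no JavaScript” version of the page can easily end up far heavier than the full experience.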
When the optimization works, which is more often than not, it works very well. The smallest improvement was a 24% reduction in data usage, and the remaining sites shed between 74% and 98% of their bytes.
It’s possible you would get similar results from the LOFI preview (which displays placeholders instead of the site’s actual images by default) for many of these sites. It’s worth noting though that the NOSCRIPT intervention has the added benefit of reducing the amount of work the actual device has to do. Images may account for the majority of network weight, but on the CPU, JavaScript is the worst offender by far.
What does this mean for site owners?
Whenever something like this comes up, naturally people want to know how to make it so that their site isn’t negatively impacted. The appropriate response is to make sure you serve a usable experience even if JavaScript isn’t enabled. That doesn’t mean you can’t use React, Vue or the like—but it does mean you should use server-side rendering if you do. The less your site relies on client-side JavaScript, the better it will appear when the intervention is applied. Treat JavaScript as an enhancement and you’re good to go.
The BBC site is a good example. Below you can see the mobile site (left) and the NOSCRIPT preview (right). There is very little difference. The branding is retained, and the content is readable.
The BBC site is a great example of how good the NOSCRIPT preview can look when JavaScript is treated as progressive enhancement: all the content and branding is in place.
Contrast that with the Engadget site, which displays nothing whatsoever in the NOSCRIPT preview:
Since Engadget requires JavaScript to display their content, the NOSCRIPT preview is blank.
Or AliExpress which does have some navigation displayed (kind of), but no branding:
AliExpress.com shows at least a little navigation in the NOSCRIPT preview, but there’s no branding without JavaScript enabled.
You can, technically, opt-out of the intervention altogether by setting Cache-control: no-transform on your main request. The no-transform value tells proxy services not to modify any requests or resources and the intervention respects that: applying it ensures no one will ever see a NOSCRIPT preview for your site.
But use this with extreme caution. I’ve always been incredibly uneasy about using the no-transform value to opt out of proxies. Users are choosing those proxy services or browsers intentionally. They’re opting into these sort of optimizations and interventions and it feels a bit uncomfortable to me when developers overrule those decisions.
If you are going to opt-out using the no-transform value, then at least make sure you’re making ample use of the save-data header to reduce weight wherever you can: eliminating web fonts, serving low-quality images, etc.
This is a good thing
Long story short, the NOSCRIPT intervention looks like a really great feature for users. More often than not it provides significant reduction in data usage, not to mention the reduction in CPU time—no small thing for the many, many people running affordable, low-powered devices.
The Chrome folks, as you would expect, aren’t being haphazard with the intervention either. In fact, by (at least initially) relying on a whitelist, they’re being pretty conservative with it. It’s just one of many tricks in their bag to provide a more performant experience and they appear to be treading carefully when it comes to applying it.
What I love most about the intervention is the attention it has gotten from developers. JavaScript isn’t a given. Things go wrong.
I have mixed feelings about Google’s influence on the web (a subject for another post, perhaps) but bringing a little more attention to the reality that we can’t always rely on JavaScript (and providing a much more usable experience for many in the process) is something I can get behind.
]]>To counter this, the school he was visiting set up its own local caching server. But, as he explains, this approach falls apart when HTTPS gets involved.
A local caching server, meant to speed up commonly-requested sites and reduce bandwidth usage, is a “man in the middle”. HTTPS, which by design prevents man-in-the-middle attacks, utterly breaks local caching servers. So I kept waiting and waiting for remote resources, eating into that month’s data cap with every request.
Eric acknowledged that HTTPS is a good idea (I agree) but also pointed out that these implications can’t be ignored.
Beyond deploying service workers and hoping those struggling to bridge the digital divide make it across, I don’t really have a solution here. I think HTTPS is probably a net positive overall, and I don’t know what we could have done better. All I know is that I saw, first-hand, the negative externality that was pushed onto people far, far away from our data centers and our thoughts.
Every technology creates winners and losers. HTTPS is no exception.
Many of the responses to the post were…predictable. Some folks read this as an “anti-HTTPS” post. As Brad recently pointed out, we need to get better at talking about technology “…without people assuming you’re calling that technology and the people who create/use it garbage.”
Eric’s post is exactly the kind of reasoned, critical thinking that our industry could benefit from seeing a bit more of. HTTPS is great. It’s essential. I’m very happy that we’ve reached a point where more requests are now made over HTTPS than HTTP. It took a lot of work to get there. A lot of advocacy, and a focus on making HTTPS easier and cheaper to implement.
But the side effects experienced by folks like those in that school in Uganda are still unsolved. Noting this isn’t blaming the problem on HTTPS or saying HTTPS is bad; it’s admitting we have a problem that still needs solving.
I was thinking about this issue myself recently. I live in a small town and our mobile data connectivity is a bit spotty, to say the least. I use T-Mobile, which is normally excellent. In my little town, however, that’s not the case. Recently, it seems T-Mobile has partnered with someone local to provide better and faster data connections. But it’s all roaming. T-Mobile doesn’t charge for that, but it does cap your mobile data usage. After you exceed 2GB in a given month, you’re cut off. In the few months since the data has become available, it’s a number I’ve exceeded more than a few times.
So I’ve been taking a few steps to help. One of those was to turn Chrome’s Data Saver (their proxy service) back on. It does a good job of cutting down data usage where it can, but it’s useless for any HTTPS site, for the same reasons that school’s local caching server is useless—to do what it needs to do, it needs to act as a man-in-the-middle. So while Data Saver is extremely effective when it works, it works less and less now.
It’s far from the end of the world for me, but that’s not the case for everyone. There are many folks who rely on proxy browsers and services to access the web in an affordable manner and for them, the fact that the shift to HTTPS has made those tools less effective can have real consequences.
This isn’t an entirely new conversation (as my nemesis1, Jason Grigsby, recounted on Twitter). I can personally remember bringing this up to folks involved with Chrome at conferences, online AMA’s and basically whenever else I had the opportunity. The answers always acknowledged the difficulty and importance of the solution while also admitting that what to do about it was also a bit unclear.
Whether or not the topic was overlooked is up for debate (there has been work done by the IETF towards solving this), and I suppose depends entirely on which discussions you were or were not involved in over the past few years. The filter bubble effect is real and works both ways. But the reality is that in the past few years we’ve made tremendous progress getting HTTPS to be widely adopted, but we haven’t done nearly as good a job ensuring that folks have an affordable and simple alternative to the tools they’ve used in the past to access the web.
Should we have moved ahead with HTTPS everywhere before having a production-ready solution to ensure folks could still have affordable access? I honestly don’t know. Is a secure site you can’t access better than an insecure one you can? That’s an impossibly difficult question to answer, and if you asked it to any group of people, I’m sure a heated discussion would ensue.
Many of us, too, are likely the wrong people to answer that. I know I’m not the right person to pose the question to. I can afford to access the web, and I don’t have the same significant privacy concerns that many around the world and down the street do. Having the discussion is essential, but ensuring it happens with the right people is even more so.
Then there’s the question this raises about how we approach building our sites and applications today.
Troy Hunt had one of the most reasoned responses to Eric’s post that I’ve seen. He pointed out that it’s critical that we move forward with HTTPS, but that this is also an essential problem to solve. He also, rightfully, pointed out the root issue: performance.
If you’re concerned about audiences in low-bandwidth locations, focus on website optimisation first. The average page load is going on 3MB, follow @meyerweb’s lead and get rid of 90% of that if you want to make a real difference to everyone right now 😎
I refer back to Paul Lewis’s unattractive pillars so often I should be paying him some sort of monthly stipend, but this is such a clear example of the reciprocal relationship between security, accessibility, and performance—and of how important each one is.
The folks using these local caching servers and proxy services are doing so because we’ve built a web that is too heavy and expensive for them to use otherwise. These tools, therefore, are essential. But using them poses serious privacy and security risks. They’re intentionally conducting a man-in-the-middle attack, which sounds terribly scary because it is.
To protect folks from these kinds of risks, we’ve made a move to increase the security of the web by doing everything we can to get everything running over HTTPS. It’s undeniably a vital move to make. However, this combination—poor performance but good security—now ends up making the web inaccessible to many. The three pillars—security, accessibility and performance—can’t be considered in isolation. All three play a role and must be built up in concert with each other.
Like pretty much everyone in this discussion has acknowledged, this isn’t an easy issue to solve. Counting on improved infrastructure to resolve these performance issues is a bit optimistic in my opinion, at least if we expect it to happen anytime soon. Even improving the overall performance of the web, which sounds like the easiest solution, is harder than it first appears. Cultural changes are slow, and there are structural problems that further complicate the issue.
Those aren’t excuses, mind you. Each of us can and should be doing our part to make our sites as performant and bloat-free as possible. But they are an acknowledgment that there are deeply rooted issues here that need to be addressed.
There are a lot of questions this conversation has raised, and far fewer answers. This always makes me uncomfortable. I write a lot of posts that never get published because ending with unsolved questions is never particularly satisfying.
But I suspect that may be what we need—more open discussion and questioning. More thinking out loud. More acknowledgment that not everything we do is straightforward, that there’s much more nuance than may first appear. More critical thinking about the way we build the web. Because the problems may be hard and the answers uncertain, but the consequences are real.
We look at histograms instead of any single slice of the pie to get a good composite picture of what the current state of affairs is. And for specific goals and budgets, we turn to the 90th or 95th percentile.
For a long time, the average or median metrics were the default ones our industry zeroed in on, but they provide a distorted view of reality.
Here’s a simplified example (for something more in depth, Ilya did a fantastic job of breaking this down).
Let’s say we’re looking at five different page load times:
- 2.3 seconds
- 4.5 seconds
- 3.1 seconds
- 2.9 seconds
- 5.4 seconds
To find the average, we add them all up and divide by the length of the data set (in this case, five). So our average load time is 3.64 seconds. You’ll notice that there isn’t a single page load time that matches our average exactly. It’s a representation of the data, but not an exact sample. The average, as Ilya points out, is a myth. It doesn’t exist.
Figuring out the median requires a little less math than calculating the average. Instead, we order the data set from fastest to slowest and then grab the middle (median) value of the data set. In this case, that’s 3.1 seconds. The median does, at least, exist. But both it and the average suffer from the same problem: they ignore interesting and essential information.
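If you want to play with the numbers yourself, here’s a minimal sketch (my own, not from any particular RUM product) that computes the average, the median, and a nearest-rank 90th percentile for a set of load times:

```js
// A minimal sketch: average, median, and a nearest-rank 90th percentile
// for the five example load times above (values in seconds).
const loadTimes = [2.3, 4.5, 3.1, 2.9, 5.4];

const average = values =>
  values.reduce((sum, v) => sum + v, 0) / values.length;

const percentile = (values, p) => {
  const sorted = [...values].sort((a, b) => a - b);
  // Nearest-rank method: the value at or below which p% of samples fall.
  return sorted[Math.ceil((p / 100) * sorted.length) - 1];
};

console.log(average(loadTimes));        // 3.64
console.log(percentile(loadTimes, 50)); // 3.1 (the median)
console.log(percentile(loadTimes, 90)); // 5.4
```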
Why are some users experiencing slower page load times? What went wrong that caused our page to load 45% and 74% slower?
Focusing on the median or average is the equivalent of walking around with a pair of blinders on. We’re limiting our perspective and, in the process, missing out on so much crucial detail. By definition, when we make improving the median or average our goal, we are focusing on optimizing for only about half of our sessions.
Even worse, it’s often the half that has the lowest potential to have a sizeable impact on business metrics.
Below is a chart from SpeedCurve showing the relationship between start render time and bounce rate for a company I’m working with. I added a yellow line to indicate where the median metric falls.
This graph shows the relationship between start render time and bounce rate. The yellow dotted line at 3.1 seconds indicates the median start render time.
A few things jump out. The first is just how long that long-tail is. That’s a lot of people running into some pretty slow experiences. We want that long-tail to be a short-tail. The smaller the gap between the median and the 90th percentile, the better. Far more often than not, when we close that gap we also end up moving that median to the left in the process.
The second thing that jumps out is the obvious connection between bounce rate and start render time. As start render time increases so does the bounce rate. In other words, our long-tail is composed of people who wanted to use the site but ended up ditching because it was just too slow. If we’re going to improve the bounce rate for this site, that long-tail offers loads of potential.
It’s tempting to write these slower experiences off, and often we do. We tend to frame the long-tail of performance as being full of the edge cases and oddities. There’s some truth in that because the long-tail is where you tend to run into interesting challenges and technology you haven’t prioritized.
But it’s also a dangerous way of viewing these experiences.
The long-tail doesn’t invent new issues; it highlights the weaknesses that are lurking in your codebase. These weaknesses are likely to impact all of us at some point or another. That’s the thing about the long-tail—it isn’t a bucket for a subset of users. The long-tail is a gradient of experiences that we’ll all find ourselves in at some point or another. When our device is over-taxed, when our connectivity is spotty—when something goes wrong, we find ourselves pushed into that same long-tail.
Shifting our focus to the long-tail ensures that when things go wrong, people still get a usable experience. By honing in on the 90th—or 95th or similar—we ensure those weaknesses don’t get ignored. Our goal is to optimize the performance of our site for the majority of our users—not just a small subset of them.
As we identify, and address, each underlying weakness that impacts our long-tail of experiences, inevitably we make our site more resilient and performant for everyone.
]]>But when reading the recaps and tweets that followed the event, I was pretty intrigued to see the Apple Watch is now going to be able to support web content. The initial details were a bit scant, but Erik Runyon took the time to jot down some information he found in a video from WWDC.
It looks like WebKit on the watch is optimized for reading, as most probably thought it would. Some features are turned off (video, service workers, fonts, etc) and text-heavy pages automatically activate Reader Mode.
However, the watch also supports, from the sounds of it, a mostly full web experience.
If you don’t do anything to optimize your content, the watch will attempt to adapt already responsive sites to fit on the watch display. It does this by preserving 320 CSS pixel layouts and computing the initial scale to fit within the watch screen. So even if you set an initial scale of 1 at 320pt, the watch will scale it down to 0.49 initial scale at 156pt. It also reports its dimensions as 320x357 CSS pixels rather than the actual watch screen dimensions. It will also avoid horizontal scrolling.
The watch will also provide a way for you to optimize your layout specifically for the smaller sizes (though I’ll be honest, I question whether we needed a separate meta tag to accomplish this):
Now if you want to optimize your content specifically, you’ll need to add yet another meta tag.
<meta name="disabled-adaptations" content="watch">
This will tell the watch to ignore its default adaptations and display as instructed in the CSS. So even if you have a media query at min-width 320px, the watch will not use it since its screen is smaller. Essentially, this meta tag allows WebKit to treat the display width as the true width of the Apple Watch, which is 272x340px for the 38mm and 312x390px for the 42mm watch.
Now, who knows if this is going to take off. As Ethan pointed out in his post, we’re pretty terrible at predicting how people are going to use the web. I don’t immediately see any particular situations where I personally would want web access on my wrist. But that’s just me, and it would be foolish of me to project my situation onto everyone else. The only limitations of the web are the ones we place on it through our assumptions.
What interests me most about the web on a watch are the constraints that the hardware places.
The median site sends about 351kb of compressed JavaScript to “mobile” devices according to HTTP Archive. That’s roughly 1.7-2.4MB of uncompressed JavaScript the browser has to parse, compile, and execute. That little S3 processor is going to struggle if we try to serve anything close to the amount of JavaScript that we serve to everything else.
Of course, I haven’t seen anything yet about how the watch is going to treat all that JavaScript in the first place. They’ve decided to limit other expensive features. With JavaScript being so CPU-intensive, I wouldn’t be the least bit surprised if they limited JavaScript functionality in some way as well.
It would make complete sense to me if WebKit on the watch follows the approach that was taken by proxy browsers like Opera Mini, Puffin, UC Mini and the rest of their kin. In those cases, parsing, rendering, layout and JavaScript functionality are all handled on an intermediary server. That server then passes a snapshot of the page to the device. This enables lower-powered hardware to serve web content without overtaxing itself. In that situation, JavaScript functionality is typically limited to whatever can complete on the server within X amount of seconds.
Of course, that’s just speculation. And, I have to admit, fairly unlikely speculation. The idea of Apple using a proxy browser seems a little un-Apple-like to me. But it doesn’t matter in the end, because whether Apple finds a way to limit JavaScript execution or gives us free rein to shoot ourselves in the foot, the outcome is ultimately the same for developers.
The organizations that made the fewest assumptions about the people using their sites are going to be the ones ahead. The folks who made sure their markup was semantic and robust, so that Reader Mode wasn’t an issue; the folks who built their sites to be resilient and performant; the folks who used progressive enhancement—those folks are going to be just fine.
It’s not a new lesson, but it’s one we do tend to have to revisit from time to time. When we build our sites in a way that works for people on less-capable devices, slower networks and in other less-than-ideal circumstances, we end up better prepared for whatever device or technology comes along next.
As Jeremy Keith wrote in Resilient Web Design: “The best way to be future-friendly is to be backwards‐compatible.”
]]>Somehow, I felt weird about throwing away a bit of consumer technology. Except, the Kindle to me, wasn’t that at all. I hold a great fondness to this little device, and that’s an odd concept for me to grasp.
I can relate. I have had many conversations over the years where I’ve told people that out of all the devices and gadgets I own, my Kindle—one of the least powerful and cheapest devices—is the one that I am most protective of. I’ve never been entirely able to explain why that is.
For Remy, his affection for his Kindle is pretty personal:
I’d struggled reading books in the past for a number of reasons: I used to use glasses to help my focus when reading (when I was 18), the size of the books were daunting to me (it took me 6 solid months to read Frankenstein on paperback), and all of this cumulative to a very slow and painful reading process, which put me off the entire experience.
The Kindle changed a few things for me: firstly, I had no idea the size of the book, and I’ve never really understood the percentage progress (or my thumb is over it intentionally). Secondly, I found that raising the font size and increasing the line height made the pages entirely readable for me.
Since that December in 2016 and mid-2018, I’ve read nearly 50 books. This is a huge deal for me, and my Kindle was there for every page.
That last bit (emphasis mine) resonated with me and got me thinking again about why exactly I’m so attached to mine. I think it boils down to focus and deliberate intent.
My laptop and phone are very powerful devices, but they’re not devices for focus.
I use my laptop for both work and play, and whenever it’s open, there is a myriad of different applications and processes I’m toggling between. I end up hopping between my browser, my editor, my calendar, my terminal session, Sketch if I’m doing front-end work and Slack when something comes up. I probably have music playing.
My phone is worse. Granted, it’s not as easy to hop between different applications as the laptop, but anyone with a smartphone knows “focused” would be the most inappropriate adjective to possibly apply to a smartphone. Most of the time, I pick up my phone because some notification somewhere sucked me in. I also use it when I have a few moments of downtime and am aimlessly searching for a distraction.
With either device, there is always this sense that there is more I could be doing on that device. YouTube, Twitter, Slack—you name it—they all beckon for attention.
But the Kindle is nothing like that.
When I pick up my Kindle, it’s always intentional. No notifications are pulling me in. It doesn’t buzz or light up. If I open my Kindle, it’s because I wanted to.
And while you can use the eternally “experimental” browser or browse Goodreads on your Kindle, neither of those take center stage. The Kindle is a device that focuses on reading books (and, admittedly, on buying more books to read). So when I pick it up, I open the book I want to read, sit back and get immersed in it with nothing else to distract me.
It’s deliberate. It’s focused. It’s calm.
We’re increasingly surrounded by needy technology—technology that constantly tries to distract us in order to get our attention. There’s something refreshing and appealing about technology that is calm and focused.
]]>It shows. This is a brutal novel, with some absolutely terrible scenes that will be uncomfortable for everyone, and almost certainly too much for some.
Technically, The Underground Railroad is science-fiction. The railroad in the book is an actual subway, and as you move from stop to stop, each stop has subtle indications that time may have leaped forward a bit as well. But that aspect of the story is very subdued. Like the doors in Exit West, the fantastical nature of the railroad serves as a device to let Colson get his characters to the next destination so that he can explore racism through a variety of lenses.
The story starts on a plantation, where we meet Cora and the other characters that will drive the story forward. This setting, in particular, is evidence of the incredible amount of research that went into the book. Colson’s presentation of plantation life goes beyond the shallow perspective we often get. He explores what a complete lack of power and humanity does to those enslaved, and how that causes many of the slaves to treat each other poorly as well—desperate to be able to assert some sort of control or to be able to call something theirs. It’s chilling and heart-wrenching. It also sets up the mystery of Cora’s mother (when we finally find out her story late in the novel, it’s a significant and poignant moment).
When Cora eventually runs away with another slave named Caesar, we’re moved first to South Carolina. South Carolina serves as our window into governmental attempts to intervene with race, and we quickly see just how awful their attempts are.
Each successive stop provides a different view. South Carolina explores government intervention. North Carolina is like a scene from a horror movie as it deals with white supremacy. Indiana provides a little bit of a respite as we see how the characters (nearly all former slaves) attempt to reclaim their lives and their humanity. But even Indiana isn’t as gentle as it first seems.
Without spoiling the ending, I will say that it ends in the only way a book like this could end. There’s hope, in the end, but it’s a heavy hope that is buoyed by the reality that the scars of Cora’s experience (and more generally, racism) run deep and linger on.
Whitehead’s writing is beautiful and poignant. I’ve seen lesser books on difficult subjects struggle with finding the right balance between telling their story without belaboring their message. Whitehead doesn’t have a problem with that. You’re absorbed in the story from the beginning and you can’t help but see all the painful parallels to what is still happening today.
The railroad in the book doesn’t ever stop in today’s day and age, but you end up finding your way there nonetheless.
]]>- Performance archaeology uncovers insights into your development and performance culture.
- Start with a hypothesis, then do a comprehensive survey and additional research. When you understand the context, start excavating and finally interpret your discoveries.
Hypothesis
- Focused on improving the mobile performance of the Etsy listing page.
- Code for mobile listing page started in 2013 with multiple teams, so lots of legacy code baggage.
- RUM DOM Content Loaded times tied most closely to the user experience of Etsy users, so they focused on that. 90% of visits were under 5 seconds. 52% of visits were 2 seconds or less.
- Looking at conversion rate versus DOMContentLoaded they discovered that conversions steadily declined for every second added to DOMContentLoaded time.
- Their ultimate hypothesis was that improving the performance of the listing page would increase conversions.
Survey
- Etsy had the basics in place already, so they focused on improving the critical rendering path. Focusing on how quickly a user receives confirmation the page is loading and how quickly they can interact with a page.
- They ran their listing page through WebPageTest on an iPhone 6 over a slow 3g network to level set. Start Render (their primary focus) was 8.5 seconds and DOM Content Loaded was at 12.1 seconds.
- Their listing page had five CSS files loaded up front that blocked CSSOM construction and Start Render despite most of that CSS not being used.
- Their main listing image, the thing they wanted to load first, was the 36th resource actually requested.
- Identified four primary areas of improvement: lazyload images, reduce CSS file size, switch to SVGs for icons and background images, reduce JavaScript file size.
Excavate
- 35 of their 37 images could be moved out of the critical rendering path. Synthetic testing showed a significant performance improvement on slower networks, while faster networks remained similar.
- Started with 5 CSS files totaling 98kb of CSS over the network.
- To identify unused CSS, they used Selenium scripts to open pages in the browser and ran unCSS on them to identify unused CSS and create a file with only the CSS in use.
- The problem with the automation was that they didn’t capture all the various states of a page and they had to keep adding more every time they found a bug. So there was manual work involved.
- They put the lazyloading and unused CSS optimizations out as an experiment to a subset of their users and saw a 6% reduction in DOM Content Loaded and a 13.2% reduction in page load time. Conversion also increased.
- Originally, Etsy had five background images in CSS and one icon font. While the over the wire cost wasn’t very high, the extra HTTP requests weren’t necessary so they switched to SVG. The impact on metrics was negligible and didn’t impact conversion.
- These optimizations combined to create a 44% decrease in start render time, 10% decrease in Time to First Interactive, 60% reduction in CSS file size and 32% reduction in image weight.
- Etsy had two primary JS files on the listing page. A page specific file that was 56kb with 121 dependencies and a global base file that was 142kb with 124 dependencies.
- Manually reducing JS dependencies saw an improvement in performance metrics but a reduction in conversions.
- Chrome’s Code Coverage tab reports unused JS and CSS code. There was no API, so Etsy built a tool called Vimes to make code coverage scale.
- Vimes re-writes JS so that each function logs itself. Those logs are sent to the server to be aggregated and mapped back to source files to get an idea of what functions were called and when (a rough sketch of the idea follows this list).
- Running Vimes led to a 28% reduction in the page-specific file, a 37% reduction in the global base file, and an improvement in conversion.
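For illustration only—this is not Etsy’s actual Vimes implementation, and every name here is hypothetical—the general “each function logs itself” idea looks something like this:

```js
// Hypothetical illustration only — not Etsy's actual Vimes implementation.
// The core idea: wrap every function so each call records its own name,
// then ship the counts off to be aggregated and mapped back to source files.
const usage = new Map();

function instrument(moduleName, fns) {
  const wrapped = {};
  for (const [name, fn] of Object.entries(fns)) {
    wrapped[name] = (...args) => {
      const key = `${moduleName}.${name}`;
      usage.set(key, (usage.get(key) || 0) + 1);
      return fn(...args);
    };
  }
  return wrapped;
}

// Anything that never shows up in `usage` after enough real traffic
// becomes a candidate for deletion.
```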
Interpret
- Etsy’s experiments clearly showed that performance directly impacts conversion. Nothing compares to having your own numbers to show when you need to convince folks that performance matters.
- Frontend performance is just as important, if not more, than server-side performance. It’s important to prioritize it so that legacy baggage doesn’t accumulate and negatively impact the user experience.
- Our experiences aren’t our users’ experiences. The experiments Etsy ran made little impact on high-end machines and networks, but significantly improved the experience for less-than-ideal scenarios.
- Your front-end architecture should match your culture. For example, if you have a culture of experimentation, your architecture needs to be set up in a way that encourages and supports that experimentation.
- Architect for deletion. It needs to be easy to identify what gets used and what doesn’t so that you can minimize your tech debt. Every line of code we write today will end up as someone’s legacy code.
The SpeedCurve team is ridiculous. I mean that in the kindest way possible. Between Mark, Tammy, Steve, and Joseph there’s an absurd abundance of top-notch people working on the product. I’ve personally worked with Tammy, Mark, and Steve on things in the past, so there’s already a high level of familiarity between us all.
Partnering with them means I have an excuse to work with each of them a bit more, and that in itself would be enough to make me excited.
I’m still handling projects on my own as well, but for SpeedCurve customers specifically, this pairing makes so much sense.
The SpeedCurve gang and I will be sitting on a large amount of data, giving us intimate knowledge of the state of performance of your sites and applications. There’s no need to set up access to some other tool somewhere and go back and forth configuring it: it’s all right there at the ready.
Working directly with SpeedCurve for consulting will also ensure that customers are getting the maximum value out of the service itself. It’s not just about digging into the performance of your site and fixing issues there, but also about digging into your SpeedCurve account to get the most out of your monitoring through continuous deployment, carefully configured performance budgets, custom timing metrics—you name it!
As a company that is already so personally invested in making sure they’re doing everything they can to help out, there’s a real opportunity for SpeedCurve to go above and beyond and help companies fix the performance woes that the product so nicely exposes.
In fact, I’m also going to be working with SpeedCurve to bolster their “Improve” recommendations so that all their customers will get detailed reports about where to focus their performance efforts. The consulting work will dovetail nicely into that.
I’ve been using SpeedCurve personally since it was in beta and it’s been fun to watch SpeedCurve evolve since then. They’ve added responsive design analysis, continuous deployment capabilities, performance budgets (a personal favorite, as you probably guessed), a fully-featured RUM offering and more.
What I like most about SpeedCurve is that they’re not focused on metrics for metrics sake. They zero in on what matters. They focus on providing an accurate view of the actual experience of using your site, and the information you need to make that experience better.
I’m really excited to help make it even easier for SpeedCurve customers to make the improvements that actually matter.
If you’re a SpeedCurve customer, head over to their site to get a little more information about the consulting services. I’m looking forward to working with you!
]]>First Dedicated Performance team
- Pinterest serves over 200 million global monthly active users with an infrastructure that serves over 1 million requests per second.
- Around 2016, Pinterest migrated from Backbone to React. They saw a 20% improvement in performance and 10-20% improvement in engagement.
- For unauthenticated pages, the same migration saw a 30% improvement in performance, a 15% increase in signups, a 10% increase in SEO traffic and a 5-7% increase in logins.
- Questions came out of these improvements: Were there bugs in their performance tracking? Were they still performing this well? They realized they needed a dedicated performance team to better understand what was happening.
Data Confidence
- Pinterest chose a custom metric called Pinner Wait Time (PWT) which looks at the slowest load time for content they deem to be critical on a page.
- Custom metrics let them find something they can measure that is directly tied to engagement instead of just vanity metrics with no real impact on the actual experience.
- Identified four steps to build confidence in the data.
- The first step was to set baselines for the different flows across the site. They validated their performance metrics, implemented confidence tests, ensured graphs reflected real user experience and ensured teams understood their metrics.
- The second step was to tie performance metrics to business goals. They run experiments to see which metrics correlate to engagement and which do not. This lets them tie performance to engagement wins, builds better trust in performance, and helps teams budget time for performance improvements based on impact.
- The third step was an internal “PR campaign”. This included all-company demo’s and custom-built tools to help people get excited and let them know the performance team was there and able to help.
- The fourth step was to fight regressions. Regression protection in some cases could be even more important than the initial optimization.
Regressions
- Developed Perf Watch—an in-house regression testing framework.
- They run regression tests for each critical page: pages like the homefeed, the pin closeup page and the search page.
- Tests are run for each critical page several hundred times using multiple test runners running in parallel.
- They calculate and monitor the 90th percentile of Pinner Wait Time over time. If the test comes back exceeding a threshold for variance, the build is flagged as a regression.
- Running these tests through their build process helps them quickly identify performance issues to address.
- To help determine what caused a regression, they built Perf Detective, which runs a binary search (similar to git bisect) to determine the offending commit (a simplified sketch of the idea follows this list).
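As a simplified sketch of that bisection idea—not Pinterest’s actual Perf Detective; `measureP90` and `threshold` are hypothetical stand-ins—it might look something like this:

```js
// Simplified sketch only — not Pinterest's actual Perf Detective.
// Assumes commits[0] is known-good, commits[commits.length - 1] is known-bad,
// and measureP90(commit) builds that commit and returns its p90 Pinner Wait Time.
async function findOffendingCommit(commits, measureP90, threshold) {
  let low = 0;
  let high = commits.length - 1;
  while (low < high) {
    const mid = Math.floor((low + high) / 2);
    const p90 = await measureP90(commits[mid]);
    if (p90 > threshold) {
      high = mid;      // regression is at mid or earlier
    } else {
      low = mid + 1;   // regression landed after mid
    }
  }
  return commits[low]; // the first commit that exceeds the threshold
}
```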
Optimization Strategy
- When the team was formed, they were presented with an aggressive goal of improving PWT.
- They started by doing some detailed analysis and brainstorming with various teams to see what the current situation was and what potential optimizations they could make.
- These potential optimizations were then ranked based on how much work they would take and what the impact could be.
- Prototyping these optimizations gave them a better understanding of the estimated improvement and level of effort.
- Each optimization was run inside an A/B experiment using an in-house experimentation framework that shows performance impact as well as user engagement impact.
- Their experimentation framework lets them drill in based on user type, geography, etc.
Vision
- A dedicated performance team isn’t enough to ensure top-down buy-in and a strong performance culture. The ownership couldn’t just be on their small performance team.
- It can’t just be the performance team making optimizations. The experts for each surface in the company need to be the ones making optimizations.
- A centralized knowledge base of performance information and history within the company is an invaluable resource.
- The performance team should build tools to empower teams within the organization to make performance improvements that matter.
But there was still a glimmer of hope in these dark days. In July of that year, Daan Jobsis discovered a technique that the Filament Group would later dub “compressive images”. The technique became pretty popular and worked quite well in lieu of an actual standard.
Now, fast forward to today. We have a set of standards for responsive images, browsers have improved image loading—does the compressive images technique still have a place in our workflow? I’ve been threatening to write this post for a long time, but just kept putting it off. But when Dave Shea offers you a like, you take it (there…happy Dave?).
What are compressive images?
The compressive images technique relies on you sizing a JPG image to be larger than the size it ultimately is displayed at and then compressing it to an incredibly low-quality setting. This cuts the image weight dramatically, but also makes the image look absolutely terrible. However, when the browser sizes the image down to be displayed, it looks fantastic again. In fact, it even looks fantastic on high-resolution displays. Magic! (Warning: Not actually magic.)
The benefit in weight can be substantial. In the Filament Group’s article, the example image was a whopping 53% lighter (from 95kb to 44kb).
The trade-off for compressive images is primarily the memory cost (there used to be scaling and decoding risks, but browsers have improved in that area).
Let’s consider the Filament Group’s example image again. The original image is a 400px by 300px image. The compressive image is 1024px by 768px.
When a browser stores a decoded image in memory, each pixel is represented by an RGBA value which means that each pixel costs us four bytes (a byte each for the red, green, blue and alpha values). The memory footprint of decoded images is only reported by Edge and Firefox at the moment, but it can be calculated easily enough.
- Take the height of the image and multiply it by the width of the image to get the total number of pixels.
- Multiply the total number of pixels by 4.
With that in mind, let’s calculate the memory impact for both the original image and compressive image in Filament Group’s post:
- Resized image: 400 x 300 x 4 = 480,000 bytes
- Compressive image: 1024 x 768 x 4 = 3,145,728 bytes
For the example provided, though the weight of the compressive image is less than 50% of the original weight, the memory allocated for the compressive image is roughly 6.5 times the size needed for the original image.
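If you’d rather not do the multiplication by hand, here’s a tiny helper—just my own sketch of the two steps above, nothing from the original articles:

```js
// Rough decoded-size estimate: width × height × 4 bytes per RGBA pixel.
const rgbaBytes = (width, height) => width * height * 4;

console.log(rgbaBytes(400, 300));  // 480000 bytes  (the resized image)
console.log(rgbaBytes(1024, 768)); // 3145728 bytes (the compressive image)
```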
Another wrinkle: The GPU
Sounds terrible, right? Thankfully a few browsers have made some improvements in how images impact memory. Microsoft Edge (and IE 11+) and Blink-based browsers (Chrome + Opera) support GPU decoding of images in certain scenarios. There’s a lot of interesting fallout, but the memory impact is pretty straightforward. Since the final decoding happens on the GPU, JPG’s can now be stored in YUV values instead of RGBA values. Now each pixel only costs us three bytes:
- Resized image: 400 x 300 x 3 = 360,000 bytes
- Compressive image: 1024 x 768 x 3 = 2,359,296 bytes
That reduces the memory footprint a little, but it gets even better.
Since we’re storing the values before they’re finally converted to RGBA, browsers can also take advantage of subsampling to reduce the memory footprint even more. Subsampling isn’t the easiest thing to wrap your head around, but Colin Bendell wrote a good post going into detail and you can also read about it in High Performance Images.
For the context of the article, it’s enough to say that subsampling reduces the file size of the image by not necessarily including information about each and every pixel in the image. The savings don’t just apply to the file size. If a browser is storing data in YUV format, then it gets to ignore those pixel values as well based on whatever subsampling level is applied.
Any image saved at a low-enough quality to take advantage of the compressive images technique will be using 4:2:0 subsampling. As a result, the browser only has to store a couple pixel values per section of the image. The math is slightly trickier, but the basic idea is that fewer pixel samples mean fewer pixels stored in memory.
- Resized image: 400 x 300 x 3 = 360,000 bytes
- Compressive image: 1024 x 768 x 3 - (1024 x 768 x .75 x 2) = 1,179,648 bytes
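Here’s the same back-of-the-envelope helper extended for the GPU path—again just my own sketch, assuming 3 bytes per pixel in YUV and the 4:2:0 savings described above:

```js
// 3 bytes per pixel when the JPG can stay in YUV for GPU decoding.
const yuvBytes = (width, height) => width * height * 3;

// With 4:2:0 subsampling, the two chroma planes are stored at a quarter of
// the resolution each, so 75% of two of the three planes can be skipped.
const yuv420Bytes = (width, height) =>
  yuvBytes(width, height) - width * height * 0.75 * 2;

console.log(yuv420Bytes(1024, 768)); // 1179648 bytes — roughly 3.3× the
                                     // 360,000 bytes of the resized image
```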
GPU decoding reduces the memory cost quite a bit. Without GPU decoding, the memory footprint for the compressive image was around 6.5 times the size of the original image. With GPU decoding in place, the memory footprint is closer to 3.3 times the original.
What should we do instead?
By now the trade-off is pretty clear. Compressive images give us a reduced file size, but they greatly increase the memory footprint. Thanks to the standards that have been developed around responsive images, it’s a trade-off we no longer need to make.
For selectively serving images to higher resolutions, the solution could be as simple as using the srcset attribute with a density descriptor.
<img src="/images/me.jpg" alt="Image of me" srcset="/images/me-1x.jpg 1x, /images/me-2x.jpg 2x" />
The snippet above:
- Provides an image (me.jpg) for the default. Any browser not supporting the srcset attribute will load this.
- Provides two alternative images (me-1x.jpg and me-2x.jpg) that browsers can choose from based on the screen resolution. The density descriptors (1x and 2x) help the browser determine which is the appropriate one to use.
This allows you to serve a lower quality image for older browsers (me.jpg), something appropriately sized but slightly better quality for 1x screen resolutions (me-1x.jpg), and a large high-resolution image for double-density displays (me-2x.jpg). No one pays a higher memory cost than necessary and the network costs can still be kept mostly in check.
That’s a straightforward use-case, but even if your situation is more complex you can likely navigate it with some combination of srcset, sizes or picture. Jason Grigsby’s ten-part (yes, 10) series of posts on responsive images is still my favorite go-to for anyone who would like to read a bit more. Just don’t tell Jason I said that—it would give him a big head.
All of this isn’t to say we should never use compressive images—never is a word that rarely applies in my experience. But it does mean that we should be cautious.
If you’re considering the approach for a single image on a page, and that image’s memory footprint will be reasonable (i.e. not a massive hero image)—you can likely use the compressive images technique without much of a problem (if you go that route, check out Kornel’s post from a few years back where he mentions modifying quantization tables to get better compressive images).
More often than not, though, your best approach is probably to make use of the suite of responsive images standards we now have available to us.
]]>Evaluating the effectiveness of AMP from a performance standpoint is actually a little less straightforward than it sounds. You have to consider at least four different contexts:
- How well does AMP perform in the context of Google search?
- How well does the AMP library perform when used as a standalone framework?
- How well does AMP perform when the library is served using the AMP cache?
- How well does AMP perform compared to the canonical article?
How well does AMP perform in the context of Google search?
As Ferdy pointed out, when you click through to an AMP article from Google Search, it loads instantly—AMP’s little lightning bolt icon seems more than appropriate. But what you don’t see is that Google gets that instantaneous loading by actively preloading AMP documents in the background.
In the case of the search carousel, it’s literally an iframe that gets populated with the entirety of the AMP document. If you do end up clicking on that AMP page, it’s already been downloaded in the background and as a result, it displays right away.
In the context of Google search, then, AMP performs remarkably well. Then again, so would any page that was preloaded in the background before you navigated to it. The only performance benefit AMP has in this context is the headstart that Google gives it.
In other words, evaluating AMP’s performance based on how those pages load in search results tells us nothing about the effectiveness of AMP itself, but rather the effectiveness of preloading content.
How well does the AMP library perform when used as a standalone framework?
In Ferdy’s post, he analyzed a page from Scientas. He discovered that without the preloading, it’s far from instant. On a simulated 3G connection, the Scientas AMP article presents you with a blank white screen for 3.3 seconds.
Now, you might be thinking, that’s just one single page. There’s a lot of variability and it’s possible Scientas is a one-off example. Those are fair concerns so let’s dig a little deeper.
The first thing I did was browse the news. I don’t recommend this to anyone, but there was no way around it.
Anytime I found an AMP article, I dropped the URL in a spreadsheet. It didn’t matter what the topic was or who the publisher was: if it was AMP, it got included. The only filtering I did was to ensure that I tested no more than two URL’s from any one domain.
In the end, after that filtering, I came up with a list of 50 different AMP articles. I ran these through WebPageTest over a simulated 3G connection using a Nexus 5. Each page was built with AMP, but each page was also loaded from the origin server for this test.
AMP is comprised of three basic parts:
- AMP HTML
- AMP JS
- AMP Cache
When we talk about the AMP library, we’re talking about AMP JS and AMP HTML combined. AMP HTML is both a subset of HTML (there are restrictions on what you can and can’t use) and an augmentation of it (AMP HTML includes a number of custom AMP components and properties). AMP JS is the library that provides those custom elements and handles a variety of optimizations for AMP-based documents. Since the foundation is HTML, CSS, and JS, you can absolutely build a document using the AMP library without using the Google AMP Cache.
The AMP library is supposed to help ensure a certain level of consistency with regards to performance. It does this job well, for the most part.
The bulk of the pages tested landed within a reasonable range of each other. There was, however, some deviation on both ends of the spectrum: the minimum values were pretty low and the maximum values frighteningly high.
| Metric | Min | Max | Median | 90th Percentile |
|---|---|---|---|---|
| Start Render | 1,765ms | 8,130ms | 4,617ms | 5,788ms |
| Visually Complete | 4,604ms | 35,096ms | 7,475ms | 21,432ms |
| Speed Index | 3729 | 16230 | 6171 | 10144 |
| Weight | 273kb | 10,385kb | 905kb | 1,553kb |
| Requests | 14 | 308 | 61 | 151 |
Most of the time, AMP’s performance is relatively predictable. However, the numbers also showed that a page being a valid AMP document is not a 100% guarantee that it will be fast or lightweight. As with pages built with any technology, it’s entirely possible to build an AMP document that is slow and heavy.
Any claim that AMP ensures a certain level of performance depends both on how forgiving you are of the extremes, and on what your definition of “performant” is. If you were to try to build your entire site using AMP, you should be aware that while it’s not likely to end up too bloated, it’s also not going to end up blowing anyone’s mind with its speed straight out of the box. It’s still going to require some work.
At least that’s the case when we’re talking about the library itself. Perhaps the AMP cache will provide a bit of a boost.
How well does AMP perform when the library is served using the AMP cache?
The AMP library itself helps, but not to the degree we would think. Let’s see if the Google cache puts it over the top.
The Google AMP Cache is a CDN for delivering AMP documents. It caches AMP documents and—like most CDN’s—applies a series of optimizations to the content. The cache also provides a validation system to ensure that the document is a valid AMP document. When you see AMP served, for example, through Google’s search carousel, it’s being served on the Google AMP Cache.
I ran the same 50 pages through WebPagetest again. This time, I loaded each page from the Google AMP CDN. Pat Meenan was kind enough to share a script for WebPagetest that would pre-warm the connections to the Google CDN so that the experience would more closely resemble what you would expect in the real world.
logdata 0
navigate https://cdn.ampproject.org/c/www.webpagetest.org/amp.html
logdata 1
navigate %URL%
When served from the AMP Cache, AMP pages get a noticeable boost in performance across all metrics.
| Metric | Min | Max | Median | 90th Percentile |
|---|---|---|---|---|
| Start Render | 1,427ms | 4,828ms | 1,933ms | 2,291ms |
| Visually Complete | 2,036ms | 36,001ms | 4,924ms | 19,626ms |
| Speed Index | 1966 | 18677 | 3277 | 9004 |
| Weight | 177kb | 10,749kb | 775kb | 2,079kb |
| Requests | 13 | 305 | 53 | 218 |
Overall the benefits of the cache are pretty substantial. On the high-end of things, the performance is still pretty miserable (the slightly higher max’s here mostly have to do with differences in the ads pulled in from one test to another). But that middle range where most of the AMP documents live becomes faster across the board.
The improvement is not surprising given the various performance optimizations the CDN automates, including:
- Caching images and fonts
- Restricting maximum image sizes
- Compressing images on the fly, as well as creating additional sizes and adding srcset to serve those sizes
- Using HTTP/2 and HTTPS
- Stripping out HTML comments
- Automating the inclusion of resource hints such as dns-prefetch and preconnect
Once again, it’s worth noting that none of these optimizations requires that you use AMP. Every last one of these can be done by most major CDN providers. You could even automate all of these optimizations yourself by using a build process.
I don’t say that to take away from Google’s cache in any way, just to note that you can, and should, be using these same practices regardless of whether you use AMP or not. Nothing here is unique to AMP or even the AMP cache.
How well does AMP perform compared to the canonical article?
So far we’ve seen that the AMP library by itself ensures a moderate level of performance and that the cache takes it to another level with its optimizations.
One of the arguments put forward for AMP is that it makes it easier to have a performant site without the need to be “an expert”. While I’d quibble a bit with labeling many of the results I found “performant”, it does make sense to compare these AMP documents with their canonical equivalents.
For the next round of testing, I found the canonical version of each page and tested that as well, under the same conditions. It turns out that while the AMP documents I tested were a mixed bag, they do out-perform their non-AMP equivalents more often than not (hey publishers, call me).
| Metric | Min | Max | Median | 90th Percentile |
|---|---|---|---|---|
| Start Render | 1,763ms | 7,469ms | 4,227ms | 6,298ms |
| Visually Complete | 4,231ms | 108,006ms | 20,418ms | 54,546ms |
| Speed Index | 3332 | 45362 | 8152 | 21495 |
| Weight | 251kb | 11,013kb | 2,762kb | 5,229kb |
| Requests | 24 | 1743 | 318 | 647 |
Let’s forget the Google cache for a moment and put the AMP library back on even footing with the canonical article page.
Metrics like start render and Speed Index didn’t see much of a benefit from the AMP library. In fact, Start Render times are consistently slower in AMP documents.
That’s not too much of a surprise. As mentioned above, AMP documents use the AMP JS library to handle a lot of the optimizations and resource loading. Anytime you rely on that much JavaScript for the display of your page, render metrics are going to take a hit. It isn’t until the AMP cache comes into play that AMP pulls back ahead for Start Render and Speed Index.
For the other metrics though, AMP is the clear winner over the canonical version.
Improving performance…but for whom?
The verdict on AMP’s effectiveness is a little mixed. On the one hand, on an even playing field, AMP documents don’t necessarily mean a page is performant. There’s no guarantee that an AMP document will not be slow and chew right through your data.
On the other hand, it does appear that AMP documents tend to be faster than their counterparts. AMP’s promise of improved distribution cuts a lot of red tape. Suddenly publishers who have a hard time saying no to third-party scripts on their canonical pages are more willing (or at least, made) to reduce them dramatically for their AMP counterparts.
AMP’s biggest advantage isn’t the library—you can beat that on your own. It isn’t the AMP cache—you can get many of those optimizations through a good build script, and all of them through a decent CDN provider. That’s not to say there aren’t some really smart things happening in the AMP JS library or the cache—there are. It’s just not what makes the biggest difference from a performance perspective.
AMP’s biggest advantage is the restrictions it draws on how much stuff you can cram into a single page.
For example, here are the waterfalls showing all the requests for the same article page written to AMP requirements (on the right) versus the canonical version (on the left). Apologies to your scroll bar.
Comparing the waterfalls for the canonical version of an article (left) and AMP version (right). AMP’s restrictions make for a lot fewer requests.
The 90th percentile weight for the canonical version is 5,229kb. The 90th percentile weight for AMP documents served from the same origin is 1,553kb— a savings of around 70% in page weight. The 90th percentile request count for the canonical version is 647, for AMP documents it’s 151. That’s a reduction of nearly 77%.
AMP’s restrictions mean less stuff. It’s a concession publishers are willing to make in exchange for the enhanced distribution Google provides, but that they hesitate to make for their canonical versions.
If we’re grading AMP on the goal of making the web faster, the evidence isn’t particularly compelling. Every single one of these publishers has an AMP version of these articles in addition to a non-AMP version.
Every. Single. One.
And more often than not, these non-AMP versions are heavy and slow. If you’re reading news on these sites and you didn’t click through specifically to the AMP version, then AMP hasn’t done a single thing to improve your experience. AMP hasn’t solved the core problem; it has merely hidden it a little bit.
Time will tell if this will change. Perhaps, like the original move from m-dot sites to responsive sites, publishers are still kicking the tires on a slow rollout. But right now, the incentives being placed on AMP content seem to be accomplishing exactly what you would think: they’re incentivizing AMP, not performance.
]]>Each track is curated by the track hosts and speakers are all given the opportunity to take advantage of mentoring. They even had a process in place so that speakers could give a preview of their talk to other speakers and conference staff for feedback in the weeks before the event. It’s a level of care in the talk process that you don’t typically expect from larger events.
I had the privilege of closing QCon’s first ever ethics track. The whole day was full of fantastically passionate talks around the ethics of technology. Even more encouraging was that while the room was small, every single talk was packed.
Towards the end of the day all the speakers, the track hosts, and a couple of guests got on stage for a panel discussion. The discussion was lively and spirited. I was struck by how much we all agreed on the big picture, but how much less we agreed on the specific steps that we would need to start making notable improvements in our field. That makes complete sense to me. We’re at a critical junction in technology where (sadly, very late) we’re coming to grips with the broader impact of what we build. I know a lot of people, like those of us on the panel, want to fix it—the how is the tricky part.
One of the topics that we were a little divided on was the idea of regulations and licenses. Doctors must be licensed, electricians must be licensed—why not web developers? The conversation got a little cut off due to time, but I’ve been unable to shake it since then.
Then I stumbled on a post I had missed from Mike Monteiro arguing for the same thing—some sort of licensing around design:
As professionals in the design field, a field becoming more complex by the day, it’s time that we aim for a professional level of accountability. In the end, a profession doesn’t decide to license itself. It happens when a regulatory body decides we’ve been reckless and unable to regulate ourselves. This isn’t for our sake. It’s for the sake of the people whose lives we come in contact with. We moved too fast and broke too many things.
First, let me say that I’m coming at this topic from the perspective of the web specifically. Technology is a much broader topic, of course, though I think a lot of the general concerns are similar.
I firmly agree with Mike’s statement about moving too fast and breaking too many things. In fact, there is so much in his article that I agree with. Just as was the case on the panel, we absolutely agree on the goal and the problem. The solution—in this case, licensing—is what I’m not as sold on.
My worry is that in attempting to put some sort of restriction on who gets to build for the web, we’ll end up excluding important voices in the process.
It’s not just access to consume that makes the web so great, but also access to create. So many people I know in this industry would quite likely not be here if they hadn’t been able to view source, to pop open Notepad and hack together some HTML for some embarrassingly bad site they wanted to make. I probably wouldn’t.
I love that feature of the web.
I love that I can sit down with my kids and teach them a little bit of markup and they can start building sites of their own, like my 9-year-old building a site listing out all of her favorite books.
I love that people like Elvis Chidera, without any access to an actual computer, can build a text messaging app with some PHP, HTML and CSS on a J2ME feature phone!
Now maybe we wouldn’t lose this. Maybe we could find a way to enforce a license only for so-called “professional” work (though how we would define that in a way that enables the Elvis Chidera’s of the world to accomplish what he did is beyond me). But it’s an important consideration that makes me a little hesitant whenever the topic of licensing is brought up.
Mike also mentioned our role as gatekeepers:
We are gatekeepers, and we vote on what makes it through the gate with our labor and our counsel. We are responsible for what makes it through that gate, and out into the world. What passes through carries our seal of approval. It carries our name. We are the defense against monsters.
It’s a good point, but it’s also worth recognizing that gatekeeping isn’t always a good thing. The expensive licenses and educational requirements some other professions have help to weed out the riff-raff a bit, sure. They also make those professions prohibitively costly to many. We have to be careful with gatekeeping through licensing because it’s just as likely to weed-out the good as it is the bad. It’s a situation that, were we to go down this route, we would need to be careful to avoid.
Generally, I also wonder how realistic it is to expect licensing to help solve the issues we’re facing (and for the record, Mike never claims licensing will solve them). You don’t have to look far to find folks in all sorts of professions—cops, accountants, doctors—who are fully licensed and yet wreak terrible havoc. Licensing can help ensure a level of proficiency, but that’s not our problem. Our challenge is a matter of ethics, a matter of responsible consideration of the consequences of what we build. Skill does not equate to better ethics.
So all that said, what do I think the answer is? I’ll be honest: I have no idea.
A Hippocratic oath is an interesting idea, though like the one used in the medical profession, it’s unlikely to be enforceable.
I do think a good start is to break down this idea of a “tech industry”, something Sara argued against in her book:
Ten years ago, tech was still, in many ways, a discrete industry—easy to count and quantify. Today, it’s more accurate to call it a core underpinning of every industry. As tech entrepreneur and activist Anil Dash writes, “Every industry and every sector of society is powered by technology today, and being transformed by the choices made by technologists.”
A lot of the issues we see can be traced back to this artificial separation we have put between people who build technology and the industries that technology is used in. The end result is this enthusiasm for “disruption”. This belief that the same rules don’t apply.
It would be a start, though I’d be naive to think it’s enough to fix the problem. The challenge is that this is a mess and there are no easy answers. It’s unlikely any one solution will solve all our woes. It will take a combination of things that leads to significant change.
I take hope from Ursula Franklin’s earthworm theory.
Social change will not come to us like an avalanche down the mountain. It will come through seeds growing in well prepared soil—and it is we, like the earthworms, who prepare the soil. We also seed thoughts and knowledge and concern. We realize there are no guarantees as to what will come up. Yet we do know that without the seeds and prepared soil nothing will grow at all…we need more earthworming.
Change doesn’t happen all at once nor does it come from the top-down. It comes from the accumulation of little changes, from baby steps taken by everyday people who want to make things a little better.
I agree with Mike (and so many others): we have skirted by too long without taking responsibility for what we build. As Mike said, “We moved too fast and broke too many things.” Something needs to change and I’m grateful that we’re finally having that conversation on a broader level.
And maybe I’m wrong. Maybe some sort of licensing can work. Smarter folks than I have argued for it. I have my doubts, though it’s certainly a discussion I want us to continue to have.
But I hope in our attempt to fix what is broken about technology and the web, we don’t break what isn’t. The openness of the web is one of its most important features and it’s something I hope we’re careful about defending.
]]>You know that saying about abandoning a sinking ship? This is more like abandoning a rocket that is taking flight at an incredible speed. Snyk gets better and better at securing open source every single day. If you’re working with any of the ecosystems Snyk supports, setting up Snyk on that project should just be a given. I’m proud of how far the company has come already in such a short amount of time.
But I’ve been feeling myself pulled in a different direction for a while now.
On March 15th, I’m very excited to start working as an independent performance consultant and developer. I’ll be providing training, consulting and hands-on development.
I want to work with companies who want to build something everyone can use—something performant and accessible no matter the context. It’s good for those organizations, and it’s good for the web in general. I have a few projects coming up that I’m pretty excited about (more on those at a later date), but I’m still accepting work for April and beyond.
To give you a sense of what I can help with, we’re probably going to be a good fit together if:
- You’re building a site or application and you want to make sure that it reaches as wide an audience as possible, making performance and accessibility a priority.
- You need someone to provide training for your team on how to design and build with performance front and center, or generally help stakeholders understand how important performance is.
- You could use someone to help you conduct a detailed performance audit, prioritize your next steps and conduct ongoing reviews and tests to ensure your site is as performant as possible.
- You’ve stumbled on some tricky performance issues and you need someone to roll up their sleeves and help solve them.
If any of those sound like a fit, send me an email and let’s start building a faster web—one site at a time.
]]>An important caveat: this is what works for me. As in everything, I think the most important thing is to find a style that suits you. Being authentic is one of the most important things you can do whether you’re giving a workshop, giving a talk or writing.
I didn’t do all these things at the first workshops I presented. Through trial and error, these are the things I’ve found resonate with the folks I train.
Your mileage may vary.
Start with the attendees
Every workshop I run, I start with a blank slide and ask the folks in the room what they want to get out of the day. They signed up (or their company did) to spend a whole day on this topic, so just about everyone in the room already has a few topics in mind; a few questions that they really want the answers to.
Getting that out in the open early allows me to tailor the content so that everyone gets some value from it. It also sets the tone that the workshop isn’t a pre-defined lecture but a discussion that is malleable based on the questions they may have.
I repeat this question a few times throughout the workshop: typically right after lunch and right after the last break of the day. If it’s a multi-day training, I’ll also ask at the end of day one. More often than not, I’ll wind up in my hotel room tweaking the content and topics for the next day based on that feedback and how I felt things went.
Modular workshop material
I’ve given my workshops in a variety of forms to a variety of people—from training broken up into one-hour modules delivered over the span of weeks to more intense sessions of two or even three days.
Preparing my workshop material in a modular fashion is a must. It gives me the flexibility to adjust the training based on both the duration and the attendees and what they want to learn about.
For example, I have a folder full of 1-2 hour sections on various performance topics—the network, how the browser works, fonts, images, service workers, etc. The modules are rarely the same from one workshop to the next; I tailor the content of each training based on who the workshop is targeting.
For each workshop I do, I keep a folder of individual Keynote files for various sections so that I can easily adjust based on questions from attendees.
This is particularly true for private training. There’s ultimately a limit on how customized a public workshop can be as you’re appealing to a wide variety of folks in different roles and organizations. With a private training session, however, I’m able to create something carefully tailored to the challenges that company is facing.
Having modules in place makes it easy for me to adjust on the fly. If one of the attendees says they’re really hoping to learn about something I hadn’t anticipated including in the workshop, I have a whole section of content on the topic handy that I can make sure to work into the day without having to panic.
It’s called a workshop for a reason
Exercises are critical. It’s going to be a long day indeed if folks have to just sit there and listen to me rattle on the whole time.
Beyond breaking up the day, exercises accomplish two important things: they give people an opportunity to get up and move around, and they let people apply what they’re learning in a meaningful way. People have different learning styles. Some folks can happily absorb everything a presenter is saying and come away armed with a ton of knowledge. But others need to be able to do something for an idea to click.
And that something isn’t always in front of a computer. One of my favorite exercises to do during my performance workshops is to give folks a stack of post-it notes and have them build a render tree, CSSOM and DOM based on some sample code. It lets people get out of their seats for a bit, and the distance from the computer seems to help make the folks who maybe don’t spend all day looking at code a bit more at ease.
Always remember it’s about them
You can spend forever carefully crafting and refining your workshop and coming up with solid exercises but at the end of the day, you need to be ready to go with the flow. Some people may opt to sit out an exercise or two. Some sections you wanted to cover you may not get to. Some topics you hadn’t allotted a lot of time to may need to become more detailed. That’s all fine because the workshop is about helping them, not yourself.
While I have topics I want to teach, I also try to do everything I can to encourage discussion throughout the day. I want to hear about the specific challenges they’re facing so that we can tackle those. I want them to walk away feeling confident that they can take what they’ve learned and actually apply it.
In the best workshops I’ve ever had the privilege of leading, we didn’t get through all the material I had planned. People asked questions. We got into unplanned discussions about specific issues folks were battling, or we took a detour through a related topic that folks were wrestling with.
Attending a workshop is a commitment. There are a lot of other things attendees could be doing, but they’ve made a decision that learning about this subject is worth investing a good chunk of their time. The needs of the attendees should take priority over the presenter’s pre-planned agenda.
]]>In the novel, magical doors appear that can transport people from one place to the next. The doors pop up at random. It’s never clear whether a door will be safe or who will be waiting on the other end. In this way, the doors work pretty well as a symbolic representation of the reality of refugees fleeing their homeland in search of something better.
I kept thinking about The Lightless Sky and Gulwali’s accounts of how harrowing and risky the methods of passage were. People try to take advantage of the refugees, lie to them, cheat them—you never know if in fact the doors of opportunity that open are going to lead to something better or not. The magical doors in Exit West share a similar uncertainty. I have heard a few folks struggle with the idea of a magical element being introduced into a novel otherwise devoid of anything fantastical, but it didn’t bother me. The magic of the doors is subdued: you get the sense they exist perhaps entirely so that Hamid can fast-forward past the journey and focus on what happens after the journey is done.
That’s the core of the story: the day-to-day reality of migrants. It’s so easy for us to overlook the incredible burden of trying to make a new home in a place where you are unwelcome and unwanted, but we see it here.
Saeed and Nadia’s relationship starts out strong and sensual. But with each escape, they are forced to leave people they care about behind, as well as parts of themselves. As Hamid poignantly writes, “..when we migrate, we murder from our lives those we leave behind.”
As these relationships fade into the past, and Saeed and Nadia are surrounded by hate from others, their own relationship begins to wither and deteriorate as well. When all you’re faced with is hate, it becomes harder and harder to love.
Long story short: this novel had all the makings of an incredibly poignant and powerful story. Unfortunately for me, the story never quite hits those lofty expectations.
I struggled to connect with the characters. They never felt fully fleshed out. As a result, there were many moments that I rationally recognized should be powerful, but that emotionally didn’t register much.
I suspect this was not an issue of the characters themselves and how they were portrayed, but rather a symptom of a bigger issue I struggled with: the writing style. Hamid loves to use run-on sentences. A lot. I’ve read many books where the author makes some sort of stylistic decision like this (Cormac McCarthy strips out punctuation, José Saramago’s Blindness features characters with no names, etc) and most of the time I find after a few pages I stop noticing as it fades into the story.
That never happened for me with Exit West. Occasionally the style worked. At its best, these run-on sentences create a breathless stream of thought that is almost lyrical. But for the most part, the style distracted me from the story. The run-on sentences are used so often that I kept finding myself pulled back out of the story.
This won’t bother many people, and that’s good. The style didn’t work for me, but your mileage may vary.
If you do find the style distracting as well, I would suggest reading The Lightless Sky for a powerful, true account of the challenges faced by refugees.
]]>You start out with your first impression. Ove is a grumpy 59-year-old man. He’s rude and annoyed by pretty much anything and anybody. His grumpiness is played to comedic effect over and over. He battles a stray cat, the dog who keeps peeing on his deck, the people parking where they shouldn’t be, and the driver backing into his mailbox because he doesn’t know how to properly back up a trailer—just to name a few. He’s a curmudgeon who seems to find the world annoying.
But as you read, you learn a bit more about Ove. And as you do, you find out there’s much more to him than meets the eye. He doesn’t get annoyed for the sake of being annoyed. He’s annoyed because he grew up with clearly defined principles instilled in him at a young age. As he’s grown older, he sees a world that doesn’t always value those same principles. When he gets upset about people driving their car by the residential houses despite a sign saying not to do exactly that, he’s not upset just by the act; he’s upset that no one seems to understand why it upsets him.
You start to learn about his past and you come to realize his story is more connected to the majority of the supporting characters than you first assume. You learn that there’s a great deal of good in him, underneath his rough exterior.
The flashbacks help to fill in a lot of context, but the present is just as important. Most notably, the changes brought about when Parvaneh, a pregnant Iranian immigrant, moves in next door with her husband and their two daughters. Parvaneh and Ove’s interactions are some of my favorite parts of the book. Parvaneh is never fooled by his act. She refuses to allow Ove to dismiss her, and over time she and her two daughters (Ove never really grows fond of her husband) melt away a lot of the hardness and depression that had settled over him.
And as Ove warms up to them, we warm up to him.
The story bounces back and forth between making you laugh and punching you right in the emotions. It teaches that there’s more to a person than what you see at first glance.
It also teaches you the power of not just talking, but acting. As Ove at one point states:
Men are what they are because of what they do. Not what they say.
It’s reflected in the way Ove acts towards others. His comments are off-key, but his behavior is sincere. He puts up a fuss, but throughout the book, he is there when other people need him to be. His words may be what leads to your first impression, but his actions are what end up defining him.
I think that’s why Parvaneh ends up gaining so much of Ove’s respect. She never tries to help Ove with some sort of a dramatic speech and she never lets what Ove says drown out what Ove does. She, too, seems to value what people do over what people say. The little comments, his gruff tone of voice—those things don’t faze her. They amuse her because she seems to recognize that there’s more to him. She never hesitates to admonish him if he says something out of line, but the only times she ever really lets loose on Ove is when his actions themselves have gone too far.
The risk in emphasizing the power of actions is that the power of words may be overlooked. But I would argue that’s not the case here. One of the most closely explored relationships, we learn, was destroyed in no small part because of what was said. It’s a good reminder that what we say has power too.
The book has apparently sold remarkably well, something that appears to have surprised just about everyone involved in the publication process. I don’t think it’s that hard to see why, though. Ove, as we discover, is a man with a good heart who feels lonely and out of place. I think all of us can relate to that on some level.
The story is simply told, and even a little familiar at times, but also surprisingly poignant.
]]>This promise of improved distribution for pages using AMP HTML shifts the incentive. AMP isn’t encouraging better performance on the web; AMP is encouraging the use of their specific tool to build a version of a web page. It doesn’t feel like something helping the open web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the web.
It’s a concern that has been stated over and over again in the two-plus years of AMP’s existence, by numerous developers. It finally boiled over recently, resulting in the publication of the AMP letter which at the time I’m writing this has been signed by 640 people and seven organizations (and also, to my knowledge, has not been formally addressed by the AMP team).
The letter calls on the AMP team to make two primary changes:
Instead of granting premium placement in search results only to AMP, provide the same perks to all pages that meet an objective, neutral performance criterion such as Speed Index. Publishers can then use any technical solution of their choice.
Do not display third-party content within a Google page unless it is clear to the user that they are looking at a Google product. It is perfectly acceptable for Google to launch a “news reader”, but it is not acceptable to display a page that carries only third-party branding on what is actually a Google URL, nor to require that third-party to use Google’s hosting in order to appear in search results.
AMP has indeed made a few changes to help with problem number two a little bit. There’s still some work to be done there, but it’s a good sign.
But there has been no indication of any attempt to address the first issue, that of incentivization and premium placement. In fact, not only has there been no attempt to fix it, it appears the AMP team is doubling-down on those incentives instead.
Yesterday they announced a new extension to the AMP format: AMP stories. AMP stories will be familiar to anyone who has looked at more openly proprietary formats like Apple News. They provide a mechanism for publishers to build visually compelling and engaging stories for mobile (though as AMP itself eventually expanded beyond mobile, I fully expect the same thing to happen with stories).
The kicker to the announcement is the promise from yesterday’s keynote that this content will “..be surfaced in Google search.” The exact method sounds like it isn’t guaranteed, just as the exact method wasn’t guaranteed when AMP itself was first announced. The launch post did, however, provide an image that shows how those results could appear in search.
The new AMP stories will also be singled out in Google search.
So, to recap, the web community has stated over and over again that we’re not comfortable with Google incentivizing the use of AMP with search engine carrots. In response, Google has provided yet another search engine carrot for AMP.
This wouldn’t bother me if AMP was open about what it is: a tool for folks to optimize their search engine placement. But of course, that’s not the claim. The claim is that AMP is “for the open web.” There are a lot of good folks working on AMP. I’ve met and talked with many of them numerous times and they’re doing amazing technical work. But the way the project is being positioned right now is disingenuous.
If AMP is truly for the open web, de-couple it from Google search entirely. It has no business there.
I have no problem with Google using their search engine to drive certain behaviors. HTTPS impacts your SEO score, performance impacts your SEO score (a little)—and that’s fine! These are features of a site that are not beholden to a single framework or tool. Everyone benefits whether they’re using Google technology or not.
But that’s not what happens with the AMP carrot. The only people who benefit are the ones who buy into this one, single tool.
If AMP makes performance better, that’s fantastic! Let’s incentivize good performance in the rankings. Let’s incentivize the goal, not the tool. There’s no need to single it out—if it does what it promises, it will reap the same rewards as any other highly performant page.
And I get it—it’s a non-trivial issue to give the same sort of treatment to all performant, well-built sites that you currently give to AMP content. Here’s the solution: don’t incentivize it at all until you can do it for the broader web. Yes, it will hurt AMP adoption and slow it down, but that’s not the goal here, right? AMP is for the open web, remember? The goal is a better web experience, a more performant web—not more AMP content. Right now that’s not what these artificial incentives accomplish.
Look, AMP, you’re either a tool for the open web, or you’re a tool for Google search. I don’t mind if you’re the latter, but please stop pretending you’re something else.
]]>I’m so glad I did. Lonesome Dove sucks you in, but not in the way many modern novels do. The book follows a cattle run from Texas to Montana, set shortly after the Civil War. It’s an epic setting, but the book is much more character-driven than it is plot-driven.
The characters are so remarkably vivid, and not just the primary ones. There’s a rich cast of characters who get detailed backstories, and we spend time with most of them as a point-of-view character for at least a chapter or two. The result is you end up feeling invested, whether good or bad, in the fates of them all.
McMurtry decided to ignore the “romanticized” version of the west that so many Western novels at the time appeared to readily embrace. He wanted to write the “anti-western” (as he put it) and paint his characters and their behavior as it really was. Whether he succeeded or not depends on who you talk to and how deeply you consider the characters.
There’s still a bit of the romantic western in play—Gus, in particular, has a bit of a mythic aura. But the characters are flawed, often deeply. They’re emotionally stunted. They say and do things that make you cringe. As McMurtry put it later to one reporter “Would you want to live with these men?”
A lot of this is explored through the role of Newt, the youngest in the company. McMurtry has stated that Newt could be thought of as “the lonesome dove” and he acts as a bit of a stand-in for us, the readers. At the start, young Newt looks up to so many of the men in the company, idolizing them–viewing them through that mythic lens. The cattle drive itself is also viewed with excitement and general romance. But as the book goes on and tragedy after tragedy strikes, he becomes increasingly disenchanted with it all. By the end, he is left bitter and angry. The drive was dangerous with real-world consequences he hadn’t considered. Many of his heroes are dead, and those that aren’t have disappointed him as their veneer has worn off and he’s seen how very flawed they are.
That’s the strength of this novel: the characters. They surprise you, they upset you, they make you cry. You get attached to them and every failure along the journey—every loss—is felt all the more deeply. Even after 850 or so pages, I still found myself wishing I could have more time with them.
Their stories don’t end the way you think they will. Some characters are abruptly killed off in ways that seem almost insulting to the level of importance they carried in the novel. Many of the missions that characters embark on end up failing unspectacularly. You keep thinking you know where this is going, but it never quite goes there. The results for many characters and plotlines aren’t what you expect, but they’re much closer to what real life would be: real life doesn’t always get the Hollywood ending.
There’s not a lot of action, which makes the scenes that do come stand out even more. Scenes like Gus being pinned down behind his horse, outnumbered five to one, or the bear facing off against the bull keep you breathless and are unforgettable.
Since seeing it on Gay’s list, I’ve been thinking about why it made the cut. It’s a historical novel set shortly after the Civil War, and McMurtry makes no attempt to pretend conditions were better than they were for anyone involved. Women are mostly treated poorly, as are black people. The men in the cattle herding company constantly worry about Native Americans and talk disparagingly of them, as well as of Mexicans.
But instead of ignoring the issues, McMurtry takes numerous opportunities to explore them through different lenses. He builds characters like Elmira, Janey, Clara (a particularly strong character) and Deets who buck the stereotypes the other characters hold. The effect is often comedic, as other characters are at times very visibly uncomfortable when confronted with these people who don’t match their worldview at all. The book explores the consequences of the way Native Americans were viewed and treated, often through the voice of Gus. McMurtry doesn’t provide any clear-cut statements, but he also refuses to let the reader ignore the ugliness of it all.
I could go on and on about this book and all the different themes, the different characters and the different thoughts and emotions it provoked, but I’m already beginning to ramble. Suffice it to say I thoroughly enjoyed this book and will be returning to it again in the future.
]]>- Copy/paste any selected text I wanted to quote
- Create a new markdown file for Jekyll
- Set up any metadata I needed
- Manually deploy
This wasn’t a limitation of Jekyll or anything, just a matter of me never taking the time to make the process more seamless.
What I wanted was a bookmarklet to fast-track that whole process and make it automatic. With a static-site generator, that’s not quite as simple as it would be otherwise.
But then I remembered that the GitHub API allows you to create and commit a file to a given repository. I did a little looking around. It turns out someone had already taken the time to build a bookmarklet for Jekyll that used the GitHub API to do exactly what I wanted to accomplish.
Modifying it for Hugo didn’t take long. Mostly, I had to change the JavaScript to account for the metadata I wanted included in the file.
Since I use two-factor authentication (2FA) on my GitHub account (because 2FA is annoying, but not as annoying as having someone get into my account), I also had to generate a personal access token to allow my account to post directly to the repository. After that and a couple of tweaks to the template itself to make it fit my style a bit more, I had a working bookmarklet that I can now use to save any site I happen to be on.
Any text I select is auto-filled into the bookmarklet form; I can annotate from there and then hit submit. The new file gets committed to my repository using the GitHub API and then Netlify kicks in with an automatic deploy. It’s a much simpler process.
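For anyone curious what the commit step looks like under the hood, here’s a rough sketch of the kind of request a bookmarklet like this makes against GitHub’s contents API. The repository name, file path, and front matter below are placeholders rather than my actual setup:

// A rough sketch, not my actual bookmarklet code. The repo, path, and
// front matter are placeholders.
const token = "MY_PERSONAL_ACCESS_TOKEN"; // the token generated above
const path = "content/saved/an-interesting-article.md"; // hypothetical file path
const markdown = [
  "---",
  'date: "2018-02-14"',
  'title: "An Interesting Article"',
  "externalURL: https://example.com/an-interesting-article",
  "tags:",
  "---",
  "",
  "The selected text and my notes go here."
].join("\n");

// The contents API creates and commits a file with a single authenticated PUT.
fetch("https://api.github.com/repos/my-username/my-site-repo/contents/" + path, {
  method: "PUT",
  headers: {
    "Authorization": "token " + token,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    message: "Add saved link: An Interesting Article",
    content: btoa(unescape(encodeURIComponent(markdown))), // the file body has to be base64-encoded
    branch: "master"
  })
})
  .then(function (response) { return response.json(); })
  .then(function (data) { console.log("Committed:", data.commit && data.commit.sha); });

The real bookmarklet essentially wraps a small form around a request like this and fills in the selected text for you.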
Even better is that this whole process works very well on mobile, where I often pull open Feedbin to catch up on some posts. Clicking the bookmarklet from the bookmarks bar in Chrome doesn’t work. However, there’s a less obvious method of using bookmarklets in Chrome on mobile devices.
On Chrome for Android, the bookmarklet is accessible by typing its name into the URL bar.
If you start to type the name of the bookmarklet (in my case “Save to TimKadlec.com”) in the URL bar, you’ll see the bookmarklet come up as an option. Once you select that, the bookmarklet opens in another tab and you can publish on the go.
The whole flow is much simpler now and I’m pretty happy with it. Unsurprisingly, if you’ve been following along with the links via RSS, expect to see an uptick in frequency.
]]>So whenever a new technology emerges, we should ask: Who will win and who will lose out as a result?
Sara’s book is all about this exact idea. She looks at the technology landscape asking this question, and the answer she gets isn’t a good one. We’re building technology for people like us, and most of the time in this community, that means building for young, white males. And we’re doing this without thinking about the consequences.
But when we start looking at them together, a clear pattern emerges: an industry that is willing to invest plenty of resources in chasing “delight” and “disruption”, but that hasn’t stopped to think about who’s being served by its products, and who’s being left behind, alienated, or insulted.
This book is an uncomfortable read, and it should be. It’s painful to hold up the mirror and see just how badly we’re falling short. But it’s so important that we do. Technology drives so much of our day to day lives, and its reach is only expanding. It’s not a hobby, it’s not a niche thing—it’s something that impacts everyone around the world every single day.
I love that Sara very early points the finger at us, the people building technology, and then she never lets it waver. She doesn’t let us hide behind the code or the math in the algorithms we build. Her focus is on the human aspect, as it should be. We’re the ones who need to work to ensure that we’re considering different viewpoints and testing our work through these different lenses.
The book also builds very nicely from chapter to chapter. She progresses from seemingly basic considerations—like form fields—in the early chapters to complex algorithms in the later ones. Throughout, there are numerous examples of situations where people were left out by the decisions that we made on their behalf, whether or not we realized it.
She also does a good job of zeroing in on some core beliefs in our field that contribute to the mess we’re in: how the idea of a separate “technology industry” lets us avoid the checks and balances for established fields, how our obsession with engagement drives us to make the wrong decisions for the people using our products, and how the focus on collecting and selling people’s data counters inclusivity.
Sara isn’t anti-technology. She just recognizes how important technology has become, and the power of the decisions we make.
Every form field, every default setting, every push notification, affects people. Every detail can add to the culture we want—can make people a little safer, a little calmer, a little more hopeful.
My own love of technology is because of this reach she describes. It’s so incredible that what we build can be used by people all around the world, in various different walks of life. I want it to work for everyone. Taking the time to read Sara’s book is a good way for anyone to get started in making that a reality.
]]>Val’s book is about how we change that. It’s about how we can make animations that are effective, not just pretty to look at. And it’s about how we make sure that animation is giving the serious attention it should be. Far from being merely eye-candy, Val explains how animation can be a powerful tool if applied correctly. Animation can help with brand consistency, storytelling, providing feedback, improving perceived performance and more.
The idea of animation promoting brand consistency, in particular, was interesting to me. People interact with companies through a constellation of experiences today. It’s not always easy to provide consistency across different platforms and systems while also taking advantage of the unique characteristics they have. Consistent animation becomes a way you can subtly make these different experiences still feel familiar.
Throughout the entire book, Val provides practical, real-world advice about how to bring it back to your team and your workflow. She explains techniques like prototyping, sketching, and animation style guides without ever dictating one approach over another. Instead, she takes the time to lay out what each tool is good for, and what it’s not. It’s an effective method of teaching. By the time you’ve finished the book, you have more than one person’s opinion—you have a framework for making your own decisions about what will work best for your situation.
Unsurprisingly to anyone who knows me, I particularly enjoyed all the information about how animation plays into perceived performance. I was also happy to see Val dedicated an entire chapter to making sure those animations are accessible and inclusive.
Val is one of a handful of people I know of who are really pushing animation forward on the web. She’s done an incredible amount of work and research around not just designing and building animation online, but doing it effectively. We’re all lucky that she took the time to turn that knowledge into this practical and comprehensive book.
]]>So I changed the static site generator I was using as well as the hosting provider.
It’s actually not as off-track as it sounds. I had been running Jekyll for a long time now, loving the fast loading times. But there were a few issues. The most notable is my own fault: over the years, I had done a poor job of keeping things neat and tidy. The whole setup was pretty unwieldy and messy. It was to the point where I was either going to need to refactor my code anyway or switch it up altogether.
The second issue was that while the site was fast to load, it was pretty slow to generate. It was enough of an annoyance that I decided if I was going to be making changes anyway, I might as well find something fast.
I didn’t do much looking around, either. I remembered reading a post by Sara Soueidan about how she migrated from Jekyll to Hugo so I started there.
The templating syntax Hugo uses isn’t altogether that different from the liquid templating used by Jekyll. It was different enough to cause me to hit my head against the wall at first but similar enough that porting the theme over to Hugo didn’t take a lot of time.
The migration was smooth enough. The Hugo docs are a little clunky, but Sara’s post was a life-saver. If you’re moving from Jekyll to Hugo, that’s where I’d suggest you start.
I couldn’t do a better job of explaining the switch than Sara did, so I won’t even try. I will, however, highlight two things that I consider the killer features of Hugo.
The first: it is fast. Generating a new version of the site is almost instantaneous. It’s night and day from the experience of generating my Jekyll based site.
The other thing I like about Hugo is how it handles different content types. On this site, I have blog posts, saved links, talks, book reviews (now) and static pages. For each content type, I can set an archetype. The archetype is a starting point for the structured data used by each type of content.
For example, my saved links have the following data fields:
- externalURL
- date
- title
- tags
My talks have a different setup:
- startDate
- endDate
- title
- conferenceLink
- talkTitle
- location
Hugo lets you create a markdown file for each that contains YAML data structured the way that content type requires. Here’s my archetype file for my saved links:
---
date: "{{ .Date }}"
title: "{{ replace .TranslationBaseName "-" " " | title }}"
externalURL:
tags:
---
Now if I want to create a new saved link, I can run hugo new saved/my-new-link.md. Hugo will see that the content type is “saved” (defined by the directory structure here) and use the archetype I’ve defined to get the metadata in place. The date and title will be auto-populated (that’s what the funky stuff in the brackets does), and the rest will be empty and waiting for me to fill in. It makes setting up different content types an absolute breeze.
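The talks archetype follows the same pattern, just with its own fields. As a rough sketch (the defaults here are illustrative rather than copied directly from my actual file), it looks something like this:
---
startDate: "{{ .Date }}"
endDate:
title: "{{ replace .TranslationBaseName "-" " " | title }}"
conferenceLink:
talkTitle:
location:
---
Running something like hugo new talks/my-next-talk.md then stubs out those fields the same way.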
Taking Netlify for a spin
Switching to Hugo let me start with a clean slate. With my now neat and tidy repo ready to roll, I decided to take the opportunity to give Netlify a try as well.
I had been hosting using Digital Ocean (who I still love and use for many other sites), but I had heard so many folks raving about Netlify’s simplicity. Also, Phil Hawksworth recently joined the team and I welcome any excuse I can get to bother Phil.
There’s not much to say about my setup there. The move to Netlify went without much of a hitch (once I set the HUGO_VERSION environment variable). Netlify monitors my GitHub repo and when I push a change to the master branch, Netlify kicks off a quick deploy process. Netlify uses Let’s Encrypt, so getting HTTPS in place is a breeze.
At the moment, I’m not doing as much with Netlify as I could. It’s plumbing. Efficient, frictionless plumbing that gets the job done and then gets out of the way. That may not sound glamorous, but to me, that’s the definition of a good tool.
At some point, I plan on playing around with some of Netlify’s other features. In particular, the Lambda functions feature (in beta) could end up being pretty useful. I love static-site generators, but as Paul pointed out, they do make some functionality a bit tricky to implement. I’m thinking a few quick Lambda functions may end up coming in handy.
I don’t see myself using it, but I also like that they do A/B tests on the edge (fancy CDN talk for on a server somewhere). As a performance nut, client-side A/B tests make me weep.
The whole switch should’ve been invisible (well, it would’ve been had somebody double-checked the default permalinks first). It was also very quick. Mostly that’s because I forced myself not to refactor my code too much, as much as I wanted to.
Small steps, but enough to make the process of publishing to my site a much more pleasant experience.
]]>What followed was a decades-long account of the horrors and struggles they faced in North Korea. The promises that turned out to be lies. The constant brainwashing. The realization that as Koreans from Japan, they were still considered the lowest of the low. The corruption and abuse.
I’ve seen a few people wondering how true the story is. If you search for Masaji, you’ll find virtually no information (though you’ll see in the book there is a perfectly good reason why he may have chosen to write under a different name). If it’s a fake, though, it’s an incredibly well-researched one.
I can’t blame folks for wondering if it’s true, maybe even hoping it isn’t. It would be much easier if we could pretend this isn’t real. It’s difficult reading, on an emotional level, about the horrible conditions they faced on a day to day basis. Sometimes a story is about overcoming the odds to accomplish something incredible. In this case, survival itself is the accomplishment.
If you’re looking for something uplifting, you won’t find it here. The accounts of what he and his wife and kids dealt with broke my heart over and over again. And there’s no happy ending. At the end of it all, I was left furious and hurting.
I won’t blame anyone for passing on this book. It’s emotionally challenging from the first page to the last sentence. But it’s also important. When we don’t expose ourselves to the challenges faced by people in different situations from our own, we risk viewing the world through our own myopic lens. And when you read stories like this one, you can’t help but feel we could all use a little bit more perspective and empathy.
]]>Heydon’s book doesn’t attempt to teach you everything you need to know about accessibility. It does something more important: it teaches you how to think about building inclusivity into your application throughout the process of designing and building it.
Instead of walking you through a checklist of what to do for each of the various impairments users may have to battle, the book walks you through building different components. For example, there’s a section where it walks through marking up a blog post. Sounds simple, but there’s a lot of thought and care being applied to ensure that the post is accessible: the markup used, how screen readers will interact with the post, transcripts for videos, link labels, reading level and more. As a result, you learn to think critically about what you’re building and how different people will want to use it.
I have a few minor nitpicks about some of the early performance recommendations, though to be fair the book came out in 2016 and I’m not sure how many of my critiques would’ve been applicable then. Nothing he states there is wrong, just a few things that are a bit less than ideal.
That minor nit aside, Inclusive Design Patterns is a fantastic resource for any developer—and this should be all of us—who wants to build a web that can be used by everyone.
One last parting shot, I have to note the quality of the physical book itself. I love a beautifully crafted hardcover and Smashing did a great job with this one.
]]>Giulia is fascinated by our gut, and all the awkward…err…outputs it can produce. She explains how our digestive system works, and why it’s so important to understand what it’s doing. And she does it all with an infectious level of enthusiasm and plenty of laughter. The illustrations, created by her sister, are light and fun, as are her explanations. It’s easy to see how a book about a critical system of the human body could get very dry, very quick. But Giulia never takes herself too seriously, instead choosing to write in a style that is very friendly and easy to digest (no pun intended).
She’s also very practical. As she discusses each function of the gut and what causes things like constipation or bad breath, she also spends time bringing those ideas back down to actionable advice: if this is something you battle with, here are the things you should try.
At times I wouldn’t have minded a little more technical detail, but that’s nitpicking. She never fails to present the concepts themselves in understandable form—and at the end of the day, that’s more important than the little trivia bits that surround it.
]]>Our understanding of just what sleep does, as Walker explains, is a relatively recent discovery, which is probably why it doesn’t get anywhere near the amount of attention it deserves. As Walker laments, we are taught in school about eating healthy and about exercise, but not about proper sleep habits, which are every bit as important. Sleep, on the surface, may seem self-explanatory, but it turns out we do a lot of things (some obvious, some not) that are just plain terrible for our sleep quality. A little education would go a long way, and even over the course of this book, I would be surprised if readers didn’t start to alter their habits (I know I did).
Walker may be passionate about sleep, but he’s also practical. He recognizes that the challenge of fixing the western world’s sleep problem isn’t trivial. There are things we can try to do individually (he provides a full list at the end, as well as additional ideas throughout the book), but there are also broader issues with the way businesses are run, the way schools are run and more.
Because of our underappreciation for sleep, our society is very resistant to these changes. One clear example is the number of people who claim that they only need a few hours of sleep. The numbers show that in actuality, only a fraction of a percent of people truly can operate on less sleep. Everyone else is limiting their emotional stability, mental fortitude, and physical health without realizing it.
He presents some possible solutions but recognizes that these won’t be easy changes to make. He does provide some examples that offer a glimmer of hope, though: Denmark paying worker compensation to women who developed breast cancer after night-shift work. Aetna providing bonuses to employees based on sleep-tracking data. NASA offering napping time as a way to improve both alertness and on-the-job performance.
Change can happen, even if the examples are too few and far between at the moment. Arming people with the benefits of sleep and an understanding of how sleep works is a great first step, and that’s exactly what this book accomplishes in an engaging and approachable way.
]]>I wrote 17 posts for the Snyk blog and a handful of posts for other sites as well. I posted 889 tweets to Twitter. I reviewed 47 books on Goodreads. I’m probably forgetting some other things.
But here, on my own site? Four. I wrote four posts. For someone who loves to talk about how important it is to own your own content and to write for yourself, I’ve done precious little of that as of late.
Goals can be fickle things, but I’m going to set myself a goal for 2018: I want to walk the talk. I want to get back to owning my own content. That doesn’t mean cutting back on posts for Snyk—that’s part of my job and I’m happy to do it. But it does mean writing more frequently here as well. It means treating my own site as the hub for the content I post elsewhere instead of letting it accumulate cobwebs.
There are a few obvious steps. I want to replace Disqus with web mentions. I want to be more disciplined about using my site for bookmarking links I find interesting. Instead of posting to Twitter directly, I want to use my site as the hub for that content and let Twitter feed off of that (similar to what Jeremy has been doing). The same goes for book reviews.
Another obvious step is to remember that my site is my own, something I tell many others but apparently don’t apply to myself. While I love the web, that’s not the only thing I care about—it represents a small sliver of who I am. I write to understand and remember. Sometimes that will be interesting to others, often it won’t be.
But it’s going to happen. Here, on my own site.
]]>As time went on I didn’t have to do that anymore. Like a dog learning that he gets a treat when he does a trick, I started to recognize that, at least for me, ten minutes reading left me in a much better state than ten minutes on Twitter. I can’t say I don’t occasionally dip in more than I probably should, but there’s a better balance now. It’s not much of a surprise, then, that I read more books in 2017 than I have since I started keeping track back in 2009.
There were fewer standouts this year though, at least for non-fiction. I loved The Lightless Sky—it’s easily my favorite non-fiction book of the year and one I would unequivocally recommend to everyone. After that, it would probably be Ghost in the Wires and Fifty Inventions That Shaped the Modern Economy.
For fiction, The Old Man and the Sea, The Bear and the Nightingale and A Man of Shadows are probably my favorites, though Dark Matter, The Girl in the Tower and Down Among the Sticks and the Bones are right up there as well.
Below is the full list, for those who want a bit more detail. And, as always, please feel free to send any recommendations. I love hearing what others have read and enjoyed.
- Connections by James Burke 4⁄5
Burke’s Connections TV series is magnificent stuff. I love the way he manages to wrangle disparate topics across science and history to show how much of innovation and advancement is non-linear. The book sets out to do the same thing and does a pretty good job. At times I didn’t quite see how the dots connected, but I enjoyed the ride.
The final chapter, where he summarizes why this all matters, is I think the strongest. He talks about how learning history in a linear fashion (as we typically do) limits our perspective of the future and how these random connections make it clear that holding back funding or research from a topic deemed “not worthwhile” is dangerous. The statement that resonated most with me, however, was a slightly tangential point about the impact of high rates of change.
The high rate of change to which we have become accustomed affects the manner in which information is presented; when the viewer is deemed to be bored after only a few minutes of airtime, or the reader after a few paragraphs, content is sacrificed for stimulus, and the problem is reinforced.
His point, very Neil Postman-esque, is particularly true in today’s world of entertainment-driven news and fast-moving social networks, which I’m increasingly convinced only amplify existing biases by default.
- Dark Matter by Blake Crouch 5⁄5
After having this book recommended to me by a few friends, I bumped it up my pile of books to read—as it turns out, an excellent decision. Dark Matter moves forward at an incredible pace and sucks you in—it’s a very hard book to put down. It’s not just action, either. There end up being quite a few thought-provoking moments, asking how much of our relationships and lives are based on all the tiny decisions we make along the way, and nudging readers to reconsider the “regrets” we carry from decisions made or unmade.
- A Study in Scarlet by Sir Arthur Conan Doyle 5⁄5
It’s been a while, so I decided to re-read the Sherlock Holmes stories this year. A Study in Scarlet was the first novel (written, incredibly, in a mere three weeks) and holds up very well. The initial (iconic!) meeting of Watson and Holmes is one of my favorite introductions to a character, ever.
- This is Why We Can’t Have Nice Things by Whitney Phillips 4⁄5
Considering the current state of affairs, a book about trolling is incredibly relevant reading. The author’s case is that while we decry trolling as something horrible and disreputable, trolling is just what you get if you hold a fun-house mirror up to what is tolerated in our culture (the media in particular).
I’ve read some criticism that she is not hard enough on the trolls. While you can tell at times that her time masquerading amongst them has perhaps softened her stance ever so slightly, I think it only strengthens her research. This topic would be far too easy to cast in an emotionally charged, binary way. Her ability to understand the perspective of the people engaging in the behavior is exactly what enables her to have a rational and critical exploration of the broader link between culture and trolling.
The writing style is fairly academic (though not as bad as many), but the research is solid, fascinating and sobering.
- The Sign of the Four by Sir Arthur Conan Doyle 4⁄5
The Sign of the Four is, according to many, behind only The Hound of the Baskervilles in the short list of Sherlock novels. I think it’s a step back from A Study in Scarlet. Don’t get me wrong; there are some great moments. Sherlock’s deductions from Watson’s watch, Sherlock tricking Watson and Jones with his disguise—there’s even some romance and humor in this one. But the novel is marred by the racist portrayal of the aboriginal characters in a couple of passages. I know, I know…an artifact of the time in which the novel was written, but those portions make for uncomfortable reading today and taint what is an otherwise excellent novel.
- Future Crimes by Marc Goodman 4⁄5
Security is such an interesting problem to solve. On the one hand, it’s not given nearly enough attention considering just how critical of an issue it is. On the other hand, understanding its importance requires a certain level of appropriate fear. As a result, many pleas for recognizing the importance of security have often been hard to distinguish from fear-mongering.
Future Crimes teeters back and forth between those lines. It’s fascinating to read about the different ways our technology-reliant world is vulnerable to attacks. If anyone could make it through even the first couple chapters without feeling terrified about the current state of security, I would be amazed. Goodman doesn’t pull any punches, and he spends maybe 80% of the book outlining all the various technologies and how they leave us vulnerable.
The fact that he’s so compelling at it is what also makes me hesitant to give the book a full-on five-star rating. He makes his case clearly and convincingly early and then keeps hitting you with more and more. By the time you get to the last 50 pages or so where he starts to outline what we can do about it, I suspect many will have taken a fatalistic view of security and given up.
My recommendation to everyone is to read the book, but if you find yourself feeling overly defeated and on the verge of not reading anymore, jump to the final sections where he provides some really good ideas for how to make things better.
- The Old Man and the Sea by Ernest Hemingway 5⁄5
Once again, a book I’m glad I didn’t read when I was in high school—I guarantee I wouldn’t have enjoyed it nearly as much. The man’s epic battle against this fish—the pride and stubbornness of both mirroring each other, the fight against the fish also mirroring his fight against age—makes for powerful reading.
- Time and Again by Clifford Simak 4⁄5
A sci-fi novel that splashes a bit of everything into one: time-travel, artificial life, aliens…you name it. The result was a story that is compelling, with some interesting exploration of the idea of what it means to be human. It doesn’t quite manage to land with as much impact as it tries to, but still an excellent book ahead of its time.
- In the Land of Invented Languages by Arika Okrent 4⁄5
Read this one based on the recommendation of a friend, and it didn’t disappoint. Arika has a passion for invented languages, and it shows in her vivid and thorough accounts of the creation of languages like Blissymbolics, Esperanto, and Klingon.
- The Birth of Modern Politics by Lynn Hudson Parsons 4⁄5
This is a solid, if slightly on the nose, historical account of the election of 1828. If Parsons has a bias in favor of one of the two candidates, it doesn’t come through in the book. Though perhaps that’s because there isn’t a lot of deep analysis here. It’s a good recap of events, but any additional analysis of what transpired and why is pretty limited. Still, there are clear parallels here between what transpired in this election and many of the elections since (including the most recent) and it’s worth learning more even if just for the (somewhat depressing) reminder that politics has been divisive (and frequently, downright nasty) for a very long time.
- Red Seas Under Red Skies by Scott Lynch 4⁄5
This is a very different book than the first in the series. Lies was an Ocean’s Eleven-esque story with a blistering pace that (in my opinion) cared less about developing the characters than it did about having fun. Red Seas is much more an adventure story (the caper stuff starts out strong, goes away nearly entirely in the middle, and then comes back for a relatively low-key reveal) that spends a lot of time with the two primary characters and has a tad slower pace. It also feels a bit more like a bridge to book 3—some key people and topics are brought up early and then never really mentioned again, and of course, the end is an obvious attempt to hook the reader for book three.
With all that in mind, I enjoyed it. While I would love to see the next book return more to the style of book one, Red Seas did help to give both Jean and Locke a little more depth while also incorporating a pretty strong cast of supporting characters.
- Ghost in the Wires by Kevin Mitnick 5⁄5
Look—I’m not sure why I enjoyed this book so much. Judging by the way the story is told, Mitnick is not exactly what you would call “humble”. There are also a few sections where I couldn’t help but laugh at him. For example, at one point he discusses a period where he was hacking on company time. The company assumed he was doing consulting work on the side while at work, so they fired him. Somehow he manages to present himself as the one who should be upset by this “wrongful” termination.
Despite this, I have to admit I tore right through the book. It reads like a fast-paced movie, and hearing the stories of how he would manipulate his way into whatever information he wanted through a combination of technology and social engineering is fascinating.
- Learning HTTP/2 by Stephen Ludin and Javier Garza 4⁄5
Short, readable introduction to HTTP/2 from two people who have spent a ton of time working with it. Great starting point!
- The Code Book by Simon Singh 4⁄5
A fascinating walk through the history of cryptography! I loved getting the historical context and seeing how cryptography has evolved from its early and primitive forms. The book walks back and forth between stories and more detailed explanations of the various ciphers. While it’s not overly technical, some of the descriptions of how they each work can get a little dry—particularly as they become more complex. Mostly though, the book flows very well and provides a useful and important overview of how cryptography works and has evolved throughout time.
- The Nature Fix by Florence Williams 4⁄5
The pop-science genre gets a lot of (well-deserved) flak. Its incredible popularity has resulted in many shallow, poorly written books finding large audiences. This isn’t one of them. Williams’ book is an example of why the pop-sci genre became so popular in the first place—it’s well-written, well-researched, engaging and entertaining. Williams makes a convincing case for the importance of nature in our mental and physical well-being.
Now, there may very well be a little confirmation bias coming out in my review—I live in a small town because I enjoy being surrounded by trees and lakes much more than I enjoy being surrounded by buildings. But even with this in mind, I was still a little skeptical early on that this would be more pseudo-science than anything of significant substance.
Thankfully it’s not. Williams cites study after study by researchers around the globe to demonstrate how nature—the scents, the sights, the sounds—all impact how we function and perform. She also never goes overboard with it—she points out that some percentage of us (~15%) just don’t seem to respond to nature and she never makes the mistake of overstating the benefits of being outside. It’s a very engaging book that will challenge anyone who reads it to spend a little more time hanging out with trees.
- #Republic: Divided Democracy in the Age of Social Media by Cass Sunstein 3⁄5
There’s an important discussion to be had about the divisive nature of our political system, and Sunstein does provide a fair share of interesting data points and commentary on the subject here. But all in all the book falls a bit flat. The writing is a bit dry and academic, and it feels like this book was an essay stretched out over 200-plus pages.
- The Bear and the Nightingale by Katherine Arden 5⁄5
I absolutely loved this book! Arden has written a wonderfully rich story steeped in Russian myth, with just the right amount of creepiness. I was sucked in from the very beginning.
The characters are all so rich and compelling. Even the supporting characters were so well fleshed out and so intriguing in their own right that I found myself wishing we would get more time with them. Thankfully, it looks like this is the first in what will be a three-book series, so hopefully we will.
I’ve seen the book described as “Gaiman-esque”. It’s a description that always makes me both intrigued and skeptical—his dark fantasy writing is some of my favorite and such a lofty standard. Most books fall short. This one most definitely did not.
- Mr. Penumbra’s 24-Hour Bookstore by Robin Sloan 4⁄5
It won’t win any awards for incredible writing, but 24-Hour Bookstore is a fun, lighthearted read.
- The Adventures of Sherlock Holmes by Sir Arthur Conan Doyle 5⁄5
The first collection of Sherlock in short story form, and full of gems. A Scandal in Bohemia, The Five Orange Pips, The Adventure of the Speckled Band….plenty of classics in this one.
- Norse Mythology by Neil Gaiman 4⁄5
Mythology plays a part in so many of Gaiman’s great stories, so it should be no surprise that his take on the Norse myths is deftly done. It doesn’t hurt that the original myths are full of rich characters themselves. Combined with Gaiman’s incredible gift of storytelling, it’s a perfect match.
- Summer in Orcus by T. Kingfisher 5⁄5
After reading and loving The Bear and the Nightingale, I wanted to find a few more books that had some basis in Russian fairytales. Summer in Orcus features an 11-year-old girl who resents her mother’s smothering love and wants to set out on her own. When she meets Baba Yaga, she’s given the opportunity to do just that and seek her “heart’s desire”. The story was compared to Narnia by more than a few reviewers, but I think that’s mostly because of the transportation to a different “world” and the talking animals. Summer in Orcus tells a story that is much smaller in scale and throws a few delightful curves into the typical narrative you would expect.
- Six Wakes by Mur Lafferty 4⁄5
At first glance, Six Wakes is a space whodunit with a slight twist: the victims are trying to solve their own mass murder. Or, at least, the murder of the previous versions of their cloned selves. But that twist, the fact that cloning has allowed a person’s identity to be nearly immortal, makes it richer than your typical murder mystery (though as a straight mystery, it’s pretty compelling all on its own). Cloning is viewed very differently by different people, and without giving too much away, there are some interesting discussions about just how “human” these clones are and what sort of rights they’re entitled to. Fast-paced with more to think about than it first appears.
- Trigger Warning by Neil Gaiman 4⁄5
I tend to prefer novels over short stories, but this collection from Gaiman was a lot of fun. As with any collection, there were a few shorts that I didn’t enjoy quite as much as the others, but it’s a good reminder of how talented a writer he is and shows off a bit more variety in his storytelling.
- The Green Ember by S.D. Smith 4⁄5
- Ember Falls by S.D. Smith 4⁄5
Consider this my review of both of the first two books in this series. I read these originally to better gauge the reading level for my kids (the oldest is eight) and ended up enjoying them. The writing is geared to a younger audience, but the stories are compelling and you gotta love rabbits with swords. Book two is a bit darker and a little more complex than the first, as you often expect from second books in a series, but neither is particularly intense. My oldest (a bookworm) is probably ready to read these on her own, and I’ll be putting some copies on the shelf for the other kids to stumble upon as well.
- Borne by Jeff VanderMeer 4⁄5
A delightfully weird take on a post-apocalyptic world. The main character, Rachel, lives in a city in ruins after…what exactly we don’t know for sure, but climate is at least a part of it. The setting is imaginative and full of unique biotech. Mord, a massive bear, generally controls the city, though he is opposed by some, most notably a woman named “The Magician”. The real story here, though, is the relationship between Rachel and Borne—a biotech creature that she ends up raising. The whole thing is brought to life beautifully and vividly by VanderMeer. He has since published a novella set in this same world and I can’t wait to read it.
- Mini-Farming by Brett Markham 4⁄5
A solid overview of small-scale farming. Markham is an engineer, and you can tell in his detailed and analytical approach to topics like soil composition, as well as his prescriptive advice for bed sizes and square footage estimates. He has a very strong focus on the economics, which I’m sure will be very useful to some, but I found it a little less relevant and interesting for my purposes. It’s a good resource for anyone looking to be a bit more productive in their gardening, or thinking about getting into other homesteading-related topics.
- Down Among The Sticks and Bones by Seanan McGuire 5⁄5
McGuire is just killing it with the Wayward Children series so far. This dark and creepy little fairy tale will be a hit with anyone who enjoyed Every Heart a Doorway. It’s a prequel to the first book, and I would recommend that you read them in publication order for reasons that are likely obvious once you’ve finished them both.
- The Mini-Farming Guide to Composting by Brett Markham 4⁄5
Another good resource from Markham, this time with a much more singular focus which frees him up to go into much more detail.
- Uprooted by Naomi Novik 3⁄5
This book comes with a lot of hype, but it didn’t quite live up to it for me. The story is vividly brought to life, but it was really hard to find any redeeming qualities in the Dragon—one of the main supporting characters. I love a good anti-hero, scoundrel type of character, but there has to be something about them that makes you start to root for them. In this story, it seems to be assumed that you’ll just come around to him in spite of his constant belittling and abuse of Agnieszka, the main character.
- All-New Square Foot Gardening by Mel Bartholomew 2⁄5
The biggest issue here, quite frankly, is style. Mel is….well….he’s got a bit of a car salesman approach in his delivery. He makes more than a few sexist comments. He also tries to oversimplify at the expense of providing any context at all for why what he is suggesting is the “right” way.
There are much better books with much more detail.
- This Is Your Brain on Parasites by Kathleen McAuliffe 4⁄5
This book is just chock-full of interesting. Crickets that become suicidal when controlled by parasitic worms, all in an attempt to help those same worms reproduce. A parasite that makes rodents have a fatal attraction to the smell of cats. There are some really weird and fascinating stories in here. Her storytelling is enough to make me overlook the fact that she spends two chapters on microbiomes, which are not parasites. She makes some leaps with the science towards the end, discussing how parasites may impact everything from religion to democracy. I didn’t always make the same leap with her, but the thought exercise was fun nonetheless.
- Countdown to Zero Day by Kim Zetter 4⁄5
A fascinating and thorough account of the Stuxnet virus. Frankly, the whole story would seem right at home in some modern-day action/techno-thriller movie. Zetter does an excellent job of walking you through the discovery and gradual unveiling of Stuxnet: detailing how traces of the virus were discovered, and the clues each little piece left about its origin, functionality and ultimate goal. Given the depth she goes into regarding the functionality of the virus, she does a pretty good job of making it approachable even if you don’t have prior technical knowledge.
Somewhere around chapter 16 or 17, it does get a tad repetitive as she spends the bulk of the time putting things you’ve already learned in a clear chronological order. The final chapter, however, is a very important read. Once you get past the impressive workings of the virus, the messy follow-up discussion that has been largely lacking is about the implications and consequences—ethically and technically—of opening “Pandora’s digital box”, as she puts it. Some think they’re severe, some downplay the risk of a digital attack (personally, I think those folks just lack imagination), but no matter the stance, the discussion needs to be given much higher priority.
- The Lightless Sky by Gulwali Passarlay 5⁄5
Humbling, heartbreaking and uplifting all at once. As a 12-year-old boy, Gulwali is sent, along with his brother, away from Afghanistan for his own safety. The two of them are soon separated from each other. What follows is an unsettling account of Gulwali’s year-long, 12,000-mile journey as a young refugee fleeing Afghanistan for the United Kingdom. The story sheds a critical light on the impossibly dangerous day-to-day life of a refugee.
That Gulwali emerges, eventually, to not simply exist but thrive in the United Kingdom is a testament to his incredible courage and willpower.
Everyone should read this book, and everyone should take his final words to heart:
None of us travel alone in life. We all have the power to help those around us, or to harm them. It is the choices we make that define our walk, define our own personal journeys, and make us the people we are.
- Did You Grow Up With Me, Too? by June Foray 4⁄5
The many talented men in the history of cartoon animation get the lion’s share of the attention. For example, Mel Blanc is a name that will instantly bring up memories of Bugs Bunny, Daffy Duck, Barney Rubble or any other of the numerous voices he performed. But mention June Foray, and you’ll get some blank stares, this despite the fact that, as Chuck Jones put it, “June Foray is not the female Mel Blanc. Mel Blanc was the male June Foray.”
This short autobiography gives a glimpse into June’s incredible talent and career. It jumps around a bit, but the chaotic feeling of the book was endearing to me and felt very authentic. June’s humor comes through repeatedly, and there are a ton of fun stories and anecdotes liberally sprinkled throughout the book. Anyone with interest in the creation of cartoons, or fans of any of the many characters she voiced, will enjoy this one.
- A Man of Shadows by Jeff Noon 5⁄5
Let’s get this out of the way: this book will not be for everyone. The world Noon creates is imaginative and frankly a bit weird. It all revolves around time. Time is a commodity. People live within different times: some standard, some commercially available for the right price. The city where the story takes place is divided between Dayzone (permanent day) and Nocturna (permanent night). In between is Dusk, a terrifying and fantastical place that operates by its own rules.
The story itself is ultimately a hardboiled detective story set in a world that is part science-fiction, part fantasy (Dusk, in particular, reminded me a lot of the “Other” world in Gaiman’s Coraline.) It’s fast-paced and creepy, with plenty of twists and turns along the way. Noon does a great job of world-building over the course of the book—in fact, many of the twists come in the form of revelations about the world itself. A unique and enthralling novel.
- The Boy on the Bridge by M.R. Carey 5⁄5
The Boy on the Bridge is a fast-moving story following a small group of scientists, military personnel and one gifted, autistic teenage boy as they search for a cure to the plague sweeping over the globe. While Dr. Samrina Khan (one of the scientists) and Stephen (the boy) are the primary characters, the other characters are richly fleshed-out with their own background stories and personalities as well.
The end is dark and sad, setting up The Girl With All the Gifts, the first book in the series, very well. While not as surprising as the first book, The Boy on the Bridge is still a blast to read. I highly recommend reading the two books in the order they were published to get the full effect from the surprise twists in The Girl with All the Gifts.
- The Enchanted by Rene Denfeld 4⁄5
This one isn’t an easy one. The book is haunting and dim and deals with some heavy subject matter. It follows a cast of killers on death row, an investigator whose job is to find background information that can keep those killers out of the electric chair, and a disgraced priest. There is a small whiff of magic in the book, though it takes place in the head of one of the convicts on death row and mostly works to show how he has escaped mentally during his years of solitary confinement.
The author, Denfeld, is a licensed investigator who has spent a lot of time working with people facing execution, and that insight no doubt helps her tell such a vivid story. Too vivid, I imagine, for some. Some of the scenes are very unsettling. That’s part of what’s challenging about this book: the jarring contrast between the humanity of the killers she manages to portray and the horrific things they’ve done. This book will stick with you long after you read it, and will no doubt be very difficult for many readers to get through.
- A Mind At Play by Jimmy Soni and Rob Goodman 4⁄5
A superbly thorough biography of Claude Shannon, whose work laid the foundation for modern technology. The telling of his many important contributions is handled very well, and the descriptions of the science involved are approachable enough.
But my favorite tales are those set after he had made his most significant academic contributions. His desire to create never left, but he was free to create and learn for pure fun and enjoyment, building everything from trumpets that shot fire when played, to a machine that solved the Rubik’s Cube, to an incredible variety of unicycles.
- The Omnivore’s Dilemma by Michael Pollan 4⁄5
I really enjoyed this one, particularly the first sections. Pollan explores industrial food production, zeroing in on corn, which is pretty much everywhere, followed by organic food, followed by more personal food consumption—foraging, hunting, mushrooms, etc. Pollan is naturally inquisitive and digs deep into each topic, constantly questioning himself in ways that push him to go even deeper. He makes some fantastic points about the sustainability of different diets and their impact on the globe. The last section, where he forages and hunts for his own meal, was not as interesting to me and was a lot more anecdotal than the first two sections of the book. In the end, though, Omnivore’s Dilemma is a great read that will have you thinking much more about what you’re eating and where it came from.
- Warcross by Marie Lu 3⁄5
I wasn’t at all familiar with the author or the book, but the description—teenage bounty hunter/hacker finds herself trying to solve a mystery in a massive VR game—sounded like a lot of fun.
It was, for the most part. The action was fun, and Emika is an easy character to root for. And while the twist at the end felt a bit rushed to me, it does set up a fun plot change for book two.
But I think I probably wasn’t the target audience here. The eventual romance between Emika and the brooding, mysterious billionaire was played up very heavily. Probably cool for some folks, just not my cup of tea.
- The Sky Below by Scott Parazynski 3⁄5
So first, the good: there are plenty of interesting stories about his time at NASA. While he doesn’t get very scientific, it’s still compelling. That I enjoyed.
But a lot of the book also rubbed me the wrong way, mostly in the way he portrays his personal life. He talks a fair amount about how his marriage was on the rocks (he does ultimately get divorced and then married to someone else), but the way he does so sounds dismissive of the sacrifices his first wife had to make for him to achieve what he did. I don’t know that that’s intentional, but it’s the way it comes off. The way he discusses his daughter’s autism felt similarly off-key.
You don’t get to do all the things Scott has done, to achieve everything he has, without a lot of help from those around you and there is little in the book that acknowledges that.
- Fifty Inventions That Shaped The Modern Economy by Tim Harford 4⁄5
Fifty Inventions sets out to accomplish something similar to Burke or Steven Johnson’s “How We Got to Now” by telling story after story of different inventions and letting the reader see how they weave together. He’s a little less explicit in it, perhaps, than Johnson and Burke, but no less effective. Each of the inventions is covered quickly—it’s the kind of book you can easily read a chapter at a time when you have a few free minutes—but with enough detail to keep you interested and help you see connections without Harford having to spell them out himself. It’s all written in an approachable way, with plenty of wit sprinkled in.
There’s one point in particular that Harford brings up about all new inventions that stood out to me:
So whenever a new technology emerges, we should ask: Who will win and who will lose out as a result?
It’s a point he comes back to multiple times, and a theme that runs throughout the book. It’s also an important question we should all be asking whenever we create, or support, a new technology.
- Children of Time by Adrian Tchaikovsky 4⁄5
Children of Time has gotten a lot of attention and accolades since its publication, and for good reason. The story follows two groups of…creatures…stretched out over thousands of years. One set is the humans, the last of the humans, struggling to find a place to set down roots after Earth’s demise. The other set are spiders infected by a nanovirus, reaching new levels of intelligence and sentience as time passes. Watching their society expand and evolve was even more compelling than watching the humans struggling for survival.
The book doesn’t shy away from some big topics, and the portrayal of the spiders’ rise to sentience over the years is incredibly well-done.
The only fault I have with the book is the ending. While the conclusion is smart, it’s also pretty abrupt to the point of feeling a bit rushed.
- The Girl in the Tower by Katherine Arden 5⁄5
More fantastic work from Arden! The Girl in the Tower has the benefit of being the second in the series, so it doesn’t have nearly as much work to do building up the world. Instead, it drops you right into the action and never lets up.
The story spends a little less time with the Russian mythology, at least until the final few chapters, than book one. But we get a lot of time with both Vasya and Sasha. It was fun to see these two characters, with very different outlooks on life, interacting after everything that Vasya went through in book one.
I think there are two things in particular that continue to impress me with this series. The first is Arden’s dedication to historical accuracy. She takes both the mythology and the world-building very seriously, trying to present as accurate a portrayal of medieval Russia as possible. The result is vibrant characters and very vivid scenery.
The other thing that continues to impress me is the way Arden writes Vasya, the main character. Vasya is not an anti-hero—she’s likable and a good person. But Arden also makes it very clear Vasya is a flawed individual. She messes up constantly; she lets her temper and emotions lead her to bad decisions. In short, she’s a very relatable and realistic character.
Eagerly awaiting book three.
- A Wrinkle in Time by Madeleine L’Engle 4⁄5
I had never read this book as a kid, somehow, though it seems like exactly the kind of book I would have thoroughly enjoyed. Given all the praise it’s received from friends over the years, I decided to change that. I’m glad I did.
One of my favorite quotes regarding “children’s books” comes from Maurice Sendak: “I don’t write for children. I write—and somebody says, ‘That’s for children!’”. It’s a quote I think of often as I’m reading to my kids, or reading “young adult” books. A good story is a good story, period.
A Wrinkle In Time is a good story. L’Engle does here what the best so-called “young adult” authors do: she writes a story that is rich, and that doesn’t attempt to oversimplify for the benefit of a potentially younger audience. The story and setting are wonderfully imaginative, as well as steeped in a love of physics, mathematics, and literature (one of the characters constantly quotes from Shakespeare, Descartes, the Bible, Dante—you name it). The book also never shies away from some rather heavy themes, exploring spirituality, moral responsibility, and individuality. While it’s not overly thought-provoking, the messages are strong and important.
I may have missed out on this book as a kid, but I will make sure my kids do not.
- The Slow Regard of Silent Things by Patrick Rothfuss 5⁄5
I have loved the Kingkiller Chronicles so far, but I kept putting off this one due to very mixed reviews. Even Patrick himself was worried about publishing this one, calling it a “strange” story. I didn’t want to spoil what has been an excellent series with this questionable-sounding novella. Turns out, there was no need to worry. Rothfuss can flat-out write.
The book definitely has a very different feel from the primary books in the series, and I’m sure that leads to the mixed reviews. The Name of the Wind and Wise Man’s Fear are long, magical fantasy novels full of suspense, intrigue and conflict. The Slow Regard of Silent Things is short and features no real suspense, and no characters outside of Auri—the strange girl Kvothe befriended in the primary books. Magic is present, but very subdued.
The story follows Auri for a week as she explores the Underthing, constantly struggling to make sure every object is just where it’s meant to be, every room in just the right order. There’s not a lot that happens beyond that. The most conflict the story has is probably Auri’s attempts to find the right place for a gear she finds.
But the story is beautiful. Auri is sweet and broken, and there are times of clarity where she recognizes her own brokenness—and you feel for her and relate to her. So when she gets heartbroken about objects breaking, or objects not being in their proper spot, you start to feel it too. You find yourself rooting for her to get everything in its place and find some level of stability and calm in doing so.
This is a very different book than his other novels in the series, but it’s every bit as enjoyable.
Past years
]]>There are some great ciphers I could have taught them, but they’re still little, so I started simple. I taught them the rail fence cipher (some may know it as a zigzag cipher, which is probably a better description). With a rail fence cipher, you write the message you want to communicate in a zigzag pattern, letter by letter.
For example, let’s say we want to use this cipher on the message “hello world”. We would do this by placing the letter “H” on the first line, then the letter “e” on the second line. Then we’d move back up and put the “l” on the first line, followed by another “l” on the second line. The result would be something like this:
H L O O L
E L W R D
Now that we have the message in two lines, we place the second row of characters after the first row, and we have our encrypted message:
HLOOLELWRD
The rail fence cipher can involve as many “rails” as we would like. For example, using a three rail cipher our message would look a bit different. We would add another level to our zigzag:
H O L
E L W R D
L O
Our encoded message would end up being a little different:
HOLELWRDLO
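If you’d rather let a computer handle the zigzag bookkeeping, here’s a minimal sketch in JavaScript—the `railFenceEncode` function is purely my own illustration, not something from any particular library:

```js
// Encode a message with a rail fence (zigzag) cipher.
// `rails` is the number of rows in the zigzag.
function railFenceEncode(message, rails = 2) {
  const rows = Array.from({ length: rails }, () => []);
  let row = 0;
  let direction = 1; // 1 = moving down the rails, -1 = moving back up

  for (const letter of message.toUpperCase().replace(/[^A-Z]/g, "")) {
    rows[row].push(letter);
    if (rails > 1) {
      if (row === 0) direction = 1;
      if (row === rails - 1) direction = -1;
      row += direction;
    }
  }

  // Read the rails top to bottom to produce the ciphertext.
  return rows.map((r) => r.join("")).join("");
}

console.log(railFenceEncode("hello world", 2)); // "HLOOLELWRD"
console.log(railFenceEncode("hello world", 3)); // "HOLELWRDLO"
```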
While it offers very minimal security, the rail fence cipher is pretty straightforward and easy to learn. My girls took to it right away and started using it on everything: the names of their siblings, food requests—my eight-year-old even wrote her mom a letter using it.
Compared to the approaches available to us today, it may even feel like a bit of a stretch to call it cryptography. As simple as it seems now, this was once a dominant form of encryption.
In his book, “The Code Book,” Simon Singh discusses how cryptography started with simple transposition (swapping the order of letters, like the rail fence cipher does) before evolving to *monoalphabetic ciphers*—a cipher that involves substituting one letter for another.
Probably the most well-known example of a monoalphabetic cipher is the Caesar cipher, which involves substituting each letter for a letter a certain number of characters up or down in the alphabet. You may shift each letter, for example, three down. So “H” becomes “K”, “E” becomes “H” and so on. “hello world” now looks like this:
KHOORZRUOG
It’s more secure than a transposition cipher, but only slightly.
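Here’s the same idea in code—a quick sketch of a Caesar shift, again just my own illustration (it ignores anything that isn’t a letter and assumes a fixed shift of three by default):

```js
// Encode a message with a Caesar cipher by shifting each letter
// `shift` places down the alphabet, wrapping around after Z.
function caesarEncode(message, shift = 3) {
  const A = "A".charCodeAt(0);
  return message
    .toUpperCase()
    .replace(/[^A-Z]/g, "")
    .split("")
    .map((letter) =>
      String.fromCharCode(((letter.charCodeAt(0) - A + shift) % 26) + A)
    )
    .join("");
}

console.log(caesarEncode("hello world", 3)); // "KHOORZRUOG"
```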
The monoalphabetic cipher supplanted transposition at the forefront of cryptography and remained there for quite some time. A more secure alternative did exist. A few folks realized that you could have a *polyalphabetic cipher*—shifting each letter not by the same amount, but by varying amounts. You were essentially applying a different Caesar shift to each letter in your message.
A well-known example is the Vigenère cipher. Using the Vigenère cipher involved looking up each letter that you wanted to substitute in a Vigenère table. The sender created a keyword that indicated how many letters to shift each letter in the message and then, using the table, walked through the message letter by letter to convert it.
|   | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| A | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z |
| B | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A |
| C | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B |
| D | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C |
| E | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D |
| F | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E |
| G | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F |
| H | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G |
| I | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H |
| J | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I |
| K | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J |
| L | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K |
| M | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L |
| N | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L | M |
| O | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L | M | N |
| P | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O |
| Q | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P |
| R | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q |
| S | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R |
| T | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S |
| U | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T |
| V | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U |
| W | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V |
| X | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W |
| Y | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X |
| Z | Z | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y |
If our keyword was “taco” (because why not) and we wanted to encode “hello world”, we’d start by finding “h” in our first row, then find the “t” in the first column. Where that row and column intersect, we get our first letter: “a”.
Then we would do the same for “e” and “a”, “l” and “c”, and so on—repeating “taco” as many times as necessary to encode our message. Eventually, we’d end up with:
AENZHWQFED
The receiver would have to undo this process, using the keyword to determine the shift and then the table (or some math) to help them decode.
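Because the table is really just a per-letter Caesar shift driven by the keyword, the whole thing is easy to sketch in code as well. The `vigenere` function below is my own illustrative take—the same function handles decoding by shifting in the opposite direction:

```js
// Encode (or decode) a message with a Vigenère cipher. Each letter is
// shifted by the alphabet position of the corresponding keyword letter,
// so the keyword "taco" applies shifts of 19, 0, 2, 14, repeating.
function vigenere(message, keyword, decode = false) {
  const A = "A".charCodeAt(0);
  const key = keyword.toUpperCase().replace(/[^A-Z]/g, "");
  return message
    .toUpperCase()
    .replace(/[^A-Z]/g, "")
    .split("")
    .map((letter, i) => {
      const shift = key.charCodeAt(i % key.length) - A;
      const offset = letter.charCodeAt(0) - A;
      const shifted = decode ? offset - shift + 26 : offset + shift;
      return String.fromCharCode((shifted % 26) + A);
    })
    .join("");
}

console.log(vigenere("hello world", "taco"));      // "AENZHWQFED"
console.log(vigenere("AENZHWQFED", "taco", true)); // "HELLOWORLD"
```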
It was much more secure than monoalphabetic ciphers, yet monoalphabetic ciphers remained the most common cipher. For some, monoalphabetic ciphers were still considered “good enough” (despite the many lives lost to decoded messages). The primary reason for the slow uptake of polyalphabetic ciphers, however, was just how much more difficult they were to use. It was much more complicated and took much longer to apply a polyalphabetic cipher than a monoalphabetic cipher. And so the cipher languished mostly unused for nearly two centuries. Usability trumping security.
Security has never been more important than it is today, and general awareness—or at least acknowledgement—of its importance does seem to be trending up. For many, though, security is still perceived to be a bit of a black art. It’s a field that has been mostly technically driven, done by some other team within the organization. You know them. They’re the ones who always swoop in to tell you why you can’t ship that one feature you wanted to build.
This has to change. Security cannot be something that is pushed off to only a select group of people in your organization. Like we’re learning over and over, when something is this critical, everyone needs to be involved.
That’s particularly true of anyone who is paying attention to the actual experience of using your site (which should be everyone, but that’s a battle for a different day). Solving security without consideration to usability simply won’t work. That’s how you end up with the most common password being ‘123456’, and it’s how a more secure cipher ends up in obscurity for centuries.
It doesn’t matter how secure something is if no one can use it in the first place.
]]>The only session I circled back to watch in its entirety so far was the panel about “AMP & The Web Platform.” Unsurprisingly to anyone who has read anything I’ve written in the past about AMP, this was the session that looked the most interesting to me. I’m not typically a fan of panels—panels are so hard to do well—but this was a good one.
Now first off, hats off to the AMP team for assembling the panelists that they did. They could’ve gone the easy route and pulled in a bunch of AMP diehards, but they didn’t. They assembled a smart group of panelists who weren’t afraid to ask some important and hard-hitting questions: Gina Trapani, Nicole Sullivan, Sarah Meyer, Jeremy Keith and Mike Adler.
That was not accidental. Talking to Paul Bakaus (who moderated the panel) before the event, that’s exactly what he wanted to do: put people on the panel who wouldn’t be afraid to voice dissent. When they split the talks out into separate videos, I’d definitely recommend watching this one. There is so much good discussion from every single panelist.
I’ve not been shy in expressing my opinion of AMP. As a performance “framework”, it would be ok. But it’s not treated that way. It’s treated as the incentive. Create AMP content, and you can get in the “Top Stories” carousel and you can get AMP’s little “lightning-bolt of approval”. I don’t think that’s healthy for the web.
These incentives have repeatedly been downplayed whenever I chat with someone involved with AMP, but my own experience has shown otherwise. I’ve been lucky enough to talk to a lot of publishers since AMP first came out and the most common sentiment has been: “We’re feeling pressure to use AMP to get our content into the top stories carousel. How can we do this without having to use AMP?”
Gina talked about how she’s seen the same thing, where the incentives are what is driving the adoption:
In my experience people are motivated to use AMP…I’ve seen this from our clients…mostly because of SEO. They want it in that top stories carousel, they want that lightning bolt of approval in regular search results which is happening now. And that concerns me. I’d rather that the concern for them was about performance and better user experience but it’s about SEO and search rankings. How many publishers would use AMP if that weren’t a factor? Fewer.
This makes total sense: you need incentive to drive adoption of, well, anything. And Google can offer a very clear incentive for using AMP. Now the AMP team can argue all they want about how they’re trying to advocate for performance and a better experience, but that’s not the message everyone is hearing.
This is evidenced by most of the AMP case studies I’ve seen. AMP gets noted as the reason for metric improvement, but if you actually look at the non-AMP version of those same pages you’ll find they’re incredibly bloated. Improve performance without AMP and I’m sure you’ll see similar improvements.
That the message is more about AMP’s incentives than performance was also evidenced many times in the panel. Paul would point out you could, for example, cache your AMP content on your own servers. The panelists were quick to point out that, while technically true, you don’t get the carousel or lightning bolt if you do.
Whether they intended it to incentivize performance or not, that’s not the incentive that most people see in using AMP.
Gina also pointed out (much more eloquently than I ever have) the broader risk and concern that many (myself included) see with AMP:
But hearing you say that the lightning bolt is a symbol of Google verifying and validating and hosting this page, that’s scary to me as someone who cares about the open web. If you talk about the open web you’re talking about standards based and decentralized and where content isn’t privileged, right? And AMP does none of those things. It’s not a W3C standard…yet. It’s not decentralized because at least all AMP pages are hosted on Google’s cache. So if you search Twitter for google.com/amp there’s lots of results there, people are sharing that URL, so it’s not decentralized….AMP content is privileged in search results, and that concerns me.
This isn’t to say that AMP is entirely a bad idea. These are issues that could be fixed. But while the fact that people like Paul are so willing and eager to hear critical feedback gives me hope, so far I haven’t seen a lot of evidence that these issues will be addressed. Consider the fact that AMP now accepts valid AMP caches other than Google’s. The single cache source was one of the early criticisms, and now they’ve, seemingly, opened that up, with Cloudflare being the first to publicize it.
That’s great, except it does nothing to alleviate the actual underlying concern about centralization. While I can cache AMP content anywhere I want, I don’t get the benefits Google is promising unless that content is also cached on Google’s cache. That’s not decentralization, and frankly, I’m not entirely sure what the point is. After all, I can cache my AMP documents on Akamai, or Fastly, or whatever other service I want—AMP approved or not—but I won’t get the incentives that are the primary reason AMP is adopted unless it’s on Google’s cache.
There was some optimism from the panelists that, perhaps, AMP will be a gateway to better performance. A platform for learning, a way to make performance more “accessible” as Sarah Meyer put it. And maybe it will be. After all, the work the AMP team has done is already leading to new standards proposals that, if implemented, will greatly improve performance on the web—certainly a great thing.
I want to believe the same thing will happen with the priority organizations put on performance. I want to believe that people learn from AMP’s tricks. That they’ll get to a point where they’ve seen the benefit of performance and start to incorporate these things into their own site. In the long-term, that’s where the effort is best spent.
This is something Jeremy pointed out as well as he thought about all the interesting work folks had been presenting all day.
That’s great but could you imagine if you’d put that same amount of work into your HTML pages?
From what I’ve read and seen in the past from folks working with AMP, they’re doing some incredibly intelligent work making new features and functionality work inside of AMP. What if we could channel that same effort directly to the web instead?
Optimistically, AMP may be a stepping stone to a better performant web. But I still can’t shake the feeling that, in its current form, it’s more of a detour.
]]>These biases come into play over and over again in our work, and can have devastating consequences.
There was an interesting post on The Coral Project about anonymity and its impact—or rather, non-impact—on online behavior. A frequent refrain heard when we try to understand why online behavior is so frequently so poor is that the ability to be anonymous is one of the primary reasons for the problem. J. Nathan Matias argues differently, though:
Not only would removing anonymity fail to consistently improve online community behavior – forcing real names in online communities could also increase discrimination and worsen harassment.
We need to change our entire approach to the question. Our concerns about anonymity are overly-simplistic; system design can’t solve social problems without actual social change.
While the article cites a little bit of research questioning our assumptions about anonymity online, the bulk of the article is focused on reframing our perspective of the discussion. We often consider the question of bad behavior online from the perspective of the people misbehaving. What is it that makes them feel free to be so much more vindictive in an online setting? Matias instead builds his case by focusing on the victims of this behavior.
Revealing personal information exposes people to greater levels of harassment and discrimination. While there is no conclusive evidence that displaying names and identities will reliably reduce social problems, many studies have documented the problems it creates. When people’s names and photos are shown on a platform, people who provide a service to them – drivers, hosts, buyers – reject transactions from people of color and charge them more. Revealing marital status on DonorsChoose caused donors give less to students with women teachers, in fields where women were a minority. Gender- and race-based harassment are only possible if people know a person’s gender and/or race, and real names often give strong indications around both of these categories. Requiring people to disclose that information forces those risks upon them.
He also points out that pseudonymity can be an important protective measure for harassment victims:
According to a US nationally-representative report by the Data & Society Institute, 43% of online harassment victims have changed their contact information and 26% disconnected from online networks or devices to protect themselves. When people do withdraw, they are often disconnected from the networks of support they need to survive harassment. Pseudonymity is a common protective measure. One study on the reddit platform found that women, who are more likely to receive harassment, also use multiple pseudonymous identities at greater rates than men.
In other words, one thing we can state about removing anonymity is that it increases the risk for people on the receiving end of online harassment.
Removing anonymity online, then, is yet another example of how we reflect our own biases in the decisions we make and the things we build.
It is our biases that lead us to overlook accessibility or how an application performs on a low-powered device or spotty network.
It is our biases that lead us to develop algorithms that struggle to recognize women’s voices or show more high-paying executive jobs to men than women.
And it is our biases that lead us to frame the problem of online behavior from that of the attacker. If we’ve never experienced persistent harassment, and if we don’t stop to talk to those who have, we end up with solutions that are dangerous for the people on the receiving end of that harassment—the ones who need protection the most.
In each of these situations, our biases don’t just lead us to build something that is hard to use; they cause us to actively, if unintentionally, exclude entire groups of people.
The new year is just behind us which means this is a time where a lot of people are reflecting on the past year, and on the things they want to improve on in the year to come. There are many worthy goals to pursue. There may be, however, no more important investment we can make in ourselves than to actively seek to broaden our perspective.
]]>As always, I enjoyed every book on this list at least a little—I don’t have the patience or desire to get through books that I’m not finding interesting in some way.
If I had to choose, I’d say my three favorite fiction reads of the year were: A Constellation of Vital Phenomena, A Monster Calls, and The Book Thief. For non-fiction, they would be The Road of Lost Innocence, Console Wars, and Evicted. Just to warn you, the only book out of those six that you should expect to finish without having lost any tears along the way is Console Wars. Apparently, I was really into emotionally-charged books this year.
- Creative Schools: The Grassroots Revolution That’s Transforming Education by Ken Robinson 4⁄5
Ken, as you would expect if you’ve read his prior books or watched his fantastic TED talk, is excellent at discussing complex topics in a compelling and memorable way. The book doesn’t go particularly deep in any one area (something Ken makes clear early on), but he does include ample notes and references to books and research if you would like a more detailed look at any one specific point. This book is a starting point for digging into the current issues around education and our testing culture. Digging into something by Diane Ravitch or Peter Gray’s Free to Learn would be a nice way to follow this up.
- Pricing Design by Dan Mall 5⁄5
No one in web design has done more work to help others run a business than Dan Mall. He’s written post after post and even published a podcast all about how to run your web studio successfully. Pricing feels like a taboo topic to many, which makes good information hard to come by. That’s not the case with Dan. In this short book, he dives deep into how he prices projects—providing all the nitty gritty details along the way. I’ll be buying more copies of this one so I can hand it out to friends—such a great resource!
- Written in Fire by Marcus Sakey 5⁄5
A very satisfying conclusion to a very fun trilogy.
- A Constellation of Vital Phenomena by Anthony Marra 5⁄5
I couldn’t put this one down. It’s not that it’s fast paced, but you quickly become very attached to the characters. It’s a beautiful story about ordinary people fighting for survival in post-war Chechnya. It’s not for the faint of heart—it’s difficult to read some of the horrors these people go through—but please don’t let that stop you. It’s an incredible novel!
- Caliban’s War by James Corey 4⁄5
Solid if not quite as spectacular as Leviathan Wakes.
- Abaddon’s Gate by James Corey 4⁄5
Another solid entry in the series, though Corey continues to follow a similar formula to the first book and it would be nice to see things shaken up a bit in book #4.
- The Index Card by Helaine Olen and Harold Pollack 3⁄5
There is a breed of personal finance books that remind me of the bad self-help kind of books—the kind we like to mock for being light on any real substance. I think Index Card kind of falls into that same trap. It reads well—it’s conversational and simple—but there’s not a ton of meat. Probably a decent gift to a student who is just starting to wade into the world of having to manage their own finances, but for anyone who has spent even a little time learning about personal finance, there’s nothing new here.
- Every Heart a Doorway by Seanan McGuire 4⁄5
The more I think about this quick read, the more I like it. You know how Alice falls down the rabbit hole and is exposed to a whole new world? Or the kids walk through the wardrobe and find themselves in Narnia? This book follows children like that—children who have seen some other world—and are now trying to deal with the sadness of being back in their own. It’s fun, dark and a bit creepy. Looking forward to the next book in the series.
- The Road of Lost Innocence by Somaly Mam 5⁄5
I can’t remember a more gut-wrenching, heartbreaking book. Some parts made me cry; others made me overcome by rage—this is not an easy book to get through, and I imagine that many people won’t be able to. It’s simply too intense. This isn’t to say that there’s not hope in here as well. Somaly’s rise from the conditions she was forced into, and her efforts to help others in the same situation, are deeply moving and powerful.
I do have to note that after reading the book, I saw that there was a report that Somaly fabricated much of her story. There are also conflicting reports that are just as compelling. Somaly has maintained that she has told the truth throughout. In some ways, I almost hope she hasn’t. I want to be able to believe that we aren’t capable of the cruelties she and the other girls she mentions had to live through.
However, perhaps it is my naivety and desire for her to be genuine, but given the significance of what she is trying to accomplish, I wouldn’t be the least bit surprised to see people trying to discredit her. So, I’m choosing to believe her. If you can stomach it, I highly recommend the book—it’s a powerful reminder of our remarkable ability to overcome even the worst conditions. It’s also a great reminder that we can make an impact:
I don’t feel like I can change the world. I don’t even try. I only want to change this small life that I see standing in front of me, which is suffering. I want to change this small real thing that is the destiny of one little girl. And then another, and another, because if I didn’t, I wouldn’t be able to live with myself or sleep at night.
- A Monster Calls by Patrick Ness 5⁄5
I hate to admit it, but I still occasionally find myself ignoring a novel when I see the “Young Adult” label attached to it. It’s a silly thing to do, as many incredibly gifted authors have pointed out time and time again.
A Monster Calls reminded me just how ridiculous this label is. It’s a beautiful, heartbreaking story of a boy with a terminally ill mother, and the monster he calls for. It’s a quick read, but don’t let the brevity fool you into thinking it’s lightweight. The story grips you by the heart. I read the last few chapters while sitting in an airport and that was probably a mistake—I was fighting back a flood of tears by the end. I read the ebook, but have since ordered the hardcover so that I can have all the beautiful illustrations that go with it.
- Chasing Perfection by Andy Glockner 3⁄5
There’s a great book to be written on this topic—the use of advanced data and analytics in the NBA—but this isn’t it. Most of the time he covers the use of analytics too briefly, instead spending time on somewhat related tangents. Andy does uncover a few interesting bits, but there’s not enough depth here.
- TED Talks by Chris Anderson 4⁄5
There’s some great advice in here. Sure, it follows a specific type of talk (as you would expect), but there’s a reason these talks have become so popular. Chris provides solid practical advice and gives plenty of examples (all from TED’s world of course) on how to execute.
- The Days of Tao by Wesley Chu 4⁄5
Another fun entry into the world of Tao. I do think Days of Tao is not quite as polished as the other entries in the Tao series—it feels a little less mature in tone which I suppose makes sense given that Cameron is moved front and center while Jill and Roen are barely involved. While not quite as strong as the other books, it’s a fun and quick way to dip back into this universe while we wait for The Rise of Io to be released.
- Steal Like an Artist by Austin Kleon 5⁄5
I think what I like most about Kleon’s books is that they’re inspirational without being pretentious. He doesn’t waste time on fluff; he just gets right to the point he wants to make in a brief and concise format, and always with beautiful graphics. A quick read with plenty of solid advice.
- The Book Thief by Markus Zusak 5⁄5
Another example of how the Young Adult label is mostly hogwash. The Book Thief follows a young girl in Germany during World War II, as narrated by Death himself. He frequently alludes to events to come later in the novel, only building your anticipation further. I like Zusak’s take on Death as well—he’s not some dark entity that takes great pleasure in his work. Instead, he’s sad as he watches the different ways in which people hurt each other. It’s a moving book with a story that will stick with you. Just don’t expect to make it through with dry eyes.
- Station Eleven by Emily St. John Mandel 4⁄5
I’m a sucker for a good post-apocalyptic novel, and this one fits the bill pretty well. It feels fairly familiar as far as these sorts of stories go, but with enough novelty to keep it fresh. I keep thinking of it as a cross between some Crash-like story where everyone’s story interconnects in interesting ways and The Road, though less grim.
My only real complaint is that I don’t think enough was done to build up the various characters. Kirsten was awesome, but most of the rest of the troupe all blurred together. There were a few moments that I think were supposed to be emotional heavy hitters that just didn’t seem to land. The one other character who I thought was fascinating, Miranda, was built up early only to let her storyline drop entirely from the second half of the book. Still, in spite of that, I did enjoy the book overall and based on other reviews, suspect this may have just been one of those times where the characters didn’t fully resonate with me for some reason.
- Coraline by Neil Gaiman 4⁄5
I’ll read anything Gaiman writes—he’s consistently excellent. This short read is no exception. It’s a brisk, creepy little story that keeps you glued to the pages. The only reason I’m not giving it five stars is that the brevity of the book made it much harder for me to get attached to the characters in the same way I typically do when Gaiman writes. He also dangled some amazing characters—the cat, the button-eyed mother—that leave you wishing you got to know them a bit better. It’s not a fault of the writing, there just isn’t enough time in this short to give these characters the level of detail I was used to from Neil. A good book, just not one of my favorites from him.
- The Long Way to a Small, Angry Planet by Becky Chambers 3⁄5
Character-driven science fiction, perhaps to a fault. It’s mostly a fun ride and each of the primary characters gets some time in the spotlight, so you get to know them all pretty well. But the plot is…well, barely there and very anti-climactic. It felt much more like a series of small build-ups resolved with minor reveals. Think of it more like a TV series, where the connecting thread is the characters but where each conflict is basically played out within a single episode. That’s not necessarily a bad thing, it just meant I didn’t get as engrossed in the story as I would’ve liked. It was promising enough as a first novel that I’ll read the second book, particularly since the synopsis I’ve read could be very interesting if done well.
- Die Hard, An Oral History by Brian Adams 3⁄5
A fun, if a little too brief and thinly detailed, behind-the-scenes look into the making of one of my favorite films.
- How to Lie With Statistics by Darrell Huff 5⁄5
When I was in HS, I remember my AP Calculus teacher telling us (probably in response to one of us quoting some stat we saw in the news): “Never trust statistics. They’re completely biased. If you know what you’re doing, you can make them say whatever you want.”
That’s basically what this book is about: how people can make statistics say whatever they want. Each chapter focuses on a different “technique” to be aware of with plenty of specific examples. These examples are dated (the book was written in 1954) but for each, I could easily think of similar examples I’ve seen recently.
It’s a statistics book with personality—not exactly a common find. Even if you don’t have a solid understanding of statistics as a starting point, you’ll be able to follow his clear and frequently humorous explanations.
With all the important discussions taking place in news outlets that are far more biased than anyone cares to admit, viewing stats through the critical lens Huff suggests is essential.
- The Manual Issue 5 5⁄5
The Manual continues to be consistently excellent. I certainly hope it returns from its hiatus, and that this issue eventually finds its way to physical form.
- Console Wars by Blake J. Harris 5⁄5
My relationship with both Nintendo and Sega was brief. We didn’t have consoles in our house, so I only played video games when I went over to my friends’ houses. We’d take turns playing Mario or Sonic, and when you died it was the next player’s turn. I was always the one whose turn was the shortest.
But I have fond memories of those games and figured this book might be worth a read. Wow, was it ever! I loved it. Absolutely loved it. There’s definitely a little bit of a SEGA slant here in terms of how much attention each company gets, but I do think you end up with a pretty good picture of both companies—what fueled them, how that influenced their approach to games and consoles, and the mistakes they made along the way.
I’ve seen some negative reviews about the approach Harris took to writing the story. He presents it as a novel: creating dialogue and segues just as you would in a fictionalized telling. To me, that’s why this worked so well. These are based on extensive discussions and interviews with the people discussed in the book, and having dialogue (as well as getting insight into some of the more personal aspects of their lives) moved this beyond the typical business book.
Highly recommend this book, regardless of your interest in video games.
- Show Your Work by Austin Kleon 5⁄5
More of the same from Kleon, in all the best ways. If you enjoyed Steal Like an Artist, you’ll enjoy Show Your Work.
- Evicted by Matthew Desmond 5⁄5
Incredibly thought-provoking and eye-opening book. The author profiles several different landlords and tenants in Milwaukee as they battle eviction for a variety of reasons. Some of the folks are more sympathetic than others, but the author consistently manages to paint a very clear picture of the problem—the odds are stacked against low-income people who find themselves struggling to pay rent from month to month.
My only critique is that I wish the author would’ve let us know up front that he lived with the people in this book to do his research instead of saving that bit for the very end. While reading, there were times I was wondering how he could possibly have that level of insight. Knowing this up front would have made the book even more impactful. Even so, powerful and heartbreaking.
- Cibola Burn by James Corey 3⁄5
Based on the plot alone, this would probably be my favorite since the first book. Unfortunately, some of the new characters are poorly fleshed out. Elvi, in particular, could have used a lot more depth in her arc. In its current form, it sells what should be a fantastic character incredibly short.
- The Gift of Gab by David Crystal 4⁄5
There’s some good content in here, but not enough to consider it to be an essential addition to the growing number of books about public speaking. The first part of the book is very similar to the kinds of information you’ll get from most other books about the topic, but the writing is pretty academic and dry. Where the book does shine is once it gets into rhetoric and when the author starts to analyze different speeches. There’s some great information in there that is not as frequently covered (from what I’ve seen at least) in books about speaking. If you’re a seasoned speaker or you’ve read a few books on the topic, skip to those sections. If you’re a newcomer, you’re better off starting with something by Garr Reynolds, Nancy Duarte or Lara Hogan’s book below.
- JavaScript for Web Designers by Mat Marquis 5⁄5
For years I’ve said DOM Scripting, despite its age, remains the gold standard for introducing designers to JavaScript. I can finally update that stance. Mat’s book is a fantastic and gentle introduction that provides the perfect starting point.
- Identity and Data Security for Web Development by Jonathan LeBlanc & Tim Messerschmidt 4⁄5
Pretty solid introduction to the topic.
- The Rise of Io by Wesley Chu 5⁄5
Chu simply has this world down pat. Ella is a lot of fun to root for and rather than follow the same formula of the first three books, an interesting twist gives this one a very fresh take. It’s going to be fun to see where this goes.
- Demystifying Public Speaking by Lara Hogan 5⁄5
The perfect starting point for anyone considering public speaking. Lara avoids prescribing any rules, instead focusing on giving you the information you need to feel comfortable giving it a go. I can’t imagine a better way to introduce people to public speaking.
- The Buried Giant by Kazuo Ishiguro 4⁄5
There are some very mixed reviews on this book, which is not at all surprising. It’s frequently presented as fantasy, but it’s not very fantastical. There’s a dragon and a few other elements of fantasy incorporated, but it’s very subdued. Likewise, if you’re expecting blistering action, you’re going to be disappointed. That’s not what this is. The story itself is pretty melancholy and the author takes his time telling it. The ending, as seems to be the case with Ishiguro’s books, is sad and haunting. But as is also par for the course for the author, the book is well-written and the story will linger with you.
- In Times Like These by Nathan Van Coops 5⁄5
- The Chronothon by Nathan Van Coops 5⁄5
- The Day After Never by Nathan Van Coops 5⁄5
Consider this my review of all three books in the series, which I tore through in no time. Sometimes books are just plain fun, and that’s the case with this series. The take on time travel is fresh enough to stand out from other stories, the characters are fun, and each book has its own distinct, fast-moving and often humorous plot. There’s no drop-off either. If anything, the writing gets stronger as the series goes on. Just plain fun.
- The Handmaid’s Tale by Margaret Atwood 4⁄5
Better never means better for everyone, he says. It always means worse, for some.
Bleak, haunting and full of beautiful prose. In many hands, a book this overtly political would drag and feel forced. But Atwood is exceptional in her story-telling. The result is powerful, thought-provoking and an essential read.
- Resilient Web Design by Jeremy Keith 5⁄5
Resilient Web Design is part principle, part history. If everyone working on the web read this, the web would be much closer to achieving the bold but worthy goal of universal access.
- The Bright Continent by Dayo Olopade 4⁄5
This book is optimistic, for sure. Olopade paints a “brighter” picture of Africa than what you’ll often read elsewhere. She doesn’t avoid discussing the difficulties faced in Africa, but she always follows that discussion with examples of people innovating in incredibly creative ways to overcome those challenges. Very well worth the read, if for no other reason than to make us reconsider what passes for “innovation” in the West.
- The Attention Merchants by Tim Wu 4⁄5
The Attention Merchants is a comprehensive survey of how the methods companies use to get our attention have evolved and how, in turn, we’ve slowly given more and more of our time and space to them. To me, the book shines in the earlier chapters, where Wu walks through the early days of advertising and how having ads piped into our homes via radio and TV went from taboo to accepted.
Of particular interest was how radio was initially deemed too important a medium to allow something as unseemly as advertising to mess it all up, for many of the same reasons we now think the internet is too important to be filled up with advertising. Wu quotes Herbert Hoover here:
It is inconceivable that we should allow so great a possibility for service, for news, for entertainment, for education, and for vital commercial purposes to be drowned in advertising chatter.
The later chapters, when Wu turns his attention to more current affairs (Twitter, Facebook, etc.), are less compelling to me. Perhaps because I’m more familiar with these platforms, they seemed a lot less insightful. When Wu talks about advertising breaking into, for example, radio, he provides a critical but reasoned take. When he talks about Facebook and the like, he becomes much more…angry, I guess, would be the word. The later chapters end up feeling both less complete and more emotionally charged.
All in all, though, it’s an enjoyable book. Seeing how history keeps repeating itself in this industry was fascinating if more than a little discouraging as well.
- Working the Command Line by Remy Sharp 5⁄5
I don’t consider myself an expert at using the command line, not by any stretch, but I’m relatively handy. Even still, I learned several new tricks from Remy’s book. It’s a really gentle introduction to the command line that teaches just enough to help you feel comfortable and confident enough to start working it into your workflow.
Past years
You have something you’re interested in, something you want to communicate to others. So you write. Or you give a presentation. Or you record a screencast. You do these things more and more, sharing what you learn and what you think.
Over time, as you get more comfortable, you learn how to improve. You learn how to write more effectively. You learn how to make your presentation resonate more deeply with an audience. One day, years later, you look up and you find out that you’re not the best—in fact you still have a lot of room for improvement, but you’re much better than you used to be.
This is the way you improve. This is the way it’s supposed to work.
Except that often, it doesn’t. There’s a lot of advice out there, mostly well-intentioned, about how to be better. Don’t use passive voice in your writing. Make your movements deliberate on stage. Avoid filler words.
It’s great that people share these tips. It’s all good advice and it will likely make you better at communicating.
But advice like this can also be intimidating if you focus too much on it from the beginning. In fact, I would argue there are two likely outcomes. The first is that you get so nervous about doing it right, about not messing up, that you decide it’s best not to try at all. Who needs that stress and criticism? The second likely outcome is that you decide to write or speak, but you’re so caught up in the mechanics of doing it correctly that your voice—what makes your contribution unique—gets lost in the shuffle.
Don’t get caught up in the mechanics too early. If you do, you’ll end up blindly adhering to rules that simply may not apply to you. People say don’t use animated GIFs in talks. Personally, I can use the occasional GIF at a specifically timed spot, but anything more isn’t me and people will know it. But that’s just me. I’ve seen some people give amazing presentations that were absolutely full of them.
The most important rule to follow when giving a talk or writing is to be yourself. I can learn just about any topic out there from a million different posts or talks. The reason I’m listening to you is because I want to hear your take. I want to know what you think about it, what you’ve experienced. More than anything, I want your authenticity. I want you to be you.
You’ll get better. You’ll learn how to communicate more effectively. You’ll learn how to make your message more powerful. You’ll learn the rules for more effective communication, and you’ll learn when and how you can break those rules to actually improve your message. But those things come later.
First, you find your voice.
The developers were overworked and the site had never gotten enough budget for the rebuild it needed. Granted, they could have stuck with the frameworks originally included, but the problem was that as each framework faded and gave way to the next one, the online ecosystem and community around it dried up and shriveled.
There’s a happy ending to this story. Eventually, jQuery was used and all the other frameworks were removed (talk about a big performance win!). jQuery never suffered from the same fate as the other frameworks the team had tried to use—its ecosystem only continued to grow and flourish as time went on.
Of course, the snarky side of me would be happy to point out that had they used good old-fashioned JavaScript, the problem would never have manifested in the first place.
That isn’t entirely fair though, is it? There’s a reason people build these tools. Tools exist because somewhere someone thought one would be helpful in some way. So they created it and they shared it. And frankly, that’s pretty darn awesome.
So maybe that’s why some people were a little upset when they read the post going around that pokes a little fun at the current state of learning JavaScript. The post was intended to be humorous (I laughed), but to some it felt like a critique of the ecosystem and those contributing to it.
To be clear, I don’t think that was the point of the article. The thing is, it’s not the ecosystem that’s the problem. It’s great that we have a plethora of options available to us. It beats the alternative. No, the problem is the way we’ve chased after each new tool that comes along and even more concerning to me, the way we teach.
Our industry loves tools, and not without reason. Tools help us to be more productive. They can automate the low-hanging fruit that is critical to the success of a project. They can abstract away tricky browser compatibility issues. They can free us up to focus on the bigger, more interesting challenges presented. Tools are generally a “Good Thing”.
Unfortunately, our love of tools has led to an unhealthy mentality where we constantly feel the need to seek out the next great tool to be released.
Build scripts are a fun example. Grunt came out and was really instrumental in getting the front-end community as a whole to give more serious consideration to having a formal build process. Just as people started to adopt it more frequently, the early adopters were already starting to promote Gulp as a superior option. As some developers tried to make the shift, still others jumped on Broccoli. For many, it seemed that just as they started to understand how to use what had been the new best option, an even newer best option would become available.
Mostly, I think the evolution is healthy. We should be iterating and improving on what we know. Each build tool does things a little differently, and different people will find that one or the other fits their workflow a bit better. The problem comes when we blindly race after the next great thing without stopping to consider the underlying problem that actually needs solving.
I don’t know exactly what fosters this mentality, but certainly the way we approach teaching JavaScript (and web technology as a whole) doesn’t help.
If you’ve ever tried to find resources about how to use vanilla JavaScript to solve a given issue, you’ll know what I’m talking about. It’s rare to find a post or talk that doesn’t throw a tool at the problem. A common critique in the early days of jQuery’s rise was that too many posts assumed the use of jQuery. You’ve likely heard similar critiques of using Sass to demo something where you could’ve demoed it using regular old CSS. When the fictional character in the previously mentioned post responds to a simple question with “you should learn React”, it may be a little contrived but it isn’t uncommon.
Just as each additional tool adds complexity to our development environment, each additional tool we mention when teaching someone about how to build for the web introduces complexity to the learning environment. That, I think, was the point of the post going around. Not that the ecosystem is flawed, not that the diversity of options is a bad thing, but that when someone wants to find an answer to a problem, the response they get frequently starts with “use this tool, then set this up”.
It’s ok—good even—to teach new tools that may be helpful. But when we do so, we need to be careful to present why these tools may be helpful as well as when they may not be. We need to be careful to separate the underlying issue from the tool itself, so that the two do not become conflated. Let people learn what’s going on under the hood first. Then they can make a determination themselves as to the necessity of the tool.
I’ve said it before, but the most valuable development skill to develop is not to learn Node.js. Or React. Or Angular. Or Gulp. Or Babel. The most valuable thing you can do is take the time to learn the core technologies of the web: the network stack, HTML, CSS and JavaScript. The core understanding of the web serves as your foundation when making decisions about tooling.
Those tools are useful in the right context, but you need to be able to understand what that context is. Whenever you come across an issue that needs solving, think about what the underlying problem actually is. Only once you’ve identified that should you consider whether you might want to use a tool to help you address the problem, and which tool that might be.
For the tool itself, there are a few things you might want to consider. Here’s what I tend to look at:
- Who benefits from the use of this tool and how? Someone has to benefit, or else this tool doesn’t really need to be here, does it? If you can’t articulate who is benefitting and how they’re benefitting, then it’s probably not a tool that needs to be used on this particular project.
- Who suffers and how? There is always a trade-off. Always. Someone is paying for the use of this tool in some way. You could be adding complexity to the development environment or, in the worst-case scenario, it could be your users who are paying the price. You need to know the cost so that you can compare it to the benefits and see if it’s worthwhile.
- How does it fail? I’m stealing this from the fine folks at Clearleft, but I love the way this frames the discussion. What happens when something goes wrong? Like it or not, the web is a hostile environment. At some point, for someone, something will break.
- Does the abstraction feed the core? If it’s a framework or library, does it help to strengthen the underlying core technologies in a meaningful way? jQuery, to me, is a good example of this. jQuery was a much friendlier way to interact with the DOM, and some of the work they did ended up influencing what you can do with JavaScript, and how that should work.
There may be more questions you want to ask (how active the community is, the number of contributors, etc), but I find this is a really good start to help me begin to think critically about whether or not it is worthwhile to introduce another tool into my current environment.
Very often, the answer is no. Which means that when you’re chatting with some developer friends and they’re talking about using this brand new framework inside of a new code editor released last week, you may have to politely nod your head and admit you haven’t really dug into either yet. That’s nothing to be ashamed of. There is power in boring technology. Boring is good.
Have you ever watched someone who has been using Vim for years work in it? It’s amazing! Some joke that the reason they’re still in there is because they haven’t learned how to quit yet, but I think they’re onto something. While some of us jump from tool to new tool year after year, they continue to master this “boring” tool that just works—getting more and more efficient as time goes on.
We are lucky working on the web. Not only can anyone contribute something they think is helpful, but many do. We benefit constantly from the work and knowledge that others share. While that’s something to be encouraged, that doesn’t mean we need to constantly be playing keep-up. Addy’s advice on this is absolutely spot-on:
…get the basics right. Slowly become familiar with tools that add value and then use what helps you get and stay effective.
Start with the core and layer with care. A rock-solid approach for building for the web, as well as for learning.
Since that time, Let’s Encrypt came out of beta and did a lot to really simplify the process of moving sites to HTTPS. I’m a big fan, as I’ve mentioned before.
But moving to HTTPS, while important, is just one tiny step in what it really takes to make sure that the people using our sites and applications are safe. If the web is really going to be secure by default, then we need many more tools and standards along a similar vein. We need security to be demystified.
Maybe that’s why when Guy was showing me the first incarnation of Snyk I was so impressed. He and his team had created a tool that focused on one part of the security equation—how to make sure you’re not unknowingly introducing vulnerabilities while using open-source code (focusing on Node initially)—and made addressing that part pretty trivial. Each feature they built on top only made me more and more impressed.
I found myself talking about Snyk casually to friends, each time seeing them respond with the same sort of enthusiasm I had the first time I used it. I’m not one to get super excited about tooling very often, but I do appreciate a well-built tool that makes important things easier.
After many conversations, coffees and other drinks, I decided to take the leap and join Snyk. I’m going to be starting and leading developer relations there. I’ll be rolling up my sleeves and getting my hands dirty with code quite a bit (I’ve got a long list of things I want to build)—something I’m looking forward to.
Several friends who I told about my move all asked the same question: “Does this mean you’re not going to focus on performance anymore?” The answer is: of course it doesn’t. You’re not getting rid of me that easily.
I’ve always considered myself a “web” person, not a “performance” person. I talk about performance so much because it interests me and I think it’s critical to the success of the web. I still do and am unlikely to stop thinking so anytime soon.
But of course security, too, is critical. Along with performance and accessibility, it’s one of those “unsexy pillars” Paul Lewis wrote about—unseen, yet critical.
The team at Snyk is doing important work—work that I want to help with. I’ve talked to them about what they have in mind for the future, and it’s pretty exciting. That, plus the fact that Anna promised to bake me cakes (I have a massive sweet-tooth), made this an opportunity I couldn’t pass up.
Last Friday was my last day at Akamai. Before I joined, I already had a tremendous amount of respect for Akamai. Leaving, I have even more. In a segment of our industry that I worry can be a little shortsighted at times, they continue to think bigger—investing in the web as a whole through standards and browser involvement. In addition, they are smart. I mean, really, really smart. There’s a reason they’ve been around as long as they have.
The only place to go after a company where you are surrounded by brilliant and passionate people is another company filled with brilliant and passionate people. Snyk’s team is absolutely top-notch and I’m looking forward to working with them to make it easier for the web to be secure by default.
You never know with taxis though. Sometimes, the driver will ask where I’m headed and then stay quiet the rest of the way—the two of us physically in the same car but mentally somewhere else entirely. Other times the driver will want to make small talk. We’ll talk about where we’re each from, what the weather is like back home, how many kids we have and how long I’ll be in town for.
Today, it turns out, is not going to be a quiet ride.
The driver—a middle-aged man—and I take turns talking about where we live, the weather, all the standard fare.
He asks if we play football where I’m from. Soccer. He corrects himself remembering I’m an American and we made up our sort of football just to be difficult. I tell him that yeah, actually, soccer is pretty popular at home. He asks if any area teams need a coach. I tell him I’m not sure.
He goes on to tell me how much he loves soccer. How he’s always loved playing it, coaching it. He tells me about how, the other day, he was practicing and offered to teach a few tricks to a twenty-something year old who was nearby. She said sure. She was good, but he started to show her a few things she hadn’t known. He challenged her to a race and won. He raced another twenty-something (her boyfriend if I remember correctly)—he a bit more confident in his abilities but in actuality not as good as the young woman. He beat him as well.
He tells me how there is a level of art to the game that most casual fans don’t appreciate. How if you go back and watch the greats, you see a sophisticated grace. He compares it to Steph Curry this year or Michael Jordan (who he believes is still the epitome of basketball perfection) and how they transcend the sport they play in—how they see things others don’t and move in ways others don’t.
He tells me he wants to coach soccer professionally someday. I smile and say that’s great, but internally I know that’s a long shot. I always wanted to coach basketball professionally. Everyone has a pie-in-the-sky dream like this at one point or another in their life, but that doesn’t mean they come true as planned.
Maybe sensing what I’m internalizing, he insists. He tells me he knows he will. He firmly believes that, in the United States, he can do anything. If he puts enough time and energy into it, and if he stays patient and focused, there is nothing he can’t accomplish here. It’s the same old American dream that we’ve heard many times—though I have to admit I haven’t heard it as often lately.
He elaborates. It turns out he believes this because he’s done it already.
The taxi driver’s name is Ahmedin Nasser. He moved to the United States from Ethiopia in 1985. At the time there were no freely available public libraries in Ethiopia. None. After graduating college, Ahmedin decided to change that. He and 12 friends organized Yeewket Admas with the goal of bringing free public libraries to Ethiopia.
He rounded up $15,000 in donations and sent a 40-foot container of books, 11 computers and 4 printers back home. His organization is responsible for at least eight different libraries in Ethiopia now.
He tells me he felt he needed to give back—that we all have a responsibility to do that. He’s a firm believer in a quote he once heard attributed to Albert Einstein: “The value of a man should be seen in what he gives and not in what he’s able to receive.”
He hands me a laminated newspaper clipping from an article that was written about him. He’s proud of this, and rightfully so.
Proud, but not content.
He’s currently working on the next step of his vision—ensuring that more libraries are set up in Ethiopia and that these libraries will be properly maintained and sustainable.
I ask why libraries…why books. He tells me it’s because books can transform people. He tells me that we take it for granted in the United States that we have free access to a wealth of knowledge. He goes on to talk about how much he loves books and that he believes that one of the most important things you can do for a young child is introduce them to the love of reading.
I mention The Reading Promise to him, and he asks me to write the title and author down so he can grab a copy. He starts to tell me about a young boy in Africa who couldn’t afford to go to school, yet through a book learned how to build a windmill to bring electricity to his village. I mention that the boy has written a book of his own (The Boy Who Harnessed the Wind), so I write that down for him too.
He thanks me and tells me that books are something he will never hesitate to indulge in. He says he’ll happily go a day without eating if it means he can buy a couple great books. He is a Muslim. Fasting is part of his religion so going a day or two without food is not a difficult thing to do—in fact he finds it revitalizing.
By the time we pull up to my hotel—45 minutes after stepping into this taxi feeling exhausted and worn down—I’m revitalized as well. I thank him for the amazing conversation and ask him if he minds if I type up some of it. He’s more than happy to let me. He says he wants everyone to know that someone ordinary can do extraordinary things.
In 2014 there was a research paper that concluded that people who interact with strangers when they’re traveling (whether by train, plane or taxi) are happier than those who do not. I’ve always had my doubts.
But at least on that day, in San Francisco, talking to Ahmedin Nasser—it was true.
But that’s the beauty of the web, isn’t it? It’s not just that anyone, anywhere can consume the information on the web—that’s fantastic and amazing, but it’s not the complete picture. What makes the web all the more incredible is that anyone, anywhere can contribute to it.
You don’t need to go through some developer enrollment process. You don’t need to use a specific application to build and bundle your apps. At its simplest, you need a text editor and a place to host your site.
That’s it. The rest is up to you. You can choose to use jQuery, Sass, React, Angular or just plain old HTML, CSS and JavaScript. You can choose to use a build process, picking from one of numerous available options based on what works best for you. Certainly everybody has their own opinion on what works best, but in the end it’s your choice. The tools are up to you.
That’s not the case with AMP as it stands today. While I’ve heard many people claim that the early concerns about tying better methods of distribution to AMP were unfounded, that’s the very carrot (or stick, depending on your point of view) that they’re dangling in front of publishers. There have been numerous rumblings of AMP content being given priority over non-AMP content in their search engine rankings. Even if this ends up not being the case right away, they have certainly emphasized the need for valid AMP documents in order to get into Google’s “search carousel”—something any publisher clearly would like to benefit from.
This differs from similar announcements in the past from Google about what they prioritize in their search ranking algorithms. We know they like sites that are fast, for example, but they’ve never come out and said “You must use this specific tool to accomplish this goal”. Up until now.
By dictating the specific tool to be used when building a page, Google makes their job much easier. There has been no simple way to verify that a certain level of performance is achieved by a site. AMP provides that. Because AMP only allows a specific set of elements and features to be used, Google can be assured that if your page is a valid AMP document, certain optimizations have been applied and certain troublesome patterns have been avoided. It is this verification of performance that gives Google the ability to say they’ll promote AMP content because of a better experience for users.
So when we look at what AMP offers that you cannot offer yourself already, it’s not a high-performing site—we’re fully capable of doing that already. It’s this verification.
Content Performance Policies
I’d like to see a standardized way to provide similar verification. Something that would avoid forcing developers into the use of a specific tool and the taste of “walled-garden” that comes with it.
There were several discussions with various folks around this topic, and the option I’m most excited about is the idea of a policy defined by the developer and enforced by the browser. We played around with name ideas and Content-Performance-Policy (CPP) seemed like the best option.
The idea is that you would define a policy using dedicated directives (say, no hijacking of scroll events) in either a header or meta tag. The browser could then view this as a “promise” that the site adheres to the specified policy (in this case, that it doesn’t hijack any scroll events).
If the site then tried to break its promise, the browser would make sure that it cannot (e.g. ignore attempts to cancel the scroll event). An embedder, such as a search engine or a social network app, can then be certain that the “promises” provided by the developers are enforced, and the user experience on the site is guaranteed not to suffer from these anti-patterns.
CPP directives could also be used to control what third parties can do on a given site, as well as a way for third parties to provide guarantees that they will “behave”. This way, content owners can be sure that the user experience will not contain obvious anti-patterns even if the page is pulling in scripts and content from a large number of arbitrary third-party sources.
CPP could borrow from the concept and approach of the already existing Content Security Policy (CSP). This means that there would likely be a reporting-only mode that would allow sites to see the impact the policy would have on their pages before applying it live.
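To make that a bit more concrete, here’s a rough sketch of what a policy might look like. Keep in mind the directive names below are made up purely for illustration; nothing about the syntax is final at this stage:
Content-Performance-Policy: no-scroll-hijack; no-sync-scripts; report-uri /cpp-reports
Content-Performance-Policy-Report-Only: no-scroll-hijack; no-sync-scripts; report-uri /cpp-reports
<meta http-equiv="Content-Performance-Policy" content="no-scroll-hijack">
The first form would enforce the policy, the second would only report violations (mirroring CSP’s report-only mode), and the meta tag variant would be there for anyone who can’t easily touch their response headers.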
CPPs would free developers up to use their own tools again and avoid limiting them to the specific subset of web technologies that AMP imposes. Because it uses a set of definable policies versus a specific framework, there is much more flexibility in how browsers and apps choose to enforce and promote content. For example, an app may choose to look for a certain set of policies that would work best for its context, while Google may prioritize an entirely different set of policies when deciding how a page should be ranked in their search engine. It’s far more extensible.
You could also imagine smarter content blockers that let through ads and other third-party content guaranteed to be fast and not interfere with the user experience, while blocking third-party content without these guarantees. That would allow us to avoid the centralized model of things like the Acceptable Ads program, while providing a standard way to get the same benefits.
So…what happens to AMP?
There are too many smart people building AMP to let all that good work go to waste. If we decouple the distribution benefits away from the tool, then suddenly AMP becomes a framework for performance—something it is far better suited to. Developers could choose to use AMP, or a subset of its features, to help them accomplish their performance goals. The difference is that they wouldn’t be forced to use it. It becomes an option.
I’m working with Yoav Weiss to create a formal proposal for CPPs that can be shared and built upon. There’s an extremely early draft up already, if you would like to take a look. We’ve discussed the idea with numerous people from browsers and publishers and so far the feedback has been positive. People like the more standardized approach, and publishers in particular like that it feels more open and less like something they’re being forced into.
The idea of CPP is still young and nearly all discussion has happened behind closed doors. So this is us putting it out publicly to get people thinking about it: what works, what doesn’t, what could make it better.
I like the work AMP has done from a technical perspective, and I love the ambitious goal of fixing performance on the web. Let’s find a way of accomplishing these goals that doesn’t lose some of the openness that makes the web so great in the process.
What is HSTS?
While the SSL certificate is a big boost for security in its own right, there is still a potential hole if you are redirecting HTTP content to HTTPS content.
Let’s say someone tries to request wpostats.com (diagrammed below). They may type it into the URL bar without the protocol (defaulting the request to HTTP), or they may have it bookmarked from before it used HTTPS. In this case, the browser first makes the request to the server using a non-secure link (step 1). The server then responds by redirecting the browser to the HTTPS version instead (step 2). The browser repeats the request, this time using a secure URL (step 3). Finally, the server responds with the secure version of the site (step 4).
Diagram of a non-secure request being redirected to HTTPS
Trying to load a secure asset using a non-secure URL exposes a gap in security.
During the initial exchange, the user is communicating with the non-encrypted version of the site. This little gap could potentially be used to conduct a man-in-the-middle attack and send the user to a malicious site instead of the intended HTTPS version. This gap can occur every single time that person tries to access an asset using a non-secure URL.
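For reference, the redirect in step 2 is typically just a couple of lines of server configuration. Here’s a minimal Apache sketch (using wpostats.com as the example, the same as the rest of this post; adjust it for your own domain and server):
<VirtualHost *:80>
    ServerName wpostats.com
    # Send any plain HTTP request over to the HTTPS version of the same URL
    Redirect permanent / https://wpostats.com/
</VirtualHost>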
HTTP Strict Transport Security (HSTS) helps to fix this problem by telling the browser that it should never request content from your site using HTTP. To enable HSTS, you set a Strict-Transport-Security header whenever your site is accessed over HTTPS. Here’s the line I added to my virtualhost configuration for wpostats.com:
Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
With this line added to my server configuration, any asset served over HTTPS will include the following header:
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
The header will only be applied if sent over HTTPS. If it’s sent over HTTP it’s unreliable (an attacker could be injecting/removing it) and so the browser will choose to ignore it. As a result, that initial redirect still has to take place. The difference is that now, after the browser requests the content using a secure URL, the server can attach the HSTS header telling the browser to never bother asking for something over HTTP again, sealing off that vulnerability for any future access from that user. As an added bonus, with the redirect out of the way we get a little performance boost as well.
The header has three options (each of which were used in my example above):
Strict-Transport-Security: max-age=expireTime [; includeSubDomains] [;preload]
max-age
The max-age parameter is mandatory and specifies how long the browser should remember that this site is only supposed to be accessed over HTTPS.
The longer the better here. Let’s say you set a short max-age of one hour. The user accesses your secure site in the morning and the browser now seals up the vulnerability. If they then go to your site using a non-secure URL in the afternoon, the Strict-Transport-Security header is outdated, meaning the vulnerability is wide open again.
Twitter sets their max-age to a whopping 20 years. I chose two years for mine, which most likely says something about me having commitment issues or something.
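For the curious, the two-year value used in the header earlier breaks down like this:
max-age = 60 × 60 × 24 × 365 × 2 = 63,072,000 seconds (two years, ignoring leap days)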
includeSubDomains
The includeSubDomains parameter is optional. When included, it tells the browser that the HSTS rules should apply to all subdomains as well.
preload
Some of you may have noticed a kink in the HSTS armor. If a user has a fresh local state for any reason (the max-age has expired, or they haven’t visited the site before) then that first load is still vulnerable to an attack until the server has passed back the HSTS header.
To counter this, each major modern browser has a preloaded HSTS list of domains that are hardcoded in the browser as being HTTPS only. This allows the browser to request only HTTPS assets from your domain without having to wait for your web server to tell it to. This seals up that last little kink in the armor, but it does carry some significant risk.
If the browser has your domain hardcoded in the HSTS list and you need it removed, it may take months for the deletion to make its way out to users in a browser update. It’s not a simple process.
For this reason, getting your domain included in the preload list requires that you manually submit the domain and that your HSTS header includes both the includeSubDomains parameter and this final preload parameter.
Does the Let’s Encrypt client enable HSTS?
The Let’s Encrypt client can enable HSTS if you include the (currently undocumented) hsts flag.
./letsencrypt-auto --hsts
The reason why it’s not enabled by default is that if things go wrong HSTS can cause some major headaches.
Let’s say you have HSTS enabled. At some point something (pick a scary thing…any scary thing will do) goes wrong with your SSL configuration and your server is unable to serve a secure request. Your server cannot fulfill the secure request, but the browser (because of the HSTS header) cannot request anything that is insecure. You’re at an impasse and your visitor cannot see the content or asset in question. This remains the case until either your SSL configuration is restored or the HSTS header expires. Now imagine you’re running a large site with multiple teams and lots of moving parts and you see just how scary this issue could be.
Because of this risk, HSTS has to be an option that a user must specify in Let’s Encrypt—despite its importance.
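One small escape hatch worth knowing about (this is standard HSTS behavior, not anything specific to Let’s Encrypt): if you can still serve the header over a valid HTTPS connection, sending it with a max-age of zero tells returning browsers to forget the policy for your host:
Header always set Strict-Transport-Security "max-age=0"
It won’t help visitors whose browsers can’t complete a secure connection to you at all, so it’s a partial mitigation at best.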
Room for improvement
That’s not to say the process couldn’t be improved. The GUI version of the client currently asks you a variety of questions as you set up your certificate. One of those questions asks if you would like to redirect all HTTP traffic to HTTPS.
An example of the Let’s Encrypt GUI asking the developer to decide whether to make everything HTTPS or keep HTTP around.
If the developer decides to redirect all HTTP traffic to HTTPS, I would love to see the very next question be: “Would you like Let’s Encrypt to set up HSTS?”, probably with a warning encouraging the developer to make sure they have all content on HTTPS.
Defaults matter and most people will stick with them. HSTS is important and HTTPS is…well…incomplete without it. If we’re serious about HTTPS Everywhere then we need to be just as serious about enabling HSTS as we are about making sure everyone is serving content over HTTPS. Finding a way to encourage its use whenever possible would go a long way towards boosting security on the web, as well as adhering to one of the primary principles of the Let’s Encrypt initiative (emphasis mine):
Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping site operators properly secure their servers.
As I did in the past, I’ve included a rating and short review of each book I’ve read to give both you and me some idea of why I enjoyed each book. You’ll notice that no book has a rating below three stars out of five—that’s because if I am not enjoying a book on some level, I discard it. Life is too short to spend reading books that aren’t interesting.
If you forced me to choose, I’d have to say my three favorite fiction books were: All the Light We Cannot See, Crime and Punishment and Leviathan Wakes. My three favorite non-fiction books were: They Poured Fire on Us From the Sky, How Music Got Free and So You’ve Been Publicly Shamed.
- The End of Absence by Michael Harris 4⁄5
I found myself nodding my head in agreement quite frequently while reading this meditation on the way technology is slowly but surely filling in anything that vaguely resembles a void in our “busyness”. There was one point early on in the book where I worried the author was about to get a little too over-the-top in his critique and concerns, but as it turns out, he comes to a pragmatic conclusion at the end, arguing that while every technology can alienate us from some part of life, it is our job “to notice”.
- Right Ho, Jeeves by P. G. Wodehouse 3⁄5
I was torn on how to rate this book. There are sections that are very good and laugh-out-loud funny (particularly much of the quick back-and-forth banter between Jeeves and Bertie) but most of the time the book seemed to drag along. Judging by the overwhelmingly positive reviews of this book (and Wodehouse in general), I’m willing to wager that perhaps I just wasn’t in the right mood for the story.
- All the Light We Cannot See by Anthony Doerr 5⁄5
All the Light We Cannot See is a wonderfully written novel following two primary characters—one a young blind French girl named Marie-Laure and the other a young German boy named Werner—as they struggle through World War II. In particular, the accounts of Werner’s time in the Hitler Youth were heartbreaking & moving. But don’t let the heartbreak turn you away—there is a lot of genuine beauty in this story as well. A wonderful, wonderful book.
- Pressed for Time: The Acceleration of Life in Digital Capitalism by Judy Wajcman 4⁄5
This is a really interesting take on the topic of technology and its influence on the widespread feeling of not having enough time in the day. While most books on the topic place the blame directly on the technology itself, Wajcman digs much deeper. Early on, she points out that “temporal demands are not inherent to technology. They are built into our devices by all-too-human schemes and desires.” In other words, to really understand how technology is impacting this feeling of busyness, we need to look beyond the technology itself and see what other factors are contributing.
While the writing is certainly quite dry (sort of par for the course for a lot of university-based publishers), the ideas are fresh, nuanced and well thought out. I do think that referring to studies conducted prior to smartphones to establish how people use mobile phones was a little short-sighted. However, in the end I support the conclusion: “busyness is not a function of gadgetry but of the priorities and parameters we ourselves set.”
- How About Never—Is Never Good for You?: My Life in Cartoons by Bob Mankoff 4⁄5
While labeled as a “memoir”, Mankoff’s book is actually quite light on the “memoir” side of things. Which is ok by me. Where the book really flourishes is in the discussion of the cartoon process: how the cartoons are created, how they are chosen, etc. In fact, I would have loved to get a little more detail and depth on that side of things. As it is, How About Never is a quick and humorous look at cartoon creation. Mankoff’s writing is very informal and rife with the kind of humor you would expect from a cartoon editor. Combined with the plethora of cartoons included, it makes for a very fun book to read.
- They Poured Fire on Us From the Sky: The True Story of Three Lost Boys from Sudan by Benjamin Ajak, Benson Deng, Alephonsion Deng & Judy A. Bernstein 5⁄5
Unbelievable. What these three boys went through, the courage they showed—it left me speechless on so many occasions. They battled hunger, thirst, lions, hyenas, war, prejudice—and they had to do it day in and day out, year after year. This is a hard book to read, but an important one.
- To Kill a Mockingbird by Harper Lee 4⁄5
It’s easy to see why so many people like this book. It’s well-written and the ideas it is trying to get across are important. I mean, when reading this you can’t help but be inspired by Atticus and the principles that guide his decisions.
The only criticism I really have is that I feel like the story would have benefited from a bit more…friction, I guess. The characters are all pretty shallow: the good people are really good, the bad are really bad. Perhaps that has something to do with the age of the narrating character. But it feels like, given the important topics discussed, there should have been a bit more depth somehow.
That sounds harsh considering the 4 stars, but that’s only because I’m comparing it to the high praise the book is given and the lofty expectations that go with it. The truth is I did enjoy the book quite a bit—I was just hoping for a little more.
- An Unwelcome Quest by Scott Meyer 4⁄5
I loved getting to spend a bit more time with some of the other characters in this one—Martin is in more of a complementary role. The story got a little bogged down for a couple chapters in the middle, and that’s the only thing stopping me from calling this my favorite in the series. So much fun to read!
- 11/22/63 by Stephen King 4⁄5
I haven’t read a lot of Stephen King books. In fact, prior to this one, I’ve only read the Dark Tower series. So I can’t speak to the quality of his books that lie more firmly in the “horror” category. But from what I’ve seen, while King gets his fair share of criticism, the truth is he’s a gifted writer with the ability to paint a story so vividly that you get lost in it. 11/22/63 is a great example. There’s some unnecessary fat in the middle (the book probably could have been about 100-150 pages shorter), but he builds the suspense and intrigue masterfully throughout. It’s an interesting take on time travel and a true page-turner with an ending that you can’t help but be moved by.
- Memoirs of a Muppets Writer: (You Mean Somebody Actually Writes That Stuff?) by Joseph A. Bailey 4⁄5
There is so much to like about this book. I loved all the behind the scenes stories Bailey tells. The insight into the process of writing for the Muppets, Sesame Street and comedy in general are pure gold.
The only reason this doesn’t get 5 stars is because of the poor quality of the book itself. The text changes sizes for no reason at all throughout the book. I started reading the paperback but ended up buying the Kindle version to avoid the bizarre text issues. Whoever did the editing also did not do a good job—and I say this as someone who usually doesn’t notice those sorts of things. I hope someone republishes this at some point with a little more attention given to it because the actual substance here is wonderful.
- How We Got to Now: Six Innovations That Made the Modern World by Steven Johnson 4⁄5
How We Got to Now builds off the ideas from Johnson’s Where Good Ideas Come From and further hammers home the point that good ideas do not come from lightbulb moments but from what Johnson calls the Hummingbird Effect. For each chapter, Johnson focuses on a different innovation (glass, cold, sound, etc) and shows you how it is connected to innovations you hadn’t considered (from printing press to selfies, for example). If you’re familiar with Johnson’s previous writing you won’t be too surprised by the conclusions here—but you’ll enjoy the winding paths you take to get there.
- Mixed Nuts: America’s Love Affair with Comedy Teams from Burns and Allen to Belushi and Aykroyd by Lawrence J. Epstein 3⁄5
Mixed Nuts is itself a bit of a mixed bag. At times the discussion felt overly abstract. I also felt like he stretched his own definition of “comedy team” a bit too thin in order to include more contemporary pairings—Cheech & Chong, Belushi & Aykroyd, etc.—that didn’t seem to match the term quite as well. And I do think that the wide breadth of coverage held the book back a bit—he’s at his best detailing the success of acts like Laurel & Hardy and Abbott & Costello, and I found myself wishing he didn’t have to move on so quickly. Still, it was interesting to read about the progression of the comedy team and how successive acts built upon their predecessors, and Epstein provides several sharp insights throughout the book. Flawed, but a decent introduction.
- The Pleasures of Reading in an Age of Distraction by Alan Jacobs 4⁄5
Jacobs’ book is in sharp contrast to many of the “how to read” books that are out there. The idea of a prescriptive list of books you “should read” appalls Jacobs, and he spends a good amount of time arguing for reading based on your whim instead. Ironically, in arguing against many books that tell you how to read, he sort of ends up doing the same—just from a different perspective. But there’s a lot here that gets you thinking—his disdain of reading lists, his arguments that most of us read too fast. That, and the many interesting anecdotes along the way, make it an enjoyable book.
- Enchanted Objects: Design, Human Desire, and the Internet of Things by David Rose 4⁄5
Rose’s book is a very clearly organized look at how widespread and cheap computing could impact objects from our everyday lives. Many of his ideas are tied directly back to abilities from science fiction and fantasy, which does offer an interesting perspective. The book doesn’t quite get into the underlying design principles enough for my taste (it’s aimed at a more general audience, I’m guessing) and there are certainly a few gimmicky examples, but overall it does get you thinking about the potential of the “internet of things” in a different light.
- The Manual Issue 4 5⁄5
Another fantastic issue of The Manual with the typical level of high quality writing all in a beautifully put together book.
- The Rebirths of Tao by Wesley Chu 5⁄5
I loved this series so much! As with the first two books, there’s plenty of laugh-out-loud moments somehow mixing perfectly with tense action. Seeing the dynamics shaken up a bit after the end of book two brought a new level of depth to the characters—in fact, this may be the author’s finest work in terms of character development in the series.
I’ve heard he’s working on another trilogy with new characters but set in the same universe—I cannot wait!
- Free to Learn: Why Unleashing the Instinct to Play Will Make Our Children Happier, More Self-Reliant, and Better Students for Life by Peter Gray 4⁄5
I’ll openly admit I picked up this book looking for evidence for things I already believed. Gray’s book fit the bill very well. He doesn’t just think compulsory schooling—full of worksheets and testing—is ineffective, he builds a case that it is actively harmful and can’t hold a candle to the way we learn naturally: through play.
In the early chapters, he builds the case extremely well. He provides the data, provides a counterpoint, and then the data to dismiss the counterpoint. Unfortunately, a few of the later chapters aren’t quite as thorough and rely a little more on anecdotal evidence. Still, I can’t imagine anyone coming away from this book unconvinced that one of the best things we can do to improve the state of education today is move away from our current model and allow kids more time to play and experiment so that they don’t just learn better, but develop a love of doing so.
- Crime and Punishment by Fyodor Dostoyevsky 5⁄5
Everyone talks about this book so I figured it was about time I give it a read. True to what I had heard, it’s a fabulous book. I am still not sure how Dostoyevsky made such a cruel main character also somehow sympathetic, but he does just that. As is the case with many of the great novels, Crime and Punishment is a rich book with many pauses in the main narrative for philosophical and historical discussions. Some may not enjoy the slower pace, but I find the side discussions fascinating—they add so much more nuance to our understanding of the characters and how they think. I highly suspect that this is one of those books that you can read over and over, picking up new details each time.
- Never Let Me Go by Kazuo Ishiguro 5⁄5
Beautiful and haunting story that has a lot more to say than you realize at first glance.
- Apex by Ramez Naam 4⁄5
A solid finish for a very solid and thought-provoking trilogy. I think Naam’s writing improved as the series went on, though at times book #3 felt a little too sprawling—like there were too many bit characters in play to keep straight.
Still, as was the case with the first two books, Apex forces you to consider the vast implications of pervasive technology that is not as far off as we may think (which was once again backed by a chapter at the end where Naam discusses the real-life technology influencing the book).
- Losing the Signal: The Untold Story Behind the Extraordinary Rise and Spectacular Fall of BlackBerry by Jacquie McNish & Sean Silcoff 4⁄5
RIM is one of the most infamous stories in tech: a company that rose to the very top only to get so stuck in their current vision that they couldn’t see the changes happening around them that would lead to their demise. This is a well-written and engaging account of the rise and fall of RIM and makes for a very nice starting point for understanding what mistakes were made and more importantly, why.
- Shades of Grey: The Road to High Saffron by Jasper Fforde 5⁄5
Shades of Grey takes a little bit to get going. Fforde carefully and meticulously builds up this world and the characters in it. But man, the pay-off is SO worth it. The more you learn about the world, the more you get sucked into it. The writing is great, the story is compelling, the characters are vividly brought to life and the world is completely unique. My only disappointment was in finding that book 2 is not out yet (and it’s been 6 years)! Can’t wait to see how the rest of the story plays out.
- Time Salvager by Wesley Chu 3⁄5
The plot often felt a little too familiar—like I had been through many of these same sorts of scenes and situations before. Yet I still ended up enjoying it a bit. That I did is a testament to Chu who is very good at mixing action and fun. It’s not nearly as strong as his Tao books, but the potential is there for it to take off in the second book.
- Speak by Louisa Hall 3⁄5
Speak starts off strong and the concept has a ton of potential, but it ended up falling a little short. The writing is pretty solid, but I don’t think the various narratives worked together as well as they could have. It felt like there was something important to be said here but the book never quite got around to saying it.
- How Music Got Free: The End of an Industry, the Turn of the Century, and the Patient Zero of Piracy by Stephen Witt 5⁄5
Thoroughly enjoyed this! Witt weaves the story of three central figures—the creators of the MP3, one of the most well-known and successful music executives, and one of the most prolific “leakers”—together to create a fascinating look at how digital music (and piracy) revolutionized the music industry.
- Impro by Keith Johnstone 4⁄5
Impro is not just a solid introduction to improvisation, but an important look at how current educational systems tend to drive away creativity and what we can do to bring it back. The chapter on Masks felt a bit odd at times, and parts of the book drag a little, but there’s plenty of food for thought here.
- So You’ve Been Publicly Shamed by Jon Ronson 5⁄5
I admit that there’s some confirmation bias at play here: I’ve increasingly felt like we are so quick to raise our online pitchforks without stopping to consider what the possible outcome might be. In fact, if I had my way, before you signed up for any social media site you would be required to read this book. Ronson’s style of writing makes you feel like you’re taking the journey right alongside him as he moves from idea to idea, trying to make sense of shaming and its merits and risks. I’m not 100% sold on all the conclusions, but his take is always well articulated and gets you thinking more critically about how you interact with others online.
I also appreciated Jon’s ability to look himself in the mirror and acknowledge his own faults, as well as his own privileges that lessen (though not eliminate) the risk of experiencing the same degree of shaming experienced by some in the book.
I’d love to read a follow-up that dovetails off some of the ideas expressed in the final chapter about feedback loops and the echo chambers created by social media, as I feel like that is key to understanding why we interact the way we do on Twitter, Facebook and their kin.
One note: the book gets a little intense and explicit at parts, so if that’s not something you’re comfortable with, you may want to find another take on the topic.
- Mindwise by Nicholas Epley 4⁄5
This was a humbling book to read. Epley walks through all the ways we “think” we understand where others are coming from when in reality we understand very little. The research mentioned is a bit light in parts, but overall Mindwise does a great job of discussing a very important topic.
- The Boy Who Harnessed the Wind: Creating Currents of Electricity and Hope by William Kamkwamba & Bryan Mealer 4⁄5
This simply written book provides a fascinating account of not just one boy’s curiosity and drive, but also of what it’s like to grow up in a small African village (which is actually what the majority of the first half of the book is about). Well worth a read.
- Curious: The Desire to Know and Why Your Future Depends On It by Ian Leslie 4⁄5
Leslie breaks curiosity down into two forms: diversive (shallow; Googling the capital of Australia) and epistemic (deeper; reading books about Australia’s history and economy). His book focuses on epistemic curiosity: why it matters and what we can do better to encourage it. I found Leslie’s analysis to be pretty nuanced and I loved how curiosity wasn’t framed so much as a trait of a person, but instead as a choice. Though I don’t agree with all of his conclusions, especially some of those around education, Curious works as a good overview of the topic of curiosity with plenty of recommendations for where to dig deeper.
- Building a Device Lab by Destiny Montague and Lara Hogan 5⁄5
Full review here. The short version: Lara and Destiny wrote a wonderful little guide to setting up a device lab that works equally well for companies of all sizes. They walk through everything you could possibly want to know and more.
- Little Rice: Smartphones, Xiaomi, and The Chinese Dream by Clay Shirky 3⁄5
I typically really enjoy Shirky’s writing, but this one was a little subpar. While the topic itself is fascinating, it felt like Shirky kind of threw this one together a little too quickly—the connections between the main topic and his tangents were tenuous. There are a few interesting tidbits scattered throughout, but overall the discussion felt a bit shallow.
- Using WebPageTest by Rick Viscomi, Andy Davies & Marcel Duran 5⁄5
Full review here. The short version: With all the power WebPageTest provides, there was a huge need for a comprehensive guide to getting the most out of it. Now we have one—a very good one at that. No matter how much (or how little) you think you know about WebPageTest, you’ll walk away from this book with a few new tricks up your sleeve.
- Adaptive Web Design, 2nd Edition by Aaron Gustafson 5⁄5
Full review here. The short version: Adaptive Web Design should be one of the first books on the shelf of anyone building for the web. Showing a deep understanding of the web, Aaron manages to cram nearly 20 years of insight into a book that is an absolute pleasure to read. I dare you to try and read this book without a highlighter handy.
- Beacon 23 by Hugh Howey 4⁄5
Another solid book from Hugh Howey, though I do think it falls a little short of the lofty bar set by Wool and Sand. Still, a gripping story of a man battling to maintain sanity while also having to make moral and ethical decisions with very serious consequences.
- Going Responsive by Karen McGrane 5⁄5
Full review here. The short version: Karen’s book isn’t going to get super technical—she’s approaching the topics from a higher level which means the audience of people who would benefit from reading this is pretty broad. Going Responsive needs to be read by anyone planning to build a responsive site—designers, developers and (perhaps especially) management.
- Designing for Touch by Josh Clark 5⁄5
Full review here. The short version: I love how Josh weaves seamlessly back and forth between the why and the how: here’s why this is the case, now here’s a practical way for you to design based on that knowledge. The book ends up being a mini-master class about designing for touch and gestures.
- Responsive Design: Patterns & Principles by Ethan Marcotte 5⁄5
Full review here. The short version: When I grow up, I want to write as well as Ethan does. His style of writing is just so pleasant: conversational, informative and entertaining. He also, as it turns out, knows a little bit about this whole responsive design thing. Ethan pulls from a ton of experience to write an extremely useful book.
- Ancillary Mercy by Ann Leckie 4⁄5
A satisfying conclusion to what was a really solid trilogy. The pace picks up here after the slower second book and I found I had a hard time putting the book down. My only critique is that the final conflict felt a tad anti-climactic after such a great build-up. If you like the Ancillary series, you’ll enjoy this finale as well, as it has all the same things that have made the other books so good: great dialogue, smart writing, and plenty of tea.
- The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive by Brian Christian 4⁄5
Once a year, a bunch of people get together to judge a bunch of chatterbots on their ability to pass the Turing test. The judges talk to a mix of bots and real people and try to figure out who is who. One of the awards handed out is for the “Most Human Human”—the person who was most easily identifiable as a human being based on their chats. Christian sets out to win that prize in 2009, and the result is this thought-provoking book about the way we reason, the way we communicate and the complexities of language.
The book is a few years old so some of the bots he praises seem quite poor, but that’s really secondary to the more interesting philosophical discussion to be had here (as well as a nice little anecdote around why those who think philosophy is useless are already philosophizing).
- Leviathan Wakes by James S. A. Corey 5⁄5
Leviathan Wakes is one heck of a fun read. Corey avoids most of the faults that frequently bog down long, space-opera novels to create a book that’s a page turner from start to finish. He doesn’t beat you over the head with the science, but instead puts more focus on creating characters you care about. The result is a fast-paced novel that is part science-fiction, part detective story. It’s the first in a long series of books, so if you’re afraid of long-term commitment, you may want to look elsewhere.
Past years
]]>Apple’s Web
I’m good for a heat of the moment rant about either standards or Apple (often both) every couple years. This year, it was about Apple’s influence over the standardization process after some fallout around the Pointer Events specification.
Client-side MVC’s Major Bug
If your client-side MVC framework does not support server-side rendering, that is a bug. It cripples performance, limits reach and reduces stability.
Choosing Performance
In light of Facebook’s Instant Articles feature and FlipKart’s announcement about leaving the web (something they’ve since reversed their stance on), I wrote a post about why the issue with poor performing sites has nothing to do with technical limitations. Performance is a decision. We actively choose whether to devote our time and energy to improving it, or to ignore it and leave it up to chance.
Taking Let’s Encrypt for a Spin
A lot of folks have been very vocally pushing for “HTTPS Everywhere”, and for good reason. Unfortunately, moving to HTTPS can be kind of painful. Let’s Encrypt hit public beta and I walked through using their tool to simplify the process.
AMP and Incentives
Google announced the AMP Project and while I appreciated the focus on performance, I feel their approach (use this specific framework as a way to build a faster version of your existing page and get some enhanced distribution options as result) puts the incentive in the wrong place. I’m still not overly fond of the approach and hope we can find a more standardized solution.
Other years
]]>Unfortunately, moving to HTTPS can be kind of painful as you can see from Jeremy Keith’s excellent post detailing exactly how he got adactio.com onto HTTPS. He pinpoints the major obstacle with HTTPS adoption at the end of his post:
The issue with https is not that web developers don’t care or understand the importance of security. We care! We understand!
The issue with https is that it’s really bloody hard.
Let’s Encrypt—a new certificate authority from the Internet Security Research Group (ISRG)—has been promising to help with this, pledging to be “free, automated and open”.
They just announced public beta today, so I decided to give the beta version of their system a try on wpostats.com. Like Jeremy’s blog, WPO Stats is housed on a Digital Ocean virtual machine running Ubuntu 14.04 and Apache 2.4.7.
Getting Let’s Encrypt installed
The first thing I had to do was get the Let’s Encrypt client installed. To do this, I logged into the WPO Stats server and followed the instructions on the Let’s Encrypt repo.
First I grabbed the repo using git:
git clone https://github.com/letsencrypt/letsencrypt
Once git had done its magic and pulled down the Let’s Encrypt client, I needed to actually install it. To do that, I navigated to the newly created letsencrypt directory and then ran the Let’s Encrypt client with the help flag enabled.
cd letsencrypt
./letsencrypt-auto --help
This does that scary-looking thing where it downloads a bunch of different dependencies and gets the environment set up. It went off without a hitch and after a few moments it completed and told me I was ready to begin.
Obtaining and installing a certificate
The install process was smooth, but I was bracing myself for the actual SSL setup to be a bit more painful. As it turns out, I didn’t have to worry.
To run the client and get my certificate, I ran the same command without the help flag:
./letsencrypt-auto
This popped up a pleasant little GUI (Figure 1) that walks you through the rest of the process. The first screen was a warning.
No names were found in your configuration files. You should specify ServerNames in your config files in order to allow for accurate installation of your certificate. If you do use the default vhost, you may specify the name manually. Would you like to continue?
Figure 1: First screen of the letsencrypt client GUI banner.
In this case, I only use the server for WPO Stats—nothing more. This means that, yes, I use the default vhost. I selected ‘Yes’ and moved along. Where this might be different is if you were hosting multiple domain names on a single server. For example, if I ran this site on the same server, I may have virtual hosts set for both timkadlec.com and wpostats.com and would need to have that specified in my config files.
The next three prompts were straightforward. I had to enter my domain name, my email address, and then accept the terms of service. I’ve always liked easy questions.
After that, I was prompted to choose whether I wanted all requests to be HTTPS, or if I wanted to allow HTTP access as well. I had no reason to use HTTP for anything, so I selected to make everything secure.
Figure 2: GUI screen for choosing to make everything HTTPS or keep HTTP around.
And, well, that was it. The next GUI prompt was informing me I was all set and that I should probably test everything out on SSL Labs.
Figure 3: Final screen of the letsencrypt GUI informing me I was victorious.
I checked the site, and everything was in working order. I ran the SSL Labs test and everything came back a-ok. For once, it really was as simple as advertised.
I felt like trying my luck so I went through the process again for pathtoperf.com and, again, it went through without a hiccup. All told it took me about 10 minutes and $0 to secure both sites. Not bad at all.
Going forward
The improvement between the obnoxiously complicated process Jeremy had to suffer through and the simplified process provided by Let’s Encrypt is absolutely fantastic.
I don’t want to mislead you—there’s work to be done here. I don’t know that every server is set up to be quite as smooth as the Apache process, and without root access to the server you still have to go through some manual steps.
UPDATE: It looks like Dreamhost is going to allow customers to generate and enable Let’s Encrypt certificates from the control panel. Hopefully other hosting providers will follow suit.
But that’s where they’ll need you. Try it out on your own servers and test sites and if you run into difficulties, let them know. I’m really optimistic that with enough feedback and input, Let’s Encrypt can finally make HTTPS everywhere a less painful reality.
]]>I hadn’t read very many industry books this year, but the second half of the year was absolutely bursting with great options and I couldn’t resist. Here is a list of the ones that I’ve found time to read and highly recommend.
Adaptive Web Design, 2nd Edition by Aaron Gustafson
But the second edition of Adaptive Web Design isn’t just a minor update—it’s a completely new take on the topic. While I would have been hard pressed to imagine it happening, Aaron somehow managed to write an even better guide to progressive enhancement.
You see, being told a specific way to code—a specific technique or snippet—can have some short-term value. But what’s more important is thinking about the underlying philosophy and the values that guide those decisions. While techniques come and go, those guiding principles persist. Understanding them at a deep level will help guide you as things change, helping you to make appropriate decisions about how to wield new technology as it emerges.
That’s what Aaron provides here. While there are some specific examples of how you could layer enhancements onto your site, most of the book is focused on helping you understand the underlying principles of progressive enhancement—principles that will help guide your decisions long after you’ve read about them.
I contributed an early quote about the book after I read it through which sums up my thoughts much more concisely than these last few paragraphs:
Adaptive Web Design should be one of the first books on the shelf of anyone building for the web. Showing a deep understanding of the web, Aaron manages to cram nearly 20 years of insight into a book that is an absolute pleasure to read. I dare you to try and read this book without a highlighter handy.
The book isn’t out until early December, and you should absolutely pick up a copy when it’s available. I can’t recommend it highly enough.
Using WebPageTest by Rick Viscomi, Andy Davies & Marcel Duran
With all the power WebPageTest provides, there was a huge need for a comprehensive guide to getting the most out of it. Now we have one—a very good one at that. The book walks through how to read waterfalls (useful for any performance tool), how to test for single-points-of-failure (SPOF), how to use APIs to drive WebPageTest, how to set up a private instance and much more (including some undocumented power features).
No matter how much (or how little) you think you know about WebPageTest, you’ll walk away from this book with a few new tricks up your sleeve.
Going Responsive by Karen McGrane
Karen is one of those people who has a really wide range of knowledge about what it takes to design and build sites. Going Responsive demonstrates this very clearly. Karen uses her plethora of experience to provide practical advice throughout the process of a responsive project—from selling a responsive project all the way through testing and measuring its impact.
Unsurprisingly, I’m particularly fond of the chapter about emphasizing performance in a responsive project. She does a great job of walking through building a case for performance and how to get started with a performance budget.
Karen’s book isn’t going to get super technical—she’s approaching the topics from a higher level which means the audience of people who would benefit from reading this is pretty broad. Going Responsive needs to be read by anyone planning to build a responsive site—designers, developers and (perhaps especially) management. As Karen points out in the introduction, successfully implementing a responsive design requires much more than design and development. It “requires a new way of solving problems and making decisions.” This book is a wonderful guide to help you make that shift.
Responsive Design: Patterns and Principles by Ethan Marcotte
When I grow up, I want to write as well as Ethan does. His style of writing is just so pleasant: conversational, informative and entertaining.
He also, as it turns out, knows a little bit about this whole responsive design thing. Ethan pulls from a ton of experience to write an extremely useful book here. Using real-world examples, he walks you through common patterns for navigation, images and video—even advertising.
He takes the time to carefully analyze each potential solution, exposing the benefits and disadvantages of each. Approaching the topic this way makes sure that you not only walk away with concrete ideas for how to navigate some of responsive design’s trickier bits, but you thoroughly understand all the potential trade-offs.
It’s a perfect complement to Karen’s book, not to mention Ethan’s original book on the topic and Scott’s excellent book from last year.
Building a Device Lab by Destiny Montague and Lara Hogan
Lara and Destiny wrote a wonderful little guide to setting up a device lab that works equally well for companies of all sizes. They walk through everything you could possibly want to know—and probably more. Few people consider, for example, how to keep things charged appropriately, and that topic gets its own chapter here.
When I was working with companies, before we started doing any design and development I often helped them get a solid device lab in place. Had this book been around then I would’ve been handing it out to every single one of them.
Designing for Touch by Josh Clark
First off, a critique: while there are many touch-based puns in this book, I did not see a single pun based on Neil Diamond’s Sweet Caroline. That feels like a missed opportunity and frankly I’m a little disappointed.
Putting aside my love for Neil Diamond, the rest of Josh’s book is spot-on. I love how Josh weaves seamlessly back and forth between the why and the how: here’s why this is the case, now here’s a practical way for you to design based on that knowledge.
The book ends up being a mini-master class about designing for touch and gestures. You learn about the ideal locations for making controls easier (and harder in some cases) to get to, how to help with discoverability, how to minimize response times and how to rethink traditional input types to make things easier for people and their “clumsy sausages” (Josh’s words, not mine).
The level of knowledge here is impressive—Josh knows this stuff inside and out, and he manages to explain the topic in a way that is both concise and fun.
]]>A few years ago there was a story of incentives gone wrong that was making the rounds. The story was about a fast food chain that determined customer service was an important metric that they needed to track in some way. After discussion, they determined that the time it took to complete an order in the drive thru seemed to be a reasonable proxy.
So they set a goal: all drive thru orders needed to be completed within 90 seconds of the car’s arrival at the window. They had a timer visible to both the customer and the server. If the timer went over 90 seconds, the time would be recorded and then reported back to corporate headquarters.
There were some rather silly and unintended side effects. One of the most absurd happened when a customer informed the server that part of their order was missing. The server had the customer first drive forward a few feet, and then back up to the window. This way, the timer reset and it wouldn’t be flagged as a slow order in the reports.
It’s silly. But the incentives being applied encouraged this sort of…let’s call it creativity. The incentives were intended to encourage better customer service, but by choosing the wrong method of encouragement, they influenced the wrong kind of change.
Yesterday morning the Accelerated Mobile Pages (“AMP”) Project was announced to a loud chorus of tweets and posts. The AMP Project is an open source initiative to improve performance and distribution on the mobile web. That’s a very fancy way of saying that they aim to do for the web what Facebook Instant Articles does for…well…Facebook.
I’ll be completely honest: when I first started reading about it I was viewing it as basically a performance version of the Vanilla JS site. A “subset of HTML”, no JavaScript—it sounded very much like someone having a little too much fun trolling readers. It was only after seeing the publishers who were associated with the project and then looking at the GitHub repo that I realized it was a real thing.
AMP provides a framework for developers to use to build their site with good performance baked in—not entirely unlike Bootstrap or Foundation does for responsive design.
To build a “valid AMP page” you start by using a subset of HTML (carefully selected with performance in mind) called AMP HTML. You also use the AMP JavaScript library. This library is responsible for loading all external resources needed for a page. It’s also the only JavaScript allowed: author-written scripts, as well as third party scripts, are not valid.
If you want to load resources such as images, videos or analytics tracking, you use the provided web components (branded as AMP Components).
By enforcing these conditions, AMP retains tight control of the loading process. They are able to selectively load things that will appear in the initial viewport and focus heavily on ensuring AMP pages are prerender and cache friendly. In return for having this level of granular control, they give up browser optimizations like the preloader.
To further help achieve the goal of “instant load”, Google is offering to provide caching for these AMP pages through their CDN. It’s free to use and publishers retain control of their content.
The result is pretty impactful. The AMP Project is reporting some rather significant improvements for publishers using the AMP pages: anywhere from 15-85% improvement in Speed Index scores when compared to the original article.
So from a performance standpoint, the proposition is pretty clear: buy into AMP’s tools and approach to development and in return you’ll get a fast loading page without all the hassle of actually, you know, optimizing for performance.
There’s not anything particularly revolutionary about this. The Google caching is notable in that it is free, but other than that it appears to be nothing more than what any CDN can do for you. You can build your sites to be prerender and cache friendly. You can limit your use of JavaScript. You can carefully select your HTML and write your CSS with the goal of performance in mind. You can do all these things all by yourself (and in fact you should be doing all of these things).
There is also nothing too exciting about the claim that using a subset of the web’s features will improve your performance. Kill JavaScript on any traditional article page out there and you’ll likely see very similar returns.
The advantage that AMP has over anyone else who might try to make similar claims is that AMP provides clear incentive by promising better methods of distribution for AMP content than non-AMP content.
The distribution model is slightly more fuzzy at the moment than the performance impact, but with a little imagination you can see the potential. The AMP Project is promising a much-needed revenue stream for publishers through soon to be added functionality for subscription models and advertising. Google, for its part, will be using AMP pages in their news and search products at the very least.
There’s a demo of what the search implementation could potentially look like. (Addy Osmani was kind enough to make a video for those who can’t see the demo in their region.)
The demo is definitely impressive (provided your article uses “valid AMP HTML”). AMP pages get pulled into a nicely formatted carousel at the top of the search results and pages load instantly when tapped on. It’s exactly the kind of performance I would love to see on the web.
Google does claim they have no plans at the moment to prioritize content that is on AMP pages, but how many of us are going to be surprised to see an implementation like this go live?
AMP has given performance a “paint-by-numbers” solution. The project has also drawn a very clear line from point A to B: do this, and here’s what we’ll do for you.
As a result they get to do an interesting thing here: they get to suggest a big, fat “reset” button and have people take them seriously.
This is something Eric Meyer saw coming with the rise in ad blocking.
Feels like content blockers are a two-decade reset button, sending us back to 1995 when nobody was sure how to make money publishing online.
No question that’s scary, but it’s also an opportunity. We can look at what we got wrong in the last 20 years, and try something different.
It’s kind of a unique moment. How often does an entire industry get an almost literal do-over?
AMP is experimenting with what a do-over would look like. Start fresh. Take all the baggage we’ve been adding, remove it, and then try to collectively come up with something better.
If anyone had suggested hitting “reset” a month ago, I would have found it to be an interesting thought experiment. I may have even gotten a little bit excited about the idea. So why is it that now that it’s here, I find it a bit unsettling?
I think it comes down to incentives.
If you can build a site that performs well without using AMP, then what does AMP offer us? Certainly convenience—that’s the primary offering of any framework. And if AMP stopped there, I think I’d feel a little more at ease. I actually kind of like the idea of a framework for performance.
It’s the distribution that makes AMP different. It’s the distribution that makes publishers suddenly so interested in building a highly performant version of their pages—something they’re all capable of doing otherwise. AMP’s promise of improved distribution is cutting through all the red tape that usually stands in the way.
This promise of improved distribution for pages using AMP HTML shifts the incentive. AMP isn’t encouraging better performance on the web; AMP is encouraging the use of their specific tool to build a version of a web page. It doesn’t feel like something helping the open web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the web.
That troubles me. Using a very specific tool to build a tailored version of my page in order to “reach everyone” doesn’t fit any definition of the “open web” that I’ve ever heard.
Getting rid of the clutter on the web and improving performance is a very worthy and important goal, as is finding ways to allow publishers on the web to have a consistent revenue stream without derailing the user experience.
But they should be decoupled. Provide tooling to improve performance. Provide a model and method for producing a revenue stream and improving distribution. You can encourage better performance by factoring that into the distribution model, but do that for performance in general—not just performance gained by using a specific set of tools.
There’s a smart team behind AMP and I do think there’s value in what they’re doing. I’m hopeful that, eventually, AMP will evolve into something that really does benefit the web as a whole—not just a specific version of it.
]]>We have the mass migration to HTTPS. There’s HTTP/2, which provides the first major update to HTTP in over 15 years. Alongside that we have Google’s QUIC, which could provide a significant reduction in latency. Service workers bring a programmable proxy to the browser. We have more focus than ever on motion design on the web. Improved performance metrics have shifted the discussion to more experience-based optimizations such as optimizing for the critical path. We have the shift to ECMAScript 6. The list goes on and on.
It’s very exciting. But it can also be stressful.
The other day I tweeted about my excitement about some of these new standards. Shane Hudson was the first reply:
Quite worryingly, some of those words are gobbledegook to me. Looks like I have some research to do!
That sense of worry is something that seems to be widespread in our industry. Arguably the most common question I’ve heard at events over the last few years—whether directed to myself, another speaker, or simply discussed over drinks at the end of the night—is how people “keep up”. With everything coming out there is a collective feeling of falling behind.
Some have blamed it on increasing complexity but I don’t really buy that. My first few sites were simple (and ugly) things I put together using Notepad and an FTP client while teaching myself HTML using a little magazine I bought. If I were just getting started today that same setup would work just as well. In fact, it would probably be easier as the baseline of browser support has generally improved and frankly, there are a ton of excellent resources now for learning how to write HTML, CSS and JavaScript.
I didn’t think much about accessibility or performance or semantic markup or visual design when I started. I just used what little I knew and learned to build something.
Over time as I learned more and more about the web, I started to recognize the extreme limitations of my knowledge. I realized accessibility was important and that I needed to learn more about that. I learned that performance was important. I learned that typography was important.
And so I dug in and tried to learn each. The more I learned, the more I realized I didn’t know. It’s the Dunning-Kruger effect in full force.
No, I don’t think the complexity of building for the web has changed. I think our collective understanding of what it means to build well for the web has and that as that understanding has deepened, we’ve become acutely aware of how much we individually still do not know.
I certainly have improved as a developer since I first started. Yet everything I’ve learned has exposed a dozen more topics I know nothing about. The list of things I don’t know about the web grows as fast as my well-intentioned “read-it-later” list, so how do I prioritize and figure out what to explore next?
Susan Robertson had some solid advice on coping with this in her article on A List Apart a year ago:
So I’ve started devoting the time I have for learning new things to learning the things that I like, that matter to me, and hopefully that will show in my work and in my writing. It may not be sexy and it may not be the hottest thing on the web right now, but it’s still relevant and important to making a great site or application. So instead of feeling overwhelmed by code, maybe take a step back, evaluate what you actually enjoy learning, and focus on that.
I completely agree with her stance on learning about what interests you, but I would add one small bit of advice to this as well: when in doubt, focus on the core. When in doubt, learn CSS over any sort of tooling around CSS. Learn JavaScript instead of React or Angular or whatever other library seems hot at the moment. Learn HTML. Learn how browsers work. Learn how connections are established over the network.
The reason for focusing on the core has nothing to do with the validity of any of those other frameworks, libraries or tools. On the contrary, focusing on the core helps you to recognize the strengths and limitations of these tools and abstractions. A developer with a solid understanding of vanilla JavaScript can shift fairly easily from React to Angular to Ember. More importantly, they are well equipped to understand if the shift should be made at all. You can’t necessarily say the same thing about an INSERT-NEW-HOT-FRAMEWORK-HERE developer.
Building your core understanding of the web and the underlying technologies that power it will help you to better understand when and how to utilize abstractions.
That’s part one of dealing with the rapid pace of the web.
The second part is letting go and recognizing that it’s ok not to be on the bleeding edge.
In another fantastic A List Apart post today, Lyza Danger Gardner looked at Service Workers and the conundrum of how you can use them today. As she points out, for all the attention they’ve received online, support is still very limited and in several cases incomplete. While I think Service Workers have a simpler migration path than many other standards—the whole API was built from the ground-up to be easy to progressively enhance—I think her nod to the hype versus the reality of support is important.
Service workers are one of those potentially seismic shifts on the web. New uncharted territory. And that brings excitement which in turn has brought a lot of posts and presentations about this new standard. For people who have seen all of this chatter but haven’t actually dove in yet, it can feel like they’re quickly falling behind.
But for all that hype, browser support is still in the early days. Building with service workers is still living on the edge—it’s pretty far from mainstream. The same is true for many of the technologies that are seeing the most chatter.
That doesn’t mean you don’t want to pay attention to them, but it does mean you don’t need to feel left behind if you haven’t yet. These are very new additions to the web and it will take time for our understanding of their potential (and their limitations) to develop.
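If you do want to start experimenting, the good news is that the progressive enhancement story really is straightforward. Here is a minimal sketch (the /sw.js path is a hypothetical file on your own origin, standing in for whatever worker script you write); browsers without support simply skip it and carry on:
// Only attempt registration where the API exists; every other browser
// skips this block entirely and gets the regular experience.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(function (registration) {
      console.log('Service worker registered with scope:', registration.scope);
    })
    .catch(function (error) {
      // A failed registration is not fatal; the site keeps working without it.
      console.log('Service worker registration failed:', error);
    });
}
If the browser never runs it, nothing is lost, which is exactly why the hype doesn’t need to translate into anxiety.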
As Dan McKinley has eloquently argued, there is a great deal of value in forgoing life on the bleeding edge and instead choosing “boring technology”—technology that may not be as “cool” but that has been around awhile. The major advantage is that the kinks have been worked out:
The nice thing about boringness…is that the capabilities of these things are well understood. But more importantly, their failure modes are well understood.
Bleeding edge technology is exciting, but there is a reason that phrase is so vivid.
If you were to ask me, “Tim, how do you keep up?” my answer would be this: I don’t. I don’t think any of us do. Anyone who tries telling you that they are keeping up with everything is either putting up a front or they’re not yet knowledgeable enough to be aware of how much they don’t know.
No matter how much time we spend working on the web, there is always some other API or tool or technique we haven’t used. There is always something we haven’t fully understood yet.
We’re blessed with a community full of people willing to share what they are learning about creating a vast knowledge base for us to tap into. We don’t need to know everything about the web. In fact, we can’t know everything about the web.
But that isn’t something to feel guilty about. That isn’t because of increasing complexity. That isn’t some sort of personal weakness.
It’s a sign of a deepening understanding of this incredible continuum we get to build and an honest acknowledgement that we still have so much left to learn.
]]>In the original story of the Wizard of Oz, the Emerald City isn’t actually green nor made entirely of emeralds. All of that came later. In the original story, before entering the city each person had to put on a pair of glasses. These glasses, they were told, would protect them from the bright glow of all the emeralds that would surely damage their sight. These glasses were attached and never removed. You wore them while eating, while going to the bathroom, while walking outside—you wore them everywhere and all the time.
This was all a ruse. The glow of the city wouldn’t damage anybody’s sight because there was no glow. That all came from the glasses which just happened to be tinted green. Through the lens of those glasses, everything glowed. The lens through which those people viewed their world shaped their perception of it.
I’d venture to say that most developers and designers are not big fans of proxy browsers—assuming they pay attention to them at all. They don’t behave in ways a typical browser does, which leads to frustration as we see our carefully created sites fall apart for seemingly no reason at all. And frankly, most of us don’t really need to use them on a day-to-day basis. Through the lens we view the web, proxy browsers are merely troublesome relics of a time before the idea of a “smartphone” was anything other than a pipedream.
But our view of the web is not the only view of the web. People all over the world face challenges getting online—everything from the cost of data and poor connectivity to religious and political obstacles. In these environments proxy browsers are far from troublesome; they are essential.
So while most of us building for the web have never used a proxy browser (outside of the quick spot check in Opera Mini, perhaps), they remain incredibly popular globally. Opera Mini, the most popular of all proxy browsers, boasts more than 250 million users. UC, another popular proxy browser, boasts 100 million daily active users and is the most popular mobile browser in India, China and Indonesia.
These browsers perform optimizations and transcoding that can provide significant improvements. Several proxy browsers claim up to 90% data savings when compared to a typical browser. That’s the difference between a 2MB site and a 200kb site—nothing to sneeze at.
To understand how they accomplish this—and why they behave the way they do—we first need to revisit what we know about how browsers work.
Typical Browser Architecture
A typical modern browser goes through a series of steps to go from the URL you enter in your address bar to the page you ultimately see on your screen. It must:
- Resolve the DNS
- Establish TCP connection(s) to the server(s)
- Request all the resources on a page
- Construct a DOM and CSSOM
- Build a render tree
- Perform layout
- Decode images
- Paint to the screen
That’s a very simplified list and some of them can happen in parallel, but it’s a good enough representation for the purpose of highlighting how proxy browser architecture differs.
We can break these steps out into two general buckets. Steps 1-3 are all network constrained. How quickly they happen, and the cost, depends mostly on the characteristics of the network: the bandwidth, latency, cost of data, etc.
Steps 4-8 are device constrained. How quickly these steps happen depends primarily on the characteristics of the device and browser: the processor, memory, etc.
Proxy browsers intercede on behalf of the user in an attempt to reduce the impact of one, or both, of these buckets. You can broadly classify them into two categories: browsers with proxy services, and remote browsers.
Browsers with proxy services
The first category of proxy browsers are really just your plain-old, everyday browser that happens to offer a proxy service. These browsers alter the typical browser behavior only slightly, and as a result they provide the least benefit for end users as well as—usually—the least noticeable impact on the display and behavior of a web site. (While not really tied to a browser, look at Google’s search transcoding service for an example of how substantially a proxy service could alter the display of a page.)
Instead of requests being routed directly from the client to the web server, they are first routed through some intermediary layer of servers (Google’s servers, UC’s servers, Opera’s servers, etc). This intermediary layer provides the proxy service. It routes the request to the web server on behalf of the client. Upon receipt of the request, it sees if there are any optimizations it can provide (such as minification, image compression, etc) before passing back the potentially altered response to the client.
The browser-specific behavior (steps 4-8) remains the same as the typical browsers you’re used to testing on. All of the optimizations that take place focus primarily on reducing the impact on the network (steps 1-3).
There are many examples but at the moment of writing some of the more popular options in this category are Google’s Data Compression tool (Flywheel), UC Web’s Cloud Boost, and Opera Turbo.
Remote browsers
Remote browsers push the limits a bit more. They aggressively optimize as much as possible, providing a much larger benefit for the end user, but also a lot more trouble for developers. (If that bothers you, try to remember that proxy browsers exist because users need them, not because developers do.) These are the browsers you more typically think of when hearing the term “proxy browser”. With the increase in browsers offering proxy services, I think referring to these as remote browsers can be a helpful way of distinguishing them.
Unlike their more conservative brethren, remote browsers are not content to merely make a few optimizations on the network side of things. They’ve got more ambitious goals.
When a website is requested through a remote browser, the request is routed through an intermediary server first before being forwarded on to the web server. Sounds familiar right? But here’s where remote browsers start to break away from the traditional browser model.
As that request returns to the server, instead of the intermediary server routing it back to the client, it proceeds to request all the subsequent resources needed to display the page as well. It then performs all parsing, rendering, layout and paint on the intermediary server. Finally, when all of that is taken care of, it sends back some sort of snapshot of that page to the client. This snapshot does not consist of HTML, CSS and JavaScript—it’s a proprietary format determined by whatever the browser happens to be.
That’s why calling them “remote browsers” makes so much sense. The browser as we know it is really contained on the server. The application on the phone or tablet is nothing more than a thin-client that is capable of serving up some proprietary format. It just so happens that when it serves that format up, it looks like a web page.
The most important thing to remember for remote browsers is that because all they are doing is displaying a snapshot of a page, anything that might change the display of that page requires a trip back to the server so an updated snapshot can be generated. We’ll discuss that in more detail in a later post as the implications are huge and the source of most proxy browser induced headaches.
There are many options, but Opera Mini, UC Mini and Puffin are some of the more popular.
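To make the round-trip behavior concrete, consider the kind of script most of us would treat as purely client-side. This is just an illustrative sketch; the class names are hypothetical:
// A simple menu toggle. On a typical browser this runs on the device and the
// change appears instantly. On a remote browser, the tap gets sent to the
// intermediary server, the page is re-rendered there, and a new snapshot is
// sent back, so even this tiny interaction can cost a full round trip.
var button = document.querySelector('.menu-toggle');
var nav = document.querySelector('.site-nav');

if (button && nav) {
  button.addEventListener('click', function () {
    nav.classList.toggle('is-open');
  });
}
On a slow or expensive connection, that round trip is the difference between an interface that feels instant and one that appears to hang.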
What’s up next?
Understanding the basic architecture of proxy browsers makes testing on them so much easier and far more predictable. It’s the key to understanding all of the atypical behavior that causes so many developers to cringe whenever they have to fire up a proxy browser for testing.
With the foundation laid, we can spend the next several posts digging deeper into the specific optimizations the two categories of proxy browsers make as well as consider the implications for developers.
]]>There are a lot of unpredictable layers here.
I have no control over the network. It could be fast, it could be slow, it could be down entirely.
I have no control over the end device. It could be a phone, a laptop, an e-reader, a watch, a tv. It could be top-of-the line or it could be budget device with low specs. It could be a device released the other day, or a device released 5 years ago.
I have no control over the client running on that device. It could be the latest and greatest of modern browsers. It could be one of those browsers we developers love to hate. It could be a proxy browser. It could be an in-app browser.
I have no control over the visitor or their context. They could be sitting down. They could be taking a train somewhere. They could be multitasking while walking down the street. They could be driving (I know). They could be color-blind.
The only thing I control is my server environment. That’s it. Everything else is completely unpredictable.
So when I’m building something, and I want to make it robust—to make it resilient and give it the best chance it has to reach across this complicated mess full of unpredictability—I want to take advantage of the one thing I control by letting my server output something usable and as close to working as possible. That doesn’t mean it’s going to have the same fidelity as the ideal experience, but it does mean that provided there’s a network at least there’s an experience to be had.
From there I want to do whatever I can to provide offline support so that after that first visit I can reduce some of the risk the network introduces.
I want to apply my JavaScript and CSS with care so that the site will still work and look as good as possible, no matter how capable their browser or device.
I want to use semantic markup to give clients as much information as possible so that they can ensure the content is usable and accessible.
I want to build something that’s lightweight and fast so that my content gets to the visitor quickly and doesn’t cost them a fortune in the process.
I want to ensure that content is not hidden from the visitor so that they can get what they came for no matter their context.
Of course there’s some nuance here in the details, and assumptions will naturally be made at some point. But I want to make as few of those assumptions as possible. Because every assumption I make introduces fragility. Every assumption introduces another way that my site can break.
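To make that a little less abstract, here is a minimal sketch of what applying JavaScript with that kind of care can look like. The enhance() function and the 'enhanced' class are hypothetical stand-ins for whatever a site layers on top of its working, server-rendered HTML:
// The page already works before this script runs. Enhancements are only
// layered on when the browser demonstrates support for what they need.
function enhance() {
  // Hypothetical placeholder: richer navigation, lazy-loaded images,
  // offline support via a service worker, and so on.
  document.documentElement.className += ' enhanced';
}

if ('querySelector' in document && 'addEventListener' in window) {
  enhance();
}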
We used to call that progressive enhancement, but I know that’s become a bit of a loaded term for many. Discussions online, and more recently at EdgeConf, have confirmed this.
I’m not sure what we call it now. Maybe we do need another term to get people to move away from the “progressive enhancement = working without JS” baggage that distracts from the real goal.
We’re not building something to work without JavaScript. That’s a short-sighted definition of the term. As both Paul Irish and Kyle Simpson pointed out during EdgeConf, it puts the focus on the features and the technology. It’s not about that.
It’s about the users. It’s about finding ways to make our content available to them no matter how unpredictable the path that lies between us and them.
]]>Mobile web articles can take an average of eight seconds to load, by far one of the slowest parts of the Facebook app. Instant Articles provides a faster and richer reading experience for people in News Feed.
Now before we wring our hands too much over this, it’s worth noting that the articles themselves still start on the web. Facebook just becomes a distribution platform. Here’s the exact statement from their FAQs (emphasis my own):
Instant Articles is simply a faster, mobile-optimized way to publish and distribute stories on Facebook, and it supports automated content syndication using standards like HTML and RSS. Content published as Instant Articles will also be published on the publishers’ websites.
From Facebook’s perspective this is a no-brainer. It keeps the content within Facebook’s environment, which is one less reason for Facebook’s users to ever leave the app or site. In addition, we have numerous case studies showing that improved performance improves engagement. So Facebook creating a way to display content—very quickly and within their own little garden—makes absolute sense for them.
What I find interesting is the emphasis on speed. There are a few interesting interactive features, but speed is the selling point here. Facebook is pushing it very, very hard. “Fast” is scattered throughout their information about Instant Articles, and emphasized very heavily in the promotional video.
I’m all for fast as a feature. It makes absolute sense. What concerns me, and I think many others based on reactions I’ve seen, is the fact that Facebook very clearly sees the web as too slow and feels that circumventing it is the best route forward.
Here’s the thing: they’re not entirely wrong. The web is too slow. The median SpeedIndex of the top 1000 websites (as tested on mobile devices) is now 8220 according to HTTP Archive data from the end of April. That’s an embarrassingly far cry from the golden standard of 1000.
And that’s happening in spite of all the improvements we’ve seen in the last few years. Better tooling. Better browsers. Better standards. Better awareness (at least from a cursory glance based on conference lineups and blog posts). Sure, all of those areas have plenty of room for improvement, but it’s entirely possible to build a site that performs well today.
So why is this a problem? Is the web just inherently slow and destined to never be able to compete with the performance offered by a native platform? (Spoiler: No. No it is not.)
Another recent example of someone circumventing the web for performance reasons I think gives us a clue. Flipkart, a very large e-commerce company operating in India, recently jettisoned their website (on mobile devices) entirely in favor of Android and iOS apps and is planning to do the same with their desktop site. Among the reasons cited for the decision, the supposedly better performance offered by native platforms was again a primary factor:
Our app is designed to work relatively well even in low bandwidth conditions compared to the m-site.
Had I been in that interview my follow-up question would’ve been: “Well then, why don’t you design your website to work well even in low bandwidth conditions?” Alas, I was not invited.
But this quote is really the best indicator of why the web is so slow at the moment. It’s not because of any sort of technical limitations. No, if a website is slow it’s because performance was not prioritized. It’s because when push came to shove, time and resources were spent on other features of a site and not on making sure that site loads quickly.
This goes back to what many have been stating as of late: performance is a cultural problem.
While this is frustrating, this is also why I’m optimistic. The awareness of performance as not merely a technical issue but a cultural one has been spreading. If things are progressing a little slower than I would like, it’s also fair to point out that cultural change is a much more difficult and time-consuming process than technical change. The progress may be hard to see, but I believe it is there.
We need this progress. Circumventing the web is not a viable solution for most companies—it’s merely punting on the problem. The web continues to be the medium with the highest capacity for reach—it’s the medium that can get into all the little nooks and crannies of the world better than any other.
That’s important. It’s important for business, and it’s important for the people who need it to access content online. It’s unfair, and frankly a bit naive and narcissistic, to expect anyone who wants to read your articles or buy from your company to A) be using a specific sort of device and then B) go and download an app onto that device to accomplish their goal. The reach and openness of the web are well worth preserving.
Scott Jehl had a beautiful tweet rant prompted by Facebook’s announcement. The one that stuck with me the most was this one:
So yeah, I think any criticism of the web’s terrible performance is totally valid. We can choose to do better, but our focus is elsewhere.
Scott’s right: performance is a decision. We actively choose whether to devote our time and energy to improving it, or to ignore it and leave it up to chance.
Let’s choose wisely.
]]>But if you’ve followed along you know that I am extremely passionate about improving performance on the web. Getting a chance to push for better performance from within a company that handles 20% of the web’s traffic and is full of people who are after the same goal was too good an opportunity to pass up.
It’s a big change, but an exciting one. Akamai constantly talks about “building a faster, stronger web”. Sometimes a company has a snappy line that they use, but there is little evidence that they believe in it. That’s certainly not the case here. They’ve been very active in investing in better education, tooling and standards for the web (and recent moves like hiring smart folks like Yoav Weiss to actively work on web standards only further cements that commitment). When they say they want a stronger web, they actually mean it.
The role is a new one within their young (only one year old!) developer relations group. At the moment, it’s pretty undefined other than the goals of helping people make the web faster and helping Akamai figure out the best ways to enable people to do that. While the specifics will be defined over time, here’s what I do know:
- The role involves a lot of me doing the things I already like to do. I’m going to be doing a lot of research and experiments around performance and finding ways to improve it. I’m going to learn a lot and share what I learn. The main difference is I’m going to have more time to do it now.
- I’m almost certainly going to start being more vocal and active in the standardization process. There are a lot of interesting challenges ahead and we will need improved standards to help us overcome them.
- I will not be marketing. Akamai doesn’t want me to do it. I don’t want to do it. On my list of “presentations I don’t like to watch”, product pitches sit right at the bottom just barely above “presentations that involve me getting hit repeatedly in the face.” The stuff I write and talk about is going to be very much like the stuff I’m writing and talking about right now.
- I’ll still be working from my headquarters here in beautiful and frequently cold northern Wisconsin. Akamai has done a lot of work to make working remotely as seamless an experience as possible. There’s a lot of Slack in my future.
- I will be working with some incredibly talented and friendly people. The dev rel team is small (its only other members are Kirsten Hunter, Darius Kazemi and fearless leader Michael Aglietti), but so very smart and so very talented! Beyond that, I’ve gotten to know many folks at Akamai over the years—some of whom I am lucky enough to call friends. There are a ton of incredibly smart and passionate people there. If you subscribe to the adage that “if you’re the smartest person in the room, you’re in the wrong room”, then Akamai is definitely the right room.
- It’s going to be a lot of fun!
We’ve made a lot of progress pushing performance in the past few years, but we’ve got some serious challenges ahead of us as well. Some of them are cultural, some of them are educational, and some of them will require improved tooling and standards.
I’m super excited to get to tackle those challenges head-on!
]]>But that doesn’t mean we can dismiss page weight altogether. The web is not free. Data has a cost and that cost varies around the world. We’ve always sort of guessed that sites could be a little expensive in some areas, but other than a few helpful people tweeting how much certain sites cost while roaming, there wasn’t much in the way of hard data. So, I built What Does My Site Cost?.
The ITU has data about the cost of mobile data in various countries and the World Bank provides some great information about the economic situation around the world. Pairing the two together, we can get an idea of how much things might cost—and what that means in relation to the overall economy in those countries. I’m not particularly good with economics, but thankfully for me Victoria Ryan is and she was willing to help me work through the details to make sure the numbers actually mean something.
For starters, the site is going to report three metrics.
- Cost in USD: The approximate cost to the user of loading that page around the world, based on information about the cost of 500MB of data.
- Cost in USD, PPP: The same approximate cost, but with Purchasing Power Parity factored in. This gives a better representation of relative costs based on the differences in values of currency.
- Cost as a % of GNI, PPP: Using the PPP cost already calculated, this metric compares that value to the daily Gross National Income per capita to factor in affordability.
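The arithmetic behind that first metric is simple enough to sketch out. The numbers below are made up purely for illustration (the real site pulls its prices from the ITU data), but they show the shape of the calculation:
// Illustrative only: approximate what a page costs a visitor, given the
// local price of 500MB of mobile data.
var PLAN_SIZE_MB = 500;

function costToLoadUSD(pageWeightMB, priceOf500MBUSD) {
  // The fraction of the 500MB of data the page consumes, times its price.
  return (pageWeightMB / PLAN_SIZE_MB) * priceOf500MBUSD;
}

// Hypothetical example: a 2MB page in a country where 500MB costs $5.
console.log(costToLoadUSD(2, 5).toFixed(3)); // "0.020"
The PPP and GNI versions build on that same number, adjusting for Purchasing Power Parity and then comparing the result to daily Gross National Income per capita using the World Bank data.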
Running Tests
Thanks to the always helpful Pat Meenan, the site is powered by everyone’s favorite performance testing tool: WebPageTest.org. You can choose to run the test directly from What Does My Site Cost?. If you do, WebPageTest will run the test using Chrome mobile over a 3G network and you’ll be able to jump to those results once the test has completed.
Figure 1: Site cost indicators are now available directly in WebPageTest results.
But what really has me excited is the integration directly into WebPageTest. If you use WebPageTest to analyze your site, you’ll see a new “Cost” column in your test results giving you an indicator of how (relatively) expensive your site is. Following the link there will bring you back to What Does My Site Cost for a deeper dive. In other words, you don’t have to go out of your way to find out how much a site might cost—the information will be seamlessly presented to you whenever you test a page.
What’s Next?
For starters, I want to get more countries in there. I’m working on that. I also hope to add in information about roaming costs (almost scared to see how bad those numbers will be) but I have to track down more reliable data there first. That’s a little trickier (so it seems), but I’m sure it can be found somewhere.
As I mentioned before, I’m not very good with economics, so if any of you out there are and have recommendations for additional metrics to show, definitely let me know.
]]>The jQuery team threw their weight behind it this morning.
…we love Pointer Events because they support all of the common input devices today – mouse, pen/stylus, and fingers – but they’re also designed in such a way that future devices can easily be added, and existing code will automatically support the new device.
Unfortunately, as they went on to point out, there are some hurdles to jump yet. While Microsoft has a full implementation in IE11 and Mozilla is working on it, Apple has shown no interest and Google seems ready to follow their lead.
I was willing to give the Blink folks the benefit of the doubt, because I do remember they had specific and legitimate concerns about the spec awhile back. But after reading through notes from a Pointer Events Meeting in August, I’m forced to reconsider. The Chrome representative had this to say:
No argument that PE is more elegant. If we had a path to universal input that all supported, we would be great with that, but not all browsers will support PE. If we had Apple on board with PE, we’d still be on board too.
Doesn’t sound very good, does it?
Let’s set any opinions about Pointer Events aside. Frankly, I need to do a lot more digging here before I have any sort of strong opinion in one direction or another. There is a bigger issue here. We have a recurring situation where all vendors (save for Apple) show interest in a standard, but because Apple does not express that same interest, the standard gets waylaid.
The jQuery team took a very strong stance against this behavior:
We need to stop letting Apple stifle the work of browser vendors and standards bodies. Too many times, we’ve seen browser vendors with the best intentions fall victim to Apple’s reluctance to work with standards bodies and WebKit’s dominance on mobile devices. We cannot let this continue to happen.
As you might expect, the reactions have been divided. While many have echoed those sentiments, some have rightfully pointed out that Apple and Safari have made some really great contributions to the modern Web.
Of course they have. So has Mozilla. So has Microsoft. There have actually been quite a few organizations who can make that very broad and generic claim. They all can also claim the opposite.
But here’s the current reality, one that has been accurate for a while. Apple has a very, very strong influence over what standards get adopted and what standards do not. Partly it’s market share, partly it’s developer bias (see, for example, how other vendors eventually felt forced to start supporting the webkit prefix due to vendor prefix abuse).
Apple simply does not play well with other vendors when it comes to standardization. The same sort of things we once criticized Microsoft for doing long ago, we give Apple a pass on today. They’re very content to play in their own little sandbox all too often.
They also don’t play particularly well with developers. They supposedly have a developer relations team, but it’s kind of like Bigfoot: maybe it’s out there somewhere but boy there hasn’t been a lot of compelling evidence. This splendid rant from Remy Sharp and the follow-up from Jeremy Keith come to mind. They were written in 2012, but the posts would be equally on point if published today.
The other vendors aren’t exactly perfect either. The Microsoft folks, no doubt reeling from all the negativity aimed at them over the years, have more than once been content to let everyone else duke it out over a standard, only getting involved late when a consensus has been reached. The Blink folks, despite being the best positioned to take a stand, have been happy to play the “Apple won’t do it so I guess we won’t either” card on multiple occasions.
But at least you can have a dialogue with them. It’s easy to reach the Mozilla, Google and Microsoft folks to discuss their thoughts on these emerging standards. That’s a much harder thing to do with the Apple crew.
So I’m tempted to agree with jQuery’s stance about Apple stifling the work of vendors and standards bodies. They haven’t exactly done anything to make me feel like they’re particularly interested in the idea of the “open” web.
But I don’t think other vendors get to be let off the hook. I’m just as happy to point my fingers at them for being so easily persuaded by an argument that amounts to “we don’t want to”. I’m not comfortable with a single entity being able to hold that much influence when so many others have expressed interest in an idea.
This isn’t a healthy thing for the web. We need something to change here. And I’m optimistic. To quote Jeremy’s 2012 post:
]]>It can change. It should change. And the time for that change is now.
The car, of course, had to have certain features. A way to steer. Brakes. An engine. Doors. These were things all cars had and all cars had to have if anyone was going to ever consider purchasing them.
From there you decided on the bells and whistles. Did you want power windows and power locks? Did you want a built-in CD player or would a cassette player and radio work just as well? Did you want a sunroof?
We often did without most of those add-ons. They were the extras. They were what drove the cost of a car higher and higher. They were nice to have, but a car would work without these things.
I worry that we have it backwards on the web. We ask questions like: How much does accessibility cost? How much does progressive enhancement cost? Meanwhile we’re shipping sites that support only the most “modern browsers”. We ship sites built specifically to achieve some fancy effect.
Then we say that we’ll get to accessibility later. We’ll make it faster later. We’ll worry about those less-capable devices later. And that’s in the best of cases. More often those “features” are not acknowledged at all. If it’s not a priority at the beginning of a project, why would we expect it to be a priority later?
Yes, there’s a cost associated with building things well (there’s also a cost of not building things well). Building something that is stable and robust always costs more than building something that is brittle and fragile.
The problem is not that there is a cost involved in building something that works well in different contexts than our own. The problem is that we’re treating that as an option instead of a given part of what it means to build for the web.
How did access get to be optional?
]]>So I get a great deal of happiness from reading posts from much smarter folks than I who are rallying against this all-too-common mistake.
Back in December, The Filament Group analyzed a bunch of client-side MVC frameworks to see their impact on the initial load time of a page. The results to render a simple To-Do app were disappointing:
- Ember: 5s on a Nexus 5, 3G connection
- Angular: 4s on a Nexus 5, 3G connection
- Backbone: 1s on a Nexus 5, 3G connection
Only Backbone scores in a way that is at all acceptable, particularly in a world where people are trying to break the 1000 SpeedIndex barrier.
And just last month PPK wrote up his thoughts on client-side templating. The full post is well worth your time, but for those of you who would like to cut to the chase:
I think it is wasteful to have a framework parse the DOM and figure out which bits of default data to put where while it is also initialising itself and its data bindings.
and:
Populating an HTML page with default data is a server-side job because there is no reason to do it on the client, and every reason for not doing it on the client.
I’ve said it before: if your client-side MVC framework does not support server-side rendering, that is a bug. It cripples performance.
It also limits reach and reduces stability. When you rely on client-side templating you create a single point of failure, something so commonly accepted as a bad idea that we’ve all been taught to avoid them even in our day-to-day lives.
“Don’t put all your eggs in one basket.”
It’s pretty good advice in general, and it’s excellent advice when you’re wading through an environment as unpredictable as the web with its broad spectrum of browsers, user settings, devices and connectivity.
This might sound like I’m against these tools altogether. I’m not. I love the idea of a RESTful API serving up content that gets consumed by a JavaScript based templating system. I love the performance benefits that can be gained for subsequent page loads. It’s a smart stack of technology. But if that stack doesn’t also consist of a middle layer that generates the data—in full and on the server—for the first page load, then it’s incomplete.
This isn’t idealism. Not only have I seen this on the sites I’ve been involved with, but companies like Twitter, AirBnB, Wal-Mart and Trulia have all espoused the benefits of server-side rendering. In at least the case of the latter three, they’ve found that they don’t have to necessarily give up those JS-based templating systems that everyone loves. Instead, they’re able to take advantage of what Nicholas Zakas coined “the new web front-end” by introducing a layer of Node.js into their stack and sharing their templates between Node and the client.
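To make the shared-template idea a little more concrete, here’s a minimal sketch. The file names, data, and tiny hand-rolled template are hypothetical (none of this is taken from those companies’ actual stacks), but it shows the shape of the approach: one template module, rendered on the server for the first load and reusable on the client afterwards.

// templates/todo-list.js: one small template module, usable in Node and in the browser.
(function (root, factory) {
  if (typeof module === 'object' && module.exports) {
    module.exports = factory();        // Node / server
  } else {
    root.renderTodoList = factory();   // browser global
  }
}(this, function () {
  return function renderTodoList(todos) {
    // Escaping omitted for brevity.
    return '<ul class="todos">' +
      todos.map(function (t) { return '<li>' + t.title + '</li>'; }).join('') +
      '</ul>';
  };
}));

// server.js: the first page load arrives fully rendered (assumes Express is installed).
var express = require('express');
var renderTodoList = require('./templates/todo-list');

var app = express();
app.get('/', function (req, res) {
  var todos = [{ title: 'Write post' }, { title: 'Ship it' }];
  res.send('<!DOCTYPE html><html><body>' +
    renderTodoList(todos) +                        // content is already in the HTML
    '<script src="/templates/todo-list.js"></script>' +
    '<script src="/app.js"></script>' +            // client reuses the same template later
    '</body></html>');
});
app.listen(3000);

The client-side code can keep using renderTodoList for subsequent updates, but the initial render no longer depends on it.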
This is where it gets interesting and where we can see the real benefits: when we stop with the stubborn insistence that everything has to be done entirely on the client-side and start to take advantage of the strengths of each of the layers of the web stack. Right now most of the progress in this area is coming from everyday developers who are addressing this issue for their own sites. Ember is aggressively pursuing this with FastBoot and making exciting progress. React.js emphasizes this as well. But most of the other popular tools haven’t made a ton of progress here.
I sincerely hope that this starts to change, sooner rather than later. Despite what is commonly stated, this isn’t a “web app” (whatever that is) vs “website” issue.
It’s a performance issue.
It’s a stability issue.
It’s a reach issue.
It’s a “building responsibly for the web” issue.
]]>My top three choices for fiction are: The Martian, Ancillary Justice and Genesis. For non-fiction: Chuck Amuck, Stuff Matters and The Noble Approach. For web-specific titles: Responsible Responsive Design, Designing for Performance and The Manual (I’m just going to cheat and say read all the issues).
I saw a tweet the other day from Austin Kleon where he shared that he had read 70 books this past year. He also shared a brief “How to read more” list. I only hit 39 books this year so I’m not as qualified as he is to provide advice on this, but my advice would be very similar. In particular tip #4 is important:
If you aren’t enjoying a book or learning from it, stop reading it immediately. (Flinging it across the room helps give closure.)
I mention this each year, but if I’m not enjoying a book on some level I don’t finish it. That’s why I have yet to give a book a review of less than three stars out of five. I don’t want to review books I haven’t finished and I don’t want to finish books I’m not enjoying. I currently have nearly 300 books on my “to-read” list according to Goodreads. There’s no time to waste on books that aren’t interesting to me.
Onto the list (in order the books were read):
- Deaths of Tao by Wesley Chu 5⁄5
Fantastic sequel in what is shaping up to be a very, very fun series to read. It’s definitely darker and grittier than its predecessor, but there’s still plenty of the same sort of snarky commentary taking place between the main characters. Thoroughly enjoyed it and eagerly awaiting book three!
- Genesis by Bernard Beckett 5⁄5
Genesis is the very best sort of science fiction. It manages to explore topics such as defining consciousness, the nature of the soul and what it means to be human without ever once getting bogged down by these discussions. It grips you from the very start, and when you think you know where everything is headed it takes a sharp turn. Absolutely loved it!
- Getting Naked by Patrick Lencioni 3⁄5
I’m not a big fan of the business fable/parable thing but this was a gift from a friend so I decided to give it a read. As far as business fables go this is a decent one. But, as can be expected from this type of book, it’s very light on meat and lofty on ideals and straw men.
- The Upside of Irrationality by Dan Ariely 4⁄5
The Upside of Irrationality is an interesting (and surprisingly intimate) continuation of the discussion started in Predictably Irrational. Ariely’s style of writing and storytelling moves the book along at a brisk pace. I think a few of the conclusions he came to probably would’ve benefited from a few additional experiments to verify them, but for the most part they’re well thought out. Worth a read.
- Sand by Hugh Howey 5⁄5
Howey is edging his way into must-read territory for me. Another great dystopian novel from him.
- Crux by Ramez Naam 4⁄5
Crux picked up where Nexus left off, letting us see the impact of a post-human technology being used by the masses. It’s a stronger novel than the first book. Though I enjoyed Nexus, it could get a little preachy at times—pushing the underlying ideas a little too heavily. Crux seems more mature. It still explores some really interesting concepts, but it feels better integrated into the story this time.
I also really appreciate how the ideas in the book are backed by current technological advancements. In both this and Nexus, Naam follows the last chapter up with a section describing how similar technologies are being used in real-life today.
Really looking forward to book #3.
- The Flight of the Silvers by Daniel Price 5⁄5
The Flight of the Silvers has a little bit of everything—time travel, parallel universes and X-Men style rules of nature. At first I wondered if it would all be a little too scattered, but Price weaves it all together to create a super fast-paced book that was incredibly difficult to put down and lots of fun to read. Looking forward to the rest of the series and hoping they follow soon: lots of questions left to answer.
- A Web for Everyone by Sarah Horton and Whitney Quesenbery 5⁄5
This is the first book on the topic that I’ve read that I felt did a good job of presenting accessibility not as a list of bulletpoints to check off, but as a way of thinking about how you build your site. Whenever anyone is looking to get started in accessibility, this is where I’m going to point them.
- The Lies of Locke Lamora by Scott Lynch 4⁄5
The book hits the ground running on the first page, but as a result it took me a while to care about the characters in Locke Lamora. Once I did (probably about a quarter or so through the book), I enjoyed the story quite a bit. Good anti-hero novel.
- Falling off the Edge by Alex Perry 4⁄5
Perry walks you through a bunch of first-hand accounts of his experiences in areas where the impact of globalization has been anything but encouraging. It’s not incredibly in depth, and a few of the stories seem a little more loosely tied to globalization than others, but altogether an interesting look at the “other” side of globalization.
- Salt Sugar Fat by Michael Moss 3⁄5
Moss takes a look at the big three (salt, sugar and fat) not through a scientific lens, but a business one. Based on numerous meetings and interviews, Moss dissects the food industry’s reliance on them—from their impact on taste to how manufacturers market them in a way that can often confuse even the smartest of shoppers.
It’s a really interesting read, but unfortunately it can be a little repetitive. Some of the chapters seemed to retell parts of the same story told in other chapters, as well as reintroduce people we were already introduced to. It doesn’t completely detract from the points the author is making, but it occurs frequently enough to make the book feel more disjointed than it should have.
- The Humans by Matt Haig 3⁄5
The Humans is about an alien who comes to earth to kill a few humans who know too much, learns to love humanity, and so on. There are certainly things to like: there are indeed a few thought-provoking sentences as well as a good amount of humorous insights (like the alien’s perception of magazines, for example). Overall though, it was just a little too heavy-handed. Some books can explore the topic of humanity in a way where it sort of reveals itself throughout the story—this isn’t one of those books. The plot is thinly constructed and exists pretty much entirely to let the author share his thoughts on the topic. It’s not a bad book—just not that great either.
- The First Fifteen Lives of Harry August by Claire North 4⁄5
In a similar vein to Replay (one of my all-time favorite books), The First Fifteen Lives of Harry August revolves around a character who lives his life over and over again. The world is ending, sooner than it used to, and it’s up to him to figure out why. While it does tend to linger on a few details longer than necessary, overall it’s a smart and well-written book that is equal parts drama, thriller and science-fiction.
- Flowers for Algernon by Daniel Keyes 5⁄5
Fantastic! Watching Charlie’s mental progression, and subsequent regression, was both fascinating and heartbreaking. But what really puts the book over the top for me is Keyes’ focus on the emotional baggage that comes along with a sudden burst in intelligence: the bad memories, the sudden realization that folks are not as nice as they had seemed, and the struggle that comes with trying to find a way to match his new found mental maturity with his still stunted emotional maturity. Definitely a book that keeps you thinking long after the final sentence.
- The Martian by Andy Weir 5⁄5
Fantastic! The Martian is a realistic, thrilling and often humorous story of one man’s attempt at survival on Mars. Gripping from the very start of the book through to the very last sentence. Can’t recommend this book highly enough. Read it.
- A Better World by Marcus Sakey 5⁄5
I really enjoyed Brilliance, so when the sequel came out I grabbed it right away and had pretty high hopes. Sakey did not disappoint.
When I wrote my review about the first book, I said he touched on some social topics but didn’t really explore them in much depth. A Better World starts to flesh that out a bit more by adding more dimension to the characters and more meaning to the over-arching plot. The result is a book that is a bit more thought-provoking than the first, and just as fun and fast-paced.
- Financial Intelligence by Karen Berman, John Case and Joe Knight 4⁄5
I was looking for a book to help me brush up on some of the things I had forgotten from college and high school, as well as give me a little better understanding of what to pay attention to when it comes to the financial health of my company. Financial Intelligence fits the bill very nicely. It’s a pretty nice refresher for those who learned some of this stuff in the past and gentle enough for people completely new to the concepts as well. Good starting point.
- The Girl with All the Gifts by M.R. Carey 4⁄5
Well this was a surprisingly enjoyable read! I’m not a huge fan of the whole horror genre. Movies, books—there’s precious few of either that I’ve enjoyed. This one definitely breaks the mold. It feels fresh and has significantly more depth to it. The relationship between the main characters is fascinating, as is the way those relationships alter—and even seem to come full circle in some cases—by the end of the book. Thoroughly enjoyed.
- Daily Rituals by Mason Currey 4⁄5
Daily Rituals provides overviews of 150+ people’s days—what they did to be productive, to relax, when they worked, when they rested, etc. Each profile is short and stands alone, so it’s an easy pick it up/set it down read. Some of the profiles are more detailed and interesting than others, but what I enjoyed most was seeing the patterns emerge (for example, you can almost do a 50⁄50 split of people who claimed long walks and exercise were the key to their success, versus people who turned to some sort of drugs or medicine to keep themselves going).
- Off to Be the Wizard by Scott Meyer 5⁄5
Just plain old fun. Heavy on the comedy and full of geeky references.
- Spell or High Water by Scott Meyer 4⁄5
Not quite as funny as book 1, but still plenty to like. Really looking forward to round 3.
- The Stars My Destination by Alfred Bester 4⁄5
The Stars My Destination (or Tiger, Tiger) seems to come up all the time in the discussion of great sci-fi classics. Having finally read it, it’s fairly easy to see why and to see the influence the book has had on cyberpunk and sci-fi in general. While it never quite reached the same level of quality as Bester’s The Demolished Man, that’s more a testament to how good that book is than it is a detriment to this one. The plot moves forward at a blistering pace and despite the fact that the main character is very unlikeable, you still can’t pry yourself away from finding out what happens next.
- The Mobile Web Handbook by Peter-Paul Koch 5⁄5
You can always trust PPK’s writing to be extremely well-researched and thorough. There is no shortage of books about the mobile web but he managed to find plenty of new and interesting tidbits regardless. I especially enjoyed the chapters on the mobile market and browsers.
- Stuff Matters by Mark Miodownik 5⁄5
So, so good. The book starts with a picture of the author and each chapter explores the “stuff” in that picture: glass, concrete, dark chocolate, etc. He discusses how the stuff gets made, what it’s good for, and its evolution over time. In some hands, this could be dry stuff but the author is incredibly passionate about materials and it’s contagious. You feel his enthusiasm throughout each chapter and can’t help but start looking at the everyday materials around you in a new light. One of my favorite reads of the year!
- Designing for Performance by Lara Hogan 5⁄5
Designing for Performance is the book to hand to anyone—designer or developer—who wants to get started making faster sites. Lara carefully and clearly explains not just how you can create better performing sites, but how you can champion performance within your organization ensuring it remains a priority long after launch. Consider this the starting point in your web performance journey.
- Chuck Amuck by Chuck Jones 5⁄5
Chuck Amuck is more memoir than autobiography, which makes it all the more fascinating. Chuck talks about the very intense process of cartoon animation, the team that was in place at WB (along with some fairly harsh assessments of “management”) and how iconic characters like Bugs Bunny, Wile E. Coyote, and Daffy Duck evolved and developed their own personalities over time. As a bonus, the book sprinkles sketches and storyboards of the Looney Tunes animations throughout.
- On Web Typography by Jason Santa Maria 5⁄5
Fantastic primer on web typography. Loads of useful information and advice all very clearly explained. If you’ve been interested in typography but have had a hard time making sense of it all, this is the ideal place to start.
- The Manual, Issue 1 5⁄5
Finally got around to purchasing the first three issues of the Manual and I’m wondering what took me so long. Issue 1 was fantastic. A combination of great writing and careful editing resulted in a really enjoyable book with every section providing food for thought. I particularly enjoyed the sections from Simon Collison, Dan Rubin, and Frank Chimero. I also was really impressed by the quality of the book itself: looks great and lovely attention to detail.
- Lock In by John Scalzi 5⁄5
A blend of mystery and science-fiction (leaning more heavily towards mystery), Scalzi’s latest is a good one. Some of the issues discussed in the book are fairly thinly veiled allusions to current situations but they never feel forced in any way (as happens when an author pushes too hard). Instead, the story moves quickly with plenty of tension, humor and thought-provoking dialogue along the way.
- The Manual, Issue 2 5⁄5
Proving that issue 1 wasn’t a fluke, the second installment is just as excellent. Really tough to choose, but I’d say the sections from Karen McGrane, Cennydd Bowles and Trent Walton were probably my favorites.
- The Noble Approach by Tod Polson 5⁄5
A wonderful blend of biographical details and animation design principles that leaves you with a whole new appreciation for cartoon design. After reading the book, watching the cartoons becomes an even more enjoyable experience as you realize just how beautifully crafted they are. Really enjoyed this one!
- Ancillary Justice by Ann Leckie 5⁄5
I purchased Ancillary Justice almost immediately after talking to a friend earlier this year and hearing her rave about it, but it had remained untouched in my pile of books to eventually read since then. I like sci-fi, but I’m not a huge space opera kinda guy so I hesitated. I shouldn’t have.
Ancillary Justice is a great book—well deserving of the awards it won. It’s smart, beautifully written and gripped me from early on. Ann does an incredible job of building tension throughout without a ton of superfluous battles and forced action. A tight plot and smart dialogue is all she needs to put you on the edge of your seat and keep you there until the final page.
- Responsible Responsive Design by Scott Jehl 5⁄5
I’ve already written up a full review, but here’s the short version: this is a fantastic book firmly rooted in real-world knowledge. It should be on the shelves of web developers everywhere.
- The Manual, Issue 3 5⁄5
In typical Manual form, the book was high quality from start to finish. In particular the entries from Duane King, Jeremy Keith and Ethan Marcotte stood out.
- Ancillary Sword by Ann Leckie 4⁄5
A worthy follow-up to Ancillary Justice that only slightly falls short of matching Ancillary Justice’s excellence. Still thoroughly enjoyed it, but it was paced a bit slower and only picked things up about 2⁄3 of the way through. The same smart, high-quality writing is there. It just felt a little more like a setup for the final book in the trilogy (which should be very eventful).
- Timing for Animation by John Halas 4⁄5
A solid introduction to the topic. A lot of good ideas and principles here—not just for cartoon animation, but for timing in other interactive mediums as well.
- That’s All Folks! by Steve Schneider 4⁄5
While you’ll find more detailed information elsewhere, the author does a pretty good job of providing some context and historical insight into the evolution of the Looney Tunes, the creation of the major characters, and the personalities behind the scenes. Where this book really shines, though, is in the many beautiful and hard-to-find sketches and animation artwork prominently on display. It’s a gorgeous book!
- Hollywood Cartoons by Michael Barrier 4⁄5
Barrier’s book is an extremely well researched and well written look at American cartoon animation from the 30’s to 50’s. While, understandably, Disney gets the most attention, he does discuss the work produced by places such as Warner Bros, Terrytoons, Hanna-Barbera and UPA. That’s really where the book flourishes. Seeing how ideas and techniques spread from studio to studio and being able to compare and contrast their different approaches is fascinating.
Barrier doesn’t pull punches. Nobody in this book is free from criticism: their miscues are highlighted just as much as their successes. In fact, he’s quite critical of all the studios and their work. While I don’t necessarily agree with a few of his critiques of some of the cartoons (his opinion on the impact of Noble & Jones’ combined work couldn’t be farther from my own), it’s interesting to hear his thoughts on them nonetheless.
- Getting Schooled by Garret Keizer 5⁄5
After a 15 year absence Keizer returns to teaching for one year and, thanks to this book, we get to follow along. The result is an insightful look at modern day teaching that is both humorous at times, and depressing at others.
There you have it. If you have any recommendations for what I should add to my stack of books to read in 2015, feel free to let me know!
Past years
]]>Fast Enough
How fast is fast enough? Page weights and load times vary so much from site to site and industry to industry. While it’s easy to spot the obviously bad examples, it can be much more difficult to find the line between what is “fast enough” and what is slow.
“RWD Is Bad for Performance” Is Good for Performance
Myths are powerful things. Put the right spin on a myth and you can use it to build up; to create something new and better. I’ve found the “responsive design is bad for performance” myth to be really good for performance.
JS Parse and Execution Time
Too often we focus merely on size and request count when discussing the use of JavaScript from a performance perspective. Using a tool built by Daniel Espeset, I tried to put some numbers to the parse and execution times of scripts as well.
Performance Budget Metrics
Which metric(s) should you use for a performance budget? I tried to sort the basic performance metrics into four distinct categories and identify how to use each in a budget.
Why RWD Looks Like RWD
Earlier in the year, there was a lot of discussion about why responsive design “looks” like responsive design. I threw my hat into the ring with a few theories.
Other years
]]>Re: performance budgets. I wonder if measuring times is smart or not. So many variables, seems like requests/sizes/blockers easier to track.
It’s an interesting question, and one that I touched on at the beginning of the year. I think it’s worth elaborating on a little.
The purpose of a performance budget is to make sure you focus on performance throughout a project. The reason you go through the trouble in the first place though is because you want to build a site that feels fast for your visitors.
One of these goals (prioritizing performance) is an internal one impacting the people who are creating the site. The other goal (building a site that feels fast) is an external one impacting people who visit your site. It’s not surprising that I’ve found the most effective metrics to differ for each.
For the purposes of this post, I’m breaking those metrics down into four categories:
Milestone timings
Examples: Load time; domContentLoaded; Time to render
Most time-based metrics are “milestone timings” (totally stealing that term from the super smart Pat Meenan). Some (like visually complete or time to interact) are closer than others to telling you something about the experience of loading a given page, but they all suffer from the same limitation: they measure performance based on a single point in time.
Web performance isn’t defined by a single moment. Like a book, it’s what happens in-between that matters.
Page A may load for 3 seconds, but not display anything to visitors until the 2.5 second mark. Page B may load in 5 seconds, but display the majority of the content after a mere second. Despite taking 2 seconds longer in total, Page B may be the better experience.
A single milestone timing won’t help you identify that. To get a semi-accurate representation of how it feels to load a page, you have to pair several milestone timings together. You can do that, but there’s a better way (hey there, SpeedIndex).
Still, milestone timings as budget metrics do have advantages. They’re easy to describe. Visually complete is pretty easy to understand even without a working knowledge of performance.
They’re also easy to track. You would be hard pressed to find any sort of performance monitoring solution that doesn’t give you these sorts of metrics.
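For reference, here’s roughly how a few of those default milestone timings can be read out of the browser with the Navigation Timing API. It’s a simplified sketch: it only grabs a couple of the available timestamps and logs them instead of beaconing them anywhere.

// Simplified sketch: read a few milestone timings via the Navigation Timing API.
// Wait until after the load event so the values have actually been populated.
window.addEventListener('load', function () {
  setTimeout(function () {
    var t = performance.timing;
    var milestones = {
      ttfb: t.responseStart - t.navigationStart,
      domContentLoaded: t.domContentLoadedEventEnd - t.navigationStart,
      load: t.loadEventEnd - t.navigationStart
    };
    console.log(milestones); // in practice you'd send these to your monitoring tool
  }, 0);
});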
It’s worth singling out User Timing marks as a better option than the default milestone metrics reported by the browser. They do require a bit more planning and setup, but because they are custom to your site they can also give a much more accurate depiction of how your page is actually rendering.
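A quick sketch of what that looks like—the mark name here is made up, but performance.mark and performance.measure are the standard User Timing calls:

// Hypothetical example: mark the moment your most important content is in the DOM...
performance.mark('hero-displayed');

// ...then measure from navigation start to that mark.
performance.measure('time-to-hero', 'navigationStart', 'hero-displayed');

var entry = performance.getEntriesByName('time-to-hero')[0];
console.log(entry.duration + 'ms'); // a milestone specific to *your* page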
SpeedIndex
I’m giving SpeedIndex its own little category because it deserves it. Whereas traditional metrics focused on a single moment, SpeedIndex attempts to measure the full experience. It focuses not just on how long it took for everything to display on a page, but how the page progressed from start to finish. SpeedIndex scores are like golf scores: the lower the better.
In our example from above, Page B would likely have a lower SpeedIndex score. It got most of the content onto the page early, so the page appears faster to load.
SpeedIndex gets a lot of love, and for good reason. It’s the closest we can get to putting a number to how it feels to load a page. But it’s not without its faults.
SpeedIndex is not the easiest of metrics to explain to someone without a certain level of technical know-how. Heck, it can be hard to explain to people with a relatively high level of technical know-how. It didn’t really click for me until I re-read Pat’s original announcement a few times and played around with it in WebPageTest.
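If the prose explanation doesn’t click, the core of the calculation is small enough to sketch in code. SpeedIndex is essentially the area above the visual-progress curve, so time spent with the page mostly blank counts against you. The sample frames below are invented, and this is a rough sketch rather than WebPageTest’s actual implementation.

// Rough sketch: SpeedIndex sums the "visually incomplete" share of each interval.
// Lower is better. The frames (time in ms, visual completeness in %) are made up.
function speedIndex(frames) {
  var total = 0;
  for (var i = 1; i < frames.length; i++) {
    var interval = frames[i].time - frames[i - 1].time;
    total += interval * (1 - frames[i - 1].complete / 100);
  }
  return Math.round(total);
}

// Page B from the example above: most of the content is visible after about a second.
console.log(speedIndex([
  { time: 0, complete: 0 },
  { time: 1000, complete: 85 },
  { time: 5000, complete: 100 }
])); // => 1600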
The other downside is that at the moment it’s not a metric that gets measured by anyone outside of WebPageTest. While I would be happy if everyone would track performance using a private instance of WebPageTest, I understand that’s not likely. Hopefully Pat’s work on RUM-SpeedIndex will result in better adoption of the metric in other monitoring tools.
Quantity based metrics
Examples: Total number of requests; Overall page weight; Total image weight
I’m going to lump request count, page weight and the like under this third category that I just made up for the sake of having something to call them.
Weight and request count tell you virtually nothing about the resulting user experience. Two pages with the same number of requests or identical weight will render differently depending on how those resources get requested.
Yet even they can play a useful role in performance budgets. Their main advantage is that they are much easier to conceptualize during design and development. It’s a lot easier to understand the impact of one more script or another heavy image on a budget of 300kb than it is for a SpeedIndex based budget.
Rule based metrics
Examples: PageSpeed score; YSlow score
I’ve seen some people use PageSpeed or YSlow scores as budgets. For anyone unfamiliar, these are awesome tools that give your site a grade based on a list of performance rules they test against.
It’s really valuable as a checklist of optimizations you should probably be doing, but I think it’s less effective as a budget. While there’s a slightly stronger relation between these grades and the overall experience of loading a page than there is for quantity based metrics, it’s still a loose connection. A page with a higher PageSpeed or YSlow score doesn’t always mean the experience is better than a page with a slightly lower one.
Monitoring your PageSpeed or YSlow score is a good idea, but not necessarily for your performance budget. Use these tools as a safety net for making sure you haven’t overlooked any simple optimizations.
How I roll
My initial budget is always based on either SpeedIndex or some combination of milestone metrics. Which I use depends on the organization, what they will be using for monitoring, and how they will use the budget.
Regardless of the specific metric I choose, I always start here because these metrics relate in some way back to the user experience. They help keep tabs on the external goal: creating a site that feels fast for visitors. That’s what I’m most concerned with.
These metrics get incorporated into the build process (using something like grunt-perfbudget as explained by Catherine Farman) and in the development environment (like Figure 1) to make sure they’re monitored.
Figure 1: Enforcing a performance budget within Pattern Lab using some custom JavaScript.
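For the build-process piece, a grunt-perfbudget task ends up looking something like the sketch below. Treat the option names and values as approximate and check the plugin’s documentation for the exact configuration; the gist is that the build fails whenever a WebPageTest run comes back over budget.

// Gruntfile.js: approximate sketch of a perfbudget task. Option names may differ
// slightly from the plugin's actual API, so verify against its documentation.
module.exports = function (grunt) {
  grunt.initConfig({
    perfbudget: {
      default: {
        options: {
          url: 'https://example.com/',       // page to test (placeholder)
          key: 'YOUR_WEBPAGETEST_API_KEY',   // placeholder WebPageTest API key
          budget: {
            SpeedIndex: '1500',              // fail the build above this score
            render: '1000'                   // ...or if start render exceeds 1s
          }
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-perfbudget');
  grunt.registerTask('default', ['perfbudget']);
};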
After a few experiments to determine a rough equivalent, I pair those metrics with a quantity metric. Quantity metrics are helpful in achieving the internal goal: ensuring priority is given to performance throughout a project. They help guide design decisions. This is something Katie Kovalcin and Yesenia Perez-Cruz have each done an awesome job of discussing.
Each of these types of metrics plays a different role in guiding the creation of a site, and each is important. It doesn’t have to be an either/or proposition.
Use rule based metrics to make sure you haven’t overlooked simple optimizations.
Use quantity based metrics as guides to help designers and developers make better decisions about what goes onto a page.
But always back those up with a strictly enforced budget using a metric (like SpeedIndex) that is more directly related to the overall experience to ensure that the result feels fast. After all, that’s why you’ve decided to use a performance budget in the first place.
But there are tricky issues to navigate along the way: tables, performance, and input modes—oh my! That’s where Scott Jehl’s new book Responsible Responsive Design comes in.
It’s almost like two books in one. The first two chapters blow through many of the common hurdles people come across when implementing responsive design. How to handle tables, different input methods, accessibility, feature detection, testing—it’s all there. Scott deftly moves from topic to topic, explaining the issues and how to address them.
The next two chapters focus on performance (a favorite topic of mine). Chapter 3, “Planning Performance”, is a great primer on what the critical path is as well as the basic performance techniques that will help you optimize for it. There’s also some discussion of process and how things like performance budgets can help.
Chapter 4 gets into performance techniques more specific to responsive design: responsive images, structuring CSS, lazy loading content, cutting the mustard, etc.
There is a lot of meat on the bone here. Scott’s explanation towards the end of Chapter 4 about how to enhance through qualified loading of CSS and JS is pure gold, as is his discussion of feature detection. I was especially happy to see a discussion of Filament Group’s “x-ray perspective” included. They first discussed the approach in “Designing with Progressive Enhancement” (a book that has stood the test of time remarkably well) and it’s stuck with me ever since.
What’s so wonderful about the book is how rooted it is in real-world experience. These aren’t just nice-sounding techniques: these are things that Scott and the rest of the gang at Filament Group have battle tested on their own projects.
I also have to admit to being thoroughly entertained by many of Scott’s delightfully cheesy one-liners. (You know the type. The ones where you both laugh and groan at the same time.) A tech book with personality: not something you see every day, but something that has become a bit of a given from the A Book Apart series.
So who should read this? Well, everyone. If you’ve got the basics of responsive design nailed down but haven’t gone much farther, this will be a revelation. If you feel like you’ve flexed your chops on the advanced stuff, you’ll still come away with interesting ideas for how to improve your approach.
Long story short: it’s an excellent, important book and a must-have for anybody building responsively. When it goes up for sale on the 19th, buy it.
]]>I’ve been using Shoestring for a while now, and I’m a huge fan. In fact it has become my go-to solution when I need such a tool. It’s small, powerful, and very, very smart.
It’s very rare that I write about a specific tool. Tools come and go. However after talking so much about the importance of reducing JavaScript bloat, I figured I should take the time to explain a little about how I’m doing that myself. Shoestring has played a large role in that. In addition, Shoestring has made some really smart decisions that are worth noting regardless of whether the tool is a fit for you or not:
Iterating on the wheel
One argument against throwing together yet another framework or utility is that we shouldn’t be reinventing the wheel. It’s not just about making use of the work others have done, but it’s also about avoiding the disruption caused by yet another syntax to learn. Shoestring doesn’t reinvent the wheel so much as it iterates on it.jQuery is massively popular. In fact, it’s far easier to find information about how to solve a problem with jQuery than it is to find information about how to solve that same problem with plain old JavaScript.
As a result, its syntax is widely recognized and familiar to many. If you’re working on a re-build, it can also be hard to escape the hold it has over existing systems and components.
Shoestring is modeled after jQuery. That makes it a very comfortable transition for folks who have become very familiar with jQuery. Anything that works in Shoestring should also work in jQuery, and while it’s not guaranteed, I’ve found the opposite is often true as well.
Allowing smart defaults
I’ve written in the past about the importance of smart defaults. Jared Spool has talked about discovering only 5% of users changed the default Word settings. An academic study showed that you could get a huge increase in organ donors (an increase between 17-50%) simply by making the default option “Yes” instead of “No”. Point being, defaults matter because very few people will change them.
Shoestring consists of a tiny core and three sets of extensions: DOM manipulation, events, and AJAX. Using the build tool, you can get very granular about what you do and don’t include and keep things as light as possible. Instead of rolling everything in all at once, it’s set up in a way that you can easily pick and choose as needed. Each extension lives on its own (ex: the after method).
How I’m using it
While you are free to use the build however you want, the way I’ve been using it is to start with a really sparse set of extensions:
require([
"shoestring",
"dollar",
"core/each",
"core/inarray",
"core/ready",
"events/bind"
]);
As I code, I pull in additional extensions only as I find I need them. If I need to use some AJAX functionality, I drop the relevant extensions into the build and move along. The result is that only what is absolutely needed goes in.
There are other ways of doing this. You could look for used Shoestring extensions in your code and have a custom build automatically created based on that, for example. But I like the deliberateness that this “simple by default” approach enforces. It makes me think very carefully about every little piece of code that I’m adding to my project, consciously justifying its existence.
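If you did want the automated route, the rough idea is easy enough to sketch, though the method-to-extension mapping below is partly hypothetical and would need to match Shoestring’s actual module layout:

// build-scan.js: hypothetical sketch that scans project JS for method calls and
// reports which Shoestring extensions a custom build would need. The mapping is
// illustrative, not Shoestring's real module list.
var fs = require('fs');
var path = require('path');

var methodToExtension = {
  '.bind(': 'events/bind',
  '.ready(': 'core/ready',
  '.each(': 'core/each',
  '.ajax(': 'ajax/ajax'
};

var needed = {};
fs.readdirSync('src').forEach(function (file) {
  if (path.extname(file) !== '.js') { return; }
  var source = fs.readFileSync(path.join('src', file), 'utf8');
  Object.keys(methodToExtension).forEach(function (method) {
    if (source.indexOf(method) !== -1) {
      needed[methodToExtension[method]] = true;
    }
  });
});

console.log('Extensions to include:', Object.keys(needed));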
Cutting the Mustard
Cutting the mustard is an incredibly valuable approach in its own right. By drawing a line in the sand between the core and enhanced experiences (and which browsers get each), you remove a lot of complexity and overhead from the process. Less capable browsers benefit from less code and overall bloat. Modern browsers benefit from no longer needing to carry around the extra baggage of endless polyfills and frameworks. Each browser gets what it needs to provide an experience fit for its capabilities, and very little more.
Since Shoestring expects browsers to support querySelectorAll, one of the tests commonly used to determine what cuts the mustard and what doesn’t, the two complement each other very nicely.
At the top of the page, I’ll do a check to see if the browser cuts the mustard. If it does, I’ll then asynchronously load in Shoestring and the rest of the JavaScript (using Filament’s tiny loadJS) to get things kicked off. It’s a snippet that probably looks familiar to anyone who’s used a cutting-the-mustard approach themselves:
<script type="text/javascript">
  window.myApp = {};
  // Async script loader from Filament Group (body elided).
  function loadJS(src,cb){...}
  // The mustard test: only browsers supporting these APIs get the enhanced experience.
  if ("querySelector" in document && "addEventListener" in window) {
    window.myApp.cutsMustard = true;
    document.documentElement.className += " mustard"; // hook for CSS enhancements
    loadJS("main.js");                                // load the rest asynchronously
  }
</script>
Pairing Shoestring with a cutting the mustard approach ensures I’m safe to use Shoestring and benefit from its smaller file size. Cutting the mustard also means I’m a lot less reliant on polyfills and the like, helping keep the overall weight down and reducing the pain of maintenance and overall fragility of the site.
The results
I’ve been very happy with the results I’ve gotten on projects where I use Shoestring. Here’s the total weight of the JavaScript used on the last three projects I used Shoestring on, after gzip was applied:
- 8.6kb
- 11.8kb
- 9.5kb
That’s pretty tiny and goes a long way towards helping keep the overall weight, load time, and rendering time of the site down.
Because the API is modeled after jQuery, the transition has been virtually seamless for team members. And in situations where we stumble upon a legacy script that just can’t be re-written (say, for example, an ad loading script maintained by another team within an organization), switching to jQuery requires no more than a minute or two and results in no loss in productivity.
The Filament Group has released a ton of great tools, but this may be my personal favorite of the bunch. Even if you decide you can’t use Shoestring for your own projects, there is a lot to be learned from the approach.
]]>Contrary to what Hollywood may have you believe about the “developer in a basement”, I actually do enjoy light. So I have spent a decent chunk of time researching how to properly light small, windowless rooms.
It turns out that one of the most important pieces is finding a nice balance of different light sources. Mix a desk lamp with an overhead lamp and a floor lamp or two. Better yet, make sure those floor lamps are different heights. You need variety to produce quality lighting.
The results are rewarding. Working in a poorly lit environment feels depressing and lonely. It’s energizing to step out of the shadows.
But sometimes working in the light doesn’t go so well, as has been the case for Kathy Sierra:
Life for women in tech, today, is often better the less visible they are.
Damn.
This sort of thing has happened before, of course. It still happens. Many, many times. Sometimes we don’t hear about it. Sometimes we do.
I remember a particular exchange of a prominent person in our industry attacking another in a blog post because he felt her position was unwarranted. Despite the many people who spoke up against his post, he didn’t seem to comprehend the damage he had done. He instead boasted of all the people who had privately thanked him and agreed with him.
The impact of these attacks extends far beyond those two incredible women on the receiving end of them. There’s a ripple effect.
In both cases, many came to the same conclusion as Kathy: it’s safer to stay in the shadows.
I have seen tweets from people I know and have incredible respect for saying that they fear that this will happen to them.
I have talked to people who were excellent speakers or writers, but now don’t do it because something like this happened to them.
I have had private conversations with people who point to situations like this and say this is why they don’t put themselves out there. Fear of this happening to them is why they don’t write more, or speak more at conferences.
We’re losing their voices. We’re turning off their lights.
Each time we follow someone, share their blog post, invite them on our podcasts or to speak at our events, we are giving that person a platform. We’re giving them an opportunity to share their voice.
We have a lot of say in who gets a voice and who doesn’t in our community. That’s a huge responsibility and a tremendous amount of power. We need to use it wisely.
A couple months ago, Andy Baio sent out a tweet:
A yearly reminder to everyone making stuff: For every anonymous idiot trashing you online, there are thousands more that quietly love you.
Let’s flip that around. Let’s make it a point to tell people that we value their contributions, that they are loved and appreciated.
If someone writes something that resonates, gives a talk that alters the way you think, shares some work that you find useful or provides a helpful hand—let them know you value it. Send them a friendly tweet, leave a comment, write an email, send a postcard—how you do it matters less than that you do it. Actively seek opportunities to tell people they’re appreciated.
We’re never going to completely eliminate the trolls, but we can drown them out. We can show them that our community values respect and appreciation. That trolling and harassment are things we will not tolerate. Those voices are not the ones we wish to give power to.
As we do this, we tip the power scales. We make it safer for those who hope to contribute in earnest to come out of the shadows, while making it all the more uncomfortable for those who would try to dim their lights.
Because what’s true of lighting a room is true of building a vibrant community as well. We need a variety of different voices and perspectives.
And right now, that’s exactly what we risk losing.
Daniel shared a few examples in his deck, but I couldn’t wait to take Daniel’s tool and fire it up on a bunch of random browsers and devices that I have sitting around.
For this test, I decided to profile just jQuery 2.1.1, which weighs in at 88kb when minified. jQuery was selected for its popularity, not because it’s the worst offender. There are many libraries much worse (hey there Angular and your 120kb payload). The results below are based on the median times taken from 20 tests per browser/device combination.
The list of tested devices isn’t exhaustive by any means—I just took some of the ones I have sitting around to try and get a picture of how much parse and execution time would vary.
| Device | Browser | Median Parse | Median Execution | Median Total |
|---|---|---|---|---|
| Blackberry 9650 | Default, BB6 | 171ms | 554ms | 725ms |
| UMX U670C | Android 2.3.6 Browser | 168ms | 484ms | 652ms |
| Galaxy S3 | Chrome 32 | 39ms | 297ms | 336ms |
| Galaxy S3 | UC 8.6 | 45ms | 215ms | 260ms |
| Galaxy S3 | Dolphin 10 | 2ms | 222ms | 224ms |
| Kindle Touch | Kindle 3.0+ | 63ms | 132ms | 195ms |
| Geeksphone Peak | Firefox 25 | 51ms | 109ms | 160ms |
| Kindle Fire | Silk 3.17 | 16ms | 139ms | 155ms |
| Lumia 520 | IE10 | 97ms | 56ms | 153ms |
| Nexus 4 | Chrome 36 | 13ms | 122ms | 135ms |
| Galaxy S3 | Android 4.1.1 Browser | 3ms | 125ms | 128ms |
| Kindle Paperwhite | Kindle 3.0+ | 43ms | 71ms | 114ms |
| Lumia 920 | IE10 | 70ms | 37ms | 107ms |
| Droid X | Android 2.3.4 Browser | 6ms | 96ms | 102ms |
| Nexus 5 | Chrome 37 | 11ms | 81ms | 92ms |
| iPod Touch | iOS 6 | 26ms | 37ms | 63ms |
| Nexus 5 | Firefox 32 | 20ms | 41ms | 61ms |
| Asus X202E | IE10 | 31ms | 14ms | 45ms |
| iPad Mini | iOS6 | 16ms | 30ms | 46ms |
| Macbook Air (2014) | Chrome 37 | 5ms | 29ms | 34ms |
| Macbook Air (2014) | Opera 9.8 | 14ms | 5ms | 19ms |
| iPhone 5s | iOS 7 | 2ms | 16ms | 18ms |
| Macbook Air (2014) | Firefox 31 | 4ms | 10ms | 14ms |
| iPad (4th Gen) | iOS 7 | 1ms | 13ms | 14ms |
| iPhone 5s | Chrome 37 | 2ms | 8ms | 10ms |
| Macbook Air (2014) | Safari 7 | 1ms | 4ms | 5ms |
As you can see from the table above, even in this small sample size the parsing and execution times varied dramatically from device to device and browser to browser. On powerful devices, like my Macbook Air (2014), parse and execution time was negligible. Powerful mobile devices like the iPhone 5s also fared very well.
But as soon as you moved away from the latest and greatest top-end devices, the ugly truth of JS parse and execution time started to rear its head.
On a Blackberry 9650 (running BB6), the combined time to parse and execute jQuery was a whopping 725ms. My UMX running Android 2.3.6 took 652ms. Before you laugh off this little device running the 2.3.6 browser, it’s worth mentioning I bought this a month ago, brand new. It’s a device actively being sold by a few prepaid networks.
Another interesting note was how significant an impact hardware has on the timing. The Lumia 520, despite running the same browser as the 920, had a median parse and execution time that was 46% slower than the 920. The Kindle Touch, despite running the same browser as the Paperwhite, was 71% slower than its more powerful replacement. How powerful the device was, not just the browser, had a large impact.
This is notable because we’re seeing companies such as Mozilla and Google targeting emerging markets with affordable, low-powered devices that otherwise run modern browsers. Those markets are going to dominate internet growth over the next few years, and affordability is a more necessary feature than a souped up device.
In addition, as the cost of technology cheapens, we’re going to continue seeing an incredibly diverse set of connected devices. With endless new form factors being released (even the Android Wear watches quickly got a Chromium based browser), the adage about not knowing where our sites will end up has never been more true.
The truly frightening thing about these parse and execution times is that this is for the latest version of jQuery, and only the latest version of jQuery. No older versions. No additional plugins or frameworks. According to the latest run of HTTP Archive, the median JS transfer size is 230kb and this test includes just a fraction of that size. I’m not even asking jQuery to actually do anything. Basically, I’m lobbing the browsers a softball here—these are best case results.
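If you want a rough feel for what these numbers look like on your own devices, you can approximate the measurement yourself. The sketch below is not Daniel’s tool; it’s a crude in-page timing that lumps parse and execution together and ignores compilation caching, but it’s enough to see differences between devices. It assumes a local copy of jQuery sits next to the page.

// Crude approximation: fetch the script first (so network time is excluded), then
// time how long the browser takes to evaluate it. Parse and execute are lumped
// together and caching can skew repeat runs, so treat the numbers as rough indicators.
function timeScriptEval(src, done) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', src, true);
  xhr.onload = function () {
    var start = performance.now();
    var script = document.createElement('script');
    script.text = xhr.responseText;     // inline scripts evaluate synchronously...
    document.head.appendChild(script);  // ...so this returns after parse + execute
    done(Math.round(performance.now() - start));
  };
  xhr.send();
}

timeScriptEval('jquery-2.1.1.min.js', function (ms) {
  console.log('Evaluating jQuery took roughly ' + ms + 'ms');
});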
This re-affirms what many have been arguing for some time: reducing your dependency on JS doesn’t just help the minor percentage of people who have JS disabled—it improves the experience for everyone. When anything over 100ms stops feeling instantaneous and anything over 1000ms breaks the user’s flow, taking 700ms to parse your JavaScript cripples the user experience before it really has a chance to get started.
So what’s a web developer to do?
Use less JavaScript. This is the simple one. Anything you can offload onto HTML or CSS, do it. JavaScript is fun and awesome but it’s also the most brittle layer of the web stack and, as we’ve seen, can seriously impact performance.
Render on the server. If you’re using a client-side MVC framework, make sure you pre-render on the server. If you build a client-side MVC framework and you’re not ensuring those templates can easily be rendered on the server as well, you’re being irresponsible. That’s a bug. A bug that impacts performance, stability and reach.
Defer all the scripts. Defer every bit of JavaScript that you can. Get it out of the critical path. When it makes sense, take steps to defer the parsing as well. Google had a great post a few years back about how they reduced startup latency for Gmail. One of the things they did was initially comment out blocks of JavaScript so that it wouldn’t be parsed during page load. The result was a 10x reduction in startup latency. That number is probably different on today’s devices, but the approach still stands.
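As a rough illustration of that comment-out trick (a simplified sketch of the general technique, not Gmail’s actual code, with made-up element IDs):

<!-- The non-critical module ships inside a block comment, so the browser skips
     parsing its contents during page load. -->
<script id="deferred-module" type="text/javascript">
/*
function initComments() {
  // heavy, non-critical feature code lives here
}
*/
</script>

<script>
// When the feature is actually needed, strip the comment markers and inject the
// code as a fresh script, paying the parse cost on demand instead of up front.
function loadDeferredModule(id) {
  var source = document.getElementById(id).text
    .replace('/*', '')
    .replace('*/', '');
  var script = document.createElement('script');
  script.text = source;              // parsed and executed only now
  document.head.appendChild(script);
}

// Hypothetical trigger: a "show comments" button somewhere on the page.
document.getElementById('show-comments').addEventListener('click', function () {
  loadDeferredModule('deferred-module');
  initComments();
});
</script>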
Cut the mustard. I’m a big fan of “cutting the mustard”, an approach made popular by the BBC. This doesn’t solve the problem of low-end devices with modern browsers, but it will make a better experience for people using less capable browsers. Better yet, by consciously deciding not to overwhelm less capable browsers with excess scripts you not only provide a better experience for those users, but you reduce the need for extra polyfills and frameworks for modern browsers as well. On one recent project where we did this, the entire JavaScript for the site was about 43% of the size of jQuery alone!
There are certainly cases to be made for JS libraries, client-side MVC frameworks, and the like, but providing a quality, performant experience across a variety of devices and browsers requires that we take special care to ensure that the initial rendering is not reliant on them. Frameworks and libraries should be carefully considered additions, not the default.
When you consider the combination of weight, parse time and execution time, it becomes pretty clear that optimizing your JS and reducing your site’s reliance on it is one of the most impactful optimizations you can make.
]]>Responsive design just can’t seem to shake the rumor that it’s bad for performance. It’s very frequently spouted as a downside of the technique—a reason why you may not want to pursue responsive design for a project.
Just to be clear where I stand on this: I don’t agree. I don’t agree because I’ve built responsive sites that performed well and because I’ve seen many others who have done the same. I don’t agree because I’ve looked at heavy and slow responsive sites and seen how fixable those issues are. I don’t agree because I’ve seen many non-responsive sites that are just as heavy and slow.
Bad performance stems from a lack of attention and commitment to performance within an organization—not from whether or not the site is responsive. Saying responsive design is bad for performance is the same as saying optimizing for touch is bad for performance. It doesn’t make much sense because you can’t expect to do either in isolation and create a good overall experience.
All that being said, I’ve learned to embrace the “responsive design is bad for performance” statement whenever it comes up.
During the planning stages of a recent project with a new client, someone voiced the concern that pursuing a responsive design would be bad for performance. This was not an organization that had a strong culture of performance in the past. They—like many—had sort of casually paid attention but hadn’t really committed to it.
But the stigma around responsive design was so strong that just this rumor was enough for them to express concern. They had read enough articles complaining about responsive design being bad for performance to realize that they didn’t want to go down that same route.
As I felt my typical rebuttal starting to form, I realized that this wasn’t actually a problem. They were interested in building a good website. They knew they needed to hit a broad spectrum of devices and browsers and they knew that responsive design could help them with that.
And—because of the negative stigma around responsive design and performance—they had suddenly become a little more interested in this performance stuff than they had been in the past. The myth wasn’t causing harm; it was opening the door to a more intentional approach to web performance.
So instead of simply refuting the claim we started talking about the things we could do to make sure that their responsive site wouldn’t suffer from those performance issues.
We talked about setting up a system to monitor performance using a private instance of WebPageTest.
We talked about how to tie those performance metrics with their analytics and other business metrics to be able to see the correlation between the two.
We talked about the competitive advantages of good performance.
We talked about setting and enforcing a performance budget to make sure performance didn’t slip through the cracks.
When we were done talking, the company wasn’t merely doing a redesign, they were also starting down the path to a better culture of performance inside their organization. They now have a robust system in place for monitoring performance, tools incorporated into their development process to ensure performance budgets are maintained, and performance standards worked into their internal SLAs.
This all happened because they wanted to create a quality responsive site, but had heard a rumor.
As it turns out, “responsive design is bad for performance” can actually be really good for performance.
]]>- 34% of US adults use a smartphone as their primary means of internet access.
- Mobile networks add a tremendous amount of latency.
- We are not our end users. The new devices and fast networks we use are not necessarily what our users are using.
- 40% of people abandon a site that takes longer than 2-3 seconds to load.
- Relying on performance cops (developers or designers who enforce performance) is not sustainable. We need to build a performance culture.
- There is no “I” in performance. Performance culture is a team sport.
- The first step is to gather data. Look at your traffic stats, load stats and render stats to better understand the shape of your site and how visitors are using it.
- Conduct performance experiments on your site to see the impact of performance on user behavior.
- Test across devices to experience what your users are experiencing. Not testing on multiple devices can cost much more than the cost of building a device lab.
- Add performance into your build tools to automatically perform optimizations and build a dashboard of performance metrics over time. Etsy notifies developers whenever one of the metrics exceeds a performance goal.
- Surfacing your team’s performance data throughout development will improve their work.
- Celebrating performance wins both internally and externally will make your team more eager to consider performance in their work.
It’s for that reason that I have decided to write this: for anyone who is about to give their first presentation, or is considering doing so. Not as a set of rules, but as a set of ideas that you may or may not find work for you.
So, in all its glory, here are the same (rough) steps I find myself going through for every new talk.
Consider the conference and attendees. How many people will be there? What is their experience like? Why are they there? I often do a little research to see what I can find out about past versions of the conference from folks I know who attended/spoke, or from reviews people wrote on their blog somewhere. From there, I start thinking about a topic I really think should be discussed and would be a good fit for the conference.
Question my ability to give said talk, and in fact any talk at all, and almost give up on the whole thing.
Decide to do it anyway.
Email my speaking coach and book a bunch of hours so that he can help provide feedback on my talk as I’m preparing it. This also doubles as a really good way to make sure I don’t procrastinate.
I really value this step, by the way. Attendees are spending good money to come to conferences. Organizers are being kind enough to give me an opportunity to talk about something I love. I want to make sure I nail it.
Start considering what my primary message is. If there’s only one thing attendees remember about the talk, what do I want it to be?
Post-it notes everywhere. If ever there was an office supply that deserved to have a sonnet written about it, it would be post-it notes.
For the first week or two, anytime I think of something that might be related to the talk and interesting (a story, a phrase, a technique) I write it on a post-it note and put it up randomly on my wall.
Once again consider that perhaps everyone will hate this topic. Decide to plow ahead anyway because backing out of the conference at this point would be rude.
When the post-it notes are plentiful enough, start sorting them. What ends up happening is that many of the ideas can easily be grouped together which starts to bring some cohesiveness to my scattered thoughts.
With the post-it notes sorted, I start re-arranging the order of the groups on my wall to find an arc for the presentation. If a group is a bit sparse, I might also spend some time digging into that idea a bit more to see if there is enough meat to make it worth discussing.
After getting a logical order in place, I start looking at the post-it notes to come up with potential slides. Sometimes a phrase or idea on the note itself is already slide material. Sometimes I have to add another post-it note to suggest to myself that I might need an image here or a code snippet there to demonstrate things.
Take my post-it note storyboard and put it in Keynote. This is the first time I open Keynote during the entire process. At first, I just get all the ideas in place. Then I start worrying about the design of the slides and finding images to put into place.
I also work throughout the process to make sure that the key ideas are distilled into bite size chunks. I want people to easily remember them, so while I may elaborate on them more in the presentation, the hope is that there is at least one sentence that will stick and anchor the idea in their memory. I work closely with my speaking coach to fine-tune these ideas.
Now comes a lot of rehearsing. Often I find that during a practice run, I’ll ad-lib a line that I really like, so I always have a notepad right in front of me so I can quickly jot it down. I also note anything that feels messy—a phrase or idea that doesn’t seem fleshed out enough or a transition between ideas that feels forced.
I also use notes heavily in Keynote at this stage. I find it’s helpful while I’m still juggling with the flow of the presentation.
I never stop the presentation while rehearsing. No science here to back it up, but my thinking is that forcing myself to plow through even really rough runs or distractions makes me that much better equipped to get through rough spots on stage.
Redesign my entire deck because I see someone else’s far more gorgeous slides.
Go back to original design because it’s clear I’m not a designer and I should just stick with what I know.
When I’m confident with the flow of the talk, I kill the notes. From then on, I rehearse blind. When I use notes on a stage, I read them. Not exactly a great experience for attendees. So instead, my goal is to rehearse enough now to know what I’m going to say and free myself of the need to stick to a script. It sounds backwards, maybe, but the more I’ve rehearsed my talk the more ad-libbing I will do and the less rigid the presentation will sound.
Shortly before the conference, start panicking again as I realize that in fact, yes, people are definitely going to hate this talk.
At the conference, I’m a pretty bad slide tweaker. Typically a few talks will touch on things related to my talk. I try to find ways to reference them where appropriate because I think it makes the talk feel more personalized and creates a much more cohesive narrative.
Right before going on stage, take a deep breath and hold for a few counts. It doesn’t do a ton to calm my nerves, but I do take that moment to refocus on the talk instead of anything else that may be on my mind.
Go on stage. I’m one of the lucky ones. Being on stage typically calms my nerves. My nerves don’t stem as much from public speaking (which I’ve always enjoyed), but from the fear that what I say won’t be relevant and interesting to people. There’s not a ton I can do about that after stepping on the stage so the nerves fade away.
Immediately following the presentation comes the self-loathing. It’s at this point that I start hating myself for all the little mistakes I know I made along the way. I hide this step from everyone except for folks I consider trusted friends, but it’s usually there. In fact, I can only think of a handful of times that I have given a talk where this step didn’t immediately follow it.
In just about every case, attendees didn’t notice any of them. The reality is that those little mistakes are always worse in your head (so I’ve been told) because you know what perfect execution would have sounded like.
The loathing eventually goes away, but usually not until I start seeing that no one has mentioned the mistakes in their feedback or tweeted “OMG WORST TALK EVAR”.
Post-conference, I eagerly get my hands on any feedback I can about the talk. (Note to organizers: please, please, please gather attendee feedback. It’s so helpful.) I match that with my own perception of how things went. What jokes landed? Which ones resulted in crickets? What ideas did people seem to focus on in the feedback, the recaps and on Twitter? What ideas do I wish they would have honed in on, but didn’t?
If I’m giving the talk again, I use this to start tweaking to make sure the next time I’ve addressed those weaker points.
If the talk was recorded, watch/listen to the recording. This is a very cringe-filled 45 minutes to an hour as I relive every mistake (not to mention hearing your own voice can be disconcerting). I do find it’s valuable though as I, once again, get to review what worked and what didn’t so that I can touch things up for the next go around.
I know I mentioned panic and self-loathing a few times, but I don’t really want to scare anyone off from speaking. The reality is that I really enjoy public speaking and have ever since performing my first comedic monologue in junior high forensics (I believe it was the story of Little Red Riding Hood from the wolf’s perspective, if I recall correctly). I’m incredibly blessed to be able to share with others the ideas I think are interesting and important. It’s a privilege.
But leaving those low points out of the process wouldn’t be showing you the full picture. In her talk “I Suck! And so do you!” (must-watch material, by the way), Karen McGrane talked about how we compare our worst with other people’s best. That certainly applies here. We see speakers giving amazing talks at conferences around the world and don’t realize that they may very well be having the same doubts and fears that we are about our own talks. Some appropriate fear is good; letting that fear stop you from sharing what you’re learning with others is not.
Preparing and giving presentations takes a lot of time and effort, and it can be a little like a roller-coaster with highs and lows along the way. But the thing about roller-coasters is that they’re kinda fun.
]]>But having the budget set in a document somewhere doesn’t accomplish much. It needs to be enforced to really matter.
I’m a big fan of Grunt.js and use it on pretty much every project at this point. I did a lot of digging and while there are some plugins that come close, nothing quite fit what I wanted: different connection speeds, various metrics to budget, and the ability to fail a build if those thresholds aren’t met.
I’m also a big fan of WebPageTest, which has a slick Node API courtesy of Marcel Duran. So, armed with the API and Jeff Lembeck’s helpful guide to creating a grunt plugin, I decided to throw together a simple little task for performance budgeting.
Introducing grunt-perfbudget
grunt-perfbudget is a task for Grunt.js that helps you to enforce a performance budget. Using WebPagetest in the background, the task lets you set budgets for a number of different metrics: SpeedIndex, visually complete, load time, etc.
For example, if you wanted to make sure the SpeedIndex of Google was below 1000, you would add the following to your Gruntfile.js:
perfbudget: {
  foo: {
    options: {
      url: 'https://google.com',
      key: 'YOUR_API_KEY',
      budget: {
        SpeedIndex: '1000'
      }
    }
  }
},
When run, grunt-perfbudget tests the URLs you specify using WebPagetest and compares the results with the defined budget. If the budget passes, it outputs the results to the console and goes on its merry way. If the budget fails, it errors out, telling you what failed, and provides a link to the full results on WebPagetest so you can dig deeper if you’d like.
I haven’t exposed everything in the WPT Node API at the moment, but there is some ability to customize your budget tests beyond metrics. You can:
- Specify a private instance of WebPagetest (Highly recommended! So much power there.)
- Specify a test location
- Specify a preset connection type
- Specify your own custom connection settings
Getting started
grunt-perfbudget is a Grunt.js task which means you will need to have Grunt installed to use it. From there you can install grunt-perfbudget using NPM, the Node.js package manager. You can find more information on the readme.
It does take a few moments for the test to run, so I don’t recommend using the task in any sort of watch process. Instead, using it in a more deliberate deploy/build process makes more sense. It also works nicely as a standalone task for quick checks in between formal builds.
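For what it’s worth, that wiring might look something like this in a Gruntfile (the “build” and “deploy” task names are assumptions, not part of the plugin):

// Load the plugin, then make the budget check part of a deliberate
// deploy step rather than a watch task.
grunt.loadNpmTasks('grunt-perfbudget');
grunt.registerTask('deploy', ['build', 'perfbudget']);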
Going forward
At the moment, grunt-perfbudget is super simple and does just enough to fit what I needed. Hopefully it’s useful for you as well.
There’s a ton of potential for improvement. WebPagetest is incredibly powerful so there are a lot of different things we could do here. There are also some simple things that need doing. This is my first attempt at a Grunt task and no doubt there are plenty of things that could probably be cleaned up. For example, because I am a horrible person I haven’t actually written any tests for the plugin. (I know. I’m disappointed in me as well.)
If you have any ideas on what to do to improve the tool, or want to roll up your sleeves and tackle a few improvements yourself, hop on over to Github and jump right in.
]]>Yet some are still not happy. Honestly, that’s to be expected. The responsive images debate was incredibly heated and had much in common with an episode of Game of Thrones—minus the nudity. Some folks are still unsettled by the potential verbosity of picture & friends and would prefer a server-side solution or a new image format.
The thing is—they’re not wrong. The markup solution was absolutely needed, but a server-side solution (Hello, client-hints!) would be amazing, as would a new image format. And in some cases, such solutions might even be better than what we currently have to play with.
Yet dismissing one of these three options as less than ideal misses the point. There is no ideal. There is no one-size-fits-all solution here. They each have their own important role in meeting the ultimate goal—serving great looking images to our visitors without all the current excess bloat.
In fact, I’d argue they all work better when in concert. Here’s what the responsive image process looked like on a recent project I worked on:
- Image gets uploaded through the CMS by the content creators.
- Server process compresses images and resizes to different dimensions.
- A template uses these images and, based on how the image is used in the page, generates a picture element with the appropriate breakpoints (there’s a rough sketch of the idea just after this list).
- Users rejoice over faster-loading pages!
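To make that third step a bit more concrete, here’s a rough sketch of the kind of template helper involved. The file name suffixes and breakpoints are made up for illustration, and as noted below, the real module generated the older picturefill-style markup rather than a native picture element.

function buildPicture(basePath, altText) {
  // Each generated size maps to the breakpoint where it should kick in.
  return [
    '<picture>',
    '  <source srcset="' + basePath + '-large.jpg" media="(min-width: 800px)">',
    '  <source srcset="' + basePath + '-medium.jpg" media="(min-width: 400px)">',
    '  <img src="' + basePath + '-small.jpg" alt="' + altText + '">',
    '</picture>'
  ].join('\n');
}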
It’s an approach that lends itself well to future changes while making use of what we have available to us today. It also doesn’t just depend on a single approach to the task—it combines front-end markup with server-side processes to make an even better end result.
Since the solution was kept in its own little module, there is a lot of flexibility to improve it in the future. It’s currently using the slightly older picturefill markup, but it will be easy to update that to the new and improved version. When client-hints come into the fold, it won’t take much work to use that data as well to inform the creation of the picture element and reduce the markup a little.
It took some time to get to that point. We had to analyze what types of images were used on the site, how they needed to change for different screens, and then figure out what breakpoints to use to pull in new images.
Then we had to build the server-side process to do the optimization and resizing, the module to generate the picture element, and the template logic to use the module. But the result of that effort is a better performing page and an image creation process that is flexible, future-friendly and fully automated.
So yes, keep pushing for a new image format or a new server-side solution. We could absolutely use those. But don’t let that stop you from making a better experience for the people accessing your sites today.
The tools are there now. Let’s use them.
]]>Their purpose is to bring information to the people who need it—to reach everyone they can. This information is valuable. So valuable that many countries try to block access to their sites, causing people to resort to all sorts of workarounds. In some countries, people are risking their lives simply by trying to access this content.
And to be quite frank, they face an uphill battle even in the best of circumstances. RFE/RL serves more than 150 news sites in over 60 languages. In some of these areas network connectivity is intermittent and slow. Many of these countries don’t have 3G, let alone 4G, networks available to them.
The quality of devices varies dramatically too. There are many of the typical Android and iOS devices, sure. But in most of these countries those devices are no more popular (and sometimes less so) than simple devices with tiny screens, low memory and limited browsers. Feature phones and proxy browsers are the norm, not the exception.
In short, RFE/RL’s target markets are anything but predictable and stable. And excluding someone because of their slow network connection or their old device or their limited browser is simply not an option—this information is far too important for that sort of casual dismissal.
The sites are powered by a homegrown CMS called Pangea built and maintained for the past seven years by a small but passionate team. Over the next few months I’m incredibly honored to work alongside those fantastic folks as well as super friends Dan Mall and Matt Cook to make the RFE/RL sites capable of reaching everyone. Small screens and large. Fast networks and slow. New devices and old.
Throughout the process, we’re going to be thinking in public. We’ll write about what we’re doing, how we’re doing it, and why at responsivedesign.rferl.org. To start things off, Kim Conger, the design director for RFE/RL, has provided a little background about how the project came about.
If you want to keep up to date about how the project is going, you can keep checking responsivedesign.rferl.org or follow Dan Mall, Matt Cook, or myself on Twitter.
More on the project from other smart folks on the team:
]]>I wonder if #RWD looks the way it does because so many projects aren’t being run by designers, but by front-end dev teams.
This certainly isn’t the first time that someone has suggested that responsive sites have a “look” to them. In fact, it seems that particular topic has been quite popular over the last few years. And to be fair, a pretty large number of responsive sites do tend to share similar aesthetics.
Before I dig into that, let me state my usual “blame the implementation, not the technique” just in case anyone was considering insinuating that responsive design dictates a specific sort of visual appearance. (To be clear: I don’t think that’s what Mark was doing at all—I’m just preemptively dismissing that line of commentary because it’s almost certainly going to come up.)
There are a few reasons why I think we’re seeing this commonality at the moment.
The web can be trendy
Let’s be honest with ourselves: we web folk can be a little trendy. We do this with specific technologies and tools, and we also do this with visual design. There has long been a tendency for people to mimic whatever the recent definition of “beautiful” online is (grunge, “web 2.0”, “flat design”, etc). Any glance through the once massively popular CSS/design galleries will attest to that.
We’re still getting comfortable
Responsive design is still relatively young. With all the articles and presentations about it, it’s easy to forget that. You don’t have to look far though to find companies that are just starting to dip their toes into it for the first or maybe the second time.
Understandably, people will lean on established patterns (or frameworks like Foundation or Bootstrap) to provide a level of comfort as they’re working things out. Eventually as people get more comfortable with how to approach multi-device projects, their reliance on these patterns will lessen and they’ll start to experiment more.
Silos and waterfalls are still the norm
Kevin Tamura responded to Mark’s comment on Twitter suggesting our workflow may be to blame:
@markboulton @Malarkey I think it’s an over reliance on the waterfall methodology for projects.
The more multi-device work you do, the more you discover that the toughest problems to be solved aren’t related to technology. The toughest problems are related to people, process, workflow and politics.
You can see this reflected both in projects led primarily by folks more comfortable with development (which may exhibit many of the traits that Mark was noticing) as well as projects led primarily by folks more comfortable with visual design (which may buck the trend a bit, but often at the cost of performance and reach).
Transitioning from the traditional waterfall/siloed approach to a fluid process where designers and developers are working more closely together can be a very difficult adjustment. Not only do you have to battle the internal politics involved in such a move, but you have to experiment to find the right comfort level. Until organizations make that transition it’s natural for things to be off-balance a little bit.
The good news is that the transition can be made—and a lot of folks are sharing how they’re handling it. Eventually those walls between roles will break down. When they do, that healthier process based on collaboration will lead to more creativity and experimentation in design and that’s when this stuff will get really fun.
jQuery and its cousins are great, and by all means use them if it makes it easier to develop your application.
If you’re developing a library on the other hand, please take a moment to consider if you actually need jQuery as a dependency. Maybe you can include a few lines of utility code, and forgo the requirement.
It then proceeded to provide example code—what a line of jQuery was compared to the vanilla JavaScript alternative. Not all of these snippets were exactly identical to the jQuery code in terms of what they accomplished, but they were pretty close.
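The flavor of those comparisons was something like this (my own paraphrase, not code lifted from the site):

// Adding a class with jQuery...
$('.menu').addClass('is-open');

// ...versus the vanilla JavaScript equivalent in modern browsers:
document.querySelector('.menu').classList.add('is-open');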
The site itself didn’t seem particularly exciting or controversial. We’ve seen these sorts of comparisons before, and frankly claiming that you “might” not need jQuery is an awfully innocent statement to make.
But the reaction was more divided than I expected with many getting a little worked up about the site. Which of course causes me to wonder—if suggesting we “might” not need a technology can cause such a heated discussion, perhaps we “might” be just a little too attached to it.
It’s not that I don’t like or use libraries and frameworks. I do. In fact, I use tools like jQuery and Ender quite frequently. A good library brings the benefit of being well-tested, documented (well, in some cases at least) and can be very helpful for complex functionality or when working in a team environment.
What worries me is that for many, libraries have become the default. They’re rolled into boilerplates and pattern libraries as an assumed dependency. And if we know anything about default settings, it’s that most people will stick with them. This undoubtedly leads to many projects incurring this overhead without ever giving consideration to whether it is really necessary.
According to the latest run of HTTPArchive’s top 1000 sites (January 15, 2014), the average weight for a page is 1463kb. Scripts weigh in at 272kb, second only to images in total weight. Mobile tells a similar story. The average site on a mobile device weighs in at 717kb, of which 168kb is JavaScript.
Compare those numbers to the start of the year and we see an alarming trend. Script weight is up 28% from the start of the year on desktop pages and up 22% on mobile.
This is concerning, but it’s not just download sizes that you should be worried about. In a presentation given at Velocity in 2011, Maximiliano Firtman pointed out that some phones (older, but still popular, BlackBerry devices, for example) can take up to 8 seconds just to parse jQuery. More recent research from Stoyan Stefanov revealed that even on iOS 5.1, it was taking as many as 200-300ms to parse jQuery.
This isn’t even the worst case scenario. I’ve worked on projects where some of the devices we needed to test on couldn’t load the page at all if jQuery was present—it was just too much JavaScript for the device to handle.
Performance is not the only concern. At times, the abstraction that libraries and frameworks provide can actually be harmful. Without an understanding of the underlying language in use, it can confuse developers as much as it aids them.
I know I’m picking on jQuery, but that’s primarily because of its unparalleled popularity. My concerns with starting with a library as a default method of coding are not confined to any one library in particular.
It seems this is a tricky topic to approach because it is so often viewed as being black or white: you’re either for frameworks or you’re opposed to them. Like so many topics, people can get religious about this stuff. But it’s not about what is the right™ way to do it: it’s about using the best tool for the job and arming yourself with the knowledge necessary to make that determination.
The reality is that you don’t always need to use a framework or library. Oftentimes, you can get by with just a little bit of native JavaScript, saving precious bytes and seconds while doing so. When the job calls for frameworks, then use a framework (and do so responsibly). When you can do it just as well with vanilla JavaScript, then roll your own. (For anyone thinking about leaving the “Don’t worry about it, just include one less image on your site” response, I’m going to preemptively respond: why not do both? One less image is not an excuse for not being careful about JS size.)
Everything has a cost associated with it. Whenever we add something to our sites we need to be able to think critically about whether or not the value outweighs the cost. JavaScript libraries are no exception.
I’m not saying that we stop using libraries altogether—and neither were the people who created “You Might Not Need jQuery”. I’m suggesting we make that decision with a great deal of care.
]]>I’m asked this question a lot. Page weights and load times vary so much from site to site and industry to industry. While it’s easy to spot the obviously bad examples, it can be much more difficult to find the line between what is “fast enough” and what is slow.
My usual answer of “make it as fast as possible” doesn’t seem to make people very happy, so let’s try to get at least a little more concrete.
Compare
One method of attempting to arrive at a measure for “fast enough” on a new site is by seeing how you stack up against the competition. Do some analysis on a few of your key pages. Then do the same for key pages for 10 competitors or so. If you’re doing a redesign, analyze your existing site as well. Then, rank yourself to see where you stack up.
In his book, “Designing and Engineering Time”, Steven Seow talks about the 20% rule. The basic idea is that people only perceive one task as faster or slower than another when the difference in time is at least 20%.
For example, say a competitor’s site loads in 5 seconds. 20% of 5 seconds is 1 second. So to be perceived as faster than them, you need to have your pages taking no longer than 4 seconds (5 seconds load time - 20% difference).
Using this rule, you can come up with minimum viable targets—the slowest possible “fast enough” that puts you ahead of your competition.
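As a quick sketch of that arithmetic (nothing more than the example above expressed as code):

function perceivablyFasterTarget(competitorSeconds) {
  // You need to be at least 20% faster for the difference to register.
  return competitorSeconds * 0.8;
}

perceivablyFasterTarget(5); // 4 seconds or less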
Response Times
On the other end of the spectrum are the response time targets discovered in a 1968 study and later made popular by Jakob Nielsen.
- 100ms is how long you have for the user to feel like the task was instantaneous.
- 1.0 second is how long you have for the user’s state of flow to remain uninterrupted (though the delay will still be noticeable).
- 10 seconds is how long you have before the user loses interest entirely and will want to multitask while the task is completing.
Based on these response time targets, the pie-in-the-sky goal is to be hitting that 100ms barrier and providing people with an instantaneous experience. It’s possible in some cases, though certainly not easy. Even the 1 second barrier can be an aggressive target for most companies today.
Pairing these aggressive numbers alongside the conservative targets derived from comparison gives you an acceptable range to target for your site (handy to get you started with a performance budget).
Measure the right thing
The next question is whether load time is even the correct metric to be measuring to determine “fast enough”. It certainly has value (as does watching page weight), but it is not the most accurate indication of how a user is perceiving the loading of a page.
A step up is to measure the Speed Index of a page instead. Speed Index doesn’t really bother with what happens at onload. Instead, it looks at how quickly the majority of the page gets painted to the browser. It’s a better metric for measuring how page loading is perceived by users than load time is.
For your own site you can get more granular. At what point is the form functional for the user? When can the tab interface be used? You can track whatever you would like with a little bit of User Timing.
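A tiny User Timing sketch might look like this (“form-ready” and “time-to-form” are made-up names for illustration):

// Mark the moment the form becomes usable...
performance.mark('form-ready');

// ...then measure it against navigation start.
performance.measure('time-to-form', 'navigationStart', 'form-ready');

// Grab the measurement to send along with your analytics beacon.
var timeToForm = performance.getEntriesByName('time-to-form')[0].duration;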
Even better, measure task completion. For key tasks on your site (even those that require moving between several pages), how long does it take your users to complete them? As UIE pointed out back in 2001, task completion has a huge impact on perceived performance. Want your site to be fast enough for your users? Make sure they can get things done easily and efficiently.
Don’t stop with launch
Finally, making sure your site is fast enough means you have to keep measuring long after your initial launch. Continue to track those performance metrics over time. Watch how those measurements change as you make adjustments to your site. Look at key business metrics (conversions, bounce rate, etc.) and see how changes in performance impact them.
Monitoring these metrics over time will also help you to find the right balance between your visual aesthetics and performance. As you make adjustments to either, you should be able to see how it impacts your business, helping you identify whether those changes are adding value or pain.
So…what’s fast enough?
When you make a decision to create a “richer” experience in any way, you’re trading some level of performance for that richness. That doesn’t mean it’s never a worthwhile exchange to make. Eye candy has value when used appropriately and responsibly.
With anything added to a page, you need to be able to answer the question of “What value does this provide?” and in turn be able to determine if the value outweighs the pain. Careful and continual analysis is the only way you’ll figure that out.
In short, the correct answer to how fast is fast enough is: “It depends”.
]]>When a client comes to us to help them make their existing site or app responsive, we know that we’re going to be using fluid grids, flexible images and media queries.
But we also know we’re going to be using much more than just those three techniques. The best responsive web designs are doing much more. And when we teach workshops or help clients, much of what we’re discussing are the things that you do after you’ve got the three techniques down.
Which led me to the idea that there is a difference between “being responsive” and responsive web design. That responsiveness was something larger.
Jason’s post hits on a lot of the same points I’ve been struggling with myself. In my comment on his post, I suggested that maybe the term for this larger concept has already been created.
I’ve been down this path too. In fact, I still go down it sometimes. I like the idea of “responsiveness” having a larger meaning, because I think the term fits: good web design requires being responsive to more than what fluid layouts, fluid images and media queries alone can provide.
However, “responsive web design” is a term with such a specific definition firmly entrenched to so many, that trying to extend its definition just seems to cause trouble.
Maybe the term has already been coined for the bigger idea? “Future-friendly” seems a large enough idea to encompass the larger principles you’re suggesting and does so without conflating more specific terms. It also has the benefit of not dictating the techniques being used (leaving room for responsive web design to be paired with whatever else helps you get the job done) and it certainly better represents the idea that “The best responsive web designs are doing much more.”
Either way, I’m really happy Jason wrote the post because I think we need as much attention on the “more” as possible (this is also why I’m so excited for Scott Jehl’s upcoming book). Responsive design is an awesome technique but not a goal unto itself. That’s where a lot of the confusion comes in.
We’ve done a really good job of advocating for responsive design. There have been plenty of posts, conference talks and even a few books covering the subject. We praise companies that release their shiny new responsive sites. Responsive design is now widely seen as a Very Good Thing™. This is good. Adopting approaches like responsive design represents a serious maturation in the way we view the web.
Maybe we’ve been too successful, however. Responsive design has taken on life as a buzzword. To many people responsive design is being viewed as a synonym for multi-device design (Ronan Cremin’s comment on Jason’s post recognizes this). Use a fluid grid, put some media queries in place, and then proudly announce that your site is now friendly no matter the device in use.
It sounds silly when you read it, but you don’t have to look far to see companies making this mistake. They’re focused so much on being “responsive” that they’re overlooking so many of the other things that go into a good experience.
Responsive design won’t fix your content problems. Considering screen size only while ignoring other factors won’t give you the best possible experience. Responsive design does not allow you to forget about performance.
Stephanie Rieger has written about what I think is the healthiest way of looking at “responsive design”—as a characteristic. When viewed this way, responsive design doesn’t preclude you from any other techniques in your implementation that may be appropriate. Instead, it is simply a piece of the puzzle.
We used to go crazy over sites that were built with CSS instead of table-based layouts. Eventually, as our understanding of it matured, we moved from boasting about how we built our last site using CSS to accepting it as another facet of building a good site. Along with well-structured markup, efficient JavaScript and any other technique required to provide a quality experience.
If there’s one thing I hope we see in the upcoming year it’s this sort of maturity: that we’ll stop celebrating sites simply for being responsive and instead view it as just another (important!) characteristic of a well-built site.
As Jason said:
- Does it adapt to screen size?
- Does it take advantage of device capabilities?
- Is it accessible anywhere?
- Does it work well?
For our users, those are the things that matter.
]]>That being said, for fiction I’d probably have to rank Lexicon, Dust and Ocean at the End of the Lane as my top three. For non-fiction: The Reading Promise, Jim Henson: The Biography and Designing and Engineering Time. For web-specific titles: Content Everywhere, Responsive Design Workflow and The Mobile Book.
Each year that I post the list I hear from someone who wanted a little more information about the books. So part way through the year I started jotting down short reviews of each one. The reviews are nothing earth-shattering, but hopefully they provide at least a little extra glimpse into why I enjoyed a book.
As is always the case, if a book is on the list then I enjoyed it on some level. The lowest rating I’ve ever given to a book is three stars out of five—if it was headed toward 2 or 1 star territory, I don’t finish it. Some people are strong enough to grit their teeth and finish books they don’t like. I’m not one of those people.
- Work for Money, Design for Love by David Airey 4⁄5
- Content Everywhere by Sara Wachter-Boettcher 5⁄5
- The Sun Also Rises by Ernest Hemingway 4⁄5
- Shift: Omnibus Edition by Hugh Howey 5⁄5
- Program or Be Programmed by Douglas Rushkoff 4⁄5
- The Reading Promise by Alice Ozma 5⁄5
I confess—as a dad who loves to read to his daughters, I’m very biased towards this topic. It’s a book written by the daughter of an avid reader. He read to her every single night for 3,218 consecutive nights. I loved hearing about the streak, what it meant for Alice and the efforts they went to in order to keep it alive. This is my favorite non-fiction read of the year.
- Money, Real Quick by Tonny Omwansa and Nicholas Sullivan 4⁄5
- To Sell is Human by Daniel Pink 4⁄5
- Slaughterhouse Five by Kurt Vonnegut 4⁄5
- A Whole New Mind by Daniel Pink 4⁄5
Responsive Design Workflow by Stephen Hay 5⁄5
I wrote up a full review of this one earlier this year. If you’d prefer I just cut to the chase: whenever anyone is asking about a workflow for responsive design, I hand them this book.
Building Touch Interfaces with HTML5 by Stephen Woods 4⁄5
Lots of interesting insights and I love all the focus on performance! There were a few topics that I thought he zoomed through a bit too quickly without really digging into, even for the intermediate to advanced audience that he was targeting. But that’s a relatively minor quibble considering all the good information he does discuss.
The Mobile Book 5⁄5
The Invisible Man by H.G. Wells 5⁄5
I don’t re-read many books, but I’ve now read this one 3 or 4 times.
Ocean at the End of the Lane by Neil Gaiman 5⁄5
Gaiman is so consistently good. The Ocean at the End of the Lane feels right at home with his past work and, like American Gods and Anansi Boys, explores the intersection of immortals and regular everyday folk. It’s a short, brisk read that sucks you in almost immediately. While not a “dense” book in any way, shape, or form, I did find myself highlighting many sentences that were both funny and insightful. Great stuff!
Stardust by Neil Gaiman 4⁄5
Enjoyed it and blew right through it, but it’s not quite on the same level as American Gods, Neverwhere or Ocean at the End of the Lane. For most authors, probably 5 stars but Gaiman has set the bar so high for himself.
HBR Guide to Creating Persuasive Presentations by Nancy Duarte 4⁄5
Great little book! Concise, to the point, and full of actionable advice for how to give a great presentation: from the earliest stages of planning through to on-stage advice and everything in between. I would happily hand this to anyone getting started in public speaking as a fantastic place to begin.
If, on the other hand, you’ve read Resonate or Garr Reynolds’ books—you probably don’t need to rush out and grab this one. The information is very solid, but you’ll have heard a lot of it before.
Designing and Engineering Time by Steve Seow 5⁄5
Discussions of performance quickly turn into a discussion of stats and metrics. What really matters, though, is how users perceive the performance of your application/program/website. This book focuses on that: improving the perceived performance of your project. It’s a quick read, but each chapter offers up a wealth of handy references for digging deeper into a specific topic discussed.
An excellent introduction to time perception for any engineer, developer, or designer.
Brilliance by Marcus Sakey 5⁄5
I was concerned at first from the synopsis that I was basically going to be reading an X-Men clone, but while there are similarities, that ended up not being the case. Brilliance does focus on norms and “abnorms”, but their abnorms are much more grounded in reality—people with exceptional analytical abilities, or the ability to read how people are going to act based on facial expressions/posture/muscle movement, etc. This meant that this alternate version of modern day still felt real.
It’s not a deep, meaningful read—he touches on some interesting social topics, but doesn’t really explore them in much depth—but it’s a darn fun one. Even though a couple of the plot points were fairly predictable, the way Sakey tells the story kept me enthralled from page one. A hard book to put down.
Lexicon by Max Barry 5⁄5
Incredibly fast-paced, smart, engaging characters, and enough twists to keep you guessing a bit. Thoroughly enjoyed it!
Dust by Hugh Howey 5⁄5
Wool was fantastic. Shift was very, very good (though perhaps not quite as excellent, still a very fun read). Dust? Might be my favorite of the trilogy.
Quick-paced with lots of answers to questions that had come up throughout the first two books, and a satisfying conclusion. If you liked the other two books, picking this up is a no-brainer.
The Great Indian Phone Book by Assa Doron & Robin Jeffrey 4⁄5
An incredibly thorough look at the mobile technology industry, and its impact, in India. Loads of fascinating examples of how mobile, even the most mundane capabilities of the technology, is significantly altering life throughout India. The research is exhaustive in detail, though at times a little repetitive.
Recommended for anyone interested in seeing how sometimes the simplest technologies can have the most profound impact on society.
Ready Player One by Ernest Cline 4⁄5
It was loads of fun—quickly paced, funny and enthralling. Sort of a Willy Wonka meets sci-fi sort of thing.
The only reason I won’t quite give it 5 stars is because while I enjoyed the references to 80’s music, games and film, it did get a little excessive at one point early on. One of the early chapters seemed to be an excuse to just list all of the author’s favorites in one place. For the most part, the references were fun—but this chapter was a little gratuitous.
Minor complaint, though. After that chapter the author settled into a nice rhythm and I was hooked.
Just Enough Research by Erika Hall 5⁄5
In typical A Book Apart fashion, Just Enough Research is one you’ll want to keep nearby so that you can go back to it frequently. Erika manages to pack a ton of information into a small package (154 pages!) without making it a chore to read.
This book should be on the shelf of anyone involved in building websites.
The 100-Year-Old Man Who Climbed Out the Window and Disappeared by Jonas Jonasson 4⁄5
The book’s title very aptly describes the beginning of the story—the interesting bit of course is what happens after that. The chapters bounce back and forth between telling us about the main character’s current adventures on the run from both the police and a notorious criminal gang and telling us about his life up to that point—a rich story in itself that weaves together major historical figures such as Churchill, Stalin, Truman, Nixon and more.
Quirky and fun!
Sass for Web Designers by Dan Cederholm 5⁄5
This is now at the top of my list for resources for people just starting with Sass.
If you’ve been using it for a while, you might pick up a few tricks, but you’d most likely be better off picking up something else. If, however, you’re just starting to dip your toes into it (or are skeptical) this is pretty much perfect. Short, to the point, and very clearly explained.
Jim Henson: The Biography by Brian Jay Jones 5⁄5
Henson’s creations were so imaginative and timeless—it’s fascinating to get a behind-the-scenes look at how he brought them to life. Jones weaves a great story pulling in bits from Henson’s journal and anecdotes from his family and coworkers along the way.
It’s not all praise. Jones does (respectfully) mention several flaws and mistakes—both professional and personal. The result is a well-rounded and thoroughly enjoyable look at a creative genius.
Remote by Jason Fried & David Heinemeier Hansson 4⁄5
How much you’ll get out of this book depends on your current situation.
If you’re already working remotely and are hoping that the book will give you new insights to do it better, then this book is just ok. You’ll probably get more validation than you will takeaways.
However, if you are A) someone trying to decide whether to institute remote work at your company or B) someone trying to convince your boss to let you work remotely, then this is the ideal book for you. The case against remote work is carefully dismantled, piece by piece.
Front-End Styleguides by Anna Debenham 5⁄5
A brisk 60 pages or so of excellent information for anyone curious about styleguides. Kind of a no-brainer, particularly given the ridiculously low cost.
Nexus by Ramez Naam 4⁄5
Scientists have been experimenting with using the human mind to perform actions: everything from moving a mouse cursor to moving the tail of a rat. Ramez pushes this idea to the limit: what if humans could link their brains together? What if you could share thoughts and emotions with other humans? What if you could “enhance” this functionality with packages (such as one that steadies your nerves)? And what if you could actually control other people using this same technology?
Nexus hits the ground running on the first few pages and then never lets up. The concept is fantastic and the science seems pretty spot on (unsurprisingly given Ramez has also written a book about biological enhancement). There is a lot of good discussion about the ethical implications of a technology that powerful, though it’s not very subtle and can get a little heavy-handed at times. But that’s a relatively minor complaint for an otherwise great read.
14 by Peter Clines 3⁄5
‘14’ is a bit of a departure from my standard reading material. It’s a part mystery and part horror with humor mixed in throughout and just a shade of science-fiction.
The first part of the book, as the characters try to figure things out, was pretty gripping. Each chapter provided a few clues and left me eager to find out what was going to happen next—I had a hard time putting the book down. Unfortunately, the ending didn’t do the rest of the book justice. It may be just personal taste, but from the moment where you find out what’s in room 14 on, it seemed to shift quickly from a taut mystery to a messy pile of whatever bizarre elements the author decided to toss into the story next.
The Lives of Tao by Wesley Chu 5⁄5
This book was just plain old fun. An overweight, under-achieving IT tech gets inhabited by a member of an alien species who must now turn his new host into an agent capable of engaging in the age-old, secret war that has been taking place. Action, top-secret spy stuff, snarky humor and even a few brief philosophical discussions ensue. Reading the sequel next.
Why We Need Responsive Images
The responsive image discussion is the Never Ending Story of web technology. Some people were, understandably, getting frustrated and questioning whether it was worth the fuss so I decided to see how much page weight could be saved by serving appropriately sized images to small screens.
Windows Phone 8 and Device-Width
Building off of last year’s post on snap mode and responsive design in IE10, it turned out the mystery was not yet completely solved as Windows Phone 8 has some odd behavior when it comes to what to return for device-width.
Setting a Performance Budget
Getting people invested in performance has to happen from the start of a project. Setting a performance budget is a great way to help with that.
Why We Need Responsive Images: Part Deux
Returning to the topic of responsive images, this time looking at the impact on memory, decoding and resizing.
Media Queries Within SVG
There was a lot of excitement around using media queries inside SVG files, so I decided to test a little and see which browsers applied those media queries and when.
]]>Bad performance online is not a technological problem; it’s a cultural one. Tools are increasing in number and improving in quality. Posts and books are written explaining the tricks and techniques to use to make your site weigh less, load faster and scroll more smoothly. Yet for all this attention, performance online is getting worse, not better…
If we want to reverse the troubling trend of increasingly bloated and slow sites, we need to attack the cultural and procedural issues with the same fervor that we attack the technical ones. Only by thinking about performance throughout the process can we reverse the trend and start making experiences that delight our users, not frustrate them.
If you’re interested in more about this, I also wrote a chapter for the latest Smashing Magazine book about creating a culture of performance and will be talking about it at In Control in February (if you sign up, use code ‘100TIM’ to save $100).
]]>It’s a common dream because it plays off a very common fear: that of being exposed. Of having nothing to hide (both metaphorically, and in the dream, anatomically).
When I wrote my book, I started off writing in a vacuum. The first several months, I quietly put words on a page sharing with very few people outside of my publisher. I was hungry for feedback, but my discomfort outweighed that hunger. I have an incredibly active inner critic (a grumpy little fellow) and he was letting me know rather loudly that sharing what I was writing and thinking was going to expose me too much to people I respected.
Surely if I did this, they would discover that I don’t know what I’m doing. That I’m making this up as I go. They’d read what I had written, eyes glued in horror as they realized how absolutely unqualified I was to be helping other people learn things I was still figuring out.
Eventually, and painfully, I got over this. I started getting bolder about sharing what I was writing, asking questions and seeking feedback. The impact it had on what I produced was dramatic. Thoughts that were loosely formed started to cement themselves into something concrete. I started connecting the scattered dots in my head to form something far more interesting.
I’m proud of how the book turned out. There is little doubt in my mind that much of that is owed to the fact that I sought feedback as I worked.
Idea ping-pong
In his book, Where Good Ideas Come From, Steven Johnson talks about the importance of not thinking in isolation.
The trick to having good ideas is not to sit around in glorious isolation and try to think big thoughts. The trick is to get more parts on the table.
One of the best ways to “get more parts on the table” is to get more people involved and get those different perspectives. If those opinions differ from your own, that’s all the better. Dissonance is important and valuable.
This is why I lament the decline in personal blogging.
When I first started working on the web, it was this beautiful era of personal blogs. Every day there were posts discussing new techniques people were using, countering points made by someone else’s post from a day before, sharing people’s progression from “I don’t have a clue what’s going on” to “I think this might solve my issue”. I owe so much to the constant ping-pong of ideas that was happening.
After awhile, I decided to build myself one of them fancy ‘blog’ thingies so I could write stuff too. I wanted a place to experiment and write about what I was learning. Part of the reason was that I thought it might be helpful to share these things somewhere for my co-workers to read. And part of it was because it just seemed so fun.
I don’t think I really understood it at the time, but the real value turned out to not be so much for the people I worked with, but for me.
I learned quickly that every time I write a post about some new technique or tool, I learn more about that topic. I sit down to explain something, and as I do I realize I didn’t really understand it well enough to write lucidly about it. So I dig into the topic a bit more and find the answers so that I can. It’s one thing to use a technology. It’s another to be able to explain it to someone.
When I write about my opinion on something, I find that having to articulate my thoughts into words requires me to have a bit clearer picture of why exactly my opinion is what it is. Don’t like frameworks? Fine. But why? What was it that made me want to avoid them? How could I explain that to someone who was a big fan of them?
Each time I publish one of these posts, I expose my ideas and opinions to feedback. I openly invite people to disagree, to critique and to improve on my ideas. And just as it did when I wrote the book, that feedback helps me to refine my thinking and learn a little more. (For another person’s perspective on “having opinions” online, I highly recommend Marie Connelly’s post.)
Write something
Someone recently emailed me asking for what advice I would give to someone new to web development. My answer was to get a blog and write.
Write about everything. It doesn’t have to be some revolutionary technique or idea. It doesn’t matter if someone else has already talked about it. It doesn’t matter if you might be wrong—there are plenty of posts I look back on now and cringe. You don’t have to be a so called “expert”—if that sort of label even applies anymore to an industry that moves so rapidly. You don’t even have to be a good writer! (I’ve been told I abuse commas in unnatural ways.)
None of that really matters. What matters is articulating your ideas—“sharing ideas and passions” as Jeffrey Zeldman elegantly stated. When you do, your understanding of those ideas and passions will increase and you (as well as others who take the time to discuss what you wrote) will benefit immensely from the discussions that follow.
At the risk of sounding like the old man who reminisces about the “good ole’ days”, I miss the days where everyone was constantly writing about what they thought and learned. I hope we don’t let go of that.
]]>Besides, I hesitated to post a condemnation of the current state of affairs when I didn’t have a solution to offer up.
But then I read Jeremy Keith’s beautifully written post “A map to build by” discussing the web and our ability to impact what it becomes. In it, Jeremy said:
Perhaps we need our own acts of resistance if we want to change the map of the web. I don’t know what those acts of resistance are. Perhaps publishing on your own website is an act of resistance—one that’s more threatening to the big players than they’d like to admit. Perhaps engaging in civil discourse online is an act of resistance.
Jeremy was discussing other issues, but I think those words make sense in many contexts.
So let’s consider this my act of resistance.
If you’ve not been paying attention, the responsive image discussion has continued full-steam ahead, but without any conclusion. If you want to catch-up in full, I highly recommend Bruce Lawson’s excellent review of where we are and Mat Marquis’s gist of where things stand.
For anyone not willing to read the full thing, here’s a TL;DR version:
- The picture element was never received with any sort of enthusiasm by implementors and, while not dead, is barely what you would call alive.
- The srcset attribute was implemented by WebKit for resolution only, and by Blink (also for resolution only) behind an experimental features flag.
- Tab Atkins proposed a new approach, src-N, that addresses the three primary use cases (art direction, resolution, screen size) for responsive images.
So that’s where we stand today. Just about everyone agrees that srcset is not extendable—hence why no one has implemented the extended syntax for the other use cases. Mozilla, in fact, has come right out and said they have zero interest in it at all—they closed the issue as WONTFIX.
On the other hand, just about everyone has some interest in src-N. It solves the use cases, has implementor support from Blink (the Client Hints proposal was even refactored to incorporate src-N) and Mozilla, and has the support of the RICG. Seems like a great situation to be in.
The one exception, however, is the WebKit gang who have labeled src-N “a grotesque perversion of the HTML language”. Guess you could say they’re not big fans of the syntax.
Coincidentally (or perhaps not so much) the same group that came up with srcset is the one implementor not willing to leave it behind. Unfortunately, because this particular implementor wields a big stick, Blink sounds like they’re willing to play follow-the-leader and do as WebKit does.
The fact that Blink is apparently calling src-N dead in the water because one implementor (WebKit) opposes it, and yet hasn’t said the same about srcset—which is also opposed by an implementor (Mozilla)—speaks volumes about the rationale here: it’s not because there isn’t a consensus, it’s because of who is wielding the heavier stick. The fact that the RICG has thrown its support behind src-N appears to not really figure into the equation.
So here we are, 21 months after the RICG was started. We finally have a solution that both the RICG and a majority of implementors are interested in, and it looks like it’s at risk of not happening because of one single implementor’s dissenting opinion. The fact that the currently discussed solution on the WHATWG list is a Frankenstein combination of HTML and inline CSS doesn’t do much to lift my spirits (thread starts here).
Here’s the thing: the existence of politics in this doesn’t surprise me, it’s an unfortunate reality. I’m also pragmatic enough to recognize why WebKit has such a large influence here. I get all that.
But I find it upsetting that one party can throw this much weight around, discount the opinions of developers and other browser implementors in one fell swoop, and then watch as everyone accepts their opinion as the ultimate conclusion.
Just as I did last May, I refer back to the design principles of HTML once more. In terms of priority, the top three constituencies are, in order:
- Users: This is definitely good for users. They’ll benefit from src-N by getting less page weight and improved rendering times, among other things.
- Authors: The RICG is the representative for developers in this scenario, and they are in support of src-N.
- Implementors: Here again there is almost a full consensus. Mozilla and Blink have shown interest; WebKit is stonewalling.
I’m no standards expert but when I can place checkboxes next to the top two priorities, and when more representatives of the third priority support something than do not, then it seems to me that thing has merit. If those principles can be thrown out the door when one implementor opposes a solution, then what exactly are they there for in the first place?
I openly admit I’m not sure how to solve the situation. Here’s what I do know.
- We can’t stick our heads in the sand and pretend an attribute that lets us give our fancy iPhones and iPads a big, fat, shiny image is a good enough solution.
- There is a very, very small likelihood of everyone agreeing on one perfect solution. As a result, concessions need to be made. For example, the RICG is supporting src-N because of the interest from implementors despite many members still preferring the picture syntax.
Given that no solution is likely to be agreed upon by everyone we need to find one that solves the use cases, can be responsibly polyfilled, has the approval of the authors and makes a majority of implementors happy. And then we need to ship it.
Maybe that solution is src-N, maybe it’s something else. But I’d prefer to work on a web where we didn’t have to wait for WebKit, or any other single party for that matter, to make up our minds for us.
]]>Way back in June, I wrote a post about the need for responsive images. The post discussed the topic from the typical point of view: the impact on page weight. But serving large images and then relying on the browser to scale them has other downsides as well.
Memory
Shortly after the first post, I chatted with Ilya Grigorik a little and he brought up the toll inappropriately sized images can have on memory.
Once the browser decodes an image, each pixel is stored in memory as an RGBA value:
- one byte (a value from 0–255) for red
- one byte (a value from 0–255) for green
- one byte (a value from 0–255) for blue
- one byte (a value from 0–255) for alpha
This means that every pixel of an image takes up 4 bytes of memory. It’s pretty easy to see the impact this can have when you serve large images to the browser. Here’s an example: let’s say we provide a 300px by 300px image and the browser scales it down by 100px, displaying it at 200px by 200px. The decoded image holds 50,000 more pixels than an appropriately sized version would ((300 x 300) - (200 x 200)), and at 4 bytes apiece those excess pixels take up 200,000 bytes. That’s about 195kB of useless information in memory.
Ilya also pointed out that this makes it pretty obvious which images developers should target first with an appropriate responsive solution. Let’s look at two more images. This time, we’ll only require the browser to scale them down by 50px—a seemingly innocent number:
- 600px by 600px, downscaled by 50px to 550px by 550px
- 200px by 200px, downscaled by 50px to 150px by 150px
At first glance, it seems like these two images would have a similar impact in terms of memory—after all, each are only being scaled down by 50px. But some simple math shows that’s not at all the case:
- (600x600) - (550x550) = 57,500px
- (200x200) - (150x150) = 17,500px
The difference is huge: the larger image takes up 3.3 times as many excess pixels in memory—a difference of about 156kB!
So first rule of responsive images: hit those big images first.
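If you want a rough sense of where your own pages stand, here’s a back-of-the-envelope sketch you can paste into the browser console. It uses the same 4-bytes-per-pixel math as above and deliberately ignores details like devicePixelRatio and image format, so treat the output as a ballpark figure rather than a measurement:

var waste = [].map.call(document.images, function (img) {
  // Pixels the browser had to decode versus pixels actually shown on screen
  var naturalPixels = img.naturalWidth * img.naturalHeight;
  var displayedPixels = img.clientWidth * img.clientHeight;
  var wastedBytes = Math.max(0, naturalPixels - displayedPixels) * 4;
  return { src: img.src, wastedKB: Math.round(wastedBytes / 1024) };
}).sort(function (a, b) {
  return b.wastedKB - a.wastedKB;
});

// Biggest offenders first—these are the images to target first
console.table(waste);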
Decoding
Another area frequently overlooked in regards to the performance of responsive images is the time it takes for browsers to decode them.
To test this, I set up super crude and simple test pages, purposely keeping layout as simple as possible. On each page, the browser displays a set of images at 200px wide. The actual size of the images themselves varies from page to page. On one page, the images are 1200px wide (6x). On another, they are 400px (2x). Finally, there is one page where all images are appropriately sized at 200px.
Each page was loaded 10 times per browser. Unfortunately, developer tools are still in their infancy when it comes to this kind of analysis, so for decoding times I used Chrome on the desktop as well as IE11, since those browsers have tools that expose this sort of information. I also tested Chrome for Android, but decodes are not logged as a separate event there, and a bug results in the timeline not properly recording image resizes, so I didn’t include that data here for the time being.
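Tooling aside, you can get a rough comparative number from JavaScript using the newer img.decode() promise (which wasn’t available when these tests were run): fetch the bytes first so they’re likely cached, then time how long the decode takes. It’s a blunt instrument—other work can sneak into the measurement—but it’s enough to compare an appropriately sized image against an oversized one. A sketch, with placeholder URLs:

function roughDecodeTime(url) {
  // Fetch and read the bytes first so the timing below is mostly decode, not network
  return fetch(url).then(function (response) {
    return response.blob();
  }).then(function () {
    var img = new Image();
    img.src = url;
    var start = performance.now();
    // decode() resolves once the browser has decoded the image
    return img.decode().then(function () {
      return performance.now() - start;
    });
  });
}

roughDecodeTime("/img/photo-200.jpg").then(function (ms) { console.log("200px: " + ms.toFixed(2) + "ms"); });
roughDecodeTime("/img/photo-1200.jpg").then(function (ms) { console.log("1200px: " + ms.toFixed(2) + "ms"); });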
The impact on decoding varies drastically depending on the size of images being passed to the browser.
Chrome (median of 10 runs):

| Image Size | Number of Decodes | Decode Time | % Difference in Time from Resized |
|---|---|---|---|
| Resized (200px) | 28 | 6.65ms | -- |
| 2x (400px) | 49 | 20.59ms | + 209.6% |
| 6x (1200px) | 277 | 163.08ms | + 2352.3% |
When images were appropriately sized (200px wide), decoding was relatively light. From the 10 runs on Chrome, the median was 28 decode events making up a grand total of 6.65ms.
When images were sized at two times (400px wide) the intended size of display, those decoding times take a sizeable jump. The median for Chrome was now 49 decodes and 20.59ms (209.6% increase).
Decoding times for images that were six times (1200px wide) their intended display size were, of course, much worse. The median for Chrome was 277 decodes taking up a total of 163.08ms. This represents an incredible 2352.3% increase over appropriately sized images and a 692% increase over double-sized images! Honestly, the increased number of decodes seems a little bizarre and unneeded—to the point where I wonder if there’s not a bug somewhere.
Update: After some digging, Ilya Grigorik discovered what is going on here with the number of decodes. Chrome is decoding chunks of image data as it’s received—not decoding the same images repeatedly. The time increase is still legit, but at least we have a nice explanation for the number: larger images mean more chunks of data to decode.
While some of the decoding time can be attributed simply to the large images themselves, it does appear the scaling (combined with the layout) has an impact as well. A quick test of the same large images scaled only about 50%, to 600px, still showed ~80 decodes and 118ms of decode time.
IE11 (median of 10 runs):

| Image Size | Number of Decodes | Decode Time | % Difference in Time from Resized |
|---|---|---|---|
| Resized (200px) | 10 | 6.2ms | -- |
| 2x (400px) | 10 | 21.35ms | + 244.4% |
| 6x (1200px) | 10 | 101.37ms | + 1535% |
IE11 showed similar results. For the resized images, IE11 recorded 10 decode events (one per image) with a median total decode time of 6.2ms. For the 2x images, IE11’s decode count stayed at 10, but the total time for those decodes jumped to 21.35ms (a 244.4% increase). Once again, the 6x increase was staggering. While still reporting 10 decodes, the decode time increased to 101.37ms—a 1535% jump from the time spent on appropriately sized images.
I expected that this would be noticeable in some way (other than the slower time to load and display the images), but after adding 70–80 additional images, there wasn’t really a sizeable dip in FPS as you scrolled down the page on either Chrome or IE11.
Even though the tools aren’t there, I fired up Firefox for a visual spot check, and there the FPS took a noticeable hit for large images. Given that we’re able to get better scrolling performance by hacking things together in JavaScript, I’m willing to say the browsers are missing something here and this is a bug that should be fixed.
Still, that’s not an excuse to ignore the issue. Scrolling performance has a huge impact on the user experience (something both Facebook and Etsy have seen first hand). In many scenarios this FPS hit can be really harmful. Any image heavy page with infinite scrolling, for example, would be a prime candidate for jank caused by long decodes. I’m also willing to bet there is a hit here on battery life as a result of all the extra work.
Resizing
But wait, there’s more! As part of the process of getting large images to display at their appropriate scale, Chrome resizes the images. IE11 showed no image resizes, so it’s either not taking this step, including the resizing information as part of the decode in their tool, or simply not exposing the resizing information at the moment for whatever reason.
Those image resize times (unsurprisingly) are also impacted significantly by the size of the image being passed down.
Chrome:

| Image Size | Number of Resizes | Resize Time |
|---|---|---|
| Resized (200px) | 0 | 0 |
| 2x (400px) | 19 | 16.06ms |
| 6x (1200px) | 31 | 115.77ms |
Of course, when all the images are appropriately sized, no image resizes are necessary. When the images are double sized, Chrome recorded 19 resizes, taking up a total of 16.06ms. For the page with images at six times the displayed size, the number of resizes grew to 31, taking up 115.77ms—an increase of 620.86% over the time to resize the 2x images.
Summary
One thing to come out of this testing: we need better tools. From what I can tell, only Chrome and IE11 provide this information, and each of them has things to improve on here. IE11 and Chrome Mobile seem to lack any resize information. Chrome doesn’t attempt to link their decodes to the actual image (whereas in IE11, if you hover over a decode it tells you the url of the associated image). And whatever is causing the exponential growth in decode count on Chrome sure seems fishy.
There’s also a great deal of follow-up that could be done here:
- How do other browsers handle image decoding and resizing?
- How do these results change as file weight is adjusted up or down (important for compressive images, for example)?
- Does file format impact decoding and resizing times?
Those questions aside, it’s already apparent that serving large images to the browser has some potentially serious side effects from a rendering perspective. On the test page with 6x images (not unusual at the moment on many responsive sites), the combination of resizes and decodes added an additional 278ms in Chrome and 95.17ms in IE (perhaps more if we had resize data) to the time it took to display those 10 images. That much time spent on decoding and resizing can not only delay rendering of images, but could impact battery life and scrolling behavior as well.
While page weight and load time are the most commonly cited examples, those clearly aren’t the only metrics suffering when we serve images that are too large to the browser.
There has been a Google+ post by Rick Byers floating around the last few days claiming the best way to deal with the delay is to eliminate the double-tap zoom altogether. With no double-tap gesture to worry about, browsers no longer need that 300ms buffer and can fire click events immediately.
One of the recommendations made was to kill the double-tap gesture on Chrome and Firefox for Android. To do this, you have to kill scaling:
On Chrome Android and Firefox this involves using the “viewport” meta tag to disable zooming (with user-scalable=no or initial-scale=1,maximum-scale=1)
The popularity of the post concerned me because while the comments discussed it a bit, the original post didn’t mention one massive, glaring issue with this recommendation: the impact on accessibility.
Disabling scaling means not only is there no double-tap to zoom, but there is no pinch-to-zoom either. Many users depend on this functionality for accessibility: for them, disabling scaling effectively renders your site broken and useless.
So my advice is to avoid this like the plague in nearly all scenarios, for several reasons:
- While only Chrome and Firefox for Android will benefit from it from a performance perspective, everyone loses out on an important accessibility feature. For users who require the ability to zoom to use a site, you’ve just broken the experience for them.
- Chrome 32 is going to get clever and disable the double-tap to zoom feature without requiring scaling to be disabled, as long as “the computed viewport width in CSS pixels is less than or equal to the window width” (so basically, if you’re using width=device-width in your viewport meta tag). Double-tap zoom will be gone—as will the 300ms delay—but users will still be able to pinch-to-zoom if necessary. Cake…eating it too…all that.
- iOS double-tap isn’t going anywhere anytime soon. iOS uses the double-tap gesture to provide a scroll feature so you’re still going to have to account for the delay on one of the most popular mobile browsers.
As Patrick Lauke pointed out on Twitter, this leaves three different solutions for developers, depending on the scenario:
- Use something like FastClick, to account for iOS.
- Use FastClick or kill scalability (as we’ve just discussed, a bad idea) for Chrome versions below 32.
- Use width=device-width in their meta tags and celebrate when Chrome 32 and later don’t have a delay.
So what’s a performance and accessibility loving developer to do if they want to get rid of the delay?
At the end of the post, Rick states to “just switch to using FastClick”. That’s frequently my recommendation as well. FastClick does a good job of dealing with the issue—without losing the ability to pinch-to-zoom—and does so at ~10kb minified. That’s not super lightweight, but it’s not too painful either.
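For what it’s worth, wiring FastClick up is tiny. Something like this (a sketch that assumes the library is already loaded on the page; the viewport meta tag keeps width=device-width so pinch-to-zoom stays intact):

window.addEventListener("load", function () {
  // Attach FastClick to the whole document so taps fire click events immediately
  FastClick.attach(document.body);
}, false);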
Another option is to use something like Tappy!, a normalized tap event from the always-clever folks at Filament Group. Tappy! lets you use a “tap” event that works for touch, mouse and keyboard. Not only do you avoid the 300ms delay, but the script is under 1kb minified (though it does require jQuery or a similar framework).
The point being: there are ways to successfully eliminate the click delay without negatively impacting accessibility. Until more browsers start solving this themselves, preserving scaling and finding another way to combat the 300ms delay is our best option.
- Top five sites in terms of monthly use in the US are all write/read experiences—they don’t work unless people write content to them.
- Tim Berners-Lee’s original view of the web—”a place where we can all meet and read & write.”
- In the United States, 78% of Facebook monthly users are mobile. 60% mobile on Twitter, 40% mobile on YouTube.
- “The Mobile Moment”—when your mobile traffic crosses your desktop traffic and becomes your majority experience.
- 127% growth of mobile-only Facebook users in the last year.
- 40% of all tweets created on mobile in 2011. 3 hours of video uploaded per second on YouTube mobile. $13 billion in mobile commerce in 2012 on eBay.
- Fastest growing activities in mobile apps are utilities—shopping, accomplishing, preparation.
One Handed Use
- Polar specifically designed for one-handed use. Both consumption and creation process were tested by timing how quickly people could complete their tasks.
- 75% of all interactions on a smartphone screen are with a single thumb/finger.
- Design for the extremes and the middle works itself out. For example, shears are tested with people who have arthritis. If they can comfortably shear, anyone can.
- 68% of consumer smartphone use happens at home.
- Don’t let the keyboard come up. How can you design your service without forcing people to use the keyboard? Examples: map UI for destinations, tappable date picker, smart defaults, offering popular suggestions, sliders for ranges.
Focused Flows
- Focus on simplifying the core flow—allow users to complete core tasks as fast and simply as possible.
- Getting creative is about distilling things down to their essence until you find the simplest way to do them.
- Reducing 23 inputs to 11 for the Boingo Wireless signup increased conversions by 34% and decreased sign-up time by 53%.
- Hotel Tonight—3 taps and a swipe to book a hotel. This is a competitive advantage.
- It takes big changes to go small. You have to re-envision your product for a small screen, focused flow.
- When you try to get creative about focusing core flows, you’re more likely to not go far enough than to be too extreme. The more uncomfortable you’re making people, the more likely you’re thinking along the right lines.
Just in Time Actions
- Just in time education. We learn better in the moment than with upfront instructions/tests.
- Bring actions in just as they matter. Example: hiding the bottom tab bar while the visitor is scrolling through an application.
- In the US, 52% of laptop owners have smartphones. 13% have smartphones, tablets and laptops.
- 90% of people with multiple devices use them sequentially.
- 81% of people use their smartphone while watching TV. 66% use a laptop while watching TV.
- The web is experienced across multiple screens—both simultaneously and sequentially.
Cross Device Usage
- Access: the ability to use and view features across multiple devices and form factors. Example: browsers enabling tab syncing across devices.
- Flow: not just content should move seamlessly between devices—processes should as well. Example: directions syncing between devices.
- Control: allow devices to control the experience on other connected devices. Example: Polar lets you control the large screen experience on an XBox One from your phone.
- Push: allow people to push experiences from one device to another. Example: eBay on large screens lets you add/update photos on your item from your mobile device using their application.
- Newer specialty devices will force us to simplify and consider what each device does best. It’s not sustainable to assume all new devices will handle every form of input equally well. Example: Polar on Google Glass would be cumbersome as it requires multiple voice inputs.
One of the ones that stands out is Nate Koechley’s “Professional Frontend Engineering” presentation (transcript and video). When he first gave the presentation in 2008, he was working at this massive company called Yahoo! and was preaching about the importance of progressive enhancement, graded browser support, and the massive scope of knowledge a professional front-end engineer needed to know.
I mean, just listen to some of the wisdom that is just as relevant, if not more so, today.
On the first of Yahoo’s guiding principles:
First is availability — this is the bedrock of building a website. If the site’s not available to people, game over, we might as well not even bother. And so our job is to make sure that everything we do is out there, working, available to everybody. This needs to be true regardless of where they are in the world, or what special circumstances they may be under — everything needs to be available and accessible. And I purposely used the word availability — you hear a lot of talk about accessibility, but availability applies to everybody, and it’s an umbrella term that encompasses accessibility also. And so I like to think about just making the sites available to everybody, whatever that entails.
On another Yahoo! guiding principle, richness:
Another thing to keep in mind about richness, as a developer, is that you’re probably not the average user. Here at Yahoo!, we sit on top of a tremendously fat internet connection – everything’s blazing fast, many of us work on brand new hardware with lots of memory — and so as we’re doing rich JavaScript development, as I’m writing lots of software that gets executed in the browser, it may work really well for us, but we need to remember that different people have different sort of equipment, and bandwidth, and so forth. And so remember that — that’s another reason to build in the layers, and to make things defensive.
On building things that are stable and future friendly:
The web is young; we don’t know what’s coming around the corner. We don’t know what’s going to be invented next, what technology’s coming next. And so it’s important to continually invest in stability, in strong infrastructure, in stable code, so that we have that strong platform to stand on as the next thing happens. So by focusing on stability, you’re really investing in your future, and again, preparing for that future — whatever it might bring.
On support not being binary:
When I was doing agency work, before joining Yahoo!, I would often get asked at the beginning of a project: “which browsers do we need to support on this project? Are we going to support Netscape 4? Are we going to support IE 5 on the Mac?” And it was always asked in a binary sense, where the answer is either yes or no. And we realized over time that that’s counter to our goal of maximum availability – if we’re ever choosing not to support a particular browser, that means we’re choosing to have less than complete availability. So that was the first thing that we had to understand, is that support is not binary. The second thing we had to understand is that support doesn’t mean identical. Forcing every pixel to be in the exact same place on every user agent in the world isn’t what it means to support the user agent. Instead, to support a browser, we want to give the browser what it can handle, in the most efficient way possible.
And finally, on users:
And then finally, and I think most importantly, is that for me, front-end engineering comes down to supporting users. It’s our job to make sure that they have a great user experience. If they’re using a screen reader, I want to make sure that they’re having the best possible experience with all of our websites. If they’re using a particular browser, or if they want their font bigger, or if they’re on a mobile device — it’s our job to give them that great experience…We want to have stubborn empathy for what they need, and what they want, and what they’re going through — what can we do to make all that better?
Seriously, this stuff is pure gold especially today.
That was 2008. A Dao of Web Design was written by John Allsopp in 2000. Nick Finck and Steve Champeon first coined the term “progressive enhancement” in 2003. That’s how long people have been arguing for embracing the nature of the web—embracing its flexibility, instability, and unpredictability. That’s also how long so many have been resisting it.
In my last post, I argued for embracing the web’s ubiquity. Many comments, while not unexpected, were still a bit discouraging. Some said it wasn’t practical. Some stated that not everyone “has the luxury” of thinking about it. Similar comments seem to come up whenever this topic gets discussed.
The many developers who have been pushing for this general approach for over a decade weren’t doing this because it was impractical. On the contrary, these philosophies (Key distinction here: philosophies, not specific techniques. Techniques have a far more limited shelf-life.) that made sense then and still make sense today have been promoted for so long because they are practical. Because building sites that do not consider what happens when situations are not ideal is a sure-fire way to cause headaches. I’ve been there myself and it’s never fun.
When we dismiss the less-than-ideal situation, we dismiss traits inherent in the very platform we’re building for. And whenever you do that you open the door for trouble.
Is there a cost associated in building robust sites? Yes, though it’s not nearly as bad as many seem to think. As with anything, the more you learn the quicker the process will go. Eventually, it just becomes the way you work. I remember going through this with CSS. At first it was a beast and yeah, building with tables was easier. But as I learned more and more about the spec, about how browsers behaved, and about how to make my process efficient, that time gap gradually reduced itself to being minimal, at most.
There is also a cost involved with not building sites this way. In a comment on my last post I referenced the Boston Globe and how it appears on Google Glass. The point is not that we all need to be testing on Google Glass—time will tell how well that device does. The point is that here is a brand new device and form factor, and they didn’t have to do a single thing to get their site working on it. If Glass takes off, perhaps they may see a way (and a value) to optimize for it further, but what people currently see is a perfectly acceptable and usable experience.
Others who made assumptions about capabilities, support, networks, etc. are going to be scrambling to fix bugs and make their sites just “work”. Meanwhile the folks at the Globe are going to be busy making improvements to their experience—adding new enhancements that make the experience even more awesome. That, and seeing how they can use Glass to punk each other.
When topics like this come up, we focus a lot on the benefits for people with less than ideal situations. That appeals to some of us (it does to me!) and less so to others. For those unmoved by that, consider this: by building something that can handle less than ideal circumstances, by removing assumptions from your development process, you make your site better equipped to handle the unpredictability of the web’s future.
Building something that will hold up as new devices and browsers come onto the scene—that seems like something practical that we can all get behind.
]]>The power of the web is its ubiquity. It is the web’s superpower, and its omnipresence is what sets it apart from native platforms.
This is what excites me about the web and it’s why web technology tends to be my focus. That ubiquity, that ability to get your information to anyone with a device connected to the web, is incredibly inspiring. This is why I tend to get so frustrated when we do things that eliminate that superpower.
When we don’t consider what an experience is like without JS, we’re crippling that superpower.
When we use techniques that work only on top-of-the-line modern browsers, but don’t consider what happens in other browsers, we’re crippling that superpower.
When we build fat sites that are incredibly slow to load on older devices or slower networks, if they can even load at all, we’re crippling that superpower.
When we neglect to consider people with accessibility needs, we’re crippling that superpower.
When we slam the door on people because of the device they’re using, we’re crippling that superpower.
As you make decisions that don’t include each of these groups you continually reduce the unique power of the web. Individually it might not seem like much, but with each step you’re cutting off more and more people from being able to access content you’re putting online. This makes little sense no matter if you choose to look at it from a business perspective or an ideological one.
In a recent post on the Pastry Box Mat Marquis talked about browser support and the different views web professionals may take when it comes to their work:
Some people want their paychecks and to go home, and that’s fine. You and me, though—we’re gonna work harder than they do. We’ll build things that ensure that entire populations just setting foot on the web for the first time can tap into the collected knowledge of the whole of mankind.
That last line right there is why I enjoy working with web technologies. It’s what I get excited about.
The web has the power to go anywhere—any network, any device, any browser. Why not take advantage of that?
Update: This post ended up being a little more controversial than I anticipated, so I ended up following it up tackling the topic from another angle in a follow-up, “Being Practical”. Aaron Gustafson also followed up with an excellent look at the “cost” of progressive enhancement that is well worth your time.
]]>But blog posts and articles can’t cover every scenario. The author knows nothing about your team, your site, your business goals, your deadlines, the behavior of the people who visit your site or the technology they use to visit with. There are characteristics of projects that may make a prescribed solution inappropriate.
A great example comes from an email conversation I had with Ilya Grigorik (who is wicked smart, by the way) in response to my last post on responsive image weight. One of the things he mentioned was the gray area of whether or not it was fair to count the entire cost of an undisplayed image as wasted or not.
The example he cited was Breaking News. On a small viewport the site uses tabs for latest and most popular posts. By default, the latest posts are displayed. These are a series of tweets with no images other than the Twitter avatars.
The popular tab—not displayed by default—contains numerous images. Is it fair to say that this weight shouldn’t be loaded by default? That’s questionable. Due to the cost of making a connection on a mobile network, you could easily argue that if the popular tab is frequently used by visitors, those images should be there—maybe not before page load, but certainly shortly after.
It’s an excellent point—and an example of one of the many gray areas on the web. We can’t possibly prescribe the appropriate solution here without knowing a little about the behavior of the visitors to the site.
I recently got to talk with Dave Rupert and Chris Coyier on the ShopTalkShow, and the topic of responsive images came up (as it often does). One thing we each agree on: this isn’t a black and white, binary decision. There are many moving parts and as a result, your mileage may vary (YMMV). We were talking specifically about responsive images, but that goes for any discussion online.
Read. Learn about the ways people are approaching difficult problems. Learn about what costs are associated with those solutions and compare them to the costs associated with doing nothing. Then go back and experiment. Look at your situation and see what fits and what doesn’t.
And always remember: YMMV.
]]>The discussion has evolved since then with debates over what sort of solution we need (server-side, client-side), new markup (srcset vs picture) and even, in some cases, wondering whether we really needed to worry about it at all.
It’s a messy issue for sure. The current solutions for responsive images do come with some complexity and overhead. If you’re using a client-side solution and don’t want to make more than one request per image, then you end up breaking the preloader. As Steve Souders explained rather well, this can have a very negative impact on the time it takes for those images to actually start appearing to your visitors.
No doubt there are trade-offs. Complexity of solutions, preloader versus file size—these each have to be considered when making the determination of what solution to use. Eventually we’ll have a native solution which will take care of the preloader issue, but browsers sure seem to be dragging their feet on that.
In the meantime, I was curious just how much page weight could be saved with a responsive image solution in place. I know that on the projects I’ve worked on, the savings has often been huge, but I wanted to see how consistent my experience is with the web as a whole.
Experiment time!
Yoav Weiss created a bash script called Sizer-Soze that, with the help of ImageMagick and PhantomJS, determines just how much you could save in file size by serving optimized and resized images. The script is built for one url at a time, so I modified it slightly to let me loop through a list of 471 URLs (the same list used by Guy Podjarny for his analysis of responsive performance). My bash scripting skills are minimal (read: nearly non-existent), but thankfully Yoav is far more skilled there than I and was happy to help out and make the whole thing run much more efficiently.
The script looped through the 471 urls, spitting out the results into a CSV. Each site was tested at widths of 360px, 760px and 1260px. Numbers were collected for total original image size, size of images if they weren’t resized but were optimized, and size of images if they were both optimized and resized to match the size they actually displayed at (so if a 1200px image was displayed at 280px wide, the script resized the image to 280px and compared the two file sizes).
If Sizer-Soze came across an image set to “display:none” it would re-check the dimensions every 500 milliseconds (for a maximum wait of 25 seconds) to see if things had changed. This was done to account for image-based carousels where images may have been hidden initially but then later revealed. If the image became visible during that time, then the dimensions were used to process the file savings normally. If the image was never displayed, the entire weight of the image was counted as wasted.
Even with that tweak, there are a few caveats about Sizer-Soze:
- It does not make a distinction between 3rd party images and images served by the site itself. So some of the weight can be attributed to things like ads.
- It does not analyze background images. That’s fine because that’s not what we want here anyway, but it’s worth noting that potentially even more bytes could be saved.
- It won’t be able to pick up some clever lazy-loading techniques, so again, it’s possible that some sites would actually be able to save even more than the reported numbers.
- It doesn’t include data-uri images in the totals as the file name exceeds the length limit for the script.
After looping through, the list shortened to 402 different responsive sites. Some of the original 471 moved to new URLs or apparently went AWOL so Sizer-Soze couldn’t follow along. Others had no images in the source code—either as a result of some sort of lazy-loading mechanism or by design. Still, 402 sites is a pretty good base to look at.
Results: Total Savings
On to the results! First up, the totals.
| Viewport Size | Sum of Original Sizes | Sum of Optimized Savings | Sum of Resized Savings |
|---|---|---|---|
| 360 | 237.66MB | 12.86MB | 171.62MB |
| 760 | 244.39MB | 13.30MB | 129.34MB |
| 1260 | 250.08MB | 13.70MB | 104.31MB |
It’s not too surprising to see that the original size (what’s being served now) is pretty consistent from screen size to screen size. Guy’s research, and many others, have already demonstrated this pretty well.
What is staggering is just how massive the savings could be if these sites served appropriately sized images. At 360px wide, these 402 sites combine to serve 171.62MB of unnecessary weight to their visitors. That’s a whopping 72.2% of image weight that could be ditched by using a responsive image technique.
It’s not just small screens that would benefit. For 760px and 1260px sized screens, 52.9% and 41.7% of image weight is unnecessary.
Results: Average Savings
Let’s look at the savings in terms of per-site averages.
| Viewport Size | Avg. Original Size | Avg. Optimized Savings | Avg. Resized Savings |
|---|---|---|---|
| 360 | 603.89kB | 32.68kB | 436.08kB |
| 760 | 622.53kB | 33.88kB | 329.47kB |
| 1260 | 635.43kB | 34.81kB | 265.06kB |
Looking at it from the perspective of an individual site, the numbers feel even more impactful. For each screen size, sites on average would shed about 5% of the weight of their images (from 32-34kb) by simply doing some lossless optimization. Considering that this could be automated easily into a build process, or manually done with tools like ImageOptim with little effort—that’s an easy 5% improvement.
Unsurprisingly, the gains are much more significant if those images get resized to an appropriate size as well. At 360px, the average site would drop 436.08kb. Consider that for a second. One optimization (resizing images) dropping that large of a chunk of weight. That takes image weight for a page from 603.89kB to a mere 167.81kB. That’s a huge difference that shouldn’t be dismissed.
While the improvements are slightly smaller for larger screen sizes (as you would expect), using some sort of responsive image technique would still save roughly 330kB for sites displayed at 760px and 265kB for sites displayed at 1260px.
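Generating those appropriately sized variants doesn’t have to be a manual chore, either—it’s exactly the kind of thing a build step can own. As one hedged example (my tooling choice, not something from this study), a small Node script using the sharp module could spit out a few widths per source image:

var sharp = require("sharp");

// Placeholder paths; the widths match the viewport sizes used above
var widths = [360, 760, 1260];

widths.forEach(function (width) {
  sharp("src/hero.jpg")
    .resize(width) // height is derived automatically to preserve the aspect ratio
    .toFile("dist/hero-" + width + ".jpg")
    .then(function () {
      console.log("wrote dist/hero-" + width + ".jpg");
    });
});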
Conclusion
We spend a lot of time talking about responsive images online—debating the approaches, trying out new solutions. Sometimes it can be a little discouraging that we still haven’t gotten it ironed out (I know I feel that way frequently).
But the web needs us to be diligent. It needs us to not settle for seemingly simple solutions that sacrifice performance. It is extremely rare that one optimization lets us knock off such a significant amount of page weight, but here we are staring one such technique right in the face.
72% less image weight.
That’s why we need a responsive image solution.
]]>This will be my first time hanging out at Mobilism and I couldn’t be more excited. When I was involved with BDConf, we always saw Mobilism as being very similar: a conference that aimed to push the envelope on the discussion of the web on mobile and other emerging devices.
I’ll be playing the role of MC this year. They’re going to use a different style of Q&A where instead of trying to run a mic back and forth across the room, attendees can post their questions to Twitter (using a hashtag that will be mentioned frequently at the conference) throughout the talk. When the speaker is finished with their presentation I’ll interview them on stage using the questions the audience sent in.
Christian Heilmann has blogged about this approach in the past and the format not only sounds fun, but makes a ton of sense as well. It’s a great way to allow attendees to ask questions as they think of them instead of having to try and remember them for the end of the talk. As Christian also pointed out, you actually have more questions answered because you don’t have to wait while someone runs back and forth with the microphone.
If you’re in town for the conference be sure to post lots of interesting questions to discuss! You won’t have to stand in front of a mic in the middle of the room so there’s really no reason at all to be shy.
And if you haven’t signed up yet, they’ve still got some tickets available for both the conference and workshops (including a dirt cheap price for a full-day of Firefox OS and BB10 development).
As a bonus, you can follow Mobilism with WebPerfDays, an unconference about performance the day after Mobilism. It looks like quite a few people from Mobilism (including myself) will be hanging around to check it out and I’ve heard rumblings of some great discussions in the works (including a much-needed chat about responsive images).
]]>One of the biggest challenges for many is the workflow: how do you find a process that works for building and designing something intended for such a wide range of devices, input types and contexts? It’s not an easy discussion, particularly for people and companies who are used to a very rigid waterfall method. That’s why I was thrilled when I first heard that Stephen Hay was going to write Responsive Design Workflow.
Stephen walks you through his workflow for designing and developing responsive sites. There’s a lot to love about it. Each step in his workflow seamlessly builds upon the step before it. Documentation and deliverables are generated automatically as the site is built, so there isn’t a need to manually update them. This is particularly important. Styleguides and documentation are so valuable to a project, yet if they have to be updated manually every time something changes there’s a tendency for them to become stale as deadlines shift. Automating the creation of these important resources is an important piece of the puzzle.
This is not your typical discussion of design workflow. He doesn’t pull any punches. He designs in the browser. He uses (gasp) the terminal. But all the while he carefully and patiently explains the how, and more importantly, the why for each step. The book is also sprinkled with plenty of humorous bits—making for a very entertaining read (how often can you say that about tech books?).
Discussing workflow can sometimes get a bit religious as people debate the merits of one tool or another. You don’t have to worry about that with this book. This is his workflow, and he certainly discusses the merits of the approach and steps that he takes, but Stephen makes it very clear throughout the book that his workflow isn’t for everyone and that the tool is far from the most important thing. As he correctly states:
Clients don’t care what tools you use, as long as you get the job done.
It’s safe to say that even if you decide that some of the specific tools and steps won’t work for you, you are still going to come away with a lot of great ideas for how to improve your workflow (the discussion of breakpoint graphs is particularly fantastic).
All in all, it’s a great book that will challenge anyone who reads it to evolve their workflow to be much more in sync with the flexible nature of the web. Highly recommended!
]]>I want to keep it relatively small, so attendance is going to be limited to 30 people. That way everyone should have ample opportunity to ask questions and contribute to the discussion. The workshop is intended to be a bit flexible and include topics that you want to see covered.
Registration for the complete day is $350 and opens, well, now. Parking will be covered, and breakfast, lunch and coffee will be provided. In addition, thanks to Peachpit, everyone who attends will get a copy of Implementing Responsive Design to read, sell, or use as a doorstop—your choice.
To see more details or to register, please have a peek at the workshop page.
Hope to see you in June!
]]>The renewed interest in the technique makes a lot of sense. In 2009 using SVG wasn’t a very viable option, but the landscape has improved quite a bit since then. This has led to some very interesting ideas such as Estelle Weyl’s experiment using SVG to display different raster images.
There are simpler applications as well. Say you have a logo that includes a slogan. When the logo is displayed at a small size the slogan can become illegible. At those sizes it may make the most sense to simply hide the slogan allowing the rest of the logo to display. With embedded media queries you could apply an id or class to the portion of SVG that generates the slogan and then embed a media query within the SVG file that hides the slogan when the logo is below a certain size. Essentially something like this:
<svg>
<defs>
<style type="text/css">
@media screen and (max-width: 200px) {
#slogan{
display: none;
}
}
</style>
</defs>
<g id="slogan">
...
</g>
</svg>
In the stripped down example above, the code that creates the slogan is wrapped in a group element (<g>) with an id of “slogan” applied. Here’s where the fun part comes in. The media query is based on the width of the image, not the screen! So in the above example, the slogan is hidden when the image is below 200px wide, regardless of screen size. It’s sort of a small teaser for what it would be like to have element media queries.
The exception to this rule is inline SVG. In that case, the SVG is no longer self-contained and the media queries are relative to the width of the browser viewport.
Support
To get an idea of the level of support, I grabbed the image of a Kiwi standing on an oval from Chris Coyier’s post. I put an id on the oval and, using a max-width media query, hid it when the image is below 200px wide.
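For reference, the test pages reference that SVG in the four ways covered below—roughly like this (the file name and dimensions are placeholders, not the actual test code):

<!-- CSS background image -->
<div style="width: 150px; height: 150px; background: url(kiwi.svg) no-repeat; background-size: contain;"></div>

<!-- img element -->
<img src="kiwi.svg" width="150" alt="Kiwi">

<!-- object element -->
<object data="kiwi.svg" type="image/svg+xml" width="150"></object>

<!-- Inline SVG: responds to the viewport width, not the image width -->
<svg viewBox="0 0 300 300" width="150">…</svg>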
I love automating test results, but couldn’t figure out a way to pull it off consistently, so these tests were run by hand on the following browsers:
- Chrome 14 - 25
- Safari 4.0.5 - 6.0.2
- Opera 10 - 12.14
- IE 10
- Firefox 3.6 - 19
- Android 3.2 - 4.1
- Blackberry 6 - 7
- iOS 5 - 6.0.1
Background Images
| Tested | Passed |
|---|---|
| Android 3+ | No |
| Blackberry 6+ | No |
| Chrome 17+ | Yes |
| Firefox (<= 19 tested) | No |
| IE 10 | No |
| iOS 6+ | Yes |
| Opera 10+ | No |
| Safari <= 5.1 | No |
| Safari 6+ | Yes |
Results
Support for media queries within an SVG file that is used as a background image is still a bit poor. Basically, you have support in Chrome 17+, iOS 6 and Safari 6. Opera (since version 11) and IE10 both hid the oval regardless of whether or not the media query applied. It’s bizarre, and if someone knows why, please feel free to enlighten me.
IMG Element
| Tested | Passed |
|---|---|
| Android 3+ | No |
| Blackberry 6+ | No |
| Chrome 14-18 | No |
| Chrome 19+ | Yes |
| Firefox 4+ | Yes |
| IE 10 | Yes |
| iOS 6+ | Yes |
| Opera 10+ | Yes |
| Safari <= 5.1 | No |
| Safari (6+) | Yes |
Results
Support for media queries inside SVG images placed in a img element is a bit better. Chrome 19+, Firefox 4+, Opera 10+ (maybe earlier—10 is the oldest I had handy for testing), Safari 6, iOS6 and IE10 all behaved as expected.
Object Element
| Tested | Passed |
|---|---|
| Android 3+ | No |
| Blackberry 6+ | No |
| Chrome (14+ tested) | Yes |
| Firefox 4+ | Yes |
| IE 10 | Yes |
| iOS 6+ | Yes |
| Opera 10+ | Yes |
| Safari 5.1+ | Yes |
Results
Using the object element, the picture is even more rosy. Safari 5.1+, iOS 6+, Firefox 4+, Chrome 14+, Opera 10+ and IE10 all passed the test.
Inline SVG
| Tested | Passed |
|---|---|
| Android 3+ | Yes |
| Blackberry 6+ | No |
| Chrome (14+ tested) | Yes |
| Firefox 4+ | Yes |
| IE 10 | Yes |
| iOS 6+ | Yes |
| Opera 10+ | Yes |
| Safari 5.1+ | Yes |
Results
As mentioned before, inline SVG images do not respond based on image width, but on the width of the screen. So the results above are based on how it performed at different screen sizes. As it turns out, it’s pretty good. You get the usual suspects, and Android 3.x and up decides to play along as well.
Caveats and Conclusion
Notably absent from the use cases above (aside from inline SVG) is the stock Android browser, which had no SVG support at all in 2.x and still doesn’t support media queries inside of SVG images as of 4.1.
I also haven’t tested what the performance implications are of this. I know with my simple example, there wasn’t anything that noticeable, but I suspect that could change if the number of queries applied increases or you do something more substantial than a simple show/hide.
Finally, this isn’t as clear cut as a boolean supports-or-doesn’t. For example, changing the fill of the oval in an SVG background image works just fine in IE10. (Though Opera still appears to apply it no matter what.) Results will no doubt vary based on what properties you try to alter, what media queries you use (more on that soon, I hope!), and perhaps even what elements you try to apply those to. I would love for someone with a bit more knowledge of SVG to chime in if they’ve got any ideas about this.
Update: Turns out, Jeremie Patonnier has already built a great test for media query support in SVG images.
In short be sure to experiment and test thoroughly to make sure the results are as expected. It’s exciting stuff with loads of potential.
]]>This is why I’ve been so happy to see the recent rash of posts discussing performance as a fundamental component of design. The latest comes from Mr. Brad Frost. He makes the case that performance is not just something developers need to worry about, but that it is an “essential design feature.”
One of the things he suggests doing is mentioning performance in project documents.
Statements of work, project proposals and design briefs should explicitly and repeatedly call out performance as a primary goal. “The goal of this project is to create a stunning, flexible, lightning-fast experience…”
It’s an excellent point. Performance should be brought up early and often to emphasize its importance. Not considering it from the earliest stages of a project is a surefire way to end up with slow and bloated sites. A decision made early on about the visual appearance of a site can have a serious impact on how the site itself will end up performing.
Early in the project, saying things like “lightning-fast experience” is probably sufficient. At some point you need to get a little more direct though.
Enter the performance budget. I’ve mentioned this before, but it’s worth discussing in a bit more detail.
A performance budget is just what it sounds like: you set a “budget” on your page and do not allow the page to exceed that. This may be a specific load time, but it is usually an easier conversation to have when you break the budget down into the number of requests or size of the page.
The BBC did this with their responsive mobile site. They determined that they wanted each page to be usable within 10 seconds on a GPRS connection and then based their goals for page weight and request count on that.
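To make that concrete, the arithmetic for turning a time budget into a weight budget is simple enough to sketch out. The throughput and overhead numbers below are illustrative assumptions of mine, not the BBC’s actual figures:

// Hedged back-of-the-envelope: turn a time budget into a weight budget
var timeBudgetSeconds = 10;     // "usable within 10 seconds"
var assumedKbps = 50;           // assumed effective GPRS throughput
var assumedOverheadSeconds = 3; // assumed DNS, connection and request latency

var transferSeconds = timeBudgetSeconds - assumedOverheadSeconds;
var weightBudgetKB = (transferSeconds * assumedKbps) / 8;

console.log(weightBudgetKB + "kB to spend on the entire page");
// => 43.75kB to spend on the entire page

Swap in whatever connection profile and target time make sense for your audience; the point is that the budget falls out of a few explicit assumptions rather than a gut feeling.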
Once those goals are set, you stick to them. Anytime you want to add something to a page, you need to ensure it stays within budget. Steve Souders talked about the three options you have if something does not fit within the budget:
- Optimize an existing feature or asset on the page.
- Remove an existing feature or asset from the page.
- Don’t add the new feature or asset.
Just be sure to define the budget early on. Defining a performance budget after you’ve already finalized the appearance of a site limits its effectiveness. It may still help to guide decisions about plugins and so on, but deciding a page can’t exceed 500kB when a mock-up containing three carousels and a full-screen high-resolution background image has already been approved isn’t going to do you much good.
Clearleft recently wrote an excellent post about their experience with using a performance budget and how it impacted their project:
The important point is to look at every decision, right through the design/build process, as something that has consequence. Having a pre-defined ‘budget’ is a clear, tangible way to frame decisions about what can and can’t be included, and at a suitably early stage in the project. It can also potentially provide some justification to the client about why certain things have been omitted (or rather, swapped out for something else).
That’s the value of a performance budget: it provides a framework for discussions as you move forward. It serves as a point of reference as you decide what components should, and shouldn’t, get added to a page.
It’s worth noting that I’m assuming you have already determined what content needs to be on the page to begin with. A performance budget doesn’t guide your decisions about what content should be displayed. Rather, it’s about how you choose to display that content. Removing important content altogether to decrease the weight of a page is not a performance strategy.
As Brad stated, it’s time to give performance the attention it deserves and setting a budget is an excellent place to start.
]]>width: device-width to fix responsive design in snap mode instead of Microsoft’s recommendation, which was to use width: 320px. Using device-width is a far more future friendly approach and testing I had done on a tablet running Windows 8 showed this worked just as well.
However, the other week Matt Stow discovered that device-width wasn’t getting the job done on a Lumia 920. Apparently the Lumia 920 (which boasts a 4.5” screen) reports a viewport width of 768px for device-width, which is much larger than what you would expect for a device its size.
Tomomi Imura, who has done a lot of testing around viewports, apparently discovered this behavior awhile back:
So it is correct <meta name=viewport content=width=device-width> gives 320px width, while @-ms-viewport {width:device-width} 768 on Lumia 920” (Source)
Here’s where things get interesting. When you use device-width inside the meta tag on Windows Phone 8, it returns CSS pixels. When you set width=device-width through CSS device adaptation, it returns the actual device pixels. (If that sounds a bit murky, PPK’s excellent articles on viewports should help clear it up: part one and part two)
This is an issue because using CSS device adaptation is necessary for getting responsive sites to work in snap mode in IE10 for Metro. So while CSS device adaptation fixes our issues with snap mode, it causes issues on Windows Phone 8 devices like the Lumia.
After talking to Rey Bango at Microsoft (who is awesome to work with and did not pay me (much) to say that) this behavior was confirmed as a bug—not intentional behavior—and the team over there is going to get an update out (not sure when) to fix it. The good news is that this fix will also clear up another issue in IE10 that causes it to always report a screen resolution of 96dpi, regardless of whether that is actually true.
The bad news is that getting those updates to people using Windows 8 phones won’t be an overnight thing. Just ask anyone with an Android device about how quickly carriers release updates. Once they’ve finished crying they’ll fill you in.
In the meantime, we’re in a bit of a jam. We have a few options, none of which are ideal:
We could do what Matt (and Microsoft) initially suggested and apply the following code:
@media screen and (max-width:400px) {
@-ms-viewport{
width:320px;
}
}
This would address the snap mode issue, as well as make Windows Phone 8 devices like the Lumia 920 display nicely. Unfortunately this would also impact future devices that may not actually need (nor should they get) this “fix”. Since each Windows 8 device will support the same syntax, the “fix” would be applied to any phone running Windows 8.
We could also leave the @-ms-viewport stuff out entirely. This would mean that phones, tablets and desktops would all behave as you would expect unless the person was using the browser in snap mode. I haven’t seen any stats about this behavior yet, so I really can’t speak to how large an audience that is.
Finally, we could use width=device-width, which is certainly the most “future friendly” approach. To address the issue on Windows Phone 8, we could apply the temporary fix that the folks at Microsoft have come up with. Their recommendation is to set the meta viewport tag to device-width as you normally would, and set the viewport in your CSS like so:
@-webkit-viewport{width:device-width}
@-moz-viewport{width:device-width}
@-ms-viewport{width:device-width}
@-o-viewport{width:device-width}
@viewport{width:device-width}
Then, you would need to add the following JavaScript.
if (navigator.userAgent.match(/IEMobile\/10\.0/)) {
var msViewportStyle = document.createElement("style");
msViewportStyle.appendChild(
document.createTextNode(
"@-ms-viewport{width:auto!important}"
)
);
document.getElementsByTagName("head")[0].
appendChild(msViewportStyle);
}
The code above checks for the version of IE that has this issue (IE Mobile 10) and then injects a stylesheet that overrides the device-width declaration in your CSS. This gets all Windows 8 devices playing along nicely. Windows Phone 8 devices will apply a friendlier viewport, and snap mode users will see a site scaled to their viewport size as well.
My recommendation is to use Microsoft’s fix. Client-side UA sniffing may not be the most elegant solution, but I prefer it to potentially harming the user experience—something which each of the other two solutions would be guilty of. Perhaps this would be a different scenario if this was IE8 or IE7, but considering it’s the behavior in an operating system that just came out (and therefore, most likely will only increase in marketshare for the time being) I think it’s worth implementing.
]]>Not only do I think reading is important, but it’s also something I enjoy a bunch and probably the closest thing I have to a hobby. However, this year the reality is that I just didn’t set aside much time for it. Whereas I used to read at least 30 minutes each night before bed, this past year I spent that time writing my own book and doing a large number of side projects in preparation for starting to work independently.
There was a point while writing my book that I just didn’t feel like I had the energy to read anything substantial, which is why there is a pretty heavy increase in the number of fiction books this year. I also read an unusually small number of non-industry-related non-fiction books, something I plan on remedying next year.
Also of interest, at least to me, was that this was my first year with a Kindle. I had been a paper holdout for a long time, and still enjoy a good paperback or hardcover book from time to time. The Kindle’s influence on my reading is noticeable though. Of the 21 books read, a whopping 17 of them were read on the Kindle.
I’ve found the Kindle really boosts the number of impulse reads. The advantage is that I read things (like Monoculture and Wool) that I probably would have not seen otherwise. The disadvantage is that due to all the impulse buys distracting me, my count of “reading but haven’t finished” has never been higher (I count 8 at the moment, which is just plain silly).
Here, then, is 2012’s list:
- Designing Devices by Dan Saffer
- Flinch by Julien Smith
- Freedom by Daniel Suarez
- Monoculture by F S Michaels
- The Brain and Emotional Intelligence by Daniel Goleman
- The Wise Man’s Fear by Patrick Rothfuss
- The Hunger Games by Suzanne Collins
- Catching Fire by Suzanne Collins
- Mockingjay by Suzanne Collins
- APIs: A Strategy Guide by Daniel Jacobson, Greg Brail and Dan Woods
- Weaving the Web by Tim Berners-Lee
- Design is a Job by Mike Monteiro
- The Wind Through the Keyhole by Stephen King
- The Medium is the Massage by Marshall McLuhan
- The Shape of Design by Frank Chimero
- Everyware by Adam Greenfield
- The Mobile Frontier by Rachel Hinman
- The Naked Presenter by Garr Reynolds
- SMS Uprising: Mobile Activism in Africa edited by Sokari Ekine
- Wool Omnibus Edition by Hugh Howey
- Content Strategy for Mobile by Karen McGrane
As with every year, if the book made the list then I enjoyed it on some level—life is too short to waste time on bad books. For fiction, it’s hard to beat Wool and I love me some Patrick Rothfuss (I also have to admit that I enjoyed the Hunger Games much, much more than anticipated).
For non-fiction, Design is a Job is my top choice. It’s a book that I honestly wasn’t very excited for, but given how solid the A Book Apart series has been, I thought I’d give it a go anyway. I’m glad I did. Smart, funny, helpful—it’s just a fantastic read.
]]>To say I’m leaving an excellent job behind is an understatement. The position was flexible. The people I worked with were smart, talented and fun. The people I worked for were flexible and willing to let me experiment with different ideas.
I got to help organize a conference, 4 of them in fact. I got to watch as the conference grew, watch the community that built up around it, and learn from all the people who came to hang out and talk mobile for a few days.
Yet I left. So very clearly I’m slightly insane.
I was getting an itch to shake things up, though. Some incredibly exciting opportunities were coming my way, and I felt I couldn’t pass on them. It was a tough decision, to say the least.
I’m excited though. Excited about some of the projects coming up and excited about some of the things I want to experiment with. I’m starting by consulting and doing some development work on a responsive redesign of a very media-rich site. The site has some interesting performance challenges that I’m looking forward to tackling head-on.
In addition to consulting and/or development work, I plan on focusing more on helping others make sense of all this. That takes many forms.
I’ll be doing some private training for teams to get them armed and raring to go. I’ll be doing some more speaking as well, though I’ll be limiting myself a bit on how many I say yes to. It’s tough—I had to turn away several exciting events in 2012—but with three kids at home I have to make sure they get enough time with Dad (or maybe more accurately, make sure I get enough time with them).
Expect more writing and research as well. There is quite a bit of research and testing I want to do and I’ll certainly be sharing those results as I go along.
If any of this sounds like something you would like to chat about, then let’s definitely talk.
So I guess if I were asked to summarize this entire post into one, very technical sounding, TL;DR version, it would be this: I’m going to make cool stuff and help others make cool stuff too.
]]>##WTFWG Wherein I get unusually riled up and rant about the picture vs. srcset situation and how it was handled. The conversation has evolved, but I still have concerns about how parts of the process were handled.
##Media Query & Asset Downloading Results A look at the results from the tests I ran to see how images (both background and content-based) are downloaded when media queries are involved.
##IE10 Snap Mode and Responsive Design IE10 is generally a huge step forward, but when in the new “snap” mode, it ignores the meta viewport element. Here’s the fix and some of the reasons for why the move was made (it was a conscious choice, not a bug).
##Blame the Implementation, Not the Technique There is a disturbing trend in our industry to blame the technique when we should be blaming the way it was implemented. I mentioned this several times over the course of the year and finally took the time to elaborate.
##Mobile Navigation Icons For whatever reason, there was a ton of discussion this year about what icon to use for navigation on mobile sites. Unicode would be awesome, but support is a bit sketchy so I threw together a very quick example of how the icon could be generated using CSS.
]]>The folks over at 24 Ways were kind enough to let me dig a little deeper into the issue. It’s sort of an extension of what I was driving at back in October (as well as what I was whining about in January), but with some concrete suggestions.
To whet your appetite a bit:
We love to tout the web’s universality when discussing the need for responsive design. But that universality is not limited simply to screen size. Networks and hardware capabilities must factor in as well.
The web is an incredibly dynamic and interactive medium, and designing for it demands that we consider more than just visual aesthetics. Let’s not forget to give those other qualities the attention they deserve.
While you’re at it, dig through the rest of the articles from this year’s calendar. As always, there’s loads of good stuff!
]]>In Windows 8, there are two “modes” of use: Metro mode and classic mode. Metro mode sports the spiffy new UI while classic is the same old boring Windows of yore. When you run Internet Explorer 10 in Metro mode (the default) there’s a cool new feature that lets you “snap” a window to the side so you can use two simultaneously. This window, of course, is made to be far more narrow.
Here’s the wrinkle: when snapped, IE10 ignores the meta viewport tag for any viewport smaller than 400 pixels in width (which it is, when in snap mode). This in turn messes up your beautifully set responsive plans and results in the same kind of smart scaling you see on non-optimized sites on an iPhone or Android device.
To get IE10 in snap mode to play nicely you have to use CSS Device Adaptation. For the unfamiliar, CSS Device Adaptation allows you to move your viewport declarations (such as width, zoom, orientation, etc) into your CSS, using a rule like so:
@viewport{
[viewport property];
}
IE10 supports the @viewport rule with a -ms prefix. So the viewport rule ends up looking like:
@-ms-viewport{
[viewport property];
}
What Microsoft recommends is adding the rule:
@media screen and (max-width:400px) {
@-ms-viewport{
width:320px;
}
}
The rule above would ensure that for any viewport under 400px wide, IE would set the width to 320px and scale from there. I’m not crazy about the introduction of pixels into an otherwise fluid layout (see Lyza’s post on em-based media queries for a little bit about why). Instead I recommend:
@-ms-viewport{
width: device-width;
}
Seeing that this worked, I had three main questions.
1. Why does device-width work?
While the specification states that device-width should return the width of the “rendering surface” of the device, Windows 8 doesn’t seem to adhere to that when in snap mode. Unless they are claiming that in snap mode the “rendering surface” is just the snapped portion of the screen.
2. What about other browsers?
Early versions of Chrome and Firefox are both available for Metro and using the typical meta viewport element is enough to ensure layouts adjust while in snap mode.
3. Why did Microsoft choose to ignore the viewport tag?
My initial reaction was to reach for the torches and pitchforks. Though they have been doing some awesome work lately, IE does have a history of questionable moves. However, after chatting with the awesome Nishant Kothary over at Microsoft (seriously—Nishant is a great example of how to do developer relations), I have to soften my stance at least slightly.
The meta viewport element is non-normative, that is, it isn’t actually a standard. It was first implemented by Apple for the iPhone and quickly adopted by other platforms.
The @viewport rule, however, is in the process of being standardized by the W3C. In fact, the only time the viewport element is mentioned is to explain how to translate it into the @viewport rule. So in this way, you could make the case that the team over at Microsoft is banking on the future.
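As a rough illustration of that translation (simplified; the exact mapping in the spec is more involved), a typical meta viewport declaration and its @viewport equivalent look something like this:
<meta name="viewport" content="width=device-width, initial-scale=1">
/* ...translates to roughly: */
@viewport {
    width: device-width;
    zoom: 1;
}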
The other thing to keep in mind is that you can build native Metro apps using HTML, CSS and JS. This means that the approach Microsoft chooses for handling device-adaptation in IE10 is the same approach they have to use for Metro apps.
I’m with them in thinking that @viewport is the more elegant solution and I also admire their decision to try to adhere to standards. That being said, it’s a gutsy move and I do worry that it was premature. By going this route, they’ve ensured that the ever-increasing number of sites that make use of the meta viewport element will not display as intended in the narrower viewport. And considering that support for CSS Device Adaptation is currently limited to Opera and IE10, developers aren’t going to switch their approach anytime soon.
On the surface it seems to me that it would have made more sense to introduce support for @-ms-viewport while maintaining support for the meta viewport element across all modes. They still could have pushed @-ms-viewport as the best way to build for Metro apps (in particular) and IE10, but it would have ensured that existing sites weren’t breaking in their browser.
That being said, the standard disclaimer that it’s easy to judge from the outside is in full effect. I know that the IE team felt that this move was the responsible one to make, and the best scenario for the web. I also know that they undoubtedly have a lot more data on how well the two methods work than I do. It’s entirely possible that my suggested way of handling this would’ve caused issues at some point.
Regardless, the takeaway for today is that you need to start adding the @-ms-viewport rule to your CSS now to ensure your sites look as you would expect on Windows 8.
]]>“Responsive design is bad for performance.”
“User agent detection is bad. Don’t segment the web.”
“Hybrid apps don’t work as well as native apps.”
“CSS preprocessors shouldn’t be used because they create bloated CSS.”
If you create for the web you’ve no doubt heard at least a couple of these statements. They’re flung around with alarming frequency.
There is a fundamental problem with this line of thinking: it places the blame on the technique instead of the way the technique was implemented. Generalizing in this way discredits the validity of an approach based on poor execution, and that’s a very harmful way of thinking.
With any of the examples above, the technology itself wasn’t the problem. CSS preprocessors, PhoneGap, user agent detection, responsive design—these are tools. They are neither inherently bad nor good. Their quality depends on the way you wield them.
I’m not a carpenter. If you asked me to build a table you would end up with a lopsided, three-legged abomination. That’s not because of the hammer, or the saw, or the drill—that’s because I suck at using them. Give the same equipment to a carpenter and you get something beautiful.
It’s no different with our own tools.
When someone builds a 4MB responsive site, blame the implementation. There is no reason why a responsive design can’t perform well. If you take the time to carefully build from a base experience up, only loading assets as needed and using patterns like the anchor include pattern to keep things light along the way, a responsive site can look beautiful and load quickly.
When someone builds a site and uses server-side detection to exclude some browsers or devices from the experience, blame the implementation. There’s nothing evil about user agent detection. You don’t have to use it to segment experiences. In fact, it’s quite handy as a complement to feature detection. Consider that some devices can make phone calls, and that those devices don’t all agree on the same protocol. Start with a smart default. Use server-side detection to try to determine which protocol should be used. If a value is reported, use that. You’re enhancing the experience where you can and offering something usable where you can’t. There’s nothing wrong with that.
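Here’s a rough sketch of that idea. The lookupDevice function is hypothetical, standing in for whatever device database you happen to use, and the protocols are just examples:
function phoneLink(userAgent, number) {
    // Smart default: assume the widely supported protocol
    var protocol = "tel:";

    // Hypothetical device-database lookup (server-side detection)
    var device = lookupDevice(userAgent);
    if (device && device.callProtocol) {
        // e.g. "wtai://wp/mc;" on some older devices
        protocol = device.callProtocol;
    }

    // Enhance where we can, fall back to the default where we can't
    return '<a href="' + protocol + number + '">Call us</a>';
}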
The same goes for using hybrid applications, CSS preprocessors, text editors or any number of tools. They’re only as good as the person using them. If you get to know them, identify their strengths and weaknesses and use them when appropriate, they can be really powerful and helpful additions to your toolbox.
It’s all too easy to cling to the one or two tools we’re most comfortable with and discount the rest. Luca Passani hammered (no pun intended) this home in a recent post. He was discussing the oft-mentioned responsive web design (RWD) vs server-side detection debate and came to a very sound conclusion:
In this context, isn’t the discussion between RWD and {device-detection} a direct corollary of the old “when all you have is a hammer, every problem looks like a nail”? and of course, doesn’t this apply also the other way around, with backend developers favoring a solution to device fragmentation that leverages the tools they know best?
Experiment with techniques before you condemn them. Find out for yourself if the tool is really where the blame should be placed.
Building great experiences on the web isn’t getting any easier. We need all the tools we can get. Don’t discredit them simply because someone uses them poorly.
]]>Go ahead and give it a listen.
]]>It wasn’t what I was expecting, but it ended up being just what I needed.
When one of the people who attended Breaking Development in Dallas told me that on the last day of the event, I couldn’t help but smile. Single-track events are awesome, but they’re always a little nerve-wracking as well. How do you balance code and design, pragmatic and conceptual? Each of those discussions has to happen to move things forward, but striking that balance can be a challenge.
Sometimes the single track thing can scare people off. If you’re a designer, do you really want to sit through a bunch of talks about code? If you’re a developer working at a large corporation, do you really care about conceptual looks at what mobile could become? On paper, these discussions can feel like they might not apply to you.
It’s often said that one of the great things about single-track events is that you avoid the feeling of “missing out” which often comes with multi-track events. That’s certainly true, but the real beauty of single-track events is that they force discussions to come together. Rubbing two stones together creates a spark. Similarly, allowing these discussions to take place together generates ideas that would not have been there otherwise.
There are conferences that give you all of one thing: lots of live coding, all design, or all high-level inspiration talks. Breaking Development tries to blend them. There are talks that arm you with information that you can take back to your company and apply tomorrow, but there are also talks that you take with you and slowly digest over the next few months, forcing you to think bigger.
This was certainly true of Dallas. We had phenomenal case study presentations from people like Tom Maslen of the BBC and Christopher Bennage of Microsoft. Lyza Danger Gardner talked a lot about the real-world challenges she’s faced building for mobile. Brad Frost and Ronan Cremin gave pragmatic looks at responsive design and server-side detection. Chris Coyier took everyone on a whirlwind tour of the tools and workflow he uses when building sites. Karen McGrane hammered home the importance of careful consideration of your content. Belen Barros Pena dissected different mobile OS’s like a frog, revealing that fragmentation isn’t quite as bad as it may seem. Scott Jenson, Jonathan Stark and Luke Wroblewski all took a look forward at the incredible potential of the web, and what we need to do to fulfill that potential. It was a great blend of perspectives.
One of the great things about conferences is that blend, that tension between how do I get things done today and what will I be able to do tomorrow. Day to day work and pressing deadlines have a way of forcing us to put our heads down and, in some cases, making us lose some of the excitement we get from thinking about the potential of working on this incredible platform. But a good conference with great attendees recharges the batteries. It gets you excited again to work on the web while also arming you with the information you need to do awesome work at your company.
Judging by some of the responses we got, it seems the blend works:
@bdconf was truly inspiring and amazing! I am all prepped up to do something new!! — Sonali Agrawal
#bdconf is definitely the best conference of its ilk. Fantastic speakers, collective of design/dev/4ward thinking talent all in 1 room — Paul McManus
Kudos to @bdconf and all of the speakers for a fantastic 3 days. Wickedly smart and wildly entertaining – didn’t want it to end — Melissa O’Kane
Thx @bdconf! Incredible speakers. Great conversations. Optimistic energy. More pumped than ever to be designing for the web! — Jon Troutman
Every time I get a review sheet for each speaker, I wonder why are there other options than “Mind Blown”?… — Dillon Curry
My last night at #bdconf full of great conversation and good laughs. Rest assured, I’ll be back! — Jennifer Robbins
Love that I keep taking a step back, questioning my own methods and thought process. What a fantastic conference #bdconf — Kat Archibald
I mentioned great attendees, and that can’t be overstated. One attendee, who had attended Breaking Development in the past, pointed out on the first night that it was the side conversations (like the one we were having at the time with a bunch of other attendees) that brought him back. The presentations were just icing on the cake.
The presentations have to be good, of course, but he was right: it is the hallway discussions that transform a good conference into a great one and a great conference into an inspiring and incredible experience. We’ve always had that kind of atmosphere at Breaking Development. The people who attend are passionate and eager to share challenges and solutions alike. We’ve seen people up until all ends of the night at every event we’ve done.
If it’s possible, Dallas took it to another level. Right from the beginning, people were sharing stories, asking questions and offering advice. Dallas re-emphasized something we’ve believed from the beginning: Breaking Development isn’t a conference so much as it is an ongoing discussion. It’s been fun to watch that discussion move forward, step by step, with each new event.
We’re headed back to Orlando in April, with a stellar lineup of speakers and we’re sure to have more incredible conversations. Here’s hoping you can come out next time and help us keep the discussion going!
]]>Theoretically, it would be easy to create the icon using Unicode symbols. For instance, you could create the icon by using the following HTML:
<a>☰ Menu </a>
Unfortunately, as Jeremy points out, many mobile devices fail to handle it correctly. Android and Blackberry devices, for example, don’t display the icon as intended.
I recently wanted to use the icon, and ran into this same issue. Inspired by Nicolas Gallagher’s post on pure CSS generated icons, I was able to get the icon to render nicely in about 10 minutes of CSS wrangling. So, if you’re dead set on rendering the icon without using an image, here’s how you can render it in CSS:
<li id="menu"><a href="#">Menu</a></li>
li {
list-style-type: none;
}
#menu{
position: relative;
}
#menu a{
padding-left: 20px;
}
#menu a:before {
content: "";
position: absolute;
top: 30%;
left:0px;
width:12px;
height:2px;
border-top: 6px double #000;
border-bottom: 2px solid #000;
}
The above will render the icon to the left of the Menu link. (As someone pointed out on Twitter yesterday, Stu Robson did something very similar.) This is great, but we still have the problem of scalability. If the font-size is 16px, you’re sitting pretty. If it’s any larger or smaller, the icon will become disproportionate. Converting to ems makes for a more flexible solution.
li{
list-style-type: none;
}
#menu{
position: relative;
}
#menu a{
padding-left: 1.25em; /* 20px/16px */
}
#menu a:before {
content: "";
position: absolute;
top: 30%;
left:0px;
width:.75em; /* 12px/16px */
height:.125em; /* 2px/16px */
border-top: .375em double #000; /* 6px/16px */
border-bottom: .125em solid #000; /* 2px / 16px */
}
If you want to be extra safe, you can wrap those styles inside of a media query as Roger Johanson has suggested. This should ensure that the styles are only applied to devices that can support generated content.
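A sketch of that wrapper, assuming a simple “only screen” query as the filter (the exact query Roger suggests may differ), with the generated-content rule inside it:
@media only screen {
    #menu a:before {
        content: "";
        position: absolute;
        top: 30%;
        left: 0;
        width: .75em;
        height: .125em;
        border-top: .375em double #000;
        border-bottom: .125em solid #000;
    }
}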
Is it a hack? Oh, absolutely. And several people were quick to point that out on Twitter. The result though, is the same: the trigram icon rendered without the use of images. The only difference? It’s supported much better.
If you see any way to improve on it, feel free to fork the Gist.
]]>A podcast seemed like a really fun idea. It’s an awesome time to be working on the web, and there are so many fascinating discussions to be had. The way I see it, the best case scenario is that people will like the show, listen, and keep coming back for more. The worst case scenario is people won’t like it, but I’ll be able to use it as an excuse to talk to super smart people on a regular basis about topics that interest me. I really don’t see a downside here.
The first episode features two guests: Erik Runyon from the University of Notre Dame and Dave Olsen from West Virginia University. They’ve both been doing some awesome responsive work for their respective universities. We talked a lot about why and how they’ve implemented responsive design on their sites, how they’re both using RESS, whether user agent detection is evil, designing in the browser and lots of other good stuff.
Give it a listen and let me know what you think. If you enjoy it, a kind word on iTunes would be incredibly helpful. Solid reviews on iTunes are one of the best ways to help others find out about the podcast.
And stay tuned—we already have several more shows lined up with some awesome guests and topics. If you’ve got any suggestions, drop me a line!
]]>This all means that there are already people reading the ebook, and others will have the paperback within the next couple of days. This is both terrifying and exciting.
As I stated back in January, the book is an exploration of how a responsive approach applies to your entire workflow. It starts with a few chapters laying down the groundwork: media queries, fluid layouts and fluid images. Then it builds from there and talks about how responsive design affects planning, design workflow, content, and how to introduce feature detection and server-side enhancements.
I’m very happy with the result. Proud even. The early reception has been great, and the fact that so many people who I admire and respect have had such nice things to say means a ton to me. I’ve posted a few quotes from some of them on the book site and I’ll try to add more soon.
I know it’s not the most popular section of the book, but I wanted to re-post the acknowledgements below because without all the help provided by so many people, this book wouldn’t have happened.
I hope you guys enjoy the book!
Acknowledgements
It is frequently said that writing a book is a lonely, solitary act. Perhaps that is true in some cases, but it certainly wasn’t the case with this book. If this book is any good, it’s because of all the hard work, patience and feedback provided by everyone who helped along the way.
I owe a huge thank you to…
Michael Nolan, who invited me to write a book in the first place. Thanks for being willing to gamble on me.
Margaret Anderson and Gretchen Dykstra for overlooking my horrible misuse of punctuation and for generally making it sound like I know how to write much better than I do.
Danielle Foster for making the book look so fantastic, and putting up with a few last minute adjustments. Also, to Rose Weisburd, Joy Dean Lee, Aren Straiger, Mimi Heft, Rebecca Winter, Glenn Bisignani and the rest of the team at New Riders for helping make this book come to life.
Ed Merritt, Brad Frost, Guy Podjarny, Henny Swan, Luke Wroblewski, Tom Maslen and Erik Runyon for their incredible contributions. By being willing to share their expertise and experiences, they’ve made this a much richer book than it would have otherwise been.
Jason Grigsby for making sure I wasn’t making things up along the way and for providing valuable (and frequently hilarious) feedback and encouragement throughout. Not only is Jason one of the smartest people I know, but he’s also one of the most helpful. I’m thankful to be able to call him a friend.
Aaron Gustafson for writing such a great foreword. I’ve been learning from Aaron since I first started working on the web—to say I’m humbled and honored that he agreed to write the foreword is an understatement.
Stephen Hay, Stephanie Rieger, Bryan Rieger, Brad Frost, Derek Pennycuff, Ethan Marcotte, Chris Robinson, Paul Thompson, Erik Wiedeman, Sara Wachter-Boettcher, Lyza Danger Gardner, Kristofer Layon, Zoe Gillenwater, Jeff Bruss, Bill Zoelle, James King, Michael Lehman, Mat Marquis, Nishant Kothary, Andy Clarke, Ronan Cremin, Denise Jacobs and Cennydd Bowles for the insights, feedback and encouragement they provided along the way. This book owes a great deal to their collective awesomeness.
To everyone whose conversations, both in person and online, inspired the discussion that takes place in this book. This is an awesome community we have going and I’m proud to be a part of it.
My mom and dad for their love and words of encouragement throughout.
My lovely daughters for reminding me it was ok to take a break every once in a while to play and for filling each day with laughs, kisses and hugs.
And my incredible wife, Kate. This book, and anything else I do that is any good, is a direct result of her loving support and encouragement. There are no words powerful enough to express how thankful I am for her.
]]>Responsive images are a difficult beast to tame: there really isn’t a good solution for them today. As a result, some discussion started on the WHATWG mailing list months ago about what to do. The WHATWG pointed out that the list was for standardizing and suggested it would be better if the discussion were moved into a community group.
So, obediently, a community group chaired by Mat Marquis was started (in February). A lot of discussion took place about the appropriate way to handle responsive images and one solution, the new picture element, garnered the majority of support.
On May 10th, a new attribute, srcset, was presented on the WHATWG mailing list by someone from Apple.
That same day it was recommended to the list that they take a look at all the discussion that had taken place in the community group. A debate about the two solutions ensued.
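For context, the two proposals looked roughly like this (heavily simplified, and the exact syntax of both was still in flux at the time):
<!-- The picture element favored by the community group -->
<picture>
    <source media="(min-width: 600px)" src="large.jpg">
    <source src="small.jpg">
    <img src="small.jpg" alt="Fallback for browsers without picture support">
</picture>

<!-- The srcset attribute proposed on the WHATWG list -->
<img src="small.jpg" srcset="large.jpg 600w, large-hd.jpg 600w 2x" alt="">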
The feedback from developers was not particularly glowing. To quote Matt Wilcox:
I do not see much potential for srcset. The result of asking the author community was overwhelmingly negative, indirection or no indirection.
Simon Pieters of Opera argued that the srcset attribute would be easier to implement and that, as a result, it would help developers:
I think an attribute is simpler to implement and thus likely to result in fewer bugs in browsers, which in turn benefits Web developers.
This morning, the attribute was added to the spec.
I’ve got my own opinion about the correct solution, but that’s not really what I think is most troubling here. Note what happened:
- Developers got involved in trying to standardize a solution to a common and important problem.
- The WHATWG told them to move the discussion to a community group.
- The discussion was moved (back in February), and a general consensus (not unanimous, but a majority) was reached about the picture element.
- Another (partial) solution was proposed directly on the WHATWG list by an Apple employee.
- A discussion ensued regarding the two methods, where they overlapped, and the general opinions of each. The majority of developers favored the picture element and the majority of implementors favored the srcset attribute.
- While the discussion was still taking place, and only 5 days after it was originally proposed, the srcset attribute (but not the picture element) was added to the draft.
What. The. Hell.
The developer community did everything asked of them. They followed procedure, they thoroughly discussed the options available. They were careful enough to consider what to do for browsers that wouldn’t support the element—a working polyfill is readily available. Their solution even emulates the existing standardized audio and video elements.
Meanwhile an Apple representative writes one email about a new attribute that only partially solves the problem and then 5 days later it’s in the spec. In case there is any doubt, I’m not blaming him at all for how this all played out. That blame falls on the WHATWG. Whatever their rationale was for putting this in the draft, the method in which it was added reeks of valuing the opinion of implementors over developers.
In the draft of the W3C HTML design principles, they clearly state the priority that should be given when determining standards:
In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors; which in turn should be given more weight than costs to implementors; which should be given more weight than costs to authors of the spec itself, which should be given more weight than those proposing changes for theoretical reasons alone.
Those levels of priority make a lot of sense to me and it’s discouraging (to say the least!) to see them dismissed in this case. This kind of thing simply cannot happen. It’s tough to get people to voice their opinions to begin with. To find that their opinion holds no weight won’t make it any easier going forward.
What message does it send when developers try to contribute their time, energy and effort to help solve a problem only to have it so casually dismissed?
As Scott Jehl responded on Twitter:
insulting. Not to mention, that it can’t work today. What was the purpose of our @w3c community group?
Insulting indeed. Not too surprising though. After all, we’ve seen this sort of thing before.
]]>The lineup is amazing. Seriously.
Dave Rupert, lead developer at Paravel, co-host of the ShopTalk podcast, and creator of awesome tools like FitVids and FitText, will get the day kicked off with a four hour workshop.
After lunch, Mat Marquis will get the afternoon started with his presentation, “Next Steps for Responsive Design”. Mat worked on the Boston Globe project and heads up the Responsive Images Community Group for the W3C, so he’s got some hard-earned experience to pull from. He’s also a seriously funny guy.
Then, Kristofer Layon will present “The Minimal Viable Web”. Kristofer is the author of two books, most recently the excellent “Mobilizing Web Sites”. I love how practical and pragmatic that book is and I expect his talk will keep the same tone.
I’ll close the day with a new talk for me, “Creating Responsive Experiences”. I’ve been on a kick for a while now that I think responsive design has to be about more than just layout, and I’ll try to make my case (and back it up with a few examples).
It should be an awesome day! The venue, Open Book, has a cool feel to it and we’re going to keep attendance low—you won’t be lost in a huge ballroom of people.
There are still passes available. If you want to make the trip (and you do!), code ‘KADLEMN’ will shave $100 off the cost of registration.
I hope to see you in Minneapolis—it’s going to be awesome!
]]>First, any credit has to go to the awesome team at Cloud Four. Most of the tests were created by them for some testing they were doing. I just added some Javascript to automate them.
On to the results!
Test One: Image Tag
This page tried to hide an image contained within a div by using display: none. The HTML and CSS are below:
<div id="test1">
<img src="images/test1.png" alt="" />
</div>
@media all and (max-width: 600px) {
#test1 { display:none; }
}
The results
If there is one method of hiding images that I can say with 100% certainty should be avoided, it’s using display:none. It’s completely useless. It appears that Opera Mobile and Opera Mini don’t download the image (see the initial post for the reasons why), but the image is requested by, well, everyone else.
| Tested | Requests Image |
|---|---|
| Android 2.1+ | Yes |
| Blackberry (6.0+) | Yes |
| Chrome (4.1)+ | Yes |
| Chrome Mobile | Yes |
| Fennec (10.0+) | Yes |
| Firefox (3.6+) | Yes |
| IE | Yes |
| iOS (4.26+) | Yes |
| Kindle (3.0) | Yes |
| Opera (11.6+) | Yes |
| Opera Mini (6.5+) | No |
| Opera Mobile (11.5) | No |
| RockMelt | Yes |
| Safari (4+) | Yes |
Conclusion
Simple: don’t do this.
Test Two: Background Image Display None
In this test, a div was given a background image. If the screen was under 600px wide, the div was set to display:none. The HTML and CSS are below:
<div id="test2"></div>
#test2 {
background-image:url('images/test2.png');
width:200px;
height:75px;
}
@media all and (max-width: 600px) {
#test2 {display:none;}
}
The results
The same as with the first test: every browser tested, aside from Opera Mini and Opera Mobile, will download the image.
| Tested | Requests Image |
|---|---|
| Android 2.1+ | Yes |
| Blackberry (6.0+) | Yes |
| Chrome (4.1)+ | Yes |
| Chrome Mobile | Yes |
| Fennec (10.0+) | Yes |
| Firefox (3.6+) | No |
| IE | Yes |
| iOS (4.26+) | Yes |
| Kindle (3.0) | Yes |
| Opera (11.6+) | Yes |
| Opera Mini (6.5+) | No |
| Opera Mobile (11.5) | No |
| RockMelt | Yes |
| Safari (4+) | Yes |
| Silk | Yes |
Conclusion
Once again: don’t do this. Thankfully, as some of the other tests show, there are a few easy ways to hide background images without having the image requested.
Test Three: Background Image, Parent Object Set to Display None
In this test, a div was given a background image. The parent of the div (another div) was set to display:none when the screen was under 600px wide. The HTML and CSS are below:
<div id="test3">
<div></div>
</div>
#test3 div {
background-image:url('images/test3.png');
width:200px;
height:75px;
}
@media all and (max-width: 600px) {
#test3 {
display:none;
}
}
The results
Kudos to Jason Grigsby for catching this one. On the surface, it’s not entirely obvious why this would be any different than test two. However, when doing his initial research, he noticed this seemed to make a difference so he decided to test it. Lucky for us he did because this method is actually pretty reliable.
| Tested | Requests Image |
|---|---|
| Android 2.1+ | No |
| Blackberry (6.0+) | No |
| Chrome (16+) | No |
| Chrome Mobile | No |
| Fennec (10.0+) | Yes |
| Firefox (3.6+) | No |
| IE 9+ | No |
| iOS (4.26+) | No |
| Kindle (3.0) | No |
| Opera (11.6+) | No |
| Opera Mini (6.5+) | No |
| Opera Mobile (11.5) | No |
| Safari (4+) | No |
Conclusion
This method works well. With the exception of the over-eager Fennec, every tested browser only downloads the image when needed. The issue with this method is that you do have the requirement of being able to hide the containing element. If that’s an option, then feel free to go ahead and use this approach.
Test Four: Background Image with Cascade Override
In this test, a div is given a background image. If the screen is under 600px, then the div is given a different background image. This tested to see if both images were requested, or only the one needed. The HTML and CSS are below:
<div id="test4"></div>
#test4 {
background-image:url('images/test4-desktop.png');
width:200px;
height:75px;
}
@media all and (max-width: 600px) {
#test4 {
background-image:url('images/test4-mobile.png');
}
}
The results
While certainly better than using display:none, this method is a little spotty.
| Tested | Requests Both |
|---|---|
| Android 2.1-3.0? | Yes |
| Android 4.0 | No |
| Blackberry 6.0 | Yes |
| Blackberry 7.0 | No |
| Chrome (16+) | No |
| Chrome Mobile | No |
| Fennec (10.0+) | Yes |
| Firefox (3.6+) | No |
| IE 9+ | No |
| iOS (4.26+) | No |
| Kindle (3.0) | Yes |
| Opera (11.6+) | No |
| Opera Mini (6.5+) | No |
| Opera Mobile (11.5) | No |
| Safari 4.0 | Yes |
| Safari 5.0+ | No |
Conclusion
I’d avoid it. While the situation is improving, Android 2.x, which dominates the Android marketshare, still downloads both images as does Fennec and the Kindle. Between the three, but particularly because of Android, I would recommend looking at other options.
Test Five: Background Image Where Desktop Image Set with Min-Width
In this test, a div is given one background image if the (min-width: 601px) media query matches, and a different one if (max-width: 600px) matches. The HTML and CSS are below:
<div id="test5"></div>
@media all and (min-width: 601px) {
#test5 {
background-image:url('images/test5-desktop.png');
width:200px;
height:75px;
}
}
@media all and (max-width: 600px) {
#test5 {
background-image:url('images/test5-mobile.png');
width:200px;
height:75px;
}
}
The results
The situation here is a little better.
| Tested | Requests Both |
|---|---|
| Android 2.1+ | No |
| Blackberry (6.0+) | No |
| Chrome (16+) | No |
| Chrome Mobile | No |
| Fennec (10.0+) | Yes |
| Firefox (3.6+) | No |
| IE 9+ | No |
| iOS (4.26+) | No |
| Kindle (3.0) | No |
| Opera (11.6+) | No |
| Opera Mini (6.5+) | No |
| Opera Mobile (11.5) | No |
| Safari (4+) | No |
Conclusion
More browsers play along this time. Fennec, as always, just can’t control itself. Android 2.x is….odd. It requests both images, but only if the screen size is over 600px and the min-width media query kicks in. This behavior appears to stop as of Android 3. This is an odd one and I would love to know why the heck it happens. Actually, good news here. Jason Grigsby pinged me and said his results for this test weren’t jiving with what I reported here, so I re-ran the tests on a few Android 2.x devices. Turns out, my initial results were off: Android 2.x plays nicely and my initial runs of this test on that platform were wrong. Not only is this good news for developers, but it is also a much more sane behavior and it has restored my faith in humanity. Or at least my faith in Android.
It’s also worth noting that if you use this method, you’ll need to consider alternate options for Internet Explorer 8 and under. Those versions of the browser don’t support media queries, so no image will be applied. Of course, this is simple enough to fix with conditional comments and an IE specific stylesheet.
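That fallback could be as simple as a conditional comment pulling in an IE-only stylesheet; the file name here is just for illustration:
<!--[if lt IE 9]>
<link rel="stylesheet" href="ie-fallback.css">
<![endif]-->

/* In ie-fallback.css, declare the image with no media query at all */
#test5 {
    background-image: url('images/test5-desktop.png');
    width: 200px;
    height: 75px;
}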
Test Six: Background Image Display None (max-device-width)
This test was the same as test two, but it used max-device-width for the media query instead of max-width. The HTML and CSS are below:
<div id="test6"></div>
#test6 {
background-image:url('images/test6.png');
width:200px;
height:75px;
}
@media all and (max-device-width: 600px) {
#test6 {
display:none;
}
}
Conclusion
I’m not going to spend much time on this, as it ended up being a throw away test. There were no differences in behavior between this and test two. The test was added because of a tweet where someone had mentioned they were getting different results than the original tests by Cloud Four, but the discrepancy ended up being caused by something else entirely (a typo, if I remember right).
Test Seven: Cascade Override for High Resolution
The final test was added to the suite a bit late. With the retina iPad around the corner, there were a lot of posts about how to handle serving images to high-res displays. In one post, Brad Frost mentioned he thought it would be interesting to see test results for this, so I added it in.
In this test, a div is given a background image. Then, by using the min-device-pixel-ratio media query, a new background image was applied if the minimum ratio was 1.5.
The HTML and CSS are below:
<div id="test7"></div>
#test7 {
background-image:url('images/test7-lowres.png');
width:200px;
height:75px;
}
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (min--moz-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min-device-pixel-ratio: 1.5) {
#test7 {
background-image:url('images/test7-highres.png');
width:200px;
height:75px;
}
}
The results
Of all the tests, this one is the one that could benefit the most from having some more people run it. That being said, it does look like the following behavior is accurate.
| Tested | Requests Both |
|---|---|
| Android 2.1-3.0? | Yes |
| Android 4.0 | No |
| Blackberry 6.0 | No |
| Blackberry 7.0 | No |
| Chrome (16+) | No |
| Chrome Mobile | No |
| Fennec (10.0+) | No |
| Firefox (3.6+) | No |
| IE 9+ | No |
| iOS (4.26+) | No |
| Kindle (3.0) | No |
| Opera (11.6+) | No |
| Opera Mini (6.5+) | No |
| Opera Mobile (11.5) | No |
| Safari 4.0+ | No |
Conclusion
Again, this test could stand to be run a bit more, just to be safe. It looks like this method will work the vast majority of the time. Unfortunately, it appears Android 2.x will download both images if the device pixel ratio is above or equal to 1.5 (or whatever value you set in the media query). So in the case of the above tests, if you’ve got a high resolution device running Android 2.x you’re out of luck.
The good news, for now, is that I’m unaware of any Android device with a device pixel ratio over 1.5. So if you’re targeting the retina display iOS devices, you could set your min-device-pixel-ratio to 2 and be safe. And of course, now that I’ve said it, I fully expect the first 3 comments for this post to all correct me and point out the one Android device that just has to prove me wrong.
The earliest rounds of this test looked more promising for Android, so this is a bit of a bummer for me. They’re the only browser that seems to mess it up, but they’re also one of the biggest players.
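If you do target only the 2x displays as suggested above, the query from this test just needs its thresholds raised, something like:
@media only screen and (-webkit-min-device-pixel-ratio: 2),
only screen and (min--moz-device-pixel-ratio: 2),
only screen and (-o-min-device-pixel-ratio: 2/1),
only screen and (min-device-pixel-ratio: 2) {
    #test7 {
        background-image:url('images/test7-highres.png');
    }
}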
Recommendations
If you’re going to hide a content image, you’re not going to be able to do it by setting display:none. I recommend using a Javascript or server-side based approach instead (there’s a rough sketch of the Javascript route after these recommendations).
If you want to hide a background image, your best bet is to hide the parent element. If you can’t do that, then use a cascade override (like test five above) and set the background-image to none when you want it hidden.
For swapping background images, define them both inside of media queries.
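For the first recommendation, here’s a minimal sketch of one possible Javascript route. The data attribute name and the breakpoint are placeholders; a server-side solution would apply the same logic before the markup is ever sent:
<img data-fullsrc="images/photo.png" alt="A photo">

// Only request the image when the viewport actually warrants it
if (window.matchMedia && window.matchMedia("(min-width: 601px)").matches) {
    var imgs = document.querySelectorAll("img[data-fullsrc]");
    for (var i = 0; i < imgs.length; i++) {
        imgs[i].src = imgs[i].getAttribute("data-fullsrc");
    }
}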
Going Forward
If you run any of the tests and think something above is incorrect, either drop me a line or report it on GitHub so I can dig into it. The same goes for adding any additional tests.
]]>Mark Boulton just put his thoughts to screen with a post entitled “Responsive Summit: The One Tool”. In it, he makes the case that knowing your materials is more important than using a specific tool. He makes an excellent point when he discusses why he feels comfortable in Photoshop:
Since 1997, I’ve been working almost exclusively on the web. Throughout all of that time, the realisation of what the projects would look like are done in Photoshop. That means, yes, I’ve been using Photoshop in a production environment for fifteen years. Malcolm Gladwell said it takes 10,000 hours, or 10 years of repetitive use, to become an expert in something. I guess that means I’m an expert in creating pictures of websites. Photoshop is like an extension of my mind. To use Photoshop for me is as effortless and almost as fast as a pencil. I get stuff done; quickly.
That point, that the familiarity you have with a tool matters, is an important one to keep in mind. Designing is a creative endeavour. You can’t do it well with tools you aren’t entirely comfortable with.
If we had been designing in the browser since 1997, this would all be a non-issue. Of course it wasn’t possible back then to do so—our tools were too limited. That’s not the point I’m making. The point is that if we had that same level of experience designing in the browser, I suspect no one would debate whether the approach made sense. Designing in the browser lets you get deeply entrenched in the characteristics of the web. Designing in a graphics editor like Photoshop removes you from them.
Those against designing in the browser talk about how working in code is too limiting. Of course the opposite is true as well—working in a graphics editor is too limiting in many ways. You are limited to designing for a specific size at a time. You are limited by not being able to design for interactions. That’s a big one. The web is an interactive medium, not a static one. It has little in common with print and much more in common with software. Graphic editors, for all their powerful tools, aren’t equipped to handle this.
There is room for a better tool here. One that lets you experiment easily, but doesn’t detach you from the constraints and capabilities of the environment you are creating for. Later in his post, Mark continues:
I can’t have happy accidents in a browser when I’m writing specific rules and then watching the results in a browser. There is too much in the feedback loop.
This made me think of a presentation by Bret Victor called “Inventing on Principle”. Not only do I recommend it, I think it should be required viewing.
During the presentation he discusses the need for immediate feedback from our tools:
Creators need an immediate connection to what they create. And what I mean by that is when you’re making something, if you make a change or you make a decision, you need to see the effect of that immediately. There can’t be any delay, and there can’t be anything hidden. Creators have to be able to see what they’re doing.
Specifically, he tackles coding. He demonstrates a tool that lets him instantly see how the changes in his Javascript affect the canvas he is creating for. This instantaneous feedback provides fertile ground for experimentation, and he demonstrates that over and over in the video. Because of the direct connection between the code and the result, you’re able to start using advanced controls (start around 3:45 into the video, and again around 10:45) to help the process of discovery and experimentation.
This, I think, is where we need to head. We need to be able to create on the web, but we need tools that make it easier for us to experiment. Tools that let us be creative without decoupling us from the very medium we are creating for.
We’re not likely to ever remove a graphic editor completely from our workflow, nor should that be our goal. There is nothing wrong with graphic editors. We simply need to be aware of what they are good at, and where they fall short. Instead, our goal should be to move towards tools and processes that let us capitalize on the interactive nature of the web.
It’s about using the right tool for the right job. I’m not convinced we have the right tool yet.
]]>To that end, I’ve hacked together a few tests (using Jason’s tests as a starting point) that store their results in Browserscope. The test is fairly simple. For each test case, I check to see if the background image (or content image) has been loaded by checking the image.complete property. The property (which appears to be well supported) returns true if the image has been requested and downloaded, and false otherwise. So, if I want to see if image2.png has been downloaded, my code looks like this:
{% gist 3316747 %}
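A simplified sketch of that kind of check (the function name is illustrative, and the reporting to Browserscope that the gist handles is left out):
function imageDownloaded(url) {
    // Point an image object at the same URL the test page references.
    // If the browser already requested and downloaded it, it comes straight
    // from cache and complete is true; otherwise complete is false.
    var img = new Image();
    img.src = url;
    return img.complete;
}

// For example, for test two:
var downloaded = imageDownloaded('images/test2.png');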
Early results
It’s early, but already a few trends (some interesting, some less so) are emerging:
- Setting an image to display:none won’t stop the image from downloading (see test 1). So don’t do it. We already knew this, but the tests are reinforcing it.
- The same goes for setting an element to display:none: the background will still be downloaded by everybody (see test 2).
- Setting the parent object to display:none, however, does work pretty consistently (see test 3). It looks like Fennec still downloads the image, but Android, iOS and Opera won’t.
- Downloading behavior for a simple cascade override is pretty inconsistent (see test 4). However, setting background images within a media query and then overriding seems to work pretty well (test 5). Fennec is a little eager again, but Android, iOS, Opera and the Kindle only download what’s needed.
Finally, my favorite nugget of information so far pertains to Opera Mobile. Opera, as it turns out, is darn clever. Instead of using the parser to trigger resource downloading, they use layout code. This means that since they have information about viewport size and visibility settings, they can be much more selective about which resources they download. So, for example, if an element is outside the viewport then the background image doesn’t have to be downloaded on page load.
When I talked to Ola Kleiven of Opera about this optimization, he said that Opera used to implement the same behavior on Opera for desktop prior to 11.60 but had to pull it due to compatibility reasons. Developers were relying on things like the load events of these images, so when they didn’t load in Opera, things would break. It’s too bad: it’s an interesting and effective optimization method. I would love to see this behavior implemented cross-browser, but as an opt-in feature (maybe a meta tag or something could trigger it).
Thanks to everyone who has already been testing—it’s been fun to watch the results come in! If you haven’t run the tests yet and you’ve got a few minutes, please do. Once the number of results gets to a nice level, I’ll post a more detailed follow-up about which browsers behave in what ways. I’ll also include any interesting findings in the book.
In the meantime, feel free to fire up the tests on any and all devices you have. If you think of another test you would like to see added, or see a potential issue with the test, let me know. One of the benefits of automating the test results is that it should be very easy to add new tests and quickly get broad results.
]]>We’ve responded by creating a lot of tools that help us collect this information. We can easily store quotes, snippets or even full articles in any one of a hundred different sites and services. We can save links to the videos and recordings that moved us on some level. RSS feeds make it incredibly easy to consume massive quantities of online articles and blog posts. Tools like the incredible Ifttt help make our many online services interact with each other, further easing the process of collecting information.
But what happens to that information after it has been carefully tagged and stored away? The more new information we collect, the more old information gets buried. That post we read that sparked an idea, that quote that stirred something deep within—lost and buried. Forgotten amongst the piles of all the other information we’ve collected.
Certainly this is nothing new—the issue has merely been amplified. Technology, though, is supposed to work for us. It’s supposed to help us solve issues we’ve had in the past. Why not push our tools to not merely collect, but to remind us what is already there?
We need more services like the Kindle Daily Review and Timehop. Kindle’s Daily Review delivers “flash-cards” of a book you’ve read in the past. It displays notes and highlights that you made. It’s fantastic! I love seeing a passage from a book that I had forgotten all about, but that still sparks something within me. Timehop is similar—it lets you know what you posted on Twitter, Facebook, Foursquare or Instagram a year ago. I’ve only been using that service for a short time, but already I’ve found several articles and conversations that I had forgotten about.
Why is this important? Because serendipity is a stimulant. In his book, “Marshall McLuhan: You Know Nothing of My Work”, Douglas Coupland had this to say about Marshall McLuhan, one of the most prescient minds of the last century: “For Marshall, the fun of ideas lay in crashing them together to see what emerged from the collision.“ When you rub two stones together, you can make a spark that starts a fire. Put two seemingly unrelated ideas next to each other and the effect is the same.
Searching, for the most part, eliminates those kinds of serendipitous discoveries. It’s a more or less direct path to the very specific type of information we are looking for. A service like the Kindle Daily Review, a service that provides automated nostalgia—that’s the kind of tool that encourages the mixing of ideas, the friction that causes the spark.
We have enough piles. What we need are more shovels.
]]>And of course, it’s responsive. That adds another level of loveliness. The navigation adjustments in particular are kind of interesting to watch. My favorite layout is the last one to kick in before you hit 1020px. It’s clean, easy to read, and the ads are not yet there.
But….
There’s a catch here. For as lovely as the site looks, there’s a lot going wrong from a technical perspective.
Performance
For starters, the size. Even on my phone, the site weighs in at a massive 1.4MB. A large part of the issue is that those ads, the same ones that don’t display below 1020px, are still being requested and loaded on smaller resolutions. They’re just being hidden with a little touch of ‘display:none’.
When I tested, the site also made about 90 requests. That’s an awful big drag on page load time—no matter what device or network you are viewing the site on.
Advertising
Another potential concern is the advertising. I’m not sure exactly on Smashing Magazine’s business model, prices, etc., so it’s hard to criticize their advertising efforts too much. I do find it interesting that their ads are all hidden below 1020px though, leaving their ads visible to only a portion of their audience.
One reason for this may be the high number of ads they display. In their sidebar, I count 16 ads. They are distracting at large resolutions, so I imagine they would have been very overwhelming on smaller resolutions. Having to re-orchestrate 16 ads onto a small-screen layout would be a very tall task.
Again, we’re talking business model here so there’s obviously much more at play than what an outside perspective grants, but I would love to see fewer ads. Not just for Smashing Magazine, but across all sites. Fewer ad spaces, more money per slot. (Roger Black talks about this in detail in his posts The holy grail, part 1 and part 2.) The result would be three key improvements:
You would have a lighter, cleaner experience.
The ads would provide more value to the advertisers—fewer ads competing for eyeballs per page.
The smaller number of ads would be much easier to manage across resolutions.
Bigger picture
Now, having said all that, I could be guilty of premature condemnation. Perhaps this is the interim solution and a fix to these issues (performance in particular) is forthcoming.
Jason Grigsby put it nicely in two tweets:
When I have guests and don’t have time to clean, I shove things in a closet. No biggee. Everyone does it. But the house isn’t really clean.
The key is following through and cleaning the closet as well. Let’s hope others are better at it than I am at home. :-)
Of course he’s right. In fact, I have a few messy closets myself. (Both literally and metaphorically.)
And I don’t mean to pick on Smashing Magazine. They are far from the only site making these kinds of mistakes on the technical side of things and from a business perspective, the discussion about how to handle advertising is far from being resolved. And again, from a visual perspective I think they did an awful lot of things right.
We simply need to ensure that the discussion broadens. Responsive design is a fantastic approach, one that brings us closer to taking advantage of the inherent flexibility of the web. But simply being responsive is not the destination. To maximize the potential of a responsive approach, we need to focus not only on the visual components, but on the technical execution and business ramifications as well.
]]>To say I’m excited is a bit of an understatement. I love sharing what I know, and writing a book has been something I’ve wanted to do for a long time. I’m not sure of the exact publication date yet, but it looks like the book should be out sometime in the second half of the year.
Uh…there’s already an awesome book on responsive web design
Why yes, there is, and I wholeheartedly recommend buying a copy. Like, now. Ethan’s book is a brilliant read. I even wrote a glowing review shortly after finishing it. In that review, I said I was pining for a sequel—something that would build on the core principles Ethan discussed. When Michael Nolan of New Riders got in touch a few months ago and asked if I was interested in writing a book, I saw it as an opportunity to get that book written.
More info, please!
The book will be an exploration of how a responsive approach can be integrated into the workflow—from planning and early mockups through to the actual development of the site. In addition to fluid layouts, media queries and fluid images, the book will discuss topics such as design deliverables, structured content, feature detection and server-side enhancements.
If you want to keep up with the progress, your best bets are to follow me on twitter, stay tuned to this blog, and sign up for the mailing list at responsiveenhancement.com.
I’ll keep you posted!
]]>One interesting trend—at least to me—is that I returned to reading a lot more web related books (10!) this year. This is in no small part related to the A Book Apart series. If they keep churning out quality books like this, that count is likely to stay very high.
As always, if the book made this list, then I enjoyed it on some level. There are far too many good books out there to suffer through one that doesn’t interest me. If I’m not enjoying it I set it aside.
If you’re looking for specific recommendations, “The Invisible Man” (which I had read before and will read again) and “The Demolished Man” top my (short) list of fiction. “Obliquity”, “Marshall McLuhan: You Know Nothing of My Work!” and “The Death and Life of the Great American School System” are at the top for non-fiction (excluding the web-related ones).
Hardboiled Web Design by Andy Clarke
2001: A Space Odyssey by Arthur C. Clarke
Linchpin by Seth Godin
Pull by David Siegel
The Death and Life of the Great American School System by Diane Ravitch
Confessions of a Public Speaker by Scott Berkun
Rainbows End by Vernor Vinge
The Demolished Man by Alfred Bester
Where Good Ideas Come From by Steven Johnson
HTML5 for Web Designers by Jeremy Keith
Obliquity by John Kay
Responsive Web Design by Ethan Marcotte (See my review)
Adaptive Web Design by Aaron Gustafson
CSS3 for Web Designers by Dan Cederholm
The Elements of Content Strategy by Erin Kissane
Presentation Zen by Garr Reynolds
The Invisible Man by HG Wells
The Filter Bubble by Eli Pariser
Big Deal by Robert Hoekman Jr.
Delivering Happiness by Tony Hsieh
The Information by James Gleick
Mobile First by Luke Wroblewski (See my review)
Designing for Emotion by Aarron Walter
Content Strategy for the Web by Kristina Halvorson
Bird by Bird by Anne Lamott
Marshall McLuhan: You Know Nothing of My Work! by Douglas Coupland
Mindfire by Scott Berkun
Simple and Usable by Giles Colborne
Loose by Martin Thomas
This year’s post is an overview of how inconsistent mobile networks are, as well as a plea for more communication between carriers, manufacturers and developers. If you’re interested in mobile performance, please give it a read.
Be sure to check out the rest of the articles there as well. As usual, there’s lots of good content already posted with more sure to come. In particular, I recommend Stoyan’s post on asynchronous snippets, Guy’s look at when you should and shouldn’t inline resources, and a post on localStorage performance by Nicholas.
]]>Luke argues that you should design, and build, your mobile experience first. He hits you (gently) over the head with data point after data point making it increasingly obvious that this mobile first technique not only makes sense, but should in fact be the de facto standard for creating sites on today’s web. He makes his case carefully, succinctly and convincingly.
After he has you sold on the importance of this approach, he spends the rest of the book arming you with the information you’ll need to start creating better mobile experiences. He walks you through how to organize your content, develop appropriately sized touch targets, embrace new touch gestures, simplify the process of input on mobile devices and more. Amazingly, he manages to do this in only 120 pages.
All of this knowledge is described in very clear detail. The book is interspersed with subtle humor that makes the book not just educational, but entertaining as well. I was particularly fond of the revelation that anchor tags are part of HTML 0 which works in most browsers except Internet Explorer.
If you pair this book with Ethan’s ‘Responsive Web Design’ (the last book published by A Book Apart) you are arming yourself for the future. Combine these two approaches and you are well on your way to creating more bulletproof sites (if there is such a thing on today’s web).
Needless to say, I highly recommend Luke’s book. As with the rest of the A Book Apart series, it succeeds brilliantly at fulfilling their goal of arming you with the information you need quickly so you can get back to work. You would be doing yourself a disservice by not adding it to your library.
]]>After months of planning, the second ever Breaking Development conference came to an end the other week. To say that it was fun and inspiring would be selling it short. To some extent, I am still recuperating but I thought I should post my thoughts while things are still fresh in my mind.
The Speakers
The speakers did an absolutely incredible job! There was plenty of pragmatic information to take back and apply right away, but there was also a lot of talk about the future: where we need to be and what we can do to get there. We’ll get video posted of all the talks at some point in the future, but for now, be sure to check out all the decks at Lanyrd. Scott Jenson recorded his presentation off his laptop, so his deck includes accompanying audio. I can’t recommend his presentation enough. It was a call to arms: a forward-thinking and inspiring talk to conclude the first day of the conference.
Every once in a while I hear a question or two about the timing of the release of the conference schedule (not just in regards to our own event, but in regards to web conferences in general). There are two general routes to take for choosing topics for a conference. One is to do it early. That way attendees know what to expect early on and it helps to sell more tickets throughout the registration period. The other is to give the speakers a bit more time and wait until closer to the event to finalize all the topics. It means you have to hope the attendees will have enough trust in the speakers and the conference to spring for registration without knowing all of the topics. It also means, however, that the talks will be timely and something that the speaker is passionate about now—not something they were passionate about 4 months ago. We opted for the latter, and I believe we were a stronger conference for doing so.
The Attendees
As great as the speakers were, what really makes these events fun are the attendees. I wonder if people realize just how great a difference an exceptional group of attendees can make in the quality of the experience at a conference. It simply cannot be overstated. There was no shortage of excellent discussions taking place in the evenings and during lunch. The quality of the beer conversations was incredible.
In fact, I consider those side conversations one of the most important ingredients in a conference experience. The speakers set the stage with inspiring and informative presentations, but the real fun is seeing everyone start to talk about how this information can be applied to create better mobile experiences: both for today and for the future.
The feedback was incredibly kind. As tiring as it can be to organize an event, the adrenaline rush you see from people enjoying it is mind-blowing. There is nothing that gets you more ramped up than seeing people talk about how inspired they are to go back to their companies and create something amazing. Here are a few of my favorite quotes:
I’ve never felt so much energy and geekery under one roof in all my nerdy life.—Elizeo Benavidez
This was simply the best conference I have attended. Every session had real-world, immediately applicable techniques and ideas.—Jen
The Breaking Development conference is wrapping up here on spacecraft Opryland One. It’s been a wonderful experience. The conference itself was superbly curated—a single track of top-notch speakers in a line-up that switched back and forth between high-level concepts and deep-dives into case studies.—Jeremy Keith
“If you’re the most talented person in the room, then you’re in the wrong room” #bdconf is the RIGHT room people.—Luke Wroblewski on Twitter
I’m hitting that point in the conference where I just want to lock myself in room and finish hacking on related projects. INSPIRED #bdconf—Lyza Danger Gardner on Twitter
And perhaps my personal favorite:
One thing blowing me away about #bdconf the talks with attendees, let alone speakers! Tough questions being addressed with incredible zeal—Kevin Griffin on Twitter
Kevin’s might just be my favorite because I think he pin-pointed what I felt made the event so special: the absolutely ridiculous amount of smart, passionate and inspired people all coming together to try and make sense out of this rapidly changing and increasingly complex ecosystem of devices we find ourselves working with.
A huge thank you is in order to everyone who made the event so awesome. The speakers for all their hard work, the sponsors for all their help supporting and promoting the event, all the awesome people I get to work with on the Breaking Development team (Jeff Bruss, Erik Wiedeman, Paul Thompson, Derek Pennycuff, Michael Lehman and Matt VanSkyhawk) and in particular, the attendees.
I can’t wait to get to do this again in April!
]]>Ethan doesn’t present RWD as an end-all-be-all approach. He simply presents it as a potential solution (a good one). In his own words:
…more than anything, web design is about asking the right questions. And really, that’s what responsive web design is: a possible solution, a way to more fully design for the web’s inherent flexibility.
He does discuss the three main ingredients he laid down in his original article (flexible images, media queries and fluid grids) but he goes beyond that and acknowledges some of the initial concerns people had about the approach, and introduces some methods to potentially fix those trouble spots. Rightfully, he cautions “these aren’t problems with responsive design in and of itself–we just need to rethink the way we’ve implemented it.” In his own delightfully humorous and approachable way, he challenges you not just to take a more responsive approach to your design, but to do so with a great deal of care and thoughtful consideration.
The final chapter is chock-full of topics I would like to see discussed in a bit more detail: responsive assets, context, and mobile first (which will be fleshed out in more detail in Luke’s upcoming book) for example. That’s not intended to be a slight in any way to this book–the excellent A Book Apart series is intended to be concise and get you back out and working with your newly acquired knowledge, and the content Ethan covers fits the book, and the spirit of the series, perfectly. This is just me being greedy and pining for a sequel of sorts.
It’s a bit sad to think about, but after all these years, we are just now starting to really embrace the web for what it is–a truly flexible and malleable medium. I don’t think it is out of line for me to suggest that Ethan’s book will soon be viewed in the same light as books such as Jeffrey Zeldman’s Designing with Web Standards–as a book that challenged the way we practiced our profession and helped to push us forward.
To make a long story short, you would be doing yourself an absolute disservice by not buying this book. It lives up to the hype and then some.
]]>Thinking ‘mobile’ web is a big, fat red herring. Just like ‘apps’ was a few years ago. Next year, it’ll be something else.
While I’m not willing to go quite as far as that, I do think the term has become loaded with historical assumptions that are no longer true.
Originally, it worked out alright. ‘Mobile’ came to encompass both the device and the context of use in one fell swoop. It could do that because there wasn’t a lot of diversity in the kinds of devices you could call ‘mobile’. In addition, those devices lacked the capability of offering a full-web experience. ‘Mobile’ use was pretty clearly defined because, to be quite honest, the devices weren’t capable of offering much more.
[Image: an old phone. The word ‘mobile’ is loaded full of historical assumptions.]
Fast-forward to today. Now, using the historical definition of a ‘mobile’ device, we have smart phones, tablets, even netbooks - all of which are substantially more capable of providing a rich, full-web experience than their ‘mobile’ ancestors. As a result, there is much more variety in ‘mobile’ - both in terms of device type and use case. The device and the context no longer go hand-in-hand; they must be decoupled.
The problem, though, is that we can’t just eliminate the term. An earlier tweet by Aral Balkan is just as accurate:
I feel there’s a real danger that the “no mobile web” meme will translate to “we don’t have to rethink interactions for mobile”
There is a difference in how you use the web on these new devices and there is a ‘mobile’ context out there somewhere - it’s just not as clearly defined as it once was. We can’t ignore the differences in use. We owe our users the best browsing experience possible, regardless of context or device and that means that we have to find and embrace those differences.
So what term should we use? I have no idea. This is something I’ve been wrestling with for a few months now, and I have yet to find a term that I feel is sufficient - a term that accurately portrays this new medium without implying too much, or too little.
If this seems like I’m quibbling over semantics, well, I guess I am. As the saying goes though, “Words have meaning and names have power.”
]]>On May 25th, I’ll be giving a talk on Mobile Web Performance Optimization for the Web Performance Summit. It’s an online event, and the lineup looks fantastic! Considering the low cost of attendance (not just the ticket price - but no travel costs!) I’m not sure how you could afford to pass on it if you’re at all interested in performance.
Then, on the 27th, I’ll be giving a shorter presentation at WebVisions in Portland entitled “Can Media Queries Save Us All?”. In case you’re wondering—it won’t be a full-out cheerleading session for media queries (or RWD). There are some very real issues with the technique, and as with most techniques, there are times when it is appropriate, and times when it is not.
Both events look great—I highly recommend checking them out. If you do make it to Portland, let me know—it would be great to meet some of you! Of course, you can also meet me at the next Breaking Development conference in September if you would like. (I promise to post my thoughts on the first event soon!)
]]>Also, since last I wrote about the event, we added a special half hour talk by Brian Alvey. Brian has a ton of experience managing the content for some major mobile applications and I think his talk on mobile and the cloud is going to be a great addition to the lineup.
In addition to the topic selections, we’ve been spending a lot of time lately figuring out how to handle the food (I think we’ve got something both unique and cool lined up for that!) as well as the bevy of other little details that go into getting a conference up and running.
There are still tickets available and we’ll be selling them right up to, and at, the door. As a reminder, the pass includes attendance for both days (13 talks) as well as admittance to the opening night party and lunch, breakfast and snacks on both days. Hope to see you all in April!
]]>The primary issue that those opposed to the one web approach tend to mention is that responsive web design ignores the mobile context. This, of course, broaches the question: What exactly is the mobile context? The answer is not particularly clear.
It used to be. The mobile user used to be always on the go; trying to consume location related and task-oriented content very quickly. The problem is that this is not necessarily the case anymore. Phones are getting more and more capable and the browsing experience on many of them can be downright enjoyable. That has resulted in more people partaking in casual browsing on their mobile devices. Jeremy Keith hits it on the head in his comment to Paul Boag’s thought provoking post:
There’s also this assumption that mobile users have just one context (“I’m in a hurry! I need to find a time or a location!”) while desktop users have another (“I’ve got all the time in the world; I don’t mind wading through a bunch of irrelevant crap”) whereas, as Stephanie rightly pointed out—and I believe Luke Wroblewski is also doing user research in this area—this simply isn’t true.
People will use their Android phones or iPod Touches over WiFi while they are lounging on the sofa and people will use their laptops over 3G while traveling on a train.
Now tell me: which is the mobile context?
The issue is that there is no longer a clear mobile context. The stats seem to support this fact. Luke Wroblewski posted a summary of stats taken from Compete’s Quarterly Smartphone Report that pertained to where people are using their mobile devices to access the internet. The results were varied:
84% at home
80% during miscellaneous downtime throughout the day
76% waiting in lines or waiting for appointments
69% while shopping
64% at work
62% while watching TV (alt. study claims 84%)
47% during commute in to work
One could argue that a few of these settings might lean towards our traditional view of the mobile context. I’m willing to bet that a large portion of the 69% of the people browsing the web while shopping are looking for information to help them make their purchase, perform price comparisons etc. Take a look at those top two results though—those aren’t your traditional scenarios of mobile use.
Let me be clear—I’m not saying that there is never a need to tailor the content of a mobile site. I’m also not saying that responsive design and one web are an end-all, be-all for mobile. It’s not a black and white issue—there are many, many shades of gray. We shouldn’t ignore the unique needs and characteristics of mobile devices and their users—it would be irresponsible to do so. However, we should be very careful not to assume too much. Mobile context is important, but first we need to figure out what the heck it is.
Responsive design is just one piece of the puzzle. By itself it is, in many cases, an incomplete solution. It’s a tool, however, that when leveraged properly and in conjunction with the proper techniques (see https://yiibu.com for an example of what I mean) can greatly aid in optimizing for multiple devices. To assume it is the entire solution is a mistake; to discount it as a hack seems to me to be just as bad.
]]>Her post, along with recent posts by Stephen Hay and Alistair Croll have got me thinking quite extensively about hosting my own data—everything from bookmarks to tweets. Tantek Celik is already utilizing his own home-brewed solution to do this, and he recently posted about his stance on hosting your own data.
This is what I mean by “own your data”. Your site should be the source and hub for everything you post online. This doesn’t exist yet, it’s a forward looking vision, and I and others are hard at work building it. It’s the future of the indie web.
Right now, our social networks are all essentially data silos. I post tweets to Twitter, status updates to Facebook, bookmarks to Delicious (at least I used to) and images to Flickr. I don’t own that data, nor do I have easy access to it all in one central location. Aside from backups, if one of those services were to disappear, it would take any data I had posted there with it.
Even if I am diligent with my backups, most silo-based sites today do not have a common exportable data format. Which means that while I may be able to back my data up, in most cases I can not easily move that data over to another service.
One solution is to post through your own site, as Tantek is doing. Then, using the appropriate protocols and semantic standards, post that data to the appropriate hub or service. That way, you now have a local copy on your site, and you can continue to post even if say, Twitter, were to go down (which, you know, never happens).
The more I consider it, the more I am surprised that we ever settled for anything else. So often my posts into these silos are related to each other. I gain interest in a topic so I research it and add several bookmarks to Evernote. I talk about it with some people on Twitter. All of this leads me to write a post on my blog. With all three of those types of information being sectioned off from one another, each inherently loses some of its original value. If I have access to all the updates and replies from all of these different services in one location, however, I can see the actual progression of an idea.
This future version of the web does not come without problems though. One such problem is how to capture the social aspect of these sites. For example, consider any conversation on Twitter. If I’m archiving my tweets, I will only get half of the conversation. Without somehow having access to the other half of the data, I no longer have a complete thought, and that conversation loses its value.
Thankfully, there are some solutions being actively developed. The first step is to use a protocol called PubSubHubbub, which allows you to specify a “hub” that third-party services, like Twitter or Google Buzz, can refer to for new content. Using this model, whenever you post something, you ping your hub with the new content. The hub in turn alerts any services that have subscribed to the feed of that content that there is a new update for them to publish.
This accomplishes a few things. First, it allows you to publish once and potentially update many services. Secondly, it operates at near real time as opposed to the current practice of repeated polling.
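For the curious, the publisher side of that ping is tiny. Here’s a minimal sketch based on my reading of the PubSubHubbub spec (a form-encoded POST with hub.mode and hub.url); the hub and feed URLs are placeholders, so check the documentation of whichever hub you actually use.

```javascript
// Minimal PubSubHubbub publisher ping (sketch): tell the hub your feed
// has new content so it can push the update to subscribers.
const HUB_URL = 'https://hub.example.com/'; // hypothetical hub endpoint
const FEED_URL = 'https://yoursite.example.com/feed.xml'; // your feed (topic)

async function pingHub() {
  const body = new URLSearchParams({
    'hub.mode': 'publish',
    'hub.url': FEED_URL,
  });
  const response = await fetch(HUB_URL, { method: 'POST', body });
  // Hubs generally answer a successful publish notification with 2xx (often 204).
  console.log('Hub responded with status', response.status);
}

pingHub();
```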
To resolve the conversation issue, services can implement the Salmon Protocol. In the model described by the Salmon Protocol, the source (your site or whatever you are using to publish content) pushes new content out via one consistent protocol (PubSubHubbub) to all the aggregator services (Google Buzz for example). When a comment or reply is posted on one of those services, that service will then push the comment back to the source.
At this point, you can choose to simply store locally, or push that comment back out to your hub so other services can post the reply as well. If a large number of services get behind this technology, it offers tremendous potential. Imagine being able to maintain a conversation that would span across several social networks!
Of course, the downside to all of this is that I may very well have to reconsider my aversion to Google Buzz. They already implement the PubSubHubbub protocol, and they claim to be actively working on implementing the Salmon Protocol as well.
]]>The W3C just released an official HTML5 logo that would look quite appropriate on any superhero costume with a cape (see SuperHTML5Bruce as evidence). The issue, as I see it at least, is not really with the logo itself. Sure, it’s kind of an odd idea, but it’s pretty well designed and could theoretically be a good way to market the new standard.
I say theoretically because in its current form it fails, at least if we are measuring by accuracy and clarity. To take a quote directly from the site:
The logo is a general-purpose visual identity for a broad set of open web technologies, including HTML5, CSS, SVG, WOFF, and others.
In case you didn’t catch the blunder, the W3C is basically lumping a variety of technologies under the HTML5 buzzword that has become so popular. There’s been great discussion online about whether this matters or not. Back in August of 2010, Jeff Croft wrote a very well thought out post about the topic. In it he argues that we should ‘embrace’ the buzzword because:
Our industry has proven on several occasions that we don’t get excited about new, interesting, and useful technologies and concepts until such a buzzword is in place.
That’s a valid point, and good reason for the need for an umbrella buzzword of some sort. What’s unfortunate is that the buzzword chosen now carries two meanings with it: HTML5 the spec and HTML5 the overarching buzzword that includes HTML5, SVG, WOFF and (shudder) CSS3.
That being said, I can see the case for sticking with it. While ‘HTML5’ was an unfortunate choice, it has spread rather rapidly and it may be too late to change it. Also, as Jeff stated, buzzwords tend to generate excitement and marketers and journalists have effectively used it to do just that.
Here’s where this particular usage case falls apart for me. Who exactly is the W3C ‘marketing’ to? Isn’t it the very people responsible for utilizing these standards to build applications and sites? If so, then what good could lumping all of those different technologies under a confusing umbrella term possibly do?
Jeremy Keith said it best back in August:
Clarifying what is and isn’t in HTML5 isn’t pedantry for pedantry’s sake. It’s about communication and clarity, the cornerstones of language.
In that same article, he tells a story of a web developer who wanted to know if Jeremy’s new HTML5 book covered CSS3. When he was told it does not, the developer replied “But CSS3 is part of HTML5, isn’t it?”
That’s where this buzzword fails and that’s where the issue lies. When the buzzword is causing confusion amongst the very people who have to be able to distinguish between these technologies, we have a problem. It’s a problem that certainly isn’t helped when the standards body that writes the specs these developers rely on fails to clearly separate and distinguish the technologies from one another.
A lot of work has been done in the last decade or so to help clarify these technologies and push their adoption. We’ve seen web standards go from being a tool of the few to a tool of the many. We’ve seen people preach the importance of separation of concerns–HTML for structure, CSS for styling, Javascript for interaction. A great deal of effort has gone into clarifying the ever evolving, never quite sane world of web standards. I fear if we’re not careful with it, improper use of this buzzword threatens to undo at least some of that work.
]]>Just a warning ahead of time because I know people are going to mention it—none of these solutions are 100% foolproof, just as captchas are not 100% foolproof. I acknowledge that, but that’s beside the point. While it would be nice to use a system that detects and eliminates 100% of the spam without any false positives, I don’t think that’s a realistic expectation at this point in time. My goal is to eliminate as much spam as possible without negatively affecting the experience of my users.
So let’s get started shall we?
Akismet
The only alternative solution I proposed in the prior post is also the one I most commonly use. Taken directly from their site:
Akismet is a hosted web service that saves you time by automatically detecting comment and trackback spam. It’s hosted on our servers, but we give you access to it through plugins and our API.
Here’s the basic gist. Every time a form using Akismet is submitted, that information is sent to the Akismet service. Akismet then performs a series of tests (including whether that email or message has been marked as spam by other users of the service) on that information to determine if it’s spam or not. The service then returns a value indicating the results of these tests.
If Akismet thinks it’s spam, it keeps the information in their database for 15 days in case you need to check it out and approve it. If you do find a false positive, Akismet will re-analyze the information to attempt to learn from it.
Traditionally, Akismet has been used on blogs (primarily Wordpress driven ones). However since the API is available, there have been many plugins and libraries created that allow you to easily use the service on other platforms, and for any type of form you would like (contact, sign-up, etc).
I use Akismet on this site and it’s been very effective. I can remember only a handful of spam comments that got through, and even fewer false positives. Meanwhile, Akismet has managed to successfully mark ~45,000 messages as spam. Not too bad.
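If you’re curious what a call to the service looks like outside of a ready-made plugin, here’s a rough sketch based on my understanding of Akismet’s comment-check endpoint. The key, blog URL and field values are placeholders, and you should verify the exact endpoint and parameter names against Akismet’s current documentation.

```javascript
// Sketch of an Akismet comment-check call: POST the submission details,
// Akismet responds with the text "true" (spam) or "false" (not spam).
const API_KEY = 'your-akismet-key'; // hypothetical key
const ENDPOINT = `https://${API_KEY}.rest.akismet.com/1.1/comment-check`;

async function isSpam(submission) {
  const body = new URLSearchParams({
    blog: 'https://yoursite.example.com',
    user_ip: submission.ip,
    user_agent: submission.userAgent,
    comment_author: submission.author,
    comment_content: submission.text,
  });
  const response = await fetch(ENDPOINT, { method: 'POST', body });
  return (await response.text()) === 'true';
}
```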
Roll Your Own Heuristic Solution
Another alternative solution is to use a heuristic spam detection system. Basically, you would check the message against a series of specific criteria to see how likely it is to be spam. Jonathan Snook outlined a method for doing just this a couple of years ago.
In his system, he checked a long list of rules and used a point-based system to weigh the results. For example, any time ‘viagra’ was found in a comment, he would apply -1 points. At the end of the checks, if the total was below a certain level, he marked the message as spam. At the time he wrote the post, he was only getting a spam message every one or two weeks.
Building a heuristic approach like this would not be particularly difficult or time-consuming. It would also allow you the opportunity to fine-tune the system based on common traits you are seeing amongst spam submissions. More importantly, as with Akismet, it would not get in the way of your users.
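A toy version of that kind of point system might look something like the sketch below. The rules and threshold are invented for illustration (they are not Jonathan’s actual checks); the real value comes from tuning them against the spam you actually receive.

```javascript
// Toy heuristic scorer: each rule nudges the score up or down, and
// anything below the threshold is treated as spam.
function scoreMessage(message) {
  let score = 0;

  // Penalize obvious spam vocabulary.
  if (/viagra|casino|payday loan/i.test(message.text)) score -= 1;

  // Penalize link-stuffed comments; reward link-free ones.
  const linkCount = (message.text.match(/https?:\/\//g) || []).length;
  if (linkCount > 2) score -= linkCount;
  else if (linkCount === 0) score += 2;

  // Extremely short or extremely long bodies tend to be spam.
  const length = message.text.trim().length;
  if (length > 20 && length < 3000) score += 1;
  else score -= 1;

  return score;
}

function isProbablySpam(message, threshold = 0) {
  return scoreMessage(message) < threshold;
}
```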
Bayesian Filtering
Taking the heuristic solution one step further, you could use a Bayesian filtering method (like most email spam filtering services). A Bayesian approach determines the probability that a message is spam based both on what that message contains and also the contents of past messages that were marked as spam.
It actually works the other way too–a Bayesian filtering system can also compare the words that are typical in ‘good’ messages to help determine if a message is safe. What works so beautifully about a Bayesian approach is that it will progressively get better the more it is used and the larger its library of ‘good’ and ‘bad’ words grows.
You could easily roll your own solution here if you would like. There are numerous Bayesian filtering classes written in a variety of programming languages that are readily available online.
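To give a sense of the shape of the idea, here is a deliberately simplified sketch of the core calculation, assuming you have already counted how often each word appears in previously classified spam and non-spam (‘ham’) messages and that the two classes are roughly balanced. Real implementations handle tokenization, rare words and storage far more carefully.

```javascript
// Naive Bayes sketch: combine per-word spam probabilities learned from
// previously classified messages into one overall probability.
function spamProbability(text, spamCounts, hamCounts, totalSpam, totalHam) {
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  let logRatio = 0;
  for (const word of words) {
    // Laplace smoothing so unseen words don't zero everything out.
    const pSpam = ((spamCounts[word] || 0) + 1) / (totalSpam + 2);
    const pHam = ((hamCounts[word] || 0) + 1) / (totalHam + 2);
    logRatio += Math.log(pSpam) - Math.log(pHam);
  }
  // Convert the accumulated log ratio back into a 0..1 probability
  // (this assumes equal prior likelihood of spam and ham).
  return 1 / (1 + Math.exp(-logRatio));
}
```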
Any combination of the above
You can of course combine any of the above to give yourself an even more robust system if you would like. In my experience, there isn’t much of a reason to combine Akismet with anything. It’s been robust and accurate enough that I just haven’t had the need. If you’d rather not use their service, however, combining bayesian filtering with some general criteria would create a strong layer of defense against spam attacks.
Again, as stated above, none of these methods are entirely, 100% foolproof. They are, however, quite effective and will bring the number of spam messages that get through to an almost negligible amount. A few minutes a week would be all it would take to clean those up.
To me, these solutions are much better alternatives to using a captcha system. They still catch the spam, but they remain transparent to the user.
]]>Firstly, it is worth pointing out that captchas are nowhere near as secure as you’d like to believe. Back in 2005, the W3C pointed out that third party services had demonstrated that most captcha services could be defeated with 88%-100% accuracy by using some simple OCR. I suspect that since then captchas have probably gotten a bit better, but spam bots probably have as well.
Then of course there are the accessibility issues. In particular, visitors who suffer from blindness, dyslexia or low vision will struggle greatly with a captcha system. You can aid them slightly by offering an audio alternative, but the audio used in captcha systems tends to be rather noisy and doesn’t help a great deal. Audio alternatives are particularly useless if you are in a noisy environment such as a coffee shop or office. To make matters worse, these audio alternatives are often not provided in a way that is accessible to the very audience that needs them the most.
Let’s assume, however, that all of our visitors have good vision. Captchas are still the wrong solution because they put the onus on the user to figure them out in order to successfully continue. Spam is not the user’s problem; it is the problem of the business that is providing the site. It is arrogant and lazy to try and push the problem onto a site’s visitors.
Captchas cause a great deal of frustration for many users. On average, it takes around 10 seconds to solve a captcha correctly. I have watched many a savvy user struggle 2, 3, even 4 times to correctly solve a captcha. That’s no way to reward someone who is trying to interact with your site.
The cute ‘solution’ to wasting 10 seconds of a user’s time was to make that time somehow productive. So reCAPTCHA came into play. reCAPTCHAs show two words. One word can be deciphered using OCR. The second is a word, taken from a book, which OCR failed to decipher. Correction: both words are originally undecipherable by OCR. One word, the ‘control’ word, is a word that has been identified consistently and is therefore ‘solved’. The second word is one that has yet to have a large enough base of consistent answers to correctly determine what word it is.
The idea is that if the user correctly solves the more legible word, the reCAPTCHA system will assume they are probably right about the second word. By showing that second word to a large number of people and comparing results, they can figure out what that word should be. It’s crowd sourcing the digitization of books. Of course it too completely ignores the real issue: the assumed new-found ‘productivity’ doesn’t benefit the user. In fact, reCAPTCHA systems make the user get frustrated for no reason whatsoever about a word that even the reCAPTCHA system itself cannot decipher!
In conclusion, captchas are inaccessible, inconsiderate and frustrating. In addition, most captchas are not as secure as you would like to believe. A far more elegant solution is to use some sort of filtering system (like Akismet). Such a system can run behind the scenes and work without complicating the user experience.
It’s time to kill off captchas and stop punishing users for trying to interact with our sites.
]]>As I stated last year, I don’t finish books that I am not enjoying, so each book in the list below I’d recommend to varying degrees. If I’m picking favorites, I’d have to go with Ender’s Game, The Gun Seller and Daemon for my favorite fiction reads. My top three non-fiction books this year (excluding the web-related ones) would be Amusing Ourselves to Death, Flow and Better Off.
Brave New World by Aldous Huxley
Amusing Ourselves to Death by Neil Postman
What the Dog Saw by Malcolm Gladwell
I, Robot by Isaac Asimov
Flow by Mihaly Csikszentmihalyi
Dirk Gently’s Holistic Detective Agency by Douglas Adams
The Long Dark Tea Time of the Soul by Douglas Adams
Essential PHP Security by Chris Shiflett
Infoquake by David Louis Edelman
Good to Great by Jim Collins
Blindness by Jose Saramago
The Search by John Battelle
Ender’s Game by Orson Scott Card
Speaker for the Dead by Orson Scott Card
Natural-Born Cyborgs by Andy Clark
The Forest and the Trees by Allan G. Johnson
Better Off by Eric Brende
Xenocide by Orson Scott Card
Alice In Wonderland/Through the Looking Glass by Lewis Carroll
The Shallows by Nicholas Carr
97 Things Every Programmer Should Know by Kevlin Henney
The Time Machine by HG Wells
Glasshouse by Charles Stross
Rapt by Winifred Gallagher
The Gun Seller by Hugh Laurie
Man’s Search for Meaning by Viktor Emil Frankl
Daemon by Daniel Suarez
Frankenstein by Mary Shelley
Marooned in Realtime by Vernor Vinge
Rework by Jason Fried
Forever War by Joe Haldeman
HTML5 Up and Running by Mark Pilgrim
Flashforward by Robert Sawyer
Be sure to check it out. While you’re at it, have a look through the rest of the articles. There’s some great stuff in there about mobile performance, new tools, the state of WPO in general, response and request headers, etc. Definitely worth browsing through.
]]>In the world of web development, there are many choices that are commonly presented as true or false, black and white, Boolean, binary values, when in fact they exist in a grey goo of quantum uncertainty.
What prompted my thoughts on the subject (at least recently) was a post on Simply Accessible entitled Speed vs. Accessibility. In the post, Derek Featherstone tells a story of someone who went so far as to change their markup, in a manner that made it significantly less semantic, in order to save a few bytes in file size and therefore improve performance. The result? They saved about 50 bytes, but lost contextual meaning and reduced accessibility. This led Derek to ask if it had to be speed or accessibility (by the way, the answer is no—they can coexist).
The truth of the matter is, web development is a series of trade-offs. Sure, some best practices overlap between say, performance and accessibility. Many, however, do not. To make an educated decision requires a healthy level of knowledge both of the project being worked on, and of these concerns (semanticity, accessibility, performance, etc.) and their implications. Knowing what is most important to a project will give you a roadmap to follow when you inevitably have to decide which trade-offs to make.
It’s for this reason that I give little credence to the many one-sided argumentative posts you will see online. The goal should always be to be as semantic as possible, but you should also strive to be as performant, as accessible and as well designed as possible. For example, anyone who reads my blog knows how seriously I take performance. In my opinion, however, I would never be willing to adjust my markup as the person in Derek’s story did in order to shave a few bytes off my HTML. I may be a performance zealot, but to me, having a well structured page is the base from which I prefer to build. I believe a well marked up document provides the ideal starting point for optimal semantics, accessibility, performance and maintainability. This means that my sites will never be quite as performant as they could be, and I’m ok with that. I’ll do my best to optimize my site in ways that I think maximize my gains and have a minimal negative impact on my markup. There are many far more effective ways to optimize my site without having to pay such a steep price.
So by all means, find something you feel strongly about—learn about it, share your knowledge with others, become a strong advocate for it, but always remember that web development requires a balance. Find a solid base to build from, determine the considerations most important to the project and always keep those in mind as you make your decisions about what trade-offs to make.
]]>Breaking Development is a two day conference dedicated to mobile web design and development. I’m incredibly proud of the speakers we were able to bring on board—with presenters like Peter-Paul Koch, Luke Wroblewski, Jonathan Snook, Nate Koechley and Jason Grigsby (just to name a few—the full list of speakers is available on the site) I think we’ve created an event chock-full of quality content.
I am also pretty excited about the format of the event. Often, the most valuable experience of a conference is the informal conversations and friendships that are formed in between sessions. We really wanted to encourage that type of discussion among the speakers and attendees so we molded the event around that idea: attendance is small (there will only be about 200-250 people attending), breakfast and lunch are being catered on both days and there will be an opening night party (naturally with free beer) to kick things off. For me at least, the format was almost as important as the content.
So if you are interested in an event centered around the mobile web with great speakers, free food and beer and good conversation, then I hope to see you in Dallas.
Also, I would be remiss if I didn’t mention that there is another conference a month later (May 12-13), in the Netherlands, which will also be covering mobile web design and development. Mobilism, hosted by Peter-Paul Koch, Stephen Hay (who is also speaking at our event) and Krijn Hoetmer, has a great lineup as well and will also be keeping their attendance small (I believe it’s around 250 people). If Amsterdam is more convenient for you, be sure to check that event out. They haven’t started ticket sales yet, but they should be available before long.
]]>One such factor is color. Different hues, values and saturation levels can all influence how a person perceives time. Typically, this can be linked to how “relaxed” or “stressed” a user feels during the wait. The more relaxed they feel, the shorter the wait will feel. It’s entirely possible that a stressed user may feel as though a site is very slow, while a relaxed user may feel that same site is very responsive.
So how do we induce a feeling of relaxation using color? For starters, we can choose blue hues as they elicit the most relaxed state. In sharp contrast, yellow and red hues generate more excitement and thus more stress. Red is particularly concerning since it also induces a feeling of avoidance and failure, further increasing the level of stress.
Another important consideration is the saturation level of a given color. Users who view low saturation colors have been shown to be in a more relaxed state than those who view a highly saturated color. This effect is particularly emphasized in environments where contrasts are intense – like computer screens.
Finally, we should consider the value of colors. Pastel colors (high value) result in a more relaxed state, and therefore a shorter perceived wait, than lower value (darker) colors.
Using this knowledge, we can create designs that inherently imply a fast, responsive experience to the user. By no means is this a replacement for taking the time to fine-tune the performance of your site. If used in conjunction with performance optimization techniques, however, you can further optimize the experience of your users by providing them with a site that feels as responsive as the stats say it is.
]]>We trade self-reflection for busyness, gorging ourselves on it and drowning in it, without recognizing the violence of that busyness, which we perpetrate against ourselves and at our peril.
Of course this isn’t exactly a new issue. Henry David Thoreau was lamenting our propensity to clutter our lives way back in 1854 when he wrote in Walden that “Our life is frittered away by detail.” It’s just that now it’s become easier for us to clutter our lives.
In fact, it would seem that our busyness is one of the greatest personal challenges that we face in the digital age. Our always connected lifestyle means that while we always have quick access to our email, Facebook messages and tweets, we rarely have moments of quiet, uninterrupted reflection and relaxation—we don’t allow ourselves whitespace.
I’m certainly guilty of this. I have an iPhone, which I have with me almost all the time. The urge to fire up Twitter, Facebook or my email is almost compulsive at times. I can certainly notice the difference on those days when I actually am able to resist the urge and allow myself even just a couple of hours with no technological distractions. In those cases, I feel far more refreshed and recharged. It makes sense: in those cases where I resist the siren call of a continuous stream of information, my mind actually gets to relax for a bit and reflect.
Quiet reflection is far too important for us to push aside as often as we do. Studies on rats, for example, have shown that “down time” is used to transfer information gathered from experiences from the hippocampus into the rest of the brain; essentially it’s used to record memory. When you don’t have this down time, your ability to absorb maximum information and truly learn from an experience is greatly diminished.
Let’s be clear—I am not condemning technology. In fact, technology is not the issue here; we are. We need to recognize the value of a quiet moment, the value of reflection. We need to learn to manage our consumption of information.
For me, it means leaving my iPhone behind and instead taking my daughter for a walk outside. It means shutting down the laptop a little earlier on some nights and picking up a good book. It means letting that podcast sit one more day and turning on some quiet music, or even no music at all. I don’t disconnect every night, and I’m not saying we have to. We just need to find ways to simplify and reduce our consumption of information so that we can find a healthy balance instead of “gorging” ourselves on our busyness.
]]>Performance optimization isn’t that necessary. This misconception doesn’t stem from a lack of caring–most of the people I talk to truly care about crafting a good user experience for their visitors. I think this myth stems more from a lack of awareness. Most of us work on connections that are typically quite a bit faster than that of the average internet user. As a result, we experience the web differently than our users. In addition, most of the people I talk to just haven’t heard about the studies that have come out regarding the effect of performance on the user experience.
Performance optimization is too difficult and takes too much time. The statement is often followed by “…and my clients won’t pay for all that extra time.” The prevailing belief is that you have to invest numerous hours and considerable energy to improve the performance of your site.
High performance and beautiful design are mutually exclusive. In the early days of the web when we were browsing the internet via a dialup connection, improving performance meant removing the images from your site. On today’s web people don’t want to have to settle for a less graphically compelling site, so removing most of the images on a site isn’t a very compelling option. The fact that most sites that talk about performance optimization aren’t exactly well known for beautiful designs doesn’t help to dispel the myth.
Thankfully, there’s been a flurry of recent studies and new tools that help to both demonstrate the value of a high-performing site and simplify the process of getting there.
Performance matters
Information on the how of performance has been available for awhile, but not until the last couple years have we really seen answers to the question of why.
In 2009 at Velocity–the annual performance optimization conference–there was a flurry of information released by companies that clearly demonstrated how performance affects key business objectives. Google and Bing teamed up to present results from their respective experiments with page load time. Bing showed that by slowing their load time down by 2 seconds, they saw a 4.3% drop in revenue per user, as well as a 1.8% decrease in the number of search queries per user.
Google’s results were perhaps even more startling. They found that by introducing just a 400ms delay in their pages, the number of searches per user decreased by 0.59%. Even more concerning was the fact that even after the delay was removed, the slower initial user experience continued to affect how their users interacted with the site. With the delay gone, those same users still performed 0.21% fewer searches.
Performance isn’t just tied to the business objectives of search engines. Shopzilla presented information detailing how they were able to speed up their site from 4-6 seconds to 1.5 seconds per page. The results were impressive. They saw their conversion rate increase by 7-12% and their page views increase by 25%.
In April of this year, Mozilla applied just a couple of simple performance techniques and shaved 2.2 seconds off their average page load time. This boost in performance increased their download conversions by 15.4%. Based on their daily traffic, they estimated that this increase translated to an astonishing 10.28 million additional downloads per year.
Buoyed by these studies, Google recently announced that they will be taking page load time into consideration when ranking sites. While it’s not yet one of the primary considerations, a high-performing site will get a bump in ranking over an otherwise similarly ranked, lower-performing site.
With performance contributing to revenue, frequency and depth of interaction, conversion rates and search engine optimization, it becomes crystal clear that performance optimization is not only important but that it should be a primary consideration.
You can have your cake and eat it too
Of course, performance is not all that matters. (If it was, we’d all have sites that look like Jakob Nielsen’s.) Eye candy, time and energy are all critical too. That’s why it’s important to note that there are many techniques you can use to improve the performance of your site without affecting the design at all. In most cases, you can do so using tools that greatly simplify the process and reduce the time investment required.
Improving page load time primarily boils down to two concerns:
reducing page weight;
and reducing HTTP requests.
Reducing page weight
Reducing page weight is really a lot simpler than you might think. For images, take the time to optimize them by hand using your favorite graphics program. You’ll be amazed by how much smaller you can make them without any noticeable degradation in quality.
If your images have already been produced, you can use a couple of different tools to reduce the size simply and quickly. One such tool is Smushit.com. Smush.it lets you upload graphics, which it will then return to you optimized, without any loss in quality. If an app is more your style, ImageOptim is a very powerful app for Macs that wraps up the power of several different image optimization tools in an easy drag-and-drop interface.
For CSS and Javascript, run your code through a minification tool like YUI Compressor or JSMin, both of which can be found online. These tools will remove unnecessary whitespace and comments from your files, decreasing the file size. YUI Compressor will take it a step further and substitute shorter variable and function names for the longer ones in your core file, resulting in even more savings. Again, if an app is more your style, Stoyan Stefanov’s OMG (One-click Minifier Gadget) will automatically minify and save your CSS and Javascript files.
Reducing HTTP requests
To reduce the number of HTTP requests, you can again use a tool like the online wrapper for YUI Compressor. The online tool gives you the option to upload several files, which it will then combine into one.
In addition to being a good idea in general, making use of progressive enhancement with CSS3 will help with the performance of your site. Using CSS3 for things like gradients, drop shadows and rounded corners can allow you to significantly reduce the number of images included on a page. Doing so will not only reduce the number of HTTP requests, but also reduce your page weight.
Use CSS sprites where applicable. They can be a little tedious to create and maintain, so consider using a tool like the SpriteMe bookmarklet. SpriteMe will search through your CSS to find references to images and recommend which of them you can combine into one sprite. It will not only create the sprite for you, but it will also generate the new CSS for you to place into your stylesheet.
If sprites aren’t your cup of tea, then use data URIs. Like sprites, they can be tricky to efficiently maintain, but again, someone has done some of the heavy lifting for us. CSSEmbed, created by Nicholas Zakas, is a great command line tool for parsing CSS and generating data URIs where appropriate.
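The underlying transformation is simple enough to sketch. This is not CSSEmbed itself, just a small Node example of the idea: read an image, base64-encode it, and prefix the right MIME type so the result can be dropped into a stylesheet.

```javascript
// Convert an image file into a data URI (illustrative sketch).
const fs = require('fs');
const path = require('path');

const mimeTypes = { '.png': 'image/png', '.gif': 'image/gif', '.jpg': 'image/jpeg' };

function toDataUri(imagePath) {
  const mime = mimeTypes[path.extname(imagePath).toLowerCase()];
  if (!mime) throw new Error('Unsupported image type: ' + imagePath);
  const base64 = fs.readFileSync(imagePath).toString('base64');
  return `data:${mime};base64,${base64}`;
}

// Example usage: console.log(toDataUri('icons/star.png'));
```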
Server side optimizations
You can reap great benefits from some basic optimizations in your .htaccess file as well. Enabling gzip compression for not only HTML, but also stylesheets and scripts, can greatly reduce the size of your files, typically by about 70%. In fact, you can–and should–gzip any textual content that is requested on the server. It’s not necessary to enable gzipping for files like images or PDFs, as you will not notice any improvement in size. In fact, you may see the size of those files actually increase if you try to gzip them.
You should also set a far future Expires header. The Expires header is used by the server to tell the browser how long a component can be cached for. While this technique doesn’t do anything for first time visitors, applying a far future Expires header to your images, stylesheets and scripts will greatly reduce the number of HTTP requests for return visits.
Depending on your role and familiarity with server side technologies, you might feel a bit uncomfortable making significant changes to your .htaccess file. Thankfully someone has already done the heavy lifting for us. There’s actually an .htaccess file already configured with all of these settings that you can just download and drop into place on your server.
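For anyone not running Apache, the same two ideas translate to any server. The sketch below is purely illustrative, not a replacement for the .htaccess file mentioned above: it gzips a textual response when the client supports it and attaches far-future caching headers of the kind you would apply to static assets.

```javascript
// Illustrative Node server applying gzip and far-future caching headers.
const http = require('http');
const zlib = require('zlib');

http.createServer((req, res) => {
  // Pretend this is a static stylesheet being served.
  const body = 'body { background: #fafafa; }';
  const oneYear = 365 * 24 * 60 * 60; // seconds

  res.setHeader('Content-Type', 'text/css');
  // Far-future caching: tell the browser it can keep this for a year.
  res.setHeader('Cache-Control', 'public, max-age=' + oneYear);
  res.setHeader('Expires', new Date(Date.now() + oneYear * 1000).toUTCString());

  // Gzip textual content when the client advertises support for it.
  if (/\bgzip\b/.test(req.headers['accept-encoding'] || '')) {
    res.setHeader('Content-Encoding', 'gzip');
    res.end(zlib.gzipSync(body));
  } else {
    res.end(body);
  }
}).listen(8080);
```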
Conclusion
Performance matters. More and more studies are confirming that it’s tied to the success of major business objectives. With Google now taking page performance into consideration in their rankings, it can no longer be ignored. Users want sites that are fast. We can give them those, without having to reduce the quality and effectiveness of our designs.
If you perform these basic optimization techniques, you’ll considerably improve the performance of your site and you’ll do so without having to sacrifice your design at all. In fact, you can implement all 35 of the best practices for performance as recommended by Yahoo! without any noticeable difference in your design.
If you also make use of the plethora of freely available tools to simplify each step, you’ll find you don’t have to spend a lot of time to notice a significant improvement in your page load time. Of course, if you’re using a build tool the best solution would be to grab the command line versions of these tools and automate them, but that’s a discussion for another article.
It’s time we stop making excuses and start giving performance the respect it deserves and our users the faster experience they want.
]]>…when we seek happiness as the ultimate state, we’re destined to be disappointed. Absent unhappiness, how would we even recognize it? If we’re fortunate, happiness is a place we visit from time to time rather than inhabit permanently. As a steady state, it has the limits of any steady state: it’s not especially interesting or dynamic.
He goes on to talk about how our desire for happiness is derived from our impulses to avoid pain and to seek gratification, and the problem with taking those impulses too far:
But it also turns out that pain and discomfort are critical to growth, and that achieving excellence depends on the capacity to delay gratification.
When we’re living fully, what we feel is engaged and immersed, challenged and focused, curious and passionate. Happiness — or more specifically, satisfaction — is something we mostly feel retrospectively, as a payoff on our investment. And then, before very long, we move on to the next challenge.
What this boils down to is that we need to be willing to push ourselves outside of our comfort zones if we want to continue to grow and expand our skills. The problem is not being happy, the problem is believing that we should always be happy and never feel discomfort.
This doesn’t mean you should walk around looking for reasons to feel upset and depressed and it’s not a stand against having a generally positive outlook; there is a lot of power in a positive attitude. What you should instead take away from Tony’s article (or at least it’s what I took away) is that you need to be willing to experience the highs and lows, to push yourself out of your comfort zone, and to never be content with what you’ve accomplished. As he eloquently puts it:
Express your joy, savor your good fortune and enjoy your life, but also feel your disappointments, acknowledge your shortcomings, and never settle for happiness.
Enough said.
]]>Designers struggle endlessly with a problem that is almost nonexistent for users: “How do we pack the maximum number of options into the minimum space and price?” In my experience, the instruments and tools that endure (because they are loved by their users) have limited options.
He further elaborates, explaining our desire for intimacy with our tools:
With tools, we crave intimacy. This appetite for emotional resonance explains why users - when given a choice - prefer deep rapport over endless options. You can’t have a relationship with a device whose limits are unknown to you, because without limits it keeps becoming something else.
It’s not a revolutionary concept, designing and building only what is necessary, but it’s a good one to keep in mind. With tools, it’s often better to be the master of few than the jack of all trades.
]]>Those cool bouncing Google homepage balls everyone was talking about last week were an example of HTML5, but if you want to see an example of what the format can really do, take a look at this.
When the first couple of commenters questioned the accuracy of the information, she responded by stating that:
According to numerous sources, the balls were indeed in HTML5, specifically CSS3, part of the standard.
Now as most developers will tell you, CSS3 is most certainly not a part of the HTML5 spec. They’ll also be quick to tell you that Google didn’t actually use HTML5 for that logo—it was in fact just a series of divs (a poor structural decision).
Here’s the thing though—she’s right. There were articles that said that Google was using HTML5 for the logo. There are also articles that imply that CSS3 is a part of the HTML5 spec. It’s just an example of how much confusion there is around the spec and what it consists of. Whether that confusion matters (I think it does) is a topic already discussed.
Granted, there should have been someone doing some technical editing on an article like this. Still, I think it’s unfortunate that our immediate response is to criticize instead of inform. She’s taking a beating in the comments and on Twitter, but her intent wasn’t to misinform. She had read a few articles, seen some examples, and thought she could trust those sources.
How is it more constructive for us to condemn than to educate? Telling people they’re wrong doesn’t fix the situation. Informing them about why they’re wrong might.
]]>It’s a very simple concept, but it is one that, if adhered to, would fundamentally alter the priorities of a project. If the user comes first, SEO takes a backseat to content strategy. Accessibility and performance cease being afterthoughts and become crucial components of a site. More time is given to user research to determine what users expect to find on your site, and where they expect to find it.
The irony is that by putting the users first, the success of the project would be greatly improved. Build something that users care about and want to use, and they’ll reward you with their loyalty.
Imagine if every company took the time to ask, and more importantly to answer, this question before launching their projects. Wouldn’t that be refreshing?
]]>Webgrind is a freely available PHP profiling frontend that sits on top of XDebug. Using it, you can see how many times different functions are called and find what functions called them. You can also quickly see the inclusive cost (time spent inside a function plus calls to other functions) and self cost of each function.
Viewing the logs for the last page load, I could see that mysql_query was called a whopping 52 times and accounted for 84.93% of the processing time (which was at an unacceptable ~3.1ms).
Using the Webgrind frontend, I was able to trace back 23 of those calls to one function. In turn, this function was called by one other function 17 times. I decided to focus there first.
Again, with the help of Webgrind, I could see that this function was called several times, in separate files, for each page. The function produced the same results each time it was called.
The quick and simple fix, then, was to use a property to cache the results of that function call. The first time the function was called, it would process completely and run the necessary queries. The next time it was called, it would check its cache property to see if a value already existed for the parameter being passed; if it did, it would return that value.
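The shape of the fix was plain memoization - something along these lines (the class and method names here are made up for illustration, not the actual code):
<?php
class ProductLookup {
    private $cache = array();

    public function getDetails($id) {
        // only do the expensive work the first time a given id is requested
        if (!array_key_exists($id, $this->cache)) {
            $this->cache[$id] = $this->queryDetails($id);
        }
        return $this->cache[$id];
    }

    private function queryDetails($id) {
        // stand-in for the original database queries
        return 'details for ' . $id;
    }
}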
This simple optimization immediately brought the total queries down from 52 to 36. They still accounted for 74% of the processing time, but that time had dropped dramatically from 3.1ms to ~2ms.
36 queries was still more than I wanted. A similar function to the one I had just optimized was responsible (both indirectly and directly) for 25 of the remaining 36 queries, so I thought I’d take a look there next.
Looking at the source, I could see that while the function asked for a boolean parameter to indicate if it should run certain queries, it never actually checked the value of that parameter. So no matter what, it was running all the queries, all the time. Fixing that error brought the total query count down from 36 to 16 and the total time to process the page was now ~1ms.
As a surprising bonus, there were no locations in the code that I had to change now that the function had been corrected. People hadn’t been expecting to get those extra values in their return object, so they never tried to use them unless they had passed a true value in for that parameter.
All in all I was able to take the query count down from 52 to 16 and the processing time from ~3.1ms to ~1ms. There’s more room for optimization, but this is certainly not a bad start for about 45 minutes of work.
]]>Enough self promotion. After having taken a look through the options, here’s a few more talks that I’m really hoping make the cut. Again, if they sound interesting to you, please give them a thumbs up in the PanelPicker.
Developers: Saving the Web From Your Dick Move (Hoping they correctly answer their fifth question with “You don’t have to”)
Challenging the Challenge of Accessibility on Yahoo! Homepage
If any of you have a panel submitted, or came across one particularly interesting, feel free to let me know. I know many people frown upon the annual “panel pimping” but I actually enjoy it. There are quite a few panels I probably wouldn’t have heard about otherwise.
Data URIs can be tricky to implement efficiently, however. Since they are a Base64-encoded representation of an image, there is a built-in level of obfuscation that can make manual maintenance tedious. Thankfully, Nicholas put together a command line tool called CSSEmbed which takes the pain out of using them.
CSSEmbed is a very straightforward tool that parses a stylesheet and converts all references of images to their data URI equivalents. Installation is as simple as downloading the .jar file and placing it where you’d like. Then you use a simple command specifying any options, the file to output to, and the file to parse, like so:
java -jar cssembed.jar -o styles_uri.css styles.css
Since versions of Internet Explorer prior to IE8 don’t support data URIs, you have to use MHTML as a workaround (again, Stoyan has an excellent post with more info). The command for that is very similar — you just need to make sure to declare a “root” (for example, I’d use https://timkadlec.com as my root for this site) which CSSEmbed will use in the MHTML.
java -jar cssembed.jar -o styles_mhtml.css styles.css --mhtml --mhtmlroot https://timkadlec.com
Right now, to my knowledge, you can’t parse an entire directory of CSS files, but that’s about the only thing I can think of that I’d like to see added. It’s a great tool to use during an automated build to really simplify the process of using data URIs and I definitely encourage you to go download it and give it a try.
]]>Her big sister, thankfully, thinks Jessica is at least somewhat interesting so far and likes to come and “talk” to her. Both girls, as well as their mother, are doing well (if not a bit tired).
I’m sure there’s going to be plenty of barbies and tea parties in the future (heck, there’s already some “Ring Around the Rosie”) but I’m sure I won’t mind. I’m just a happy dad whose two little girls have him firmly wrapped around their fingers.
]]>If you don’t research and vet your potential client before asking them to sign your contract, stop being stupid. If you bid on projects even though the potential client doesn’t know much about you or why you’d be a good (or bad) choice for them (they “just need a web designer”), stop being stupid.
He continues by analyzing how web designers can continue to lay the groundwork for “stupid” clients by failing to have a proper workflow in place:
If you aren’t the one defining the project process, stop being stupid. If you don’t define, police, and unfailingly adhere to specific milestone requirements and deadlines for both yourself and your clients, stop being stupid. If you’re producing design artefacts before completing a comprehensive discovery process, stop being stupid.
Too often, we rush blindly forward into new projects and new relationships with clients. This process is not at all conducive to high quality work. Quality work requires an investment of time and a devotion of resources. To craft a site of true quality, you need to take a step back and slow the process down — making sure you understand the problem you are trying to solve and ensuring that the solution you are proposing is the right solution for that particular problem.
While the stupid tag may feel a bit confrontational, it does not detract from the argument that Rutledge is making: not all failed relationships are the fault of the client. By failing to invest the proper amount of time and attention into planning, research, and careful consideration of requirements, firms and freelancers often set themselves up for failed client relationships.
]]>I gave Tumblr a try for a little bit, and I loved the freedom it gave me to post content I found important, regardless of how much detail I felt it warranted at the moment. Really, the only thing I didn’t like was the fact that I was now blogging in two different places - this site and my Tumblr blog. Since Tumblr had no easy way to import all my old posts from Wordpress, I decided to make use of the custom post type capabilities in Wordpress to build my own version of a tumblelog.
Since the frequency of posts will undoubtedly be picking up with the additions of these shorter post types, let me know if any of you would prefer to have a feed for just the “feature-length” posts. Right now, the main feed pulls everything.
The underlying structure didn’t change a ton - it’s still HTML5 based. As of the time of this post, there’s only one image being used in the site (other than anything called by the Google Analytics script). The rest of the graphical elements are a combination of CSS gradients and data URIs to help reduce the number of HTTP requests.
So have a look around. The plan is for this version of the site to stick around awhile.
]]>The “repetitive now” user is someone checking for the same piece of information over and over again, like checking the same stock quotes or weather. Google uses cookies to help cater to mobile users who check and recheck the same data points.
The “bored now” are users who have time on their hands. People on trains or waiting in airports or sitting in cafes. Mobile users in this behavior group look a lot more like casual Web surfers, but mobile phones don’t offer the robust user input of a desktop, so the applications have to be tailored.
The “urgent now” is a request to find something specific fast, like the location of a bakery or directions to the airport. Since a lot of these questions are location-aware, Google tries to build location into the mobile versions of these queries.
I think it’s a pretty accurate categorization, and a good thing to keep in mind when you’re building your mobile site or app. Each “type” of user is interacting with your content with a different goal in mind, and the experience should be tailored accordingly.
]]>I turned off comments in the last redesign of powazek.com because I needed a place online that was just for me. With comments on, when I sat down to write, I’d preemptively hear the comments I’d inevitably get. It made writing a chore, and eventually I stopped writing altogether. Turning comments off was like taking a weight off my shoulders. It freed me to write again.
I too have been trying to decide whether to continue using comments on my main blog. On the one hand, I can sympathize with Derek. I often “hear” the comments I’ll get, or won’t get, and ultimately allow that to either adjust the content in a post, or don’t publish the post at all.
On the other hand, I truly do enjoy the good discussion that can sometimes take place, and I don’t want to lose that. Perhaps a reply-by-Twitter option (as Jon Hicks is considering) is a decent way to generate that discussion without the feeling of obligation?
]]>Links are wonderful conveniences, as we all know (from clicking on them compulsively day in and day out). But they’re also distractions. Sometimes, they’re big distractions - we click on a link, then another, then another, and pretty soon we’ve forgotten what we’d started out to do or to read. Other times, they’re tiny distractions, little textual gnats buzzing around your head. Even if you don’t click on a link, your eyes notice it, and your frontal cortex has to fire up a bunch of neurons to decide whether to click or not. You may not notice the little extra cognitive load placed on your brain, but it’s there and it matters. People who read hypertext comprehend and learn less, studies show, than those who read the same material in printed form. The more links in a piece of writing, the bigger the hit on comprehension.
I like the approach taken by Readability. They generate a list of footnotes from the links, and then remove any special formatting for links within the text. I still have my links, in their original context, but I’m no longer distracted by them since they appear to be regular text at a glance.
The issue has been brought up many times, usually right after Apple announces some change in the way they accept applications into their store. With each change, invariably some apps, and some companies who have built a living off of those apps, get the short end of the stick and are no longer deemed acceptable by Apple’s standards (whatever they may be at that particular minute).
Disclaimer first - I’m generally an Apple fan. I don’t care what you say - they know how to package an awesome user experience in their devices, and they do it better than anyone else I’ve seen. Honestly, I don’t even fault them for being so dictatorial in the policing of their app store. There are benefits in a controlled system, among which are an assurance of quality and security.
That being said, as developers, I think many of us miss the boat. The hot thing to do nowadays in mobile is to build an iPhone app. The problem is that in doing so, we limit our audience (even if it is a good-sized one). Android alone has a wider user base than the iPhone, so should we be creating Android-specific apps instead? Again, the answer is no.
What needs to be done is what Peter Paul Koch recently suggested - we should be building HTML5 apps. For all the constraints that mobile development and design come with, browser capabilities in smartphones are becoming increasingly advanced. Most of the major players (Android, iPhone and BlackBerry, for example) are using some form of WebKit right now, which means there is a plethora of CSS3 and HTML5 features we can be tapping into.
Features like local storage and the cache manifest make it possible to significantly improve the performance of our mobile apps, and in many cases, make them indistinguishable from a native app. The benefit is that we build once, and our content gets displayed on a variety of devices - not just one.
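As a rough sketch, opting into those two features takes very little code (the manifest file name and stored values below are placeholders):
<!-- point the document at an application cache manifest so core assets work offline -->
<html manifest="app.manifest">

<script>
  // keep a small piece of data on the device so the next visit doesn't need a round trip
  localStorage.setItem("lastSearch", "running shoes");
  var lastSearch = localStorage.getItem("lastSearch");
</script>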
You could argue that the downside is monetization - which the App Store certainly provides (though for far fewer apps than some people believe). I’m not buying it though. As smartphones become more and more prevalent, I think we’ll see the mobile web reach a state similar to where the desktop web is now. People will be able to purchase their apps and services via subscriptions and downloads from the web, making the monetization argument a moot point.
As the market becomes more and more saturated with web-capable phones, the mobile web is going to become more and more widely used by a variety of different devices, using many different operating systems. Targeting just one of them is rarely (if ever) going to be the right move to make - and the more devices that are released, the more painfully obvious that point will become. The question is not should you be developing mobile web apps - the question is are you going to start now or play catch up later on?
]]>What Does It Test
Page Speed analyzes the performance of a page based on a set of 26 rules (as of version 1.7) that Google has documented. Each rule is given a priority code based on how great the potential impact would be on the page load time. Once Page Speed has determined which rules are broken, it gives the page a score between 0 and 100, which can be exported in JSON format, or sent straight to ShowSlow.com - a tool for recording YSlow and Page Speed scores over time.
The rules range from common rules like optimize your images, to lesser known techniques like defining a character set early. A complete list of the rules Page Speed checks for is below:
Combine external CSS
Minimize DNS lookups
Leverage browser caching
Remove unused CSS
Leverage proxy caching
Minify CSS
Minify HTML
Minify Javascript
Specify image dimensions
Serve static content from a cookieless domain
Use efficient CSS selectors
Avoid bad requests
Combine external JavaScript
Enable compression
Minimize redirects
Minimize request size
Optimize images
Optimize the order of styles and scripts
Put CSS in the document head
Serve resources from a consistent URL
Serve scaled images
Specify a character set early
Avoid CSS expressions
Defer loading of JavaScript
Parallelize downloads across hostnames
Specify a cache validator
What Else Can It Do
Like I said, while auditing may be the most well known feature of Page Speed, it is far from being the extent of its capabilities. In addition to performing a basic audit of your page, Page Speed automatically optimizes the Javascript files, CSS files, and images that it finds on the page, and saves them to a folder on your computer (which you can specify). It also has the ability to profile deferred Javascript files - something it does not do by default.
Perhaps the most fun, though, comes when you use the Page Speed Activity panel. At its most basic, the activity panel lets you record a timeline of all the browser activity, including both network activity and Javascript processing, that takes place during the time you choose to record. That means you get details such as how much time was spent connecting to the server and how much time was spent executing Javascript.
The activity panel also allows you to see a list of all the Javascript functions that were instantiated, but not called, during the recording period. This information could help you determine what portions of your code are not necessary immediately so that you can choose to load them later, allowing you to further improve your load time. You can also record “paint snapshots” in the activity panel which highlight each element in a page as it is rendered.
While the advanced features offered by the activity panel are very useful, it’s important to keep in mind that they do slow the Page Speed plugin down a little. Since they add a bit more overhead, the timeline will not be 100% accurate and will serve you better as a relative reference than as an exact measurement of the time it takes your page to load.
It’s Open Source Too!
Just recently, the Page Speed SDK was released as open source. Already, Steve Souders has demonstrated the usefulness of this by building HAR to Page Speed - a tool that will apply the Page Speed rules to a HAR (HTTP Archive specification - a format gaining popularity) file you upload.
By releasing the SDK as open source, the potential is there for developers to build cross-platform tools that would allow people to analyze the performance of their site according to the Page Speed rules, regardless of the browser in use. It’s going to be exciting to see what other tools get built around the SDK as it continues to evolve.
]]>The Pot Calling the Kettle Black
I generally really like Apple products. They really know how to polish up a beautiful user experience, probably better than any other company I know of. So while I wouldn’t go so far as to call myself a fanboi, I will say that I drool heavily over most of their devices. Yet despite my generally positive view towards Apple, even I have to admit I found it funny when Jobs was calling out Flash for being “100% proprietary”. It’s true of course, but Apple has little room to talk.
What made the point even funnier was how he said in one paragraph that Apple believes “that all standards pertaining to the web should be open”, and two paragraphs later, talks about Apple’s support for the H.264 format, a proprietary codec they chose to support over the open OGG.
In General, He’s Right
For the most part, I agree with him. Flash isn’t as essential as Adobe would like you to believe. Many very, very big names have gotten on board with supporting HTML5 video which contradicts Adobe’s claim that users of Apple mobile devices will go without video. Furthermore, the fact that Apple’s mobile devices use WebKit means that there are some very cool HTML5, CSS3 and Javascript features that developers can make use of to make their mobile web applications run smoothly while providing a high level of interactivity that used to be only possible via Flash.
Honestly — I think Flash is a tool that, while very useful for a while, is becoming less and less necessary and is inching towards becoming obsolete (and cue the rabid Flash fan base in 3….2….1). The new capabilities that HTML5 and CSS3 provide us with make it possible to provide interactive experiences that were once unimaginable without the help of Flash, and to do so with openly available technologies.
Good For Him
Finally — I just wanted to state that I found it refreshing to see a company so openly address their critics. It shows us that even though Apple may be getting bigger and bigger, and even though they have demonstrated some Orwellian traits, they’re not entirely out of touch with the development community and are in fact listening.
]]>One web guru who you may have heard of, Jeffrey Zeldman, posted an article on Sunday wherein he describes the choice he feels designers are now faced with:
So now we face a dilemma. As we continue to seduce viewers via large, multiple background images, image replacement, web fonts or sIFR, and so on, we may find our beautiful sites losing page rank.
It’s a fair enough point to make - sometimes a designer will need to make a decision between additional aesthetic effects and improved performance (but not that often). What followed in the comments was disturbing though - many people were actually viewing Google’s move as a negative thing and seemed to be very worried about its effects. Some felt Google was simply abusing their power; others believed they’d have to sacrifice good design in order to receive a decent ranking. The situation, I think, needs a little defusing.
Deep Breaths People
First, let’s remember that Google themselves have said that this addition to their algorithm affects less than 1% of their search queries. That means that while it is a factor, it is certainly not the most important one. It also means that while Google wants to display sites that perform well, their standards are probably not that high. If they were, there’d be a lot more sites being penalized in the rankings.
Secondly, this is a great move by Google and is fantastic news for users. Study after study has demonstrated that users really care about how well a site performs. It’s been shown that performance has an effect on bounce rate, purchase amounts, queries per user, credibility and more. If Google really cares about presenting their users with the best content, then this was a great demonstration of that belief.
You Can Have Your Cake and Eat It Too
Finally, there’s no need to panic - a site can be beautiful and still perform very well. For an example, let’s look at Happy Cog’s own visitphilly.com, both because it is a beautiful site, and because this conversation was started on Zeldman’s blog.
A quick look lets us identify some easy improvements. For one, there are 47 CSS background images. A quick run through SpriteMe trims that number down to 15.
There are also 7 scripts being called, 3 of which are unminified. At least a few of the seven could be concatenated to trim a few more requests, and running the 3 unminified scripts through a compressor shaves a whopping 54% off their combined size. Finally, by running the images on the home page through Smush.It, we eliminate another 133KB of baggage.
When you add those fixes together, the performance of the site would be dramatically improved, and the visual experience would not be diminished at all. It would be indistinguishable from the unoptimized version in every way, except that it would load significantly quicker.
So in summary, performance and beauty are not mutually exclusive. By taking performance into consideration, Google is not making the web a difficult place for designers; they’re simply making the web a more usable and less frustrating place for users.
Having a lot of images in a page can be very costly. Each image requires an HTTP request, and each HTTP request comes with plenty of overhead. I’ve seen pages with 20+ icons, each requiring their own request - that’s a serious hindrance to performance.
A way of combating the issue is to use sprites. For the uninitiated, an image sprite is simply one big image that includes many smaller images. This allows you to make one HTTP request and, using CSS, still make use of a variety of different images. If you plan ahead and do this while initially building the site, it’s rather simple. How do you quickly implement this strategy in an existing site though?
That’s where SpriteMe comes in. Steve Souders wrote a handy bookmarklet that looks through the images on a given page and identifies those that could potentially be combined into a sprite. At that point, it gives you a simple drag and drop interface with which you can add or remove items from a sprite. Once you’ve determined which images you’d like to combine, you can click “make sprite” and the bookmarklet will automatically create the image sprite for you, inject it into the page, and show you the CSS you’ll need to add.
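The generated rules follow the usual sprite pattern - something like this (the class names and offsets here are made up, not SpriteMe’s actual output):
.icon {
  background-image: url("icons-sprite.png");
  background-repeat: no-repeat;
  width: 16px;
  height: 16px;
}
/* each icon simply points at a different region of the same image */
.icon-rss     { background-position: 0 0; }
.icon-twitter { background-position: -16px 0; }
.icon-email   { background-position: -32px 0; }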
From there it’s as simple as downloading the image and copying the CSS the tool provides into your stylesheet (I believe the ability to export the CSS is being worked on). On one recent project, I used SpriteMe to create a sprite from ~25 images that were being used in the site’s navigation in about 15 minutes of work. The only additional step I took was to run the generated sprite through OptiPng to shave some filesize off (I’ll talk about OptiPng in another post).
So what are the downsides? Not too much. I’d love to see it do some optimization on the sprites it produces - in the example I gave above, OptiPng shaved ~40% off the file size. I’d also be willing to bet that the tool produces a bit more whitespace than is optimal or necessary in its sprites, resulting in more memory usage. To be fair, I haven’t tried to remove some of that excess whitespace to determine how big of a deal it really is.
All things considered, these are minor issues. I can easily run an image optimization tool on the produced sprite to fix that up, and it’s not like there’s THAT much whitespace - just probably a bit more than needed. These are, in my opinion, minor tradeoffs for the incredible convenience this tool provides.
]]>If you just want the highlights, I’d say that Neverwhere, Replay and The Road are at the top of my list as far as fiction is concerned. For non-fiction, Brain Rules, Blink and Trade-Off probably top the list.
Not to say the other books aren’t good - I typically only finish books I enjoy on some level. With as many books as I’d like to read, I don’t see the point in wasting time on a book that doesn’t manage to hold my interest on some level.
Shadows Linger by Glen Cook
The White Rose by Glen Cook
The Dad in the Mirror by Pat Morley
The Rosetta Key by William Dietrich
The Road by Cormac McCarthy
The Tipping Point by Malcolm Gladwell
Wikinomics by Don Tapscott
Blink by Malcolm Gladwell
Made to Stick by Chip and Dan Heath
Outliers by Malcolm Gladwell
Convergence Culture by Henry Jenkins
Unleashing the Ideavirus by Seth Godin
Neverwhere by Neil Gaiman
Strong Fathers, Strong Daughters by Dr. Meg Meeker
American Gods by Neil Gaiman
Mistakes Were Made (But Not By Me) by Carol Tavris
Glory Road by Dan Wetzel
Sway by Ori Brafman
Anansi Boys by Neil Gaiman
A Long Way Gone by Ishmael Beah
What Would Google Do? by Jeff Jarvis
Dune by Frank Herbert
Crowdsourcing by Jeff Howe
Fahrenheit 451 by Ray Bradbury
Elsewhere U.S.A by Dalton Conley
The Gone-Away World by Nick Harkaway
The Power of Less by Leo Babauta
Old Man’s War by John Scalzi
The Ghost Brigades by John Scalzi
The Last Colony by John Scalzi
Play by Stuart Brown
Replay by Ken Grimwood
Trade-Off by Kevin Maney
1984 by George Orwell
Dying Inside by Robert Silverberg
How We Decide by Jonah Lehrer
Do Androids Dream of Electric Sheep? by Philip K. Dick
Brain Rules by John Medina
First off was setting expires tags and turning gzipping on. Since I’ve done this in too many .htaccess files to count, this was simply a copy/paste job with very little tweaking necessary.
Then I decided to optimize the images (what few there are). I ran Smush.It from the YSlow tool in Firefox. That compressed the images, which I then downloaded to my computer and promptly pushed right back up to the server. It took me 5, maybe 10 minutes, and it cut the total image size by about 33%.
The icons for the RSS, Twitter and LinkedIn links in the footer were separate images - each requiring its own HTTP request. By using the SpriteMe bookmarklet, I was able to generate a new sprite and the necessary CSS in a few minutes and trim those 3 HTTP requests down into one. I also ran OptiPng from the terminal window on my Mac on the new sprited image to shave off about 40% of the size.
I then minified the CSS and Javascript (what little I was using) with OMG (the One-click Minifier Gadget), which shaved ~30% off the file size of each.
Without any of these optimizations, my YSlow grade was a C and my page size was about 150K. After the half-hour investment (if that…I’m being generous here) my YSlow grade came in at an A with a page size around 80K. Not bad for so little work.
There’s much more I can, and will, do here (caching, data-uri’s, etc.) but the moral of the story is that not optimizing your sites because it takes too long is just no longer an argument you can use. Make use of the tools available to you, and it’s downright scary how quickly you can speed up a site.
]]>In addition to a new look, the underlying code has changed. I used to blog on a home-brewed VBScript based system. This incarnation of the blog, however, is built on Wordpress. It’s also built using HTML5 and makes use of some CSS3 selectors for presentation. The idea was to simplify the publishing process to encourage more writing, and as part of that, I also wanted to simplify the look a little bit. Using HTML5 and CSS3 just give me an excuse to play around with those technologies a bit.
Finally, with a new design comes slightly different content. Historically, I’ve always treated the site as a place for web development related articles exclusively. The problem with this is that A) it’s not particularly personal and B) my interests have expanded greatly. So, going forward the site will no longer be limited to just web development - though there will still be plenty of that. Let’s just say if I find it interesting, and feel inclined to write about it, it’ll make its way onto the site. As a result, the blog will also reflect a slightly more personal tone than it historically has.
There are still a few rough spots to iron out with the new look - so if you stumble upon anything missing or anything that obviously looks out of place, feel free to shoot me a line about it.
]]>Comfort, however, is not often equal to progress. When it comes to expanding your mind to new possibilities and advancing your knowledge and skills, a little dissonance goes a long way.
One popular phrase you hear thrown around is the “wisdom of crowds” - the many are smarter than the few. However, it is important to note that the wisdom of crowds does not equal crowd psychology (the power of people acting collectively). Instead, the wisdom of crowds is only true when the crowd consists of a variety of people with different viewpoints, opinions and backgrounds.
Why is it that we need this diversity to excel and grow? It’s because as we become certain that something is true, our mind locks onto that idea. We have a tendency to filter out any information that may conflict with our firmly held opinions, and only focus on those that support them. This behavior, of course, strengthens our existing opinions and sheds no light on alternate solutions and beliefs that may be superior to the ones we have chosen to latch onto.
As Jonah Lehrer says in How We Decide, “The only way to counteract the bias for certainty is to encourage some inner dissonance. We must force ourselves to think about the information we don’t want to think about, to pay attention to the data that disturbs our entrenched beliefs.”
The way to overcome our certainties, and to challenge ourselves to new heights of accomplishments and knowledge is to consider other perspectives than our own, to surround ourselves with people who will challenge our beliefs. And then we must listen. We must not filter out their commentary, we need to consider it and view our problems with a fresh perspective. That’s how we develop our skills and that’s how we create new, innovative solutions.
]]>Basically, for those of you who haven’t seen it, it shows how a typical person opens a banana using the stem. As we all know sometimes this method works just fine and sometimes the stem is tough and we have to struggle a bit to peel it, smashing the top of our banana in the meantime.
A monkey, meanwhile, simply pinches the other end of the banana, which causes a split in the banana peel, and then smoothly peels the banana. It works amazingly well.
What’s the point? The point is, this morning I thought I knew all there was to know about peeling bananas. I would never have expected that there was a better, more efficient technique I could be using. I was very confident that my method was the best one out there.
How many times do we take that approach with our design or development skills? We assume we know all we need to know about a topic, so instead of continually experimenting and reanalyzing our techniques, we plod along confident that our current method of work is the best. Even if there is a better way, we certainly wouldn’t find it by reading that book or that blog.
Meanwhile, there’s a better way out there. It might be only a minor improvement, or it might be something that completely alters the way we work. We won’t know it though, unless we continue to question our knowledge and show a consistent desire to improve. Sometimes, no matter how much we think we know, we can learn from even the most unlikely sources.
The recent announcement is that Outlook 2010, like Outlook 2007, will use Microsoft Word as its rendering engine. No…you read that right…Word’s rendering engine. A rendering engine that doesn’t support simple CSS properties like float, width or height. Here’s their stance on why they’re opting to use the Word rendering engine again:
We’ve made the decision to continue to use Word for creating e-mail messages because we believe it’s the best e-mail authoring experience around, with rich tools that our Word customers have enjoyed for over 25 years. Our customers enjoy using a familiar and powerful tool for creating e-mail, just as they do for creating documents. Word enables Outlook customers to write professional-looking and visually stunning e-mail messages. William Kennedy Corporate Vice President, Office Communications and Forms Team
Now I understand their desire to allow their users to create “visually stunning” emails easily. However, the side effect of their current approach is that it forces anyone creating HTML emails to cater to Word’s poor support of current HTML and CSS standards. Unfortunately, the team behind Outlook apparently doesn’t think these standards apply to email. They claim that there is “no widely-supported consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability.” Really?
In response to their announcement, fixoutlook.org was set up and encouraged Twitter users to retweet in protest of the decision. At the time of their statement, over 20,000 people had tweeted the message. I understand that Twitter users are only a subset of the people who will be using and developing for Outlook 2010, but still….how can 20,000 people, many of whom are developers and designers who work with these technologies daily, not demonstrate some sort of consensus? Even if not, wouldn’t the logical move be to try to support as much of the HTML and CSS standards as possible?
In the meantime, this really sends an inconsistent message about Microsoft’s willingness to play nicely with others. While the IE8 team made a commitment to improving their standards support significantly, the team behind Outlook 2010 is ignoring them and is forcing everyone else to make the effort to play along with them, making it very difficult for innovation across the board.
If you’re on Twitter, head over to fixoutlook.org and be sure to add your name to the list of people who realize how bad of an idea this is. If you’re not on Twitter, how about telling the Outlook team your thoughts on their article defending their choice to use the Word rendering engine. Let’s find out how many people equal a consensus.
More Information
]]>Let’s take an example. Let’s say that our client, Great Sprockets Inc., wants a design with a few rounded corners and semi-transparent backgrounds sprinkled in. We decide not to use progressive enhancement. Everybody should get these rounded corners and semi-transparent backgrounds.
So, we oblige. We create some 24-bit PNGs for the backgrounds. IE6 doesn’t support PNG24 transparency natively, so we add in a call to a script to fix that. We create a few images for the rounded corners, add a couple of extra elements to our markup to position them, and we’re good to go.
Now our other client, Even Greater Sprockets Inc., also wants rounded corners and semi-transparent backgrounds. However, recognizing that neither is important to the actual branding of the site, they agree to practice a bit of progressive enhancement.
So, using two lines of CSS, we give rounded corners to all Firefox and Webkit-based browsers. Again, using CSS, we use RGBa to create semi-transparent backgrounds in the browsers that support them, and let others fall back to a fully opaque background color. And that’s it. No images, no extra Javascript calls, and no extra elements in our markup.
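Something along these lines is all it takes (the values are placeholders):
.panel {
  /* Firefox and WebKit draw rounded corners; other browsers simply get square ones */
  -moz-border-radius: 6px;
  -webkit-border-radius: 6px;
  /* opaque fallback first, then a semi-transparent background where RGBa is supported */
  background-color: #333;
  background-color: rgba(51, 51, 51, 0.8);
}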
One client has ensured that every visitor to their site with a relatively modern browser, regardless of browser capabilities, gets rounded corners and semi-transparent backgrounds. As a result, they added time to the development of their site, and therefore money to their bill. In addition, they’ve increased the time it takes for their page to load by adding a few extra HTTP requests necessary to load the necessary images and scripts.
The other client has offered these embellishments only to browsers that support them natively. As a result, their development time is lower, and so is their bill and page load time. Their branding is still intact and their site still looks good - it just lacks a few added aesthetic touches.
Which option makes more sense to you?
]]>That was a pretty big announcement in its own right, and I was very pleased to see the new pattern approved and garnering a bit of buzz. That announcement, however, was trumped by Google announcing that they will be starting to index Microformats and RDFa and using that data to enrich their search results.
Microformats have been around since 2003, but adoption has been a bit sluggish. While quite easy to implement overall, it can at times be difficult to demonstrate their value due to a lack of major support, and therefore, major incentive. That shouldn’t be a problem anymore, because Google is definitely providing that major incentive.
How They’ll Be Used
Google has a good vision for how to make use of the harvested data in their results. They’ll be using these “rich snippets”, as they’re calling them, to provide both additional content and meaning about the pages in their search results. For example, a page featuring reviews will show the average star rating, and the number of reviews the page contains, right there in the search results.
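To give a rough idea of what that markup involves, a review marked up with the hReview microformat looks something like this (the product, reviewer and rating are invented):
<div class="hreview">
  <span class="item"><span class="fn">Acme LaserJot 4000</span></span>
  reviewed by <span class="reviewer vcard"><span class="fn">Jane Smith</span></span>:
  <span class="rating">4</span> out of 5.
  <span class="description">Fast, quiet and cheap to run.</span>
</div>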
In addition to reviews, this initial launch will also provide “rich snippets” for people. Using contact information parsed from sites like LinkedIn, for example, search results may indicate a person’s job title and location, to help users determine if the results they are looking at are likely to be associated with the person they’re looking for.
On top of that, Google plans to use these Microformats to allow users to be a bit more specific with their searches. An example given was that a user could search for all reviews of a printer where the average rating was over 3 stars. This allows users to tailor the content they receive by the context in which they are interested.
Not only does this new feature enhance the user experience, but the use of Microformats and RDFa data should also provide significant value for sites smart enough to mark up their content with them. In studies Yahoo did on similar enhancements to search results, these kinds of improvements produced a significant lift in click-through rates - in some cases up to a 15% increase.
More Coverage
There’s been lots of great coverage on Google’s announcement, and I highly encourage you to have a look at some of the insights offered in the posts below:
Google Engineering Explains Microformat Support in Searches (Interview)
Google Search Now Supports Microformats and Adds “Rich Snippets” to Search Results
In addition, if you haven’t explored Microformats, I encourage you to do so now. With Google starting to index Microformats, and in turn leverage the harvested data to enhance search results, whether or not Microformats are valuable is no longer debatable.
In addition, a strong DOM provides you with numerous attributes and elements that you can make use of to style the content to your heart’s desire. This gives you much more power and control with your CSS, and helps to greatly decrease your usage of extraneous divs and classes.
A very quick way to improve your markup skills, and therefore the value of your content, is to expand your knowledge of HTML elements, and start making use of a few you might not have been aware of.
fieldset and legend
The fieldset element is used to group related controls and labels within a form. It’s a great way to help make your form easier to understand, and more accessible for speech-navigated user agents.
It should be used in conjunction with the legend element to provide an even richer and more usable experience. The legend element basically defines a caption for the fieldset. Here’s an example of how you could use these elements in your markup:
<fieldset>
  <legend>Team Captain</legend>
  <label>Name <input type="text"></label>
</fieldset>
<fieldset>
  <legend>Player #2</legend>
  <label>Name <input type="text"></label>
</fieldset>
An often forgotten element that you can start using immediately in your text is the q element. The q element is very similar to the blockquote element, but should be used in a different context. The blockquote element is meant to be used for longer, block-level quotes. The q element, on the other hand, should be used for short, inline quotes.
All major browsers, other than IE6 and IE7, will automatically insert quotation marks around the content within the q element, as the HTML spec describes. There are a few ways to work around this, but my favorite solution is the one proposed by Stacy Cordoni in an A List Apart article from 2006. Her solution is to remove the auto-generated quotes using the :before and :after pseudo-elements, to level the playing field. Then, with each of the major browsers no longer rendering quotes, you can insert them directly into your markup, ensuring that all browsers render your quote the same.
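The CSS side of that fix boils down to something like this:
/* stop browsers from inserting their own quotation marks around q elements */
q:before,
q:after {
  content: "";
}
With that in place, the quotation marks live right in your markup: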
“<q>Remember that drover, the one I shot in the mouth so’s the teeth come out the back of his head?</q>”, asked Munny.
The cite element is used to identify a reference or citation to another source, like a book or another website. By default, each of the major browsers renders the cite element in italics. Making use of our q element example, adding the cite element would give us something like this:
“<q>Remember that drover, the one I shot in the mouth so’s the teeth come out the back of his head?</q>”, asked <cite>Munny</cite>.
The dfn element is used to mark up the defining instance of a word or phrase. It’s important to note that it is not intended to mark up the actual definition itself, but instead the word or phrase being defined. Most major browsers will render the content within the dfn element in italics, though of course you can alter that as you wish with CSS.
The <dfn>dfn element</dfn> is used to mark up the defining instance of a word or phrase.
ins and del
Here’s one for anyone who writes any sort of article online. You can use the ins and del elements to identify content that has been either inserted or deleted since the content’s publication. The ins element, by default, is underlined, and the del element, by default, is struck through.
SXSW’s parties are a great time to <del>booze it up with</del> <ins>mingle with</ins> fellow developers.
The address element is slightly misleading. One would think you would use it to mark up a physical address, but that’s not necessarily the case. The address element is actually intended to contain the contact point for the document containing it. While this certainly could contain a physical address, it doesn’t have to. It could house any sort of contact information provided that it is contact information for the author and/or owner of the document in question.
For example, I could use the address element to provide my contact information since I am the owner of this article, like so:
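(The markup below is just an illustration - any contact information for the document’s author or owner would do.)
<address>
  Written by <a href="https://timkadlec.com/">Tim Kadlec</a>
</address>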
By default, the address element is rendered in italics in each of the major browsers.
See For Yourself
If you’re interested, I put up a page with examples of each of the elements above. I did not apply any CSS, so you can see how each element is rendered by default in different browsers.
]]>Excuses, Excuses
There are several reasons why people tend to pass on acting on their own ideas. Some of the major ones are fear of criticism, fear of failure, self-doubt, or the feeling that there is not enough time. Based on these reasons, we can come up with a multitude of alibis for not pursuing these ideas.
One that I commonly use is that the “time is not right”. I often tell myself that I should wait to act on my ideas until some time when the circumstances are better aligned - some time in the future when I have more of the necessary knowledge, or when I have more time available to me; that’s when I’ll move forward on my ideas.
The fact, though, is that postponing ideas quickly becomes a habit. The truth is, there will always be more research that could be done, and there will always be distractions that make us feel like we don’t have the time to act. Acting on an idea will always expose you to criticism, and there is always the chance that the idea will not be perceived as a success. If we keep waiting until the circumstances are “just right”, we’ll be putting that idea in a perpetual holding state, until either we decide not to act on it at all, or someone else has already beaten us to it.
Make Your Ideas Count
At least by acting on those ideas, we make them count for something. They may not be met with an extraordinary amount of success, and occasionally they may even be met with flat out failure (though I believe that if you really go after your idea with vigor, true failure will occur very infrequently). For each non-successful idea you have and pursue, that is one more lesson you’ve learned and one more step you’ve taken towards making your next idea meet with greater success.
The conclusion then, is that ideas were meant to be pursued, not postponed. Quit coming up with excuses and start moving forward on those ideas and goals that you’ve been putting off. If you never try any of them, it is a certainty that none of them will work out. By pursuing them, at least you give them the chance.
Object-Oriented Javascript is written by Stoyan Stefanov, a web developer at Yahoo. Stoyan’s thoughts on all things web can be found at phpied.com. He also runs a blog on iPhone development, and a site dedicated to Javascript design patterns at JSPatterns.com (it’s been quiet for quite a while now, but I’m hoping to see it brought out of retirement).
What’s Covered?
Exactly what you’d expect given the title….object-oriented Javascript! Actually, the book covers a lot of information, starting with the basics (variables, loops, functions, etc.) all the way through to a few basic, albeit useful, design patterns.
The book is very well written and its discussions precise. Stoyan doesn’t take a lot of time going through complex examples. Instead, he gives bite-size chunks for you to play with and expand upon. If you’re someone who prefers playing with concepts yourself over going through expanded examples in books, this book is right up your alley.
While the book has chapters on the DOM and primitive data types, it is in the discussions of topics like closures and inheritance where the book really stands out. These sometimes confusing topics are presented in a very clear and concise way, helping to break down the learning barrier that so often stands in the way of truly understanding those subjects.
Should I Read It?
The book is intended to be accessible to even developers with no prior Javascript experience, and it does a reasonable job of doing so. Thankfully, the information is covered very well, making it likely that even the introductory chapters will be worth the read for more experienced Javascript developers.
I’m sure the refinement and overall quality of the information presented is in no small part due to the plethora of quality technical reviewers. It’s clear Stoyan took the task of accurately presenting this information very seriously, as his list of technical reviewers sounds a bit like a who’s who of web development.
Final Verdict
I really enjoyed Object-Oriented Javascript and highly recommend it. For beginners, it’s not as soft an introduction as DOM Scripting, nor is it as exhaustive in detail as PPK on Javascript. That being said, its content is perhaps the most complete of any Javascript book accessible to beginners that I’ve read thus far, and it’s certainly one of the best-written books I’ve read. Even more experienced developers will find a lot of value in the great coverage of more advanced object-oriented techniques and the very useful appendices.
Great…Where Do I Get a Copy?
]]>CSS Naked Day is a way of promoting web standards by stripping off all the CSS on a site to show that by structuring your HTML in a way that is semantic and makes sense, your content is still useful even without all the pretty design. If you want more information, you can check out the official CSS Naked Day site, and the almost 400 participants.
Don’t worry, this site will return to its regular design and layout on the 10th.
]]>SXSW 2009
If you’re going to talk about SXSW, the discussion will inevitably revolve around three topics: presentations, parties and people.
Presentations
This year, whether due to incredible content or the broadening of my interests, there were multiple presentations I wanted to see each session. Thankfully I feel like I chose wisely, as I can honestly say I enjoyed every presentation. Not equally necessarily, but each presentation had value that I could glean from it, and each held my attention.
Parties Events
As always, the parties were a fantastic opportunity to let loose and mingle with fellow web developers and designers. I always hesitate to call them parties though. For one, to be able to attend the last two years, I’ve needed funding from my employers, and it’s harder to sell a conference where the word “parties” is frequently used.
More seriously, I hesitate because they aren’t parties in the typical sense. These are parties geek style. Like most parties there is beer (Shiner Bock if you’re doing it right) involved, but conversations are about things like Javascript performance and new CSS techniques. In general attendees are intelligent, and the conversations reflect that.
People
That’s the beauty of SXSW - the conversations. You meet fantastic people from all over who are interested in the same kinds of things that you are. I’d heard people say that each year they come back it feels more like coming home. While I’m not ready to go that far, I will say that it was great to catch up with people I had met last year, and to meet new people to catch up with next year.
I’m hoping to make the AJAX Experience this year, but as of right now, SXSW is the only major conference I’ve attended, so I don’t really have anything to compare it to. However, it’s safe to say that the rewards of going greatly outweigh the costs of doing so, and SXSW should be one of the conferences on your yearly radar. If you can get to Austin next March, I highly recommend it, and hopefully, I’ll see you there.
]]>WaSP’s InterAct Curriculum was specifically developed to help take some of the pressure off current educators in creating and maintaining a curriculum based on current industry standards. Thanks to the work of numerous educators and industry professionals, the InterAct Curriculum accomplishes that. The current, and initial, release contains 11 courses that fall into one of six general tracks:
Foundations
Front-end Development
Design
User Science
Server-side Development
Professional Practices
A Complete Package
There's a lot of work that went into the development of the curriculum. For each course there are assignments, core competencies, learning modules, recommended textbooks and additional recommended reading. The content in each course is carefully selected, the books include fantastic titles like Designing with Web Standards and PPK on Javascript, and the recommended reading contains some great writing from around the web, including articles from Opera's web curriculum.
There are two other releases scheduled, one in March of 2010 and one sometime after that. Each will contain a few more courses, as outlined in their roadmap. The best part is, this is an ongoing project and community driven. That means that the curriculum will not become stagnant, but will continue to evolve with current industry standards.
Getting Involved
Being community driven, there are plenty of ways to get involved. Educators can contribute assignments and modules that they’ve implemented in their own courses and believe to be helpful. There is also a place to discuss the curriculum and input suggestions or criticisms to help fine tune the subjects addressed.
Get the Word Out
I doubt if many educators (if any for that matter) will argue against the value of having the curriculum available to them. Considering all the work that went into its development, and the fact that industry experts were involved to ensure the curriculum lines up with current practices, it's just too valuable a resource to pass up. I think the biggest challenge, then, is to make sure we get the word out about the curriculum.
We need to go out and start sharing the information with local college professors and advisory teams. If we can start communicating the value of adjusting existing curriculum to model the roadmap laid down by WaSP, that would go a very long way in speeding up the adoption of these courses and help increase the level of competence for new professionals fresh out of school. *[WaSP]: Web Standards Project
]]>The discussion that led to Budd’s little slice of wisdom revolved around how to get a company’s designers and developers together and interested in usability testing. One response was to entice a team with pizza and soda and make a day out of it. Budd’s response was that if you have to bribe your developers for them to take an interest in improving their products, then “hire better people”. As Budd said, “It’s everyones job to build better ****!” (Profanity excluded but I think it still makes the point!)
Budd’s passion on the topic was inspiring, and the point he raised was an excellent one. Continually improving your skills, and therefore the products you develop, should not be a chore; it should be the goal.
I can think of two reasons for not trying to continually improve your current set of skills:
You have no desire to improve.
You feel there is no need.
No Desire to Improve
If you have no desire to improve, find a different career. Sorry to be blunt, but I don't believe we're meant to spend our lives working on something we have no desire to do at the highest level of competence we can reach. If you're not in a profession you truly enjoy, find one that you do.
No Need to Improve
If you feel there is no need to improve, that you know all you need to know about a topic…then you’re doing it wrong. Whether it be improvements to your speed, efficiency, quality or general knowledge base, there is always room to improve. If you think you know everything there is to know, you most certainly do not.
For truly great work, you must be passionate about what you do, and you need to surround yourself with other passionate people. If you’re not in that situation, whether it be the people who surround you or the career you’ve chosen, do whatever you must to get there. Life is too short to not spend it doing something you truly enjoy.
]]>As many of you who are on Facebook or Twitter no doubt already know, on February 7th my wife and I had our first child, a little baby girl. Little is a bit relative here…Naomi Adalyn was an ounce shy of 9 lbs and was 21” long. I wanted to get this post up a bit earlier, but as you can probably guess, she’s kept us quite busy.
She’s very healthy, and both my wife and I are very happy (and a little tired!). Hopefully as we continue to get accustomed to our new schedules, I can get back to fairly regular postings.
]]>So fantastic, in fact, that thanks to the generosity of my employer and a little good luck, I am going to be attending again this year. There were a few lessons I learned while attending last years conference that I’m going to keep in mind this year, to hopefully glean even more value from SXSW.
Plan Loosely
Last year, I made the attempt to plot out, hour by hour, every panel that I was going to attend. The truth is, of all the panels I attended, maybe 50% were those that I had planned on. The rest of the time, whether due to how I was feeling at that particular instant, recommendations of people I met, or interests kindled by earlier conversations, I attended panels that, in some cases, didn't seem like they would have interested me beforehand. The result was that I attended many panels that did a great job of pushing my knowledge in some areas that I really hadn't explored before.
I do think there is some benefit in going through the panel listings and identifying some that really are must-attends. There were a few from last year, Secrets of Javascript Libraries comes to mind, that I had considered absolute musts to attend, and was sure to keep those time slots set aside. This year though, other than identifying this year’s must attends, I’m going to keep the rest of those time slots subject to change.
Be Ready to Network
One of the most amazing and inspiring things about SXSW (and I would imagine other conferences as well) is being surrounded by so many people with similar interests, who are also passionate about the web. It should come as little surprise that these people with a passion for the web are almost all hoping to strike up conversations with interesting people throughout their time there. With very few exceptions, this was true of everyone I met, regardless of any kind of pre-conceived 'internet fame'.
I met some fantastic people with some great ideas, several of which I have kept in contact with throughout the year. Some of these people are attending again this year, and I am looking forward to meeting with them again. I am also looking forward to meeting a whole new flood of people, and hearing about their opinions and ideas for the web.
The value of these conversations and networking cannot be overstated, and in many cases, this networking can become even more valuable than the panels themselves. The added benefit is that networking, combined with continued conversations with these people throughout the year helps to maintain that ‘SXSW high’ that otherwise runs out all too soon.
Soak It All In
Five days seems like a long time for a conference, but it goes by quicker than you’d think. Once it’s done, you have to wait another year for March to roll around, and you never know whether expenses and circumstances will allow you to attend the following year. So, I’ll make every effort to soak up each day that I’m there…the panels, the conversations, and yes, the parties.
I believe I’m still young enough that I can attend panels all day, stay out at the parties and social activities till late at night, and still have plenty of energy to get up for the next morning’s first sessions. Seeing as how there’s plenty of Starbucks stands set up in the hallways, I can also rely on my good friend caffeine to help out a bit. I can’t imagine experiencing SXSW in any other way than cramming my days full of as many inspiring conversations and informative panels as possible.
Getting In Touch
If anybody else is going to be down in Austin for SXSW this year, feel free to look me up. I’ll be getting down there the morning of the 13th (Friday) and heading back the morning of the 17th (Tuesday). The best way to get ahold of me while I’m down there will be via Twitter, so feel free to ping me if you want to meet up. The conversations are half the fun, and I’m looking forward to meeting more people this year.
]]>SocialCorp is written by Joel Postman, currently the Chief Enterprise Social Business Strategist at Intridea, a social app development company. Joel frequently blogs on social media marketing and other related topics at www.socializedpr.com.
What’s Covered?
In today’s web, a company’s online presence extends far beyond their website. Thanks to the rise of social communities like Facebook and Twitter, and the incredible reach a simple blog post can have, consumers are discussing company products and services, both positively and negatively, all over the web. This presents an incredible opportunity for companies to interact with their customers in ways that can feel a bit foreign to people accustomed to traditional marketing.
That's where SocialCorp comes in. Joel has written a fantastic book to give to anyone who is getting started with, or relatively new to, using social media for corporate marketing. Joel writes in a very engaging, conversational tone, and covers a lot of ground in this fairly short book.
SocialCorp introduces the reader to examples of companies that have had tremendous success with social media (like Dell, HSBC and GM for example) and to some examples of companies whose social campaigns missed the mark or backfired. In fact, that’s one of the best parts of the book. Joel is willing to discuss both how social media can help if used correctly, and how social media can hurt a company if used improperly.
The truth is, social marketing is a better fit for some companies than for others, and SocialCorp tries to help you understand the difference. There’s even a handy social media readiness quiz intended to help you identify if a company is ready to effectively use social media to interact with their customers online.
Should I Read It?
Like I stated above, at under 200 pages, SocialCorp is a short read. Don’t let that fool you, as there is no shortage of helpful information. SocialCorp provides numerous case studies of how companies today are making use of social media, and points you in the direction of numerous tools online to help you to best utilize and measure your social campaigns.
The book is intended to be a very practical introduction to social marketing, and it accomplishes this goal very well. Social marketing requires a significant change in mindset from conventional marketing, and as a result, it can be a bit difficult to grasp at first. SocialCorp provides a gentle way to understand the hows and, more importantly, the whys of social media.
The Final Verdict
SocialCorp is a great book to share with co-workers, bosses, employees, and corporate executives who are interested in social media, but could use a little extra information to get them truly engaged. There’s a lot of value offered by social media that is just waiting to be tapped into, and SocialCorp does a great job of explaining how you can do just that.
Great…Where Do I Get a Copy?
]]>To experiment with these methods and events, you’ll need to be running either IE8, Firefox 3 or the WebKit Nightlies. Opera 9+ provides support as well, but they use an older version of the spec which required the postMessage method to be called from the document object instead of the window object.
There are two key steps involved with HTML5 Cross Document Messaging. The first is posting the message. You do this by calling the window’s postMessage() function. The postMessage function takes two arguments: the message to be sent, and the target origin.
Then, to receive the message in the other window, you need to watch for the window’s message event using window.addEventListener or something similar. To help show how this works, I’ve set up an example for you to see here. In my example, both the sending window and the receiving window are located within the same domain (timkadlec.com), but as long as you have a reference to a window, you can communicate cross domain the same way.
Walking the Code
Take a look at the source code and you’ll see that I’m simply using one window (window A) to open the other (window B) so that I have a reference to it. Window B contains two buttons that when clicked, use the postMessage method to post a message back to window A, like so:
window.opener.postMessage('John Smith', 'https://www.timkadlec.com');
We use window.opener to get our reference to window A, and then call the postMessage function to send the message 'John Smith' back to it. We pass https://www.timkadlec.com as the targetOrigin argument, which means the message will only be delivered if window A actually resides on that domain.
Now back in window A, we need to prepare to receive the message. To do so, we look for the message event.
window.addEventListener('message', receiver, false);
As you can see above, I’m using addEventListener to listen for the message event, and once the event occurs, we call the receiver function.
function receiver(e){
if (e.origin == 'https://www.timkadlec.com'){
if (e.data == 'John Smith') {
alert(e.data);
e.source.postMessage('Valid User', e.origin);
} else {
alert(e.data);
e.source.postMessage('FAIL', e.origin);
}
}
}
In the receiver function, we verify that the origin of the event is timkadlec.com (line 2). This is highly encouraged, as it ensures that we only receive messages from domains we are expecting to hear from. If you skipped this step, any domain could freely affect your page, and that could get a bit messy.
Then, we use the event's data property to retrieve the message that was sent. Based on the message received, we use the source property of the event to obtain a reference to the window that sent the message. Then, again using the postMessage method (lines 5 and 8), we send a message back to window B.
This is a pretty straightforward example, and is really just meant to demonstrate how easy it is to post messages back and forth between documents. There's another good example available that makes use of iframes if you want to see cross document messaging in a different context.
Some Security Considerations
Hopefully you can see that cross document messaging is both simple, and potentially quite useful for things like widgets or authentication. However, there are some security risks if you don't take the time to double check a few things. First, as mentioned before, you should always double check the origin of the sent message. You don't want to be just accepting messages from anyone…that's kind of the reason cross-site scripting isn't allowed in the first place.
Secondly, it is possible to use an asterisk in the targetOrigin argument to allow your message to be posted to any domain. However, you should never use the asterisk when sending confidential information. In those cases, you should set the targetOrigin explicitly so that you can guarantee that only the intended recipient gets the information.
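To make that concrete, here's a quick sketch of the difference (otherWindow here is just a stand-in for whatever window reference you happen to be posting to):
// Fine for non-sensitive messages…any origin is allowed to receive this
otherWindow.postMessage('Hello there', '*');
// Confidential data…only delivered if the target window really is on this origin
otherWindow.postMessage('Valid User', 'https://www.timkadlec.com');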
]]>CSSDOC was an idea apparently conceived sometime in 2007, and the Second Public Draft of the spec was just released on November 16th. The intent behind CSSDOC is to provide a standardized way of commenting CSS, making use of the very well known DocBlock style of commenting source code.
DocBlock is a very common form of documenting source code in programming that has proven to be very popular for both PHP and Javascript. The beauty of the method is that it's so simple to use DocBlock to organize your code, and since it's a standardized format, other developers will be familiar with it and tools can be developed to parse it and auto-generate documentation.
There are a great deal of tags being developed for CSSDOC that can provide a lot of great information. For example, here is a sample header from the CSSDOC draft:
/**
* Homepage Style
*
* Standard Layout (all parts) for Big Little Homepage
*
* This style has been designed by Mina Margin. It reflects
* the composition of colors through the years of the
* customers project as well as the boldness it implies.
*
* @project Big Little Homepage
* @version 0.2.8
* @package xhtml-css
* @author Mina Margin
* @copyright 2008 by the author
* @cssdoc version 1.0-pre
* @license GPL v3
*
* @colordef #fff; white
* @colordef #808080; standard grey
*/
Just by this simple header, we’ve already provided a great deal of information to both future developers and to a documentation parser. In our header we’ve provided the project we’re working on, a version number for the project, copyright and author information, and some definitions of recurring colors used in the project.
You then make use of the @section and @subsection comments to divide your CSS file into manageable sections of related styles. I'd love to see editors support this the way CSSEdit supports its @group comment. For those of you unfamiliar, the @group comment in CSSEdit is parsed out and made into easy to navigate folders in the sidebar (see the image below).
[Screenshot: CSSEdit displaying @group comments as folders in its sidebar]
I won't go through all the available comments (the draft can give you that and does a great job of explaining), but suffice it to say there are a lot of extremely useful comments available: the @affected comment, which describes what browsers are affected by a certain bug/workaround; @tested comments to specify what browsers a certain section has been tested on; @fontdef for font definitions, similar to the @colordef rule in the example above; and so on.
I’m very excited by this project and think the team behind the spec has done a fantastic job thus far. The few concerns I had, they’ve either addressed, or are in the process of doing so. It’s very easy to get involved with the project as they have been very transparent in the development of the specification.
In addition, if you want to start playing around with CSSDOC a bit on your own CSS, there are bundles made already for editors like Textmate, EditPlus, and KomodoEdit just to name a few. You can keep up to date on new bundles and snippets at https://cssdoc.net/wiki/EditorIntegration.
]]>Oomph, much like Operator in Firefox, pulls microformatted information from a page and allows the user to make use of this data by offering options like being able to export contact information, map addresses, and add events to your calendar.
That would be enough in itself to get my attention…Microsoft has not typically been the most open of companies, and despite Gates' declaration that the web needs Microformats, they really hadn't done much to advance its use. Seeing their developer community get behind Microformats with the toolbar and a couple of nice Microformats articles is very encouraging.
However, there's more to this story. In addition to the IE toolbar, a cross-browser Javascript implementation of Oomph was created. The toolkit, which makes use of jQuery, provides the same functionality as Oomph no matter which browser is being used.
What's so wonderful about the way the Oomph toolkit functions is that the useful data is all right there in the browser window. Without the visitor leaving the site, they can grab a vcard of your contact information, see a listing of upcoming events, or make use of Virtual Earth to view a map of a location.
This is, I think, a fairly major move. The beauty of Microformats is how easy it is to make your content more meaningful, and more useful. By providing similar data in a specific format, it significantly decreases the effort necessary to extract that data, and then use it. Having a cross-browser implementation of a script that makes use of this data to enhance its functionality is really a nice feature and a great way to show off the value of using Microformats.
What’s best is that since the toolkit makes use of Javascript and CSS for the effects and layout, we can modify the functionality and appearance for usage on our own sites. Technorati already offered services to help us extract contact information or event information, but the Oomph toolkit expands upon that functionality and allows us to offer even more enhanced options for visitors.
All in all, I am very pleased by this development. Microformats is such a valuable technology that is long overdue for mainstream implementation. It's nice to see yet another big supporter coming through to help it get there. Along with Technorati's tools and the fantastic Optimus Microformats transformer by Dmitry Baranovskiy, the resources are in place, and it should be interesting to see the ways these tools are utilized to provide a better user experience for visitors.
]]>There are a few solutions currently being utilized across the web. The simplest, and also probably the worst of them, is to use background images to display our custom fonts. This is not ideal at all…every time we want to change the text on the site, we have to edit the appropriate background image. A couple of less "needy" options are sIFR and FLIR. In both cases, Javascript is utilized to deliver the text in our desired font. sIFR uses Flash to make this happen and FLIR uses PHP. But we are still relying on Javascript to load the fonts onto the page, and there is some performance loss.
Thankfully, we are close to being able to make use of a CSS rule that makes font use so much simpler. The newest versions of all major browsers now support the @font-face rule, which gives us a lot of power over the fonts we can use in our sites. Unfortunately, some of them don’t play too nicely yet. Firefox 3.1 and Safari both implement the @font-face rule very well, but IE only supports EOT fonts and I’ve found Opera 9.6’s support is a bit unreliable. However, in the name of progressive enhancement, there’s little reason why we can’t start making use of this rule to improve our sites now.
Simply Powerful
The beauty of the @font-face rule is both how simple it is, and yet how powerful it is. The @font-face rule includes the rule and the font description like so:
@font-face {
}
The font description is made up of what are called font descriptors. Simply put, they follow the same format as typical CSS style declarations. The most basic font description is composed of a font-family declaration and a src declaration that points to the font we will be using, like so:
@font-face {
font-family: "Benjamin Franklin";
src: url(BenjaminFranklin.ttf);
}
The url can be either remote or on your own site. However, for maximum browser compatibility, your font should reside in the same place as the page using it. Firefox, for instance, does not allow using fonts that don't have the same origin as the page. Instead of using a url, we can also use a local path ( local(font-path) ) that would point to a font located on the user's computer. We've now paved the way for us to make use of the Benjamin Franklin font in our site.
h1 {
font-family: "Benjamin Franklin", serif;
}
More Than Meets the Eye
Simple and easy. Here’s where the fun stuff comes in though. Remember how those declarations, like the font-family declaration above, are called font descriptors? That’s because they describe the declaration that can be used to trigger that font use. Admittedly, that might not be very clear, so let’s extend our example:
@font-face {
font-family: "Benjamin Franklin";
src: url(BenjaminFranklin.ttf);
font-weight: all;
}
@font-face {
font-family: "Benjamin Franklin";
src: url(Hansa.ttf);
font-weight: bold;
}
h1 {
font-family: "Benjamin Franklin", serif;
font-weight: bold;
}
h2 {
font-family: "Benjamin Franklin", serif;
font-weight: normal;
}
In the example above, we changed the call to the font being used if the font-weight is bold. If the font-weight is anything but bold, then the Benjamin Franklin font is used. If the font-weight is set to bold, then the Hansa font is used. If you have Safari or the Firefox 3.1 beta you can take a look at the code in action. (NOTE: The only reason I was using two different fonts here is to make it obvious that the font is changed based on the font-weight. A more subtle example would be to use a variation of the base font, like a bold version.)
Descriptors give you significant power over the fonts you utilize. Instead of relying on the browser to fake a bold or italic version of the font, we can provide the actual italic or bold version, which looks far more professional and greatly improves the appearance of our design.
I highly encourage taking a look at the W3C’s information on the @font-face rule and playing around with it a bit. There’s some great examples of some of the powerful ways you can make use of the @font-face rules including an interesting example of redefining the basic serif font-family. Remember, for full functionality you’ll need to either get ahold of Firefox 3.1 or Safari.
Keep in mind that not all fonts are meant to be used in this way. Some font providers encourage you to make use of their fonts freely, others don't. So make sure you're allowed to make use of the font before embedding it on your site. Both fonts used in the examples, Hansa and Benjamin Franklin, are designed by Dieter Steffman, who graciously allows his fonts to be freely used on the web.
Also, while there is a lot of power and control offered with the @font-face rule, still use it with some level of restraint. There is some time involved in downloading the font, so you should probably stick to using this method for headlines only…not body text. *[EOT]: Embedded Open Type *[sIFR]: scalable Inman Flash Replacement *[FLIR]: Facelift Image Replacement
]]>Mobile Web Development is written by Nirav Mehta, the head of Magnet Technologies a software development firm in India. He blogs about a variety of business and tech topics at www.mehtanirav.com.
What’s Covered?
Mobile Web Development covers a wide variety of topics related to…guess what….mobile web development. Nirav does a fantastic job of introducing a wide variety of technologies needed to begin mobile web development including sending and receiving SMS and MMS messages, optimizing your site for mobile devices and using AJAX on the mobile web.
The book, from Packt Publishing, takes a very solution-based approach. Each chapter, with the exception of the first and last, has a very specific task that it is concerned with accomplishing. Usually, I'm not too fond of that format. It often feels like such books aren't teaching me a topic so much as giving me snippets of code I am comfortable with manipulating.
This book, however, is an exception to that rule. Each chapter, in addition to accomplishing the task at hand, takes the time to explain the possible solutions to the problem, and their pros and cons. The result is that once you've finished the book, you have a nice foundation of real working knowledge that will allow you to immediately get started with mobile web development. For those of us who may want a deeper understanding of the technologies, there are plenty of nods towards resources that will provide that information.
Should I Read It?
The book is intended for people with at least a basic understanding of CSS, Javascript and PHP. In particular, there is a fair amount of PHP code, so you should probably be comfortable with looking through it.
The book manages to cover a surprisingly large amount of information for being such a brisk read. The truth though, is that at least in the beginning, the basics of mobile web development are quite similar to the basics of web development, and you’ll be pleasantly surprised by just how easy it is to get started.
One of the things I enjoyed most about Nirav’s approach to the book is the emphasis on the user. Keeping the user in mind is always important, but particularly when the user needs to get the information quickly and needs to do it with a very small amount of screen real estate. Each chapter makes sure to mention how a given solution can help or detract from the user experience, ensuring that you have the understanding necessary to make good decisions that will benefit your users.
One Minor Complaint
The one and only issue I have with the book is that the editing could have been a bit better. Don't worry though, the editing is nowhere near bad enough to confuse you. There's just a fair amount of a's and the's that are AWOL. Like I said, not enough to cause you trouble understanding the information, just enough that you'll notice.
Final Verdict
Mobile web development is one of the most important new avenues for web developers to pursue. The amount of people making use of mobile devices to get their information on the run is growing very quickly. Minor editing issues aside, the book was a great introduction to getting started with these technologies. I would highly recommend picking up the book and giving it a thorough reading. It’s surprising how easy it is to get started in the mobile web, and after reading it you’ll have a solid base of working knowledge to allow you to start creating your own mobile web content.
Great…Where Do I Get a Copy?
]]>A Quick Look at Attributes
We’ve already seen how to set up the canvas element in HTML:
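<canvas id="canvas"></canvas>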
You’ve probably noticed that we’ve included an id attribute on our canvas element to make it easier for us to access the element in our Javascript. You can also apply other standard attributes like class, title or tabindex. Two other attributes, height and width, will also be used fairly regularly.
You can define the height and width as attributes on the canvas element, or you can use CSS to define the dimensions of your element. If you use CSS, however, the canvas will be scaled to fit those dimensions rather than the drawing surface itself being resized. Neither height nor width is required, however. If you choose not to define the size of the canvas element, it defaults to 300 pixels wide by 150 pixels high.
Roll Up Your Sleeves…
All of this so far has been pretty easy…but also boring. The canvas element's real power, of course, is the ability to use Javascript to manipulate it. To do so, we have to get a rendering context using the getContext() function. The rendering context is what allows us to actually manipulate the content in the canvas element. The function is straightforward and easy to use:
var canvas = document.getElementById('canvas');
var context = canvas.getContext('2d');
Currently, “2d” is the only defined context that we can obtain. In the future, it is not unreasonable to expect that to expand and include support for a three dimensional drawing context. Of course, in a real-world setting you'll want to check to make sure the browser supports the getContext method in the first place. The canvas element is still relatively new, and there will be a fair number of browsers that do not support it.
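A minimal version of that check might look something like this:
var canvas = document.getElementById('canvas');
if (canvas && canvas.getContext) {
    // the browser understands canvas, so grab the drawing context
    var context = canvas.getContext('2d');
} else {
    // no canvas support…fall back to static content or skip the drawing entirely
}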
The One and Only
Now that we have a rendering context, let’s make use of it by starting to draw something to the canvas. The canvas element only natively supports one shape and that is the rectangle. Don’t panic….you’ll see later that there are plenty of methods available for us to create everything from a basic circle to very complex abstract shapes.
For now though, we'll keep it simple and just make a rectangle. We have three functions available to us for this: fillRect(), strokeRect(), and clearRect(). The functions do pretty much exactly what you would think based on their names. fillRect() draws a filled rectangle; strokeRect() draws just the outline, or stroke, of a rectangle; and clearRect() clears the area and makes a fully transparent rectangle. To make it even simpler, each of the functions takes the exact same parameters. Let's take for example the following line of code:
context.fillRect(0,0,50,75);
As you can see, the function takes four parameters. The first two define the starting point of the shape, the x and y coordinates. Thankfully the coordinates follow common sense. The origin, or (0,0), is the top left corner of the canvas element. So (0,10) would be along the left edge, 10 pixels down from the top.
The next two parameters are the width and height of the rectangle. In this case, I made a rectangle that is 50 pixels wide and 75 pixels high. So the result of the above line of code is a 50 pixel by 75 pixel filled rectangle in the top left corner of the canvas element. To get a good idea of the results of each of the rectangle functions, we'll use the following code (we've also set the height and width attributes on our canvas element to 125 pixels each):
context.fillRect(0,0,50,50);
context.clearRect(25,25,50,50);
context.fillRect(50,50,50,50);
context.strokeRect(75,75,50,50);
The result, as you can see here, is four overlapping rectangles. Remember, you’ll need Firefox (1.5+), Safari, or Opera (9+) to view it. As you can see, the clear rectangle clears out the area it covers. The stroke rectangle, on the other hand, doesn’t clear out the area, so you can see the filled rectangle through it.
Next Time
Next time around, we'll start to look at some of the other functions available, and how we can use those functions to start making a variety of shapes…not just simple rectangles. To whet your appetite a bit in the meantime, have a look at another great example of how the canvas element can be used.
]]>What Is It, and Who Supports It?
The canvas element was originally implemented in Safari, and then became standardized in the HTML5 specification. The element allows developers to dynamically draw onto a blank ‘canvas’ in a website. Thankfully, you don't have to wait to play around with this element. Currently, you can find support for it in Firefox (version 1.5 and newer), Safari, or Opera (version 9 and newer). In addition, you can twist IE's arm a bit thanks to Google and Mozilla. Google has created ExplorerCanvas, a script that allows your canvas scripts to work in IE. For more intensive applications, Mozilla has created an ActiveX plugin for IE to bring canvas support to the widely used browser. So, there's little reason why you can't start using it today….Google Maps does!
Unfortunately, there is some discrepancy in the way browsers support the canvas element right now. For example, in Safari, the canvas tag works a lot like the img tag…it doesn't require a closing tag, so you can close the element like so:
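<canvas id="canvas" />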
In Firefox, however, the canvas element requires a closing tag:
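<canvas id="canvas"></canvas>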
The problem comes in with alternate content. In Firefox, we can simply throw our alternate content in between the opening and closing canvas tags. If the browser doesn’t support the canvas element, then the alternate content displays. In Safari, the content displays regardless. There are a few ways you can hack around this however, including this one presented by Matt Snider.
Why It’s Cool
The canvas element is not meant for static images…though it can certainly be used to do that. The real power of it comes when we make use of Javascript to manipulate the canvas element and create dynamic visualizations like data charts and graphs, interactive diagrams and games. In fact, there are a couple impressive Javascript game recreations that have already been developed that make use of the canvas element. You can already play Mario Kart, Super Mario, and an incredibly addicting game called Ooze.
The canvas element is a great example of where implementation precedes standardization. Safari implemented it, then Firefox and Opera caught on, and now the WHAT-WG is incorporating it into the HTML5 specification. Once implemented, it provides us with a standardized, cross-browser means to dynamically display data and react to user events in a way that previously required Flash.
What’s Coming Up
Next time around, we’ll start to look at the canvas element in more detail including the attributes available. We’ll also start diving into some Javascript and some of the methods provided by the DOM to interact with the canvas element. *[WHAT-WG]: Web Hypertext Application Technology Working Group
]]>Some Background
There’s been a lot of talk in the Javascript community over the past 9 years or so about the development of ECMAScript 4, what was to be the foundation for what was being called Javascript 2. It was a controversial and fairly dramatic change from ECMAScript 3 (Javascript). There was going to be support for classes, inheritance, type annotations, namespaces…the whole flavor of the language was going to dramatically change.
ECMAScript 4 was being developed primarily by Adobe, Mozilla, Opera and Google, and was largely based on the features those organizations wished to implement. Others, including Microsoft and Yahoo, found the proposed changes in ES4 to be too dramatic, and instead wanted to implement minor changes and bug fixes to ES3, labeling it ES3.1 instead.
What Happened?
Obviously, with division among such major organizations, something was going to have to give. So, at a recent meeting, TC39 (the committee in charge of developing the ECMAScript standard) reached a resolution. The two sides would merge their ideas, with a new committed focus on keeping changes to the language simple…enter ECMAScript Harmony. In an email, Brendan Eich of Mozilla laid down the four primary goals of ECMAScript Harmony:
Focus work on ES3.1 with full collaboration of all parties, and target two interoperable implementations by early next year.
Collaborate on the next step beyond ES3.1, which will include syntactic extensions but which will be more modest than ES4 in both semantic and syntactic innovation.
Some ES4 proposals have been deemed unsound for the Web, and are off the table for good: packages, namespaces and early binding. This conclusion is key to Harmony.
Other goals and ideas from ES4 are being rephrased to keep consensus in the committee; these include a notion of classes based on existing ES3 concepts combined with proposed ES3.1 extensions.
What Does It Mean?
Well, for one, it means that the series of posts I had started on Javascript 2.0 is now completely worthless…and no…I will not reimburse you for the time spent reading them.
But far more importantly, it means that progress should speed up immensely. (There are ES4 proposals from 1999.) The plan appears to be to implement certain de facto standards rather quickly. For example, getters and setters are already implemented in each of the major browsers except for IE, and they will be thrown into the ES Harmony specification. (Which, by the way, brings up the interesting situation where implementation precedes specification, something that occurs often on the web and something you’ll probably see me talk more about in the future.)
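As a rough illustration, here's the object-literal getter/setter syntax those browsers already understand (the person object is just a made-up example):
var person = {
    firstName: 'John',
    lastName: 'Smith',
    // runs whenever person.fullName is read
    get fullName() {
        return this.firstName + ' ' + this.lastName;
    },
    // runs whenever person.fullName is assigned
    set fullName(value) {
        var parts = value.split(' ');
        this.firstName = parts[0];
        this.lastName = parts[1];
    }
};
person.fullName = 'Jane Doe';
alert(person.firstName); // 'Jane'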
Meanwhile, ActionScript 3.0 was already built to reflect the proposed ECMAScript 4 specification…so it would appear Adobe is left a bit high and dry. It sounds as though they are definitely committed to keeping in line with specifications, and that they plan on implementing new features laid out in the newly developed specification. They are also going to continue to keep their current features, like classes and type annotations, available in their language…think of it as an extension to the ECMAScript standard.
My Thoughts
All in all, I think the outcome is positive. ES4 had been taking forever to get going, and ES3.1 hadn’t been sounding like it would do much besides fix bugs. Now, we’ll have a new implementation soon and while it won’t have the same dramatic changes ES4 had in mind, there will be some interesting new features being added.
While I would’ve enjoyed some of the new features, Javascript gets to maintain a lot of flexibility here. ES4 would’ve tightened that up a bit, and I still haven’t really made up my mind on whether that would’ve been a good thing or not.
It’s also quite encouraging to see so many major players working together. Microsoft, Yahoo, Google, Mozilla, Opera and Adobe all working together in harmony. (I apologize for the pun…it was just there, and I decided to go with it.) One can only imagine having these organizations working together will ensure a high quality specification, and also lead to faster implementation of the spec.
Around the Web
This is certainly a huge development, and there has been no shortage of postings about it. There’s a lot of high quality discussion going on by some major players in the world of Javascript and ActionScript, and I highly recommend checking them out.
]]>It’s no secret that the web design industry is often not given the respect it deserves. People treat it as if it’s a much simpler task than it really is. Forgive me if I come off sounding a bit arrogant, but it seems like people seriously underestimate the work involved in creating a quality web site.
One issue, for example, is people expecting to see comps of work without payment. It happens quite a bit, but it’s a ridiculous request. Do people ask mechanics to make the first couple of repairs on their car for free so they can get a feel for how they like working with them and then, based on that, decide whether or not to go with that mechanic and pay them? So why ask a web design company to create a few mock-ups first before deciding to actually pay them for their work?
Then there’s beautiful journalism like the article posted yesterday in the Wall Street Journal telling companies how to build their own site with 8 hours of work and $10. Brilliant…because that’s all that goes into a quality site.
Really, there are some fantastic gems in the article like this one:
All you need to know is that a block of HTML, essentially a bunch of gobbledygook words and symbols, can add extra features to your site.
And this isn’t some second-fiddle publication being read by 5 people, this is the Wall Street Journal. A highly regarded and professional publication.
So where does all this undermining come from? I think a lot of it stems from a lack of understanding. For almost as long as there has been the web, there have been "build-your-own site" tools easily and readily available. This gives people the feeling that that's really all it takes…a couple clicks of a button, drag a few things, and you have a site.
But the reality, as we all know, is that there is so much more that goes into the design and development of a site. So much more planning and "strategerizing" goes into the process. Can you build your own site with these free tools? Yes. Should you? That really all depends on how serious you are about making your website a business tool. If you really want to maximize its impact, then the answer is probably no.
I understand I’m preaching to the choir a bit here, but after coming across Jeff Croft’s link to the WSJ article, I just had to vent a bit. It’s sad to see such a lack of understanding and respect of our industry come from such a well-known and highly regarded paper.
Manage Your Data
A custom data attribute is simply any attribute starting with the string "data-". They can be used to store data that you want kept private to the page (not displayed to the user) in cases where there is no appropriate attribute available. Every element can have any number of custom data attributes.
For example, consider a form validation script. The script needs to know what form of validation is required for each field. Currently, many of these scripts will use the class attribute to signal that.
Making use of the new HTML5 custom data attributes, we might choose to store the information like this instead:
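<input type="text" id="myInput" data-validation="email" />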
To gain access to the value of the data-validation attribute, there are two options. First, you can simply use the getAttribute() method. This method should be familiar to anyone who’s worked with the DOM in Javascript and is supported by all major browsers. The second method is to make use of the new dataset DOM attribute. Currently, no major browsers support the dataset attribute, but to be fair, here’s how you would use it:
var theInput = document.getElementById('myInput');
var validationType = theInput.dataset.validation;
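For comparison, the more widely supported route today would be the plain attribute methods, something along these lines:
var theInput = document.getElementById('myInput');
var validationType = theInput.getAttribute('data-validation');
// updating the value works the same way; 'number' is just an arbitrary example
theInput.setAttribute('data-validation', 'number');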
What I Think
Custom data attributes have been met with varied opinions…some think it’s fantastic while others either don’t get the value or simply don’t like the idea. Personally…I think it’s a good idea.
Currently there are two popular ways of providing hooks for scripts in HTML where no appropriate attribute exists:
Use an existing attribute even though it may not necessarily be semantically correct.
Create a new attribute and have the page no longer valid.
Where you stand personally along those lines, of course, varies. Some people don’t mind a page that doesn’t fully validate and would rather not clog up their id’s and classes. Others don’t mind adding an extra class to an element as long as the page is valid.
With the new data-* attributes, you can have the best of both worlds; your page can validate and you don’t have to add extra classes and id’s to make your scripts work. It’s also very easy to implement, and manages to keep all the data needed for scripting together in one dataset. What’s not to like?
You can actually start making use of custom data attributes right now. The page won’t validate for HTML4 of course, but once HTML5 rolls around you’ll be set. Just remember that to access the dataset values in Javascript you will need to use the getAttribute() and setAttribute() methods as the dataset DOM attribute is not currently supported.
]]>I do have what I feel are a few fairly good excuses for being quiet lately. I just recently purchased my first home, which my wife and I are quite excited by. It has kept us both busy moving and making some quick touch-ups and improvements. Even more exciting, we found out that my wife is pregnant with what will be our first child. Again…extremely excited, but I am finding that I am spending a fair amount of time ensuring that things are in place for when the little one arrives.
Finally…I started a new job in June. It’s much more development heavy than my prior position and I’m enjoying it greatly. That being said, there was some time spent ensuring that I get up to speed with the company’s existing projects and work styles. Nothing irritates me more than knowing that I could do something much quicker if I had a better handle on things. I don’t like that “introductory period” where I have to turn to ask questions about simple little procedures simply because I don’t know the company’s tendencies, so I like to get familiarized as quickly as possible.
So between those three things, all exciting as they are, I have had very little time for updating the site. Which is unfortunate…I’ve been working on getting the site switched over to PHP and have some cool and important improvements that will come with this (one of which is an improved comment spam filter….those spammers are amazingly persistent).
There are also numerous posts I have written up that I just need to take the time to add to the site. So the content is there…or, I guess, will be there. The resolution for August is to get back to a regular schedule for updating the site…so stay tuned.
]]>As a recently graduated student, I can reflect on both my own training and the training of people my age I interacted with who attended other colleges for web development. Unfortunately, the majority of the people I've communicated with stated the same thing: standards based development was not presented as a priority. CSS was glossed over and there was little to no mention of the DOM and unobtrusive scripting techniques in the Javascript courses.
Why Colleges Can’t Keep Up
A large part of this is due to the fact that our industry moves so quickly. Progress is made at such an incredible pace that new technologies soon emerge while old ones fade away. In contrast, changing the curriculum at a college usually takes a while, making it very difficult for schools to keep up.
Another issue is that some of the best candidates for taking on the role of instructor in these courses are overlooked due to a lack of degree. It would be great to have industry-tested professionals teach the courses…who better to teach a class about the techniques and tools that will be necessary in the field than those who are doing it, and have been doing it for some time?
That is not meant to be a criticism of all current instructors. As always, there are exceptions to the rule. There are industry professionals who have no place standing in front of a class and teaching technique, and likewise there are instructors who do a fantastic job of presenting their classes with quality information. And many of the other instructors simply have their hands tied by what the college allows them to do and not do.
One thing I do like seeing is that a few instructors who are pushing standards-based development forward in their courses have published their class information. Daniel Mall and William Craft are just two examples of people who are pushing forward with standards based development instruction and then sharing with others what they are doing. This opens the door for critiquing from industry professionals and provides an example of what other instructors might consider basing their coursework around.
How Do We Fix It?
So what needs to be done? Universities and colleges need to adjust. Traditional methods of updating curriculum simply do not work when it comes to such a fast-paced industry. These institutions need to be making a concerted effort to keep their curriculum up to date with current industry standards, and as a result, the curriculum should be re-evaluated on a very regular basis.
In the meantime, a temporary fix may be to implement some sort of rotating, generic web development study course. The course could be used to highlight emerging industry standards and could rotate on a semester basis. Again, just a temporary fix, but at least it provides a small level of attention to the techniques that the students will be needing.
I’d also like to see a few schools start taking a look at allowing existing professionals to instruct more courses, regardless of higher-education degree status. There is a lot of insight they can offer and it’s a shame that schools are not tapping into that.
Of course, that door swings both ways. I’d love to see us as professionals get more involved in helping colleges to evaluate and update their curriculum. I applaud Opera and the people behind their new Web Standards Curriculum. If you haven’t seen it yet, take a look. They are putting together a series of 50 articles or so highlighting areas in web design and development. This is exactly the kind of thing that can really help colleges by providing a guideline for what to build their new curriculum around.
Let Me Hear Your Thoughts
This is a topic that interests me very much. Eventually I would love to start teaching a bit myself…I love sharing what I’ve learned with others and find the teaching experience to be very rewarding. That is why I pay attention to what the current colleges are doing to try and stay ahead of the game a bit. I would love to hear any input you might have on the topic. Trying to improve web education in colleges is not an easy task and I think getting more opinions and discussion on the matter are exactly what is needed to come up with a better way to help get colleges up to speed and keep them there.
]]>There was a nice article at Sitepoint about Test Driven Development. The author, Chris Corbyn, walks through the TDD process using PHP examples, and describes some of the benefits he has discovered in the TDD process.
Firefox 3 Memory Benchmarks and Comparison
With both Firefox 3 and Opera 9.5 being freshly released, here’s a nice memory performance comparison of those browsers, as well as the IE Beta version 1 and Flock. The memory tests are computed by a custom built .NET application, and provide a good look at how these browsers compare in memory management on the Windows operating system.
Sketching in Code: the Magic of Prototyping
The folks over at A List Apart offer us yet another excellent write-up. David Verba, Technology Advisor for Adaptive Path, takes a look at using prototypes in the web application development process, and what prototyping can offer that wireframes cannot.
Same DOM Errors, Different Browser Interpretations
Hallvord R. M. Steen of Opera offers a very interesting look at how the different browsers interpret the DOM, and what tools each of them provide for debugging. Some attention is also given to Opera’s new debugging tool, Dragonfly.
]]>But sometimes it’s necessary to share information between different functions, so what’s a programmer to do? Global variables certainly make that possible, but they also create some problems. Heavy reliance on global variables makes it difficult to reuse code. Rarely can we uproot functions with a large dependency on global variables and insert them into different contexts or scripts without problem.
Debugging code also becomes difficult. When a variable is global in scope, it could be instantiated virtually anywhere, making it tough to track down. What's even worse is coming back to that code after a year, or, as it was in my case, trying for the first time to decipher code that relies on global variables.
Luckily there's a much better way: the registry pattern. The registry pattern simply creates a class that serves as a central repository, or registry, of objects that we can then utilize throughout our code. It accomplishes this by utilizing static methods and properties, which means you'll want PHP5 to accomplish it…you can use it in PHP4 but it requires some workarounds. This pattern is particularly useful when used to store a data connection.
Breaking It Down
The best way to explain this is to walk through the code.
class Registry{
private static $instance;
static function instance(){
if ( ! isset( self::$instance ) ){
self::$instance = new self();
}
return self::$instance;
}
}
The code above first simply creates our registry class, aptly called Registry (line 1). We then create a static function (line 3) and variable (line 2). This is how we will keep track of whether or not an instance of this class exists. If it doesn’t, then we create a new instance of the class.
Now we just need to create set and get functions for anything we want to use globally. In this case, we’ll create set and get functions for a very simple object that we’ll call a TestObject and a variable to contain that object.
class Registry{
//…..
private $testObject;
function getTestObject(){
return $this->testObject;
}
function setTestObject( TestObject $test){
$this->testObject = $test;
}
}
Our two new methods do exactly what they sound like…the get and set methods simply retrieve or save our testObject to the $testObject variable. That’s all there is to it…now we can use our registry class to make a TestObject instance that we can use globally (for this post, just accept that TestObject allows us to set and display a message…all the code is available for those who want a closer look). Since we are using a static property and method to ensure that all instances of that class have access to the same values, it doesn’t matter where we initially instantiate the class.
function setting(){
$myTest = new TestObject();
$myTest->setMessage ( "Registry Patterns Rock!" );
$reg = Registry::instance();
$reg->setTestObject( $myTest );
}
function retrieve(){
$reg = Registry::instance();
$myObject = $reg->getTestObject();
echo ( $myObject->getMessage() );
}
setting();
retrieve(); //outputs Registry Patterns Rock!
Now despite the fact that we create the two instances of the registry class within two separate functions, you can see that they both have access to the same values thanks to the Registry pattern, without all the mess that can be caused by global variables. To add more values to our registry class, we simply add more private variables and some set and get functions.
But Wait…There’s More!
Another handy way of utilizing the registry pattern is to use our Registry instance to create and then cache an object, skipping the need for a set or get function. For example, if I have a data connection that I know I want to be standard throughout the project, I can create a function like this inside of my Registry class:
class Registry{
//…..
function dataConn(){
if ( ! isset ( $this->dataConn ) ) {
$this->dataConn = new dataConn();
$this->dataConn->setHost( 'localhost' );
}
return $this->dataConn;
}
}
Now, instead of worrying about the set and get functions, I can just call the dataConn() function. If the data connection has already been created, then it returns the connection. If not, it first creates my connection, and then returns it. So I can safely call the function without concerning myself with whether or not I’ve already created the connection…it takes care of that for me.
The registry pattern is so incredibly useful and a definite improvement over using the dreaded global variable in your code. I highly encourage you to play around with the sample code and see the different ways that the registry pattern can manage your code.
]]>When I first heard Douglas Crockford was writing Javascript: The Good Parts (let’s just call it JTGP from here on out) I was anxiously awaiting the release. Crockford has been responsible for many highly regarded articles and presentations, as well as for his incredible work with JSON, JSLint and much more. While Brendan Eich may be the father of Javascript, Crockford is probably the Godfather. Even Eich himself called Crockford “the Yoda of Lambda JavaScript programming.”
What’s Covered?
JTGP does as promised…it brings to attention the best parts of the Javascript language. Topics like objects, inheritance, arrays, functions and regular expressions are discussed throughout the book. While focusing on the “good parts” of Javascript, Crockford also points out the not-so-good parts and explains why they don’t fall into the good category, along with their caveats and pitfalls.
I’ve seen it mentioned before that people complained about the book being a bit short. It weighs in at a very light 145 pages, 45 of which are appendixes. The information is quite dense however, and I thought the appendixes were extremely valuable. The appendixes include looks at what Crockford considers to be the “awful parts” and the “bad parts” of Javascript. They also include looks at JSLint and JSON as well as providing some helpful syntax diagrams.
Should I Read It?
As mentioned before, the book is short, but very dense. As a result, there is a lot of information covered, but not always a lot of explanation involved. The book seems to take a bit of a different approach than the typical Javascript book…it’s more focused on why than it is on how.
That is not at all a bad thing though. Assuming you have a nice handle on the language and its syntax, there is a lot to get out of reading this book. In fact, there is so much information crammed in here that it will probably take several readings to truly grasp everything being delivered. Don’t make the mistake of assuming that because it is short it is an easy read…this book covers advanced information and does so at a very rapid pace.
The Final Verdict
JTGP is a great book for anyone who wants a deeper understanding of the why behind the how. I would recommend it to anyone, though I would warn that you’ll want a decent understanding of the syntax before reading it…since the book focuses so much on why, there’s not a lot of explanation of how things work, and to get everything this book has to offer, you’ll want that foundation already in place. Overall, a very good book, and one good enough to demand several readings.
Great…Where Do I Get a Copy?
]]>Any of you who have been reading my site since the beginning might remember I wrote a post about the importance of forcing yourself to reinvent the wheel. I still stand by that, but that doesn’t mean I am entirely against all frameworks. In fact, in the case of scripting libraries, I can definitely see the value in using them. The key point is to be able to tell when to use a library and when not to.
Why Should You?
One nod for using libraries and frameworks is that reusable code is a good thing. It saves you time and money, and I think most developers have at least some amount of code that they reuse on various projects. I am all for that. We programmers are lazy…errr…efficient. If we can create a solution that is flexible and reusable…then more power to us.
Libraries are also quite handy in large teams. They provide a common base for everyone on the team to start from, and if you stick to a certain naming or formatting convention, in combination with a library of some sort (either in-house developed or not), you can make communication very easy and eliminate a lot of the questions that can arise when passing code from developer to developer.
Large applications also provide us a good reason to use a library. In Javascript, for example, there are a fair number of browser incompatibilities. These incompatibilities are taken care of by most Javascript libraries, allowing a developer to focus on developing the actual logic for the application, not the browser differences that arise.
On the Other Hand…
There are many issues though that can arise from consistent use of libraries and frameworks. If you use a library exclusively, you can become quite dependent on it. It’s important to understand the underlying code that the library is using. If you continue to depend entirely on a library to cover all browser bugs for you, there will come a time when there is a bug it doesn’t cover, and you aren’t going to know where to turn to troubleshoot it.
Also, and this is particularly true of CSS frameworks, you can become too attached to what the framework provides and start conforming your code to fit in with the framework or library you are using. There is a saying that goes “When the only tool you have is a hammer, everything looks like a nail.” If you are using a CSS framework to create your layout and there is a particular visual style that the framework doesn’t quite get right, the most common approach, unfortunately, is to conform to the framework. You start to look for ways to make the layout fit within the framework’s provided structure.
Continuing with CSS frameworks, there is also a definite lack of semantics. You end up mixing content with presentation by writing HTML markup that uses classes like yui-gb or span-8. I understand that semantics is not on everyone’s top 10 list of important things to do (though I do feel it should be a priority). But if semantics aren’t particularly important, why not just use tables to mark up the page? It would take less code and would work almost seamlessly browser to browser. (Please note, I am not in any way condoning the use of tables for layout…I’m just making a point.)
For those Interested
So, in case you’re wondering…here’s where I stand on using libraries and frameworks. For Javascript, I occasionally use a library, usually on larger apps. Most of my code, though, is hand-written; however, I do try to build with reusability in mind. I guess you could say I have my own personal library that I use.
As far as CSS goes…I don’t use frameworks and can’t see a time when I will. The lack of semantics rubs me the wrong way and, to be honest with you, most of my CSS is not that similar from site to site. That being said, I have used reset styles of some sort on most projects, and there are a few other basic styles that I tend to include in all of my CSS, but it’s no more than a few lines. It doesn’t take me particularly long to get a layout set up in CSS, so I don’t feel like a framework would help increase my efficiency in doing so.
That last paragraph sounds a bit harsh…but hopefully you understand that that’s just one man’s opinion. Again, I am not opposed entirely to frameworks and libraries; in fact as I said above I have used Javascript libraries in particular on more than one occasion. I just think it’s important that frameworks are used only when appropriate and as enhancements to your coding abilities…not the foundation for them.
]]>CSS transitions are definitely a cool idea. Using one simple line of CSS, we can specify how we want a particular style to change. For example, a very common thing to do with CSS is change a link’s color when hovered over. To do so, you just use the :hover pseudo-class like so:
a {
    color: blue;
}
a:hover {
    color: red;
}
Browsers make the color change immediately when a user hovers over the link. Using WebKit’s transition property, we can tell the browser to instead make a smooth transition. For example, to make the color slowly change from blue to red over the course of two seconds, we could do the following:
a {
    color: blue;
    -webkit-transition: color 2s linear;
}
a:hover {
    color: red;
}
We tell the browser (line 3) what style we want to animate (color), for how long (2s) and what kind of transition we want to use (linear). If you have the WebKit nightly build, I set up an example using the CSS above. It works great, and the transition is super smooth. Better yet, the other browsers just disregard it and perform the color change as usual. Simple and cool right?
Mixing Things Up a Bit
The problem I have with it is that I think it starts to put CSS in our behavior layer, which is not so cool. Remember, one of the major benefits of proper use of web standards is being able to separate our content, presentation and behavior into their own separate layers. By using CSS to make these changes, we make it difficult to interact with these properties using Javascript. Will we have access to information regarding how far into the animation we are? Will the transition fire some sort of onFinish method? Javascript would be needed to add this level of flexibility…not CSS.
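To sketch the kind of scripted access this paragraph is asking for, here is a minimal example using the transitionend event that browsers eventually standardized for exactly this purpose (the selector is chosen purely for illustration):

// Log a message once the color transition declared above has finished running.
var link = document.querySelector('a'); // the first link on the page, purely for illustration
link.addEventListener('transitionend', function (event) {
    // event.propertyName tells us which style finished animating,
    // and event.elapsedTime how many seconds the transition ran.
    console.log('Finished animating ' + event.propertyName + ' in ' + event.elapsedTime + 's');
});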
While looking around for more information, I was happy to run across a post from late 2007 by Jonathan Snook that shares my opinion on the matter. One thing Jonathan suggested was that while browser animation is not a bad idea perhaps it should have been an extension to the DOM instead. That would offer more robustness and flexibility, and seems to preserve the separation of concerns a bit better.
Don’t get me wrong…I’m very excited about most of the new features on display in the nightly build. I can also see how the line between behavior and presentation is a bit blurred already. After all, isn’t :hover a bit of a behavioral style? I just think that given the potential interactivity here, I have to agree with Jonathan and say that adding a method to the DOM would make more sense in this case.
]]>Eric Meyer has a great write-up about how diverse the rendering of line-height: normal is across browsers, complete with a test page that lets you see what happens to the value of line-height: normal as different fonts and font-sizes are selected.
What’s Next in jQuery and JavaScript
John Resig of jQuery fame posted a nice 11 minute video where he talks about what is coming up for jQuery, Javascript, and some changes that are being made in browsers. Nice overview of what we have to look forward to on all counts.
Initial Impressions of Silverback
A nice review of Silverback, the new user testing development tool created by the geniuses over at Clearleft. Personally, I haven’t had a chance to play with it yet, as it is only available for Macs and I am still laboring away on a PC. From everything I’ve heard though, looks like a fantastic tool.
Content Inventory
Just a bit more Clearleft love here. Andy Budd continues his series of posts looking at design artifacts with a look at the value and importance of performing a content audit.
UserVoice
Finally, though I’m a bit late on mentioning it, I’d be remiss if I didn’t say something about the release of UserVoice, home grown here in Wisconsin. UserVoice allows you to provide a way for users of a site, application, anything really, to discuss and vote on changes and features. In addition to being useful right out of the box, there are some cool additions to the product being looked at like OpenId support and perhaps an API.
]]>The Opacity Property
To change the opacity of an element using the opacity property, you simply give it a value between 0 and 1 to determine the element’s opacity. For example, if I want a div to be 50% transparent, I would give it the following style:
div {
    opacity: .5;
    color: #fff;
    background-color: #000;
}
This works fine in Safari, Opera, and Firefox. Internet Explorer, however, doesn't yet support the opacity property. Instead, we have to use its proprietary Alpha filter. It's really not any more difficult than the opacity property. One key thing to note here, though, is that the Alpha filter requires you to specify the opacity on a scale of 0 to 100. There's even a catch to that…the element that you are applying the opacity filter to has to have a hasLayout value of true. While there are many ways of making an element have layout, some of the most common are to set a width or give the element a zoom value of 1. So now our declaration is as follows:
div {
    background: #000;
    opacity: .5;
    filter: alpha(opacity=50);
    zoom: 1;
}
Simple enough…but with one catch that may or may not present a problem, depending on your situation. When you use the opacity property, the opacity is set for that element, and any children of that element. This can cause problems in readability and general appearance. If you do have problems in this situation, you may not have to resort to a PNG just yet.
More Power
CSS3 also allows for an extended version of the RGB color model that includes a fourth value used to specify opacity. Again, like the opacity property, the alpha value in the RGBA model accepts a value between 0 and 1. We can use an RGBA value anywhere that color values are accepted in CSS: borders, backgrounds, font colors, etc. This already offers a higher level of control than the opacity property.
Even better yet, while the opacity property defines the opacity for an element and all of its children, using the RGBA value only applies that transparency to the given property of an element that we specify. For example:
div {
    background-color: rgba(0,0,0,.5);
    color: #fff;
}

<div>
    <p>Some white text.</p>
</div>
Using the background-color property and assigning an RGBA value to it, we are able to define the transparency for the div’s background color. The transparency of any text or elements inside of the div is unchanged. In contrast, using the opacity property, the paragraph above would inherit the 50% transparency defined on the div.
Unfortunately, as is often the issue, browser support for RGBA is limited. Both Safari and Firefox 3 offer support for the RGBA color value system, but so far Opera and IE do not. The good news though, is that we can use the RGBA value without worrying about it breaking our design by also defining a fallback color.
div {
    background-color: rgb(0,0,0);
    background-color: rgba(0,0,0,.5);
    color: #fff;
}
In most browsers that do not recognize RGBA values, that declaration is simply ignored, as it should be. In IE though (I know, surprise, surprise), it appears that RGBA values cause IE to not display the background at all. A way to get around this would be to use conditional comments to reset the background to a solid color for IE. So we can just define a solid color for browsers that do not accept RGBA values and leave the transparency for those that can support it…a prime example of progressive enhancement.
I have set up a working comparison of RGBA versus using the opacity property for you to view in each browser. Remember, to see the effects of RGBA, you will have to view the page in Safari or Firefox 3.
]]>Traditionally, Javascript is a loosely-typed language, meaning that variables are declared without a type. For example:
var a = 42;          // Number declaration
var b = "forty-two"; // String declaration
Since Javascript is loosely typed, we can get away with simple ‘var’ declarations…the language will determine which data type should be used. In contrast, Javascript 2.0 will be strongly typed, meaning that type declarations will be enforced. The syntax for applying a given type will be a colon (:) followed by the type expression. Type annotation can be added to properties, function parameters, functions (and by doing so declaring the return value type), variables, or object or array initializers. For example:
var a:int = 42;              // variable a has a type of int
var b:String = "forty-two";  // variable b has a type of String
function (a:int, b:string)   // the function accepts two parameters, one of type int, one of type string
function(…):int              // the function returns a value with a type of int
NOTE: There has been some confusion about enforcing type declarations, so I thought I’d try to clear it up. Enforcing type declarations simply means that if you define a type, it will be enforced. You can choose to not define a type, in which case the variable or property defaults to a type of ‘Object’, which is the root of the type hierarchy.
Type Coercion
Being a strongly typed system, Javascript 2.0 will be much less permissive with type coercion. Currently, the following checks both return true:
"42" == 42
42 == "42"
In both cases, the language performs type coercion…Javascript automatically makes them the same type before performing the check. In Javascript 2.0, both of those statements will resolve to a ‘false’ value instead. We can still perform comparisons like those above; we just need to explicitly convert the data type using type casting. To perform the checks above and have them both resolve to ‘true’, you would have to do the following:
int("42") == 42
string(42) == "42"
While adding a strongly typed system does make the language a bit more rigid, there are some benefits to this change, particularly for applications or libraries that may be worked with elsewhere. For example, for a given method, we can specify what kinds of objects it can be a method for using the special ‘this’ annotation. I’m sure there are many of you who just re-read that sentence and are scratching your heads trying to figure out what the heck that meant. An example may help:
function testing(this:myObject, a:int, b:string):boolean
The method above accepts two arguments, an int and a string. The first part of the parameter list (this:myObject) uses the this: annotation to state that the function can only be a method of objects that have the type ‘myObject’. This way, if someone else is using code we have created, we can restrict which objects they can use that method on, preventing its misuse and potential confusion.
Union Types
We can also use union types to add a bit of flexibility. Union types are collections of types that can be applied to a given property. There are four predefined union types in Javascript 2.0:
type AnyString = (string, String)
type AnyBoolean = (boolean, Boolean)
type AnyNumber = (byte, int, uint, decimal, double, Number)
type FloatNumber = (double, decimal)
In addition, we can set up our own union types based on what we need for a particular property:
type MySpecialProperty = (byte, int, boolean, string)
One final thing I would like to mention is that in contrast to Java and C++, Javascript 2.0 is a dynamically typed system, not a statically typed one. In a statically typed system, the compiler verifies that type errors cannot occur at run-time. Static typing would catch a lot of potential programming errors, but it also severely alters the way Javascript can be used and would make the language that much more rigid. Because JS 2.0 is dynamically typed, only the run-time value of a variable matters.
]]>Static Pseudo-Classes
Pseudo-classes allow us to apply an invisible, or “phantom”, class to an element in order to style it. For example, let’s look at the element most often styled using pseudo-classes: the anchor tag (<a>). Some anchor tags point to locations a user has already visited, and some point to locations the user has not yet visited. Looking at the document structure, we can’t tell this. Whether the link has been visited or not, it looks the same in (X)HTML. However, behind the scenes, a “phantom” class is applied to the link to differentiate between the two. We can access this “phantom” class with pseudo-class selectors, like :link and :visited. (Pseudo-classes are always prefixed by a colon.)
The :link pseudo-class selector refers to any anchor tag that is a link…that is, any anchor tag that has an href attribute. The :visited pseudo-class selector does exactly what it sounds like…it refers to any link that has been visited. Using these pseudo-classes allows us to apply different effects to links on the page according to their visited state.
a {color: blue;}
a:link {color: red;}
a:visited {color: orange;}
The above styles, for example, will make any anchor tag that does not have an href attribute blue (line 1). Any link that has an href attribute but has not been visited will be red (line 2). Finally, if a link has been visited (line 3), it will be orange.
Another static pseudo-class is :first-child (The :first-child pseudo-class is not supported by IE6). The :first-child selector is used to select elements that are first children of other elements. This can be easily misunderstood. A lot of times, people will try to use it to select the first-child of an element. For example:
<div>
    <p>Here is some text</p>
</div>
Say we want to apply a style to the paragraph element. It is not uncommon to see people try to do this using the following style:
div:first-child {font-weight: bold;}
However, this is not how the pseudo-class works. If we think back to the concept of pseudo-classes essentially being "phantom" classes, then what we just did was apply a phantom class to the div like so:
<div class="first-child">
    <p>Here is some text</p>
</div>
Obviously that is not what we want. The :first-child selector doesn't grab the first child of an element; it grabs any element of the specified type that is itself a first child. The correct way to style that paragraph would be with the following line:
p:first-child {font-weight: bold;}
That's probably as clear as mud, so it may help to take another look at the "phantom" class:
<div>
    <p class="first-child">Here is some text</p>
</div>
Watch Your Language
Corny headings aside, we can select elements based on the language using the :lang( ) pseudo-class. For example, we can italicize anything in French using the following style:
*:lang(fr) {font-style: italic;}
Where does the language get defined? According to the CSS 2.1 specification, the language can be defined in one of many ways:
In HTML, the language is determined by a combination of the lang attribute, the META element, and possibly by information from the protocol (such as HTTP headers). XML uses an attribute called xml:lang, and there may be other document language-specific methods for determining the language.
Dynamic Pseudo-Classes
So far, what we have discussed are static pseudo-classes. That is, once the document is loaded, these pseudo-classes don’t change until the page is reloaded. The CSS 2.1 specification also defines three dynamic pseudo-classes. These pseudo-classes can change a document’s appearance based on user behavior. They are:
:focus - any element that has input focus
:hover - any element that the mouse pointer is placed over
:active - any element that is activated by user input (ex: a link while being clicked)
Usually, these pseudo-classes are applied only to links. However, they can be used on other elements as well. For example, you could use the following style to apply a yellow background to any input field in a form when it has the focus.
input:focus {background: yellow;}
The main reason this is not done a lot is because of a lack of support. IE6 does not allow any dynamic pseudo-classes to be applied to anything besides links. IE7 allows the :hover pseudo-class to be applied to all elements, but doesn’t let the :focus pseudo-class be applied to form elements.
Complex Pseudo-Classes
CSS offers us the ability to apply multiple pseudo-classes so long as they aren’t mutually exclusive. For example, we can chain a :first-child and :hover pseudo-class, but not a :link and :visited.
p:first-child:hover {font-weight: bold;} /* works */
a:link:visited {font-weight: bold;} /* :link and :visited are mutually exclusive */
Again, there is a compliance issue here with IE6. The IE6 browser will only recognize the final pseudo-class mentioned. So in the case of our first style above, IE6 will ignore the :first-child pseudo-class selector and just apply the style to the :hover pseudo-class.
Looking Forward to CSS3
In addition to the pseudo-classes laid down in CSS 2.1, CSS 3 provides sixteen new pseudo-classes to allow for even more detailed styling capabilities. The new pseudo-classes are:
:nth-child(N)
:nth-last-child(N)
:nth-of-type(N)
:nth-last-of-type(N)
:last-child
:first-of-type
:last-of-type
:only-child
:only-of-type
:root
:empty
:target
:enabled
:disabled
:checked
:not(S)
For more information about the new pseudo-class selectors laid down in CSS3, take a look at the CSS3 selectors working draft, or the excellent write-up by Roger Johansson. Currently, very few have decent cross-browser support, but as Johansson says, they can still be used for progressive enhancement…and in such a quickly changing field, when we can stay ahead of the curve, we should take advantage of it.
]]>Whether you like the new features being proposed, think they’re silly and unnecessary, or have no idea what the heck I am talking about, I think it’s important to have a firm grasp on some of the changes being proposed. Doing so will help you to better understand both sides of the debate, and also help to prepare you for when Javascript 2.0 becomes available for use.
There are far too many changes and fixes to discuss in one post, so this will be an ongoing series of posts. I’ll be taking a look at what the new language provides us and why. Hopefully, by taking a closer look at all the changes, we can get a better feel for how those changes affect both web developers and Javascript in general. First though, we should take a quick look at how Javascript got to this point, and the reasoning behind the changes being suggested in Javascript 2.0.
Once Upon a Time…
Javascript has been around since 1995, when it debuted in Netscape Navigator 2.0. The original intent was for Javascript to provide a more accessible way for web designers and non-Java programmers to utilize Java applets. In reality though, Javascript was used far more often to provide levels of interactivity on a page…allowing for the manipulation of images and document contents.
Microsoft then implemented Javascript in IE 3.0, but their implementation varied from Netscape’s, and it became apparent that some sort of standardization was necessary. So the European Computer Manufacturers Association (ECMA) standards organization formed the TC39 committee to do just that.
In 1997, the first ECMA standard, ECMA-262, was adopted. The second version came along a bit later and consisted primarily of fixes. In December of 1999, when the third version rolled out, the changes were more drastic. New features like regular expressions, closures, arrays and object literals, exceptions and do-while statements were introduced, greatly adding value to the language. This revision, ECMAScript Edition 3, is fully implemented by Javascript 1.5, which is the most recently released version of Javascript.
Like ECMAScript 3, the proposed ECMAScript 4 specification will provide a very noticeable change in the language. As it stands now, Javascript 2.0 will be featuring, among other changes, support for things like scoping, typing and class support.
Let the Debate Begin
While some of the changes are bug fixes, the justification for the major revisions appears to be largely based on providing better support for developing larger-scale applications. With the growing popularity of AJAX, and the rise of RIAs, Javascript is now being used for much larger-scale apps than it was ever intended for. The proposed changes to ECMAScript 4 are intended to help make development of those kinds of apps easier by making the language more disciplined and therefore making it easier for multiple developers to work on the same application.
This is where the debate starts….how much do we need these revisions? Technically, we can implement a lot of the same kinds of structures using the language as it stands currently. The proposed changes are aimed at making that easier, but there are some people who worry about the effect this may have on what is currently a very expressive and lightweight language.
Which group is correct? Are the changes going to make our lives as Javascript developers easier, or force us to lose a lot of what makes Javascript such an attractive scripting language to use today? I think the only way to really judge how the changes will affect us is to take a closer look at the changes themselves and see both the good and the bad.
]]>First off, I’ve increased the focus on past posts. I decided to add a listing of the latest posts to each page, as well as a listing of the most popular posts on the site in terms of views. The idea here is to hopefully make it easier for you to find earlier posts that you may have missed that may still be worth a look.
I also decided to add full RSS feeds. I have heard a lot about the debate between partial and full feeds and wasn’t sure at first how to proceed. Up to now, I had just been offering partial feeds. I am still keeping those for any of you who prefer them, but I am now also offering an RSS feed with the full posts for those of you who prefer to have the entire article in front of you.
One new area in the footer is a listing of the galleries that were gracious enough to link to my site. I was overwhelmed by the positive response to the design of the site, and those galleries are how quite a few of you first came across the site. I thought I would finally get around to returning the favor by supplying some links back to them.
Finally, I decided to embed my Twitter status into the site. I fought the Twitter urge until March, but once I finally gave in, I’ve become fairly addicted. It makes it easy to keep connected with people who you may not converse with a whole heck of a lot otherwise, and if you are following the right people, it can be quite the news informant…lots of tech news seems to hit Twitter before anywhere else. To sum it up, you could do a lot worse for a networking tool.
Now, with all the new additions, something had to go to clear up some room. So, I decided to pull the “Things I Learned Online” section. Actually, I’ve wanted to do something different with that anyway, and this made for a good excuse. By only showing a couple links at a time in the side, I felt that some of the articles and tools I come across probably weren’t getting the attention they deserve given their quality. I wanted a way to highlight a few more at a time, and to do it in a way that brings a bit more attention to them.
So after much hemming and hawing I decided to occasionally post a small group of links to resources, articles and tools that I have found that I think may interest you. Don’t worry…I’m not going to turn into a site that posts nothing but lists of other sites, nor am I going to start doing a whole bunch of Digg-made top 10 posts. The bulk of the posts here are still going to be the same kind of content you’ve been seeing with focuses on technical, design, and theoretical articles…I’ll just occasionally throw some focus out to other articles that I think are worth a read.
Having said all that (that was a bit more long-winded than intended), I’m always interested in improving the quality of what my site has to offer. So, if anyone has any strong opinions about any new changes (good or bad), or has some other ideas they’d like to see implemented (content or just enhancements), let me know.
]]>Always being right means we’re not challenging ourselves enough. It means that either we’ve become comfortable and content with where we are at with our skills, or that there is no one challenging us to improve those skills. In either case, we’re not progressing.
If we’re wrong, it means we’re pushing ourselves to explore our limits, to continue to expand our skill set. Being wrong opens the door for constructive criticism, which in turn leads to opportunities to learn. People who are willing to tell us when we’re wrong are the kind of people we should be surrounding ourselves with…they’re the kind of people who challenge us to become better designers and developers.
One quote that I believe sums it up pretty well is by Bill Buxton, a Principal Researcher at Microsoft. In his book “Sketching User Experiences”, Bill has the following to say:
People on a design team must be as happy to be wrong as right. If their ideas hold up under strong (but fair) criticism, then great, they can proceed with confidence. If their ideas are rejected with good rationale, then they have learned something. A healthy team is made up of people who have the attitude that it is better to learn something new than to be right.
While Bill’s quote is aimed at designers, I think the rule applies to both designers and developers. Making mistakes, getting constructive criticism, and learning from that criticism is a healthy thing. It allows us opportunities to expand our skills and grow in our field. Only through this kind of healthy criticism can our skills, and ultimately the products we produce, become finely tuned.
]]>Who Wrote It?
Pro JavaScript Design Patterns is written by Ross Harmes and Dustin Diaz. Ross is a front-end engineer from Yahoo! and blogs (albeit not for awhile) about random tech topics at techfoolery.com. Dustin works for Google as a user interface engineer. You can find Dustin’s musings about web development topics at dustindiaz.com. This is the first book by either author.
What’s covered?
Pro Javascript Design Patterns is about…well, applying design patterns in Javascript of course. Design patterns are reusable solutions to specific, common problems that occur in development. Design patterns are more popular in software engineering, but as web applications become larger and more robust, design patterns are starting to become a bit more well known in the web development world.
Dustin and Ross do a great job of explaining different design patterns and showing how to apply them in the world of Javascript. The book starts off by walking you through some object-oriented principles as they relate to Javascript. There are sections on advanced topics like interfaces, encapsulation, inheritance and chaining. The second part of the book dives right into design patterns. For each pattern, you get to see how to implement it in Javascript, when to implement it, and the benefits you will see. Design patterns can also create difficulties if used inappropriately, so Ross and Dustin take a look at the disadvantages of each pattern so that you can accurately determine whether or not to use it in your applications.
Should I Read It?
The book definitely holds value for any person working with Javascript and front-end development. The ideas laid out in the book can help anyone working with the language to create higher-quality, efficient code. Particularly developers who work with large scale Javascript applications will benefit from the book, as that is what design patterns seem to be best suited for.
Make no mistake, the book’s title starts with the word ‘Pro’ for a reason…this is not a book intended for beginners. It is a very concisely written book that doesn’t take a lot of time setting the tone…the authors dive right into advanced concepts and code. If you are just getting rolling with Javascript or you don’t have a good grasp of object-oriented programming in Javascript, then you should probably pick up another book and come back to this later. On the other hand, if you are familiar with object-oriented programming in another language, you may find the book still manageable. That’s part of the beauty of design patterns…the theory works regardless of the language…it’s the syntax and implementation that can differ.
Final Verdict
All in all, I really enjoyed the book. It can take awhile to work your way through it (this is not a bed-stand book), but it is definitely worth it as the concepts addressed are invaluable to creating quality code. For anyone doubting the power of Javascript, this book is a real eye-opener. You will find that Javascript’s flexibility offers a lot of possibilities and by using it, along with industry-recognized design patterns, you can develop scripts that are both easy to communicate and easy to maintain.
Great…Where Do I Get A Copy?
The 5S System is actually a Japanese improvement process originally developed for the manufacturing industry. Each of the five words, when translated to English, begins with ‘S’, hence the name the 5S System. Like many good philosophies, however, the 5S System can apply to a variety of topics. For example, the 5S System has been applied by the Hewlett-Packard Support Center in a business context and has resulted in improvements like reduced training time for employees and reduced call times for customers. By applying the system to coding, we can make our code more efficient and much easier to maintain.
Seiri (Sort)
The first ‘S’ is Seiri, which roughly translates to ‘Sort’. Applied to the manufacturing industry, the goal of sorting was to eliminate unnecessary clutter in the workspace. The idea here is that in a workspace, you need to sort out what is needed and what is not. If you eliminate all of the items that are not necessary, you immediately have a workspace that is cleaner and thereby more efficient.
Applied to coding, this can mean going through our code and determining if we have any lines of code that are really just taking up space. This can be things like error checking that has already been done at a previous step, or if working in the DOM, retrieving the same element in more than one function, instead of simply passing a reference to the element. This definitely applies to CSS code as well. There are very few stylesheets in use that don’t have a line or two that are really just unnecessary because they either accomplish nothing different than the user agent’s default behavior, or are being overridden elsewhere.
Seiton (Straighten)
The next ’S’ means to straighten or ‘sort in order.’ This step involves arranging resources in the most efficient way possible so that those resources are easy to access.
For coders, this means going through and making sure that functions and code snippets that are related are grouped together in some way. This can be done in a variety of ways. If you are working with server-side scripting, consider placing related code together in an include. In CSS, use either comments or imported stylesheets to separate style declarations based on either the section of the page they refer to or the design function that they carry out (typographic styles in one place, layout styles in another). In object-oriented programming, organize your code into logical classes and subclasses to show relationships.
Seiso (Shine)
The third step laid down in the 5S system is the Shine phase. This involves getting and keeping the workplace clean. This is an on-going phase that should be done frequently to polish up anything that is starting to lose its luster.
As we go back and work on code, we can often start to get lazy and just throw things wherever, using messy coding techniques because they’re quick and dirty. The long-term result of that, though, is unorganized code that is difficult to maintain. This phase requires a bit of discipline; we have to be willing to keep an eye out for portions of our code that are becoming a bit unwieldy and take the time to clean them up, so 6 months down the road we aren’t pulling our hair out trying to remember what the heck we were thinking there.
Seiketsu (Standardize)
The Standardize phase involves setting down standards to ensure consistency. We can apply this to our coding and make it much easier both for us in the future, and for new employees who may have to try and work with some of the code we have developed.
Standardization in code can come in a variety of forms. We’ve seen some standardization in coding for-loops for example. In a for-loop it is very typical for people to use the variable i as the counter variable throughout the loop. Coders of various levels of expertise recognize the variable i in those situations very quickly and easily, because it is used so frequently.
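As a tiny, hypothetical illustration of that convention (the array contents here are placeholders), anyone scanning this loop immediately reads i as the counter and nothing more:

var items = ['header', 'content', 'footer']; // placeholder data for the illustration
for (var i = 0; i < items.length; i++) { // "i" as the counter reads instantly to any developer
    console.log(items[i]);
}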
You can also standardize the way you format your code. Some people prefer to indent code inside of loops or functions for readability, others don’t. Whatever the case may be, be consistent with it. Having a consistent coding style makes it a lot easier to come back to that code later and be able to quickly locate where that new feature needs to be dropped in.
Shitsuke (Sustain)
Finally, our last step is to sustain the work we set down in each of the previous phases. This is perhaps the most difficult phase of all because it is never ending. There is a definite level of commitment to the process that has to be displayed here in order for us to continue using and utilizing this process when we code. We can’t be satisfied with doing this once or twice and then letting it go. If we work to continually implement this process, we help ourselves to create more manageable and efficient code from the start of the development process to the conclusion.
]]>The Acid test, for those unaware, is a test page set up by the Web Standards Project (WASP) to allow browsers to test for compliance with various standards. The test runs 100 little mini-tests, and to score 100, you need to obviously pass all 100 of the tests. The first Acid test was set up in 1998 and checked for some basic CSS 1.0 compliance. Acid 2 came around in April of 2005 and tested for support for things like HTML, CSS2.x and transparent PNG support. The new Acid 3 test checks for support for CSS3 selectors and properties, DOM2 features and Scalable Vector Graphics (SVG) among other things.
It should come as no surprise that Opera was one of the first to successfully pass the test. After all, they were the second browser to pass the Acid 2 test (Safari was first). What’s so impressive is how little time it took. It took Safari about 6 months to pass the Acid 2 test, but it took Opera just under a month to pass Acid 3.
Not that we can get too awfully excited about this. The two major players here (IE and Firefox) both have a ways to go. The last I saw, Firefox 3 was up to a 71/100 score and IE 8 was at a frighteningly low 18/100. Let’s just hope that IE can close the gap quicker than the 3 years or so it took them to reach Acid 2 compliance! It’s looking like Safari, whose WebKit nightly builds are up to 98/100, will be the next to hit a perfect score.
In spite of the needed improvements in Firefox and IE, this is great news and I think that congratulations need to go out to Opera’s team of developers. They’ve done a great job of being proactive with their standards-support and it shows. I also think that WASP deserves a pat on the back for all of this….they are obviously doing a good job of pushing standards-compliance in browsers and giving vendors a goal to shoot for. We are starting to see some great improvements in compliance to standards across the web and I for one, am greatly looking forward to playing around with all the new toys!
]]>To create a lot of these dynamic interfaces, we often have to use (X)HTML elements outside of their semantic meaning. For example, navigation is marked up using list-items. That is all fine and well for a sighted visitor…we can see that the list is meant to be navigation. However, to a non-sighted user who is relying on a screen reader to determine the usage of elements on a site, it is difficult at best to determine that the list is used as a navigation structure.
That is where Accessible Rich Internet Applications (ARIA) come into play. ARIA offers attributes that we can use to add semantic meaning to elements. One of those is the role attribute.
Add Some Information
Roles provide information on what an object in the page is and help to make markup semantic, usable, and accessible. Using our previous example of a list used for navigation, by providing the role attribute, we can help the user agent to understand that the list is being used for navigation.
<ul role="navigation">
    <li>…</li>
</ul>
Likewise, we can tell the user agent if a paragraph is being used as a button:
<p role="button">…</p>
There are many different WAI roles to utilize. Nine of them were imported from the XHTML Role Attribute Module:
banner - typically an advertisement at the top of a page
contentinfo - information about the content in a page, for example, footnotes or copyright information
definition - the definition of a term
main - the main content of a page
navigation - a set of links suitable for using to navigate a site
note - adds support to main content
search - the search section of a site, typically a form
secondary - a unique section of the site
seealso - contains content relevant to the main content
The ARIA 1.0 specification also includes support for many more roles set down in the ARIA Role taxonomy. These include roles like button, checkbox, textbox and tree. There are many available there, so I am not going to try and show them all here. For that, you can take a look at the ARIA working draft.
Now For Some Meaning
In addition to the information provided by the role attribute, we can further add meaning about the state and relationship of elements with states and properties. Unlike roles, which are static, states and properties may change. For example, one state that is available is checked, which as you may guess is used with an element that has a role of checkbox. When a checkbox is unchecked the checked state is false. When the checkbox is checked, the checked state should change to true.
Using states and properties is rather easy to do:
<li role="checkbox" aria-checked="true">…</li>
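Because states like aria-checked are meant to change as the user interacts with the page, they are typically flipped from Javascript. Here is a minimal sketch, assuming a checkbox-style list-item like the one above with a hypothetical id of fruit-option:

// Flip the aria-checked state each time the custom checkbox is activated.
var box = document.getElementById('fruit-option'); // hypothetical id for the list-item above
box.addEventListener('click', function () {
    var isChecked = box.getAttribute('aria-checked') === 'true';
    box.setAttribute('aria-checked', isChecked ? 'false' : 'true');
});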
Add Some Style
In browsers that support attribute selectors in CSS, we can even use our new roles, states and properties to provide different visual effects that reflect an element’s meaning. For example, we can target all items on a page that have an aria-required state with this:
*[aria-required="true"] { background: yellow; }
In addition, some states have pseudo-classes that can be used to reflect the changes in state. Consider a list-item that is tagged with an aria-checked state. Using the :before pseudo-class, we can provide a different image with each state change. (Note: this example is used in the W3C Working Draft.)
*[aria-checked=true]:before {content: url('checked.gif')}
*[aria-checked=false]:before {content: url('unchecked.gif')}
There is a lot of value in using ARIA. It helps to give meaning to the usage of an element on a page, greatly increasing the accessibility of a site. It’s very easy to use, and doesn’t break in browsers that don’t support it. If you want to learn more about ARIA and how to start implementing it, I highly recommend checking out the W3C’s overview on the topic.
]]>
How very true. Respecting what we don’t understand is if not impossible then extremely hard to do. Without some sort of knowledge of the process and steps involved in arriving at the solution, how can we really respect the work required to make the solution? I think this comes into play when working with both clients and co-workers.
As far as clients go, the solution involves making sure good communication takes place between you and the client. I think involving the client early and often helps to build respect for, and knowledge of, what you do. If we meet with the client about a project, then hand them a design some time later, they are not going to have any idea of the process involved. To them, it’s like delayed magic…they ask us to come up with a design, and voilà, we come up with one.
However, if we go through a more involving process, they start to get a taste of all that goes into designing/developing the final product. We can start to show them our research, information architecture, wireframes and prototypes, all before actually showing them some sort of design. By walking through the project with them, a few things happen. First, they feel more involved. This can be great for clients…it’s always difficult to just blindly trust someone else with such a crucial part of your company’s marketing.
Secondly, by allowing the client to see a lot of these steps, they begin to gain a greater respect for what is involved. Let’s face it, a lot of people simply don’t realize how much goes into developing their site or application. The web is open to anyone, and it makes people feel like anyone can just jump in and throw together a website. That’s why you run into clients whose site was developed by their mother’s brother’s lawnmower’s son’s cousin! By letting them see a bit more of our process, we help them gain a bit more respect for what actually goes on in the professional development of a site or application.
Clearly, this can be taken too far. You don’t want to involve the client too much. If you do, you may end up confusing the client, which leads to frustration. It’s important to remember that while you want to get them involved, this is not their expertise, and anything you show them should be a very general perspective, and should be explained in non-technical terms.
I also said that respecting what we don’t understand comes into play with co-workers. A co-worker with no knowledge of CSS is going to have a difficult time respecting your job of creating cross-browser compatible layouts. I think in this case we just need to try and remember how involved our own job can be, and assume that so-and-so down the hall’s job is just as involved.
I think there is an excellent argument to be made here for the “Jack-Of-All-Trades” worker. Having at least a basic understanding of a variety of topics will help you to respect the work of the people using those languages or techniques (not to mention, at least in my opinion, make you a more attractive candidate for employment).
In the end, it all comes down to communication. If we can find ways to effectively communicate to our clients and peers throughout our working process, we can hope to achieve some level of respect.
]]>The Selectors API allows us to utilize CSS (1-3) selectors to collect nodes from the DOM. This is actually quite a common enhancement in a lot of Javascript libraries….CSS selectors are a very efficient and powerful way to quickly look up nodes, and since most people are familiar with CSS syntax, it is very user friendly. The Selectors API offers native browser support for CSS selectors using the querySelector and querySelectorAll methods.
The querySelector method as defined by the W3C returns the first element matching the selector, or if no matching element is found, it returns a null value.
The querySelectorAll method returns a StaticNodeList of all elements matching the selector; if no matching elements are found, it returns an empty list. For anyone familiar with DOM traversal, you are probably familiar with NodeLists. NodeLists are returned by methods like getElementsByTagName. The main difference between a StaticNodeList and a NodeList is that if you remove an element from the document, a NodeList is also affected and therefore the indexes of the NodeList are altered. A StaticNodeList, however, is not affected…hence the Static part.
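A quick sketch makes the difference concrete (it assumes the page contains at least a couple of paragraphs): the collection from getElementsByTagName keeps tracking the document, while the StaticNodeList from querySelectorAll stays frozen at the moment it was created.

var live = document.getElementsByTagName('p'); // live collection: updates as the document changes
var frozen = document.querySelectorAll('p');   // StaticNodeList: a snapshot taken right now

var first = document.querySelector('p');
first.parentNode.removeChild(first);           // remove one paragraph from the document

console.log(live.length);   // one fewer than before the removal
console.log(frozen.length); // unchanged…the snapshot still includes the removed node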
The querySelector and querySelectorAll methods are very easily used:
//returns all elements with an error class
document.querySelectorAll(".error");
//returns the first paragraph with an error class
document.querySelector("p.error");
//returns every other row of a table with an id of data
document.querySelectorAll("#data tr:nth-child(even)");
In addition to calling the methods with a single selector, you can also pass groups of selectors separated by commas, like so:
document.querySelectorAll(".error, .warning");
document.querySelector(".error, .warning");
The first line above would return all elements with a class of error or a class of warning. The second line would return the first element with a class of either error or warning.
You can see the advantage of having native support for the Selectors API by taking a look at some test results. SlickSpeed runs the test cases using the popular Javascript libraries Prototype, jQuery and Ext, as well as the Selectors API, and the results are substantially quicker using the Selectors API. To run the native support test, you will need to go grab the WebKit nightly build. If you don’t want to do that, Robert Biggs ran the test in various browsers and has the test results up.
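Until that native support is everywhere, one practical pattern is to prefer querySelectorAll when the browser provides it and hand the selector off to a library otherwise. A rough sketch, where getNodesFromLibrary is only a stand-in for whatever selector engine (jQuery, Prototype, etc.) a project already loads:

function getNodes(selector) {
    if (document.querySelectorAll) {
        return document.querySelectorAll(selector); // native support: the fast path
    }
    // No native support: fall back to a library's selector engine.
    // getNodesFromLibrary is a placeholder name, not a real API.
    return getNodesFromLibrary(selector);
}

var errors = getNodes('.error');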
]]>I can only imagine that being surrounded by that many people who are passionate about the web for 5 days will be quite inspiring and reinvigorating. While this is the first conference I will be attending, I hope that there will be many more.
In fact, ideally I’d like to go to several each year. Listening to the presentations and having the chance to mingle with other web-minded folk seems to me an incredible way to keep in tune with the trends of the industry, and an effective way to find new techniques or skills to pursue.
After having looked over the panels roughly 100 times, there are several that I am particularly excited to check out.
Secrets of Javascript Libraries
I was excited for this back when I thought only John Resig, he of JQuery fame, would be presenting. Now that I hear people who either created or contributed other major libraries like Dojo, Prototype and Scriptaculous are also going to be there, this panel has really shot up to the top of my list.
Browser Wars: Deja Vu All Over Again?
Finally, a question I have long wondered about will be answered: What happens when you stick major players at Firefox, Opera, and IE in a room together? Cage match anyone?
Design Eye for South By
I’ve heard nothing but great things about this panel from years past. Can’t wait to see what they come up with this time around.
Everything I Know About Accessibility I Learned From Star Wars
Honestly….Derek Featherstone had me at Star Wars. The fact that the presentation covers such an important topic like accessibility is really just gravy.
Design is In the Details
Actually not sure what to do here. Naz Hamid’s presentation sounds fantastic, but in the same time slot the SlideShare folks are talking about the lessons they learned about AJAX and Flash while creating SlideShare.net. Decisions, decisions.
I could name many more that sound great, but then you would just get bored and move on if you haven’t already.
In addition to the panel programming, from everything I hear, the networking opportunities are amazing at SXSW, and I am quite excited to have the opportunity to meet some people in person for the first time. I always enjoy running into another passionate web developer or designer. The discussions are always interesting.
I am amazed by the number of social events currently scheduled. Should be a good time, but I am quite curious as to how people actually manage to stay at these things the whole time and then be ready to go again in the morning. I hope there is a Starbucks nearby.
For anyone interested, I did break down and sign up for Twitter recently, in no small part because I hear last year it turned into quite an essential tool to stay in the loop as far as where to meet up with people and such. So, if any of you are going to be at SXSW, you should follow me on Twitter so we can meet up some place. I’d love the opportunity to meet some of you in person.
And for those of you who aren’t going to be there, but want to follow me anyway, feel free to. I’m going to do my best to keep up with the updates there and I may even have something interesting to say from time to time.
]]>Getting More Detailed with Compounds
So far, we haven’t dealt with any compound location paths…each of our expressions has just gotten nodes that are direct children of the context node. However, we can continue to move up and down the document tree by combining single location paths. One of the ways we can do this (and this should look quite familiar to anyone who has moved through directories elsewhere) is by using the forward slash ‘/’. The forward slash continues to move us one step down in the tree, relative to the preceding step.
For example, consider the following:
myXPath = “//div/h3/span”;
var results = document.evaluate(myXPath, document, null, XPathResult.ANY_TYPE, null);
The expression above will first go to the root node thanks to our ‘//’. It will then get any div elements that are descendants of the root node. Then, we use the forward slash to move down one more level. Now we are saying to get all h3 elements that are direct descendants of one of the div elements that was returned. Finally, we once again use our forward slash to move down one more level, and tell the expression to return any span elements that are direct descendants of the h3 elements we already found.
In addition, we can use the double period ‘..’ to select an element’s parent node. For example, if we use an expression like ‘//@title’, we will get all title attributes in the document. Let’s say that what we actually wanted is all elements in the document that have title attributes. Using the parent selector (..), we can do just that. The expression ‘//@title/..’ first grabs all title attributes. Then the double period tells the expression to step back up and grab the parent node for each of those title attributes.
This is a pretty handy little feature. We can use the double period to select sibling elements by doing something like ‘//child/../sibling’ where child is the child element, and sibling is the sibling element we are looking for. For example, ‘//h3/../p’ would get all p elements that are siblings of h3 elements.
Finally, we can use a single period ‘.’ to select the current node. You will see this become useful when we introduce the use of predicates.
Speak Of the Devil
Each expression we’ve seen returns a bunch of nodes matching criteria. Occasionally, we will want to refine this even further. We can do that using predicates, which are simply Boolean expressions that get tested for each node in our list. If the expression is false, the node is not returned in our results; if the expression is true, the node is returned.
Predicates use the typical comparison and Boolean operators: ‘=’, ‘<’, ‘>’, ‘<=’, ‘>=’, ‘!=’, ‘and’ and ‘or’. As promised, the single period becomes much more useful when combined with predicates. For example, we can grab all h3 elements that have a value of “Yaddle” by using the following expression:
//h3[.="Yaddle"]
The dot tells the expression to check the value of the current node. If the value equals “Yaddle”, the h3 will be returned to us. Let’s take a look at another example, one maybe a bit more practical. Let’s say you have a calendar of events, and you want to retrieve all the events that occurred between 2005 and 2007. Being the smart developers we are, we wrapped all the event years in a span with a class of year, like so:
<span class="year">2007</span>
Getting all the year spans where the value is between 2005 and 2007 is easy. We can simply do this:
//span[@class="year"][.>=2005 and .<=2007]
Ok…granted, at first glance that is pretty ugly, so let’s break it down.
//span - Get all span elements
[@class="year"] - Make sure the only span elements we grab have a class of ‘year’
[.>=2005 and .<=2007] - Make sure the value of the span is between 2005 and 2007. We use the ‘<=’ and ‘>=’ operators versus the ‘<’ and ‘>’ operators because we want to also return values in the years 2005 and 2007.
Making sense out of all the slashes and brackets can take some getting used to, so don’t be discouraged if it takes you a while before you can make sense of what is happening there. Once you get more familiar with the syntax, you will find you can create some really robust checks in one line of code that would have taken numerous iterations using DOM methods.
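To tie this back to JavaScript, here is a minimal sketch (assuming the calendar markup described above) of evaluating that expression with document.evaluate() and walking the matching spans:

var expression = '//span[@class="year"][.>=2005 and .<=2007]';
var years = document.evaluate(expression, document, null, XPathResult.ANY_TYPE, null);
var year = years.iterateNext();
while (year){
    // each result is one of our year spans, so grab its text
    alert(year.textContent);
    year = years.iterateNext();
}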
]]>This is going to be a multi-post series, as there is just so much you can accomplish by using XPath expressions that if I tried putting it into one post, no one would have the time to sit and read the whole thing.
What is XPath?
Any of you out there who are familiar with XSLT will no doubt be familiar with the XPath language. For the rest of you, XPath is used to identify different parts of XML documents by indicating nodes by position, relative position, type, content, etc.
Similar to the DOM, XPath allows us to pick nodes and sets of nodes out of our XML tree. As far as the language is concerned, there are seven different node types XPath has access to (for most Javascript purposes the first four node types will most likely be sufficient):
Root Node
Element Nodes
Text Nodes
Attribute Nodes
Comment Nodes
Processing Instruction Nodes
Namespace Nodes
How Does XPath Traverse the Tree?
XPath can use location paths, attribute location steps, and compound location paths to very quickly and efficiently retrieve nodes from our document. You can use simple location paths to quickly retrieve nodes you want to work with. There are two basic simple location paths - the root location path (/) and child element location paths.
The forward slash (/) serves as the root location path…it selects the root node of the document. It is important to realize this is not going to retrieve the root element, but the entire document itself. The root location path is an absolute location path…no matter what the context node is, the root location path will always refer to the root node.
A child element location step is simply a single element name. For example, the XPath p refers to all p children of our context node.
One of the really handy things with XPath is we have quick access to all attributes as well by using the at sign ‘@’ followed by the attribute name we want to retrieve. So we can quickly retrieve all title attributes by using @title.
Using XPath in Javascript
That’s all well and fine, but how do we use this in Javascript? Right now, Opera, Firefox and Safari 3 all support the XPath specification (at least to some extent) and allow us to use the document.evaluate() method. Unfortunately at this time, IE offers no support for XPath expressions. (Let’s hope that changes in IE8)
The document.evaluate method looks like this:
var theResult = document.evaluate(expression, contextNode, namespaceResolver, resultType, result);
The expression argument is simply a string containing the XPath expression we want evaluated. The contextNode is the node we want the expression evaluated against. The namespaceResolver can safely be set to null in most HTML applications. The resultType is a constant telling what type of result to return. Again, for most purposes, we can just use the XPathResult.ANY_TYPE constant which will return whatever the most natural result would be. Finally, the result argument is where we could pass in an existing XPathResult to use to store the results in. If we don’t have an XPathResult to pass in, we just set this value to null and a new XPathResult will be created.
Ok…all that talk and still no code. Let’s remedy that, shall we? Here’s a very simple XPath expression that will return all elements in our document with a title attribute.
var titles = document.evaluate("//*[@title]", document, null, XPathResult.ANY_TYPE, null);
If you take a look at the XPath expression we passed in, “//*[@title]”, you will notice that we used the attribute location step followed by the attribute we want to find, ‘title’. The two forward slashes preceding the at sign are how we tell the browser to select from all descendants of the root node (the document). The asterisk says to grab any nodes regardless of their type. Then we use the square brackets in combination with our attribute selector to limit our results only to nodes with a title attribute.
The evaluate method in this case returns an UNORDERED_NODE_ITERATOR_TYPE, which we can now move through by using the iterateNext() method like so:
var theTitle = titles.iterateNext();
while (theTitle){
alert(theTitle.textContent);
theTitle = titles.iterateNext();
}
Since each item in the results is a node, we need to reference the text inside of it by using the textContent property (line 3). You can only iterate through the results once, so if you want to use them later, you could save each node off into an array with something like below:
var arrTitles = [];
var theTitle = titles.iterateNext();
while (theTitle){
arrTitles.push(theTitle.textContent);
theTitle = titles.iterateNext();
}
Now arrTitles is filled with your results and you can use them however often you wish.
This is just the beginning…as we continue to look at XPath expressions and introduce predicates and XPath functions, you will start to see just how truly robust XPath expressions are. At this point, IE doesn’t support using XPath expressions in Javascript, but with each of the other major browsers having some support, and major Javascript Libraries placing an emphasis on using them, it’s only a matter of time before we can begin using these expressions to create more efficient code.
]]>Media types can be extremely useful. For example, there is very little reason to display a site’s navigation on a print-out. Using the print media type, we can then set up a style that hides our navigation section. Handheld devices which have very small screens and often low-bandwidth, may benefit from not displaying a bunch of images.
CSS 2 offered us 10 media types as a way to designate which styles are applied depending on the device that accesses our site:
All - all devices (this is default)
Aural - speech synthesizers
Braille - Braille tactile feedback devices
Embossed - paged Braille printers
Handheld - handheld devices (usually small screen, low bandwidth, possibly monochrome)
Print - printing or print preview
Projection - projected presentations (projectors, printing on transparencies)
Screen - computer screen
Tty - media using a fixed-pitch character grid (terminals or teletypes)
Tv - television devices
If no media type is declared, the default is “all”. Using these media types, we can tell devices to only use certain sets of styles. There are three basic ways of doing this:
Using Inline Styles
<style type="text/css">
@media print{
body{ background-color:#FFFFFF; }
#heading{ font-size:28px; }
}
</style>
Inline style sheets are not a very good solution, as they do not separate content and presentation.
Imported Stylesheets
<style type="text/css" media="print">
@import "print.css";
</style>
Imported style sheets are a much better solution, and are fairly widely used. A distinct advantage of imported style sheets is that a style sheet is only downloaded if that specific media type is being used. For example, if I defined the above styles to be associated with the handheld media type and someone using a regular computer came to my site, they wouldn’t have to download the styles.
Linked Stylesheets
<link rel="stylesheet" type="text/css" media="print" href="print.css" />
This is the most widely supported. As you may have guessed, a user will download each stylesheet regardless of the media type, and then use the appropriate ones. A bit unfortunate, as it wastes a little bit of time downloading styles we’re not really going to use.
It is important to note that some styles only have meaning within a certain media type, and others are not applicable to certain media types. For example, the aural media type has no use for the font-size style while the page-break-before style is really only useful in the media types like projection, printing, and tv.
Unfortunately, the support for most media types is quite minimal. You can pretty much depend on all, screen, and print. However, at this point, only Opera supports the projection media type, and the handheld media type isn’t widely supported yet on handheld devices. Feel free to use them anyway, as even if the user agent doesn’t recognize the media type named, it will just ignore it.
Media Types on Steroids: Media Queries
Media types will eventually become even more useful. CSS3 will implement media queries, which will allow us to check for certain criteria. For example, with media queries we can do something like the following:
<link rel="stylesheet" type="text/css" media="screen and (color)" href="print.css" />
What we are telling the user agent is to only use those styles if the device uses a screen media type AND the device is a color device, not monochromatic. The parentheses are required around the expression to indicate that it is a query. Media queries will allow us to check for items like width, height, max-width, max-height, min-width, min-height, color, resolution, etc.
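As a hypothetical illustration (the stylesheet name here is made up), a width-based query could hand a separate set of styles to narrow screens:

<link rel="stylesheet" type="text/css" media="screen and (max-width: 480px)" href="narrow.css" />

A device whose screen is wider than 480 pixels would simply not apply those styles.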
Opera already has some limited support for media queries. You can check for height and width values using the pixel measurement in Opera. Hopefully other browsers won’t be too far behind. Actually, to try and push the concept forward a bit, media queries are one of the criteria being built into the new Acid 3 browser test.
You can check out a more detailed look at media queries by looking at the W3C candidate recommendation on the subject.
]]>I am a huge fan of basketball, and find the history of the game particularly enjoyable. One of the basketball figures from the past that I have always admired the most was John Wooden, who coached the UCLA basketball team to 10 NCAA national titles, including 7 in a row at one point. He had four 30-0 seasons, and at one point his team won 88 consecutive games. Point being…the man was quite good at his job.
Each year, Wooden started out his season by having all of his players come into the locker room for his first lesson. He’d sit them all down, then pull out a pair of socks and slowly demonstrate the proper way to put them on. He’d roll the socks over the toes, then the ball of the foot, arch and heel, and then pull the sock up tight. He would then have the players slowly run their fingers over the socks to make sure there were no wrinkles. Seems kind of trivial right?
However think about it for a second…if he put that much attention into ensuring that such a small task was carried out so precisely, wouldn’t it follow that each task his team performed would be given the same kind of thought and attention to detail?
It’s that way with programming and design as well. If we think details like semantic names, using progressive enhancement, and consistently formatting our code are important, won’t we also be concerned with much bigger details like making sure our code is efficient, our program is easy to use, and our design is effectively portraying the message we are trying to send?
And what if we do decide that some of these “trivial” details are not important enough to worry about? Where do we then draw the line between what matters and what we can just kind of ignore? If it’s ok to not use meaningful names for our variables, is it also ok if our code takes a few more seconds to load, or if one of our scripts is not unobtrusive? When does something become important enough to matter?
It may seem somewhat trivial to make sure that all our identifiers in CSS are meaningful names, and that in our programs we always format our functions the same way. However, if we put that kind of attention into all the little things that go into programming and design, just imagine the high quality finished product we will have. It is that attention to detail that separates the good programs from the great, a good looking design from a “wow” design.
That’s why we can never sit still. We need to always push ourselves to find better solutions…more efficient code, more effective design. Just because something works doesn’t mean it works well. Only by taking time to pay close attention to the “minor” details that go into our development process can we be sure that our final, finished product will be one of high quality and durability.
]]>Most of the time, stacking order just kind of works behind the scenes and we don’t really pay any attention to it. However, once we use relative or absolute positioning to move an object around the screen, we will end up with several elements occupying the same space. Which element is displayed on top is determined by the elements’ stacking order. We can adjust an element’s stacking order by using the z-index property.
The z-index is so named because it affects an element’s position along the z-axis. The z-axis is the axis that runs from front to back relative to the user. If we think of the x-axis and y-axis as width and height, then the z-axis would be the depth. The higher the z-index of an element, the closer it appears to the user, and the lower the z-index, the further back on the screen it appears.
If we do not specify any z-index values, the default stacking order from closest to the user to furthest back is as follows:
- Positioned elements, in order of appearance in source code
- Inline elements
- Non-positioned floating elements, in order of appearance in source code
- All non-positioned, non-floating, block elements in order of source code
- Root element backgrounds and borders
Based on the default stacking order above, you can see that any element that has been positioned, whether relative or absolute, will be placed above any element that is not positioned. Both positioned and non-positioned elements are of course, above the background of our root element.
Mixing Things Up A Bit
Now let’s say we want to move some of our elements around in the stacking order so different elements appear on top. We can use the z-index property on any positioned elements to adjust their stacking order. The z-index property can accept an integer, the auto value, or an inherit value. When using integers, the higher the positive number, the further up in the stacking order it will appear. You can use negative z-index values to move the element further down in the stacking order. If we do not use a z-index value on an element, it will render at the rendering layer of 0 and will not be moved. The stacking order now looks like this:
- Positioned elements with z-index of greater than 0, first in order of z-index from lowest to highest, then in order of appearance in source code
- Positioned elements, in order of appearance in source code
- Inline elements
- Non-positioned floating elements, in order of appearance in source code
- All non-positioned, non-floating, block elements in order of source code
- Positioned elements with z-index of less than 0, first in order of z-index from highest to lowest, then in order of appearance in source code.
- Root element backgrounds and borders
Stacking Context
An interesting thing happens though when we give a positioned element a z-index value (anything other than auto): we establish a new stacking context. Let’s say we set #front to have a z-index of 5. Now, we have just established a new stacking context for any element descending from (contained in) #front. If #middle is contained within #front, and I set its z-index to 2, it should still appear above #front. Why? Because since we set a z-index value on #front, every descendant of #front is now being stacked in relation to #front. It may be helpful to look at this as a multiple number system (as demonstrated by Eric Meyer in CSS: The Definitive Guide):
#front 5.0
#middle 5.2
Since #front is the ancestor that sets the stacking context, its relative stacking level can be thought of as 0. Now when we set the z-index for #middle, we are merely setting its local stacking value. Of course 2 is higher than 0, and therefore even though in our CSS it looks like #middle should be displayed behind #front, we can see that actually it should be displayed on top.
For an example, consider the following code:
<div id="one">
<div id="two"></div>
</div>
<div id="three"></div>
Now, using CSS we position these elements so that there is some overlap:
#one{
position: absolute;
left: 0px;
top: 20px;
z-index: 10;
}
#two{
position: absolute;
left: 50px;
top: 30px;
z-index: 15;
}
#three{
position: absolute;
left: 100px;
top: 30px;
z-index: 12;
}

The result is that #two shows up below #three, even though the z-index value we gave it (line 11) is higher than the z-index value we gave #three (line 17). This is because #two is a descendant of #one, which established a new stacking context. Which means if we use our numbering system, we would get the following stacking order:
#three 12
#two 10.15
#one 10.0
Firefox Gets It Wrong
Ok…that felt weird to say. We are all used to Firefox getting most CSS things right, but this is one area it gets wrong. According to CSS 2.1, no element can be stacked below the background of the stacking context (the root element for that particular context). What this means is that if we adjust the CSS above to give our #two element a negative z-index, the content of #one should overlap the content of #two, but the background color should not. The way IE renders this is correct. Both results are shown below:

You can see that in IE, while the content of #one is still set above the content of #two, the background color remains behind it, as specified in CSS 2.1. Firefox on the other hand, shoves the entire #two element, background color and all, behind #one. Until this is fixed, be careful about using negative numbers for the z-index of an element.
Go Forth and Experiment
Definitely take this and play around with it. This is a topic that is best understood by setting up some positioned and non-positioned elements and experimenting with different z-index values. If you are feeling bold, check out the W3C’s really detailed breakdown of the stacking order of not just elements, but their background colors, background images, and borders. As with most topics in CSS, there is more here to understand than we first realize.
]]>Unfortunately, this is often a total mess of a job. The code we have to work with is often quite long, poorly documented, looks like ancient Greek, and leaves us angrily spewing silent (perhaps in some cases not so silent) insults at whoever the poor person was who created this mess. Not only does this leave us frustrated, but it also can frustrate our employers, as projects that should’ve been easily taken care of now require much more time and effort.
Here then, are a few practices you can start using now to ensure that the next guy working on your application isn’t hoping for your demise.
Start Commenting
This is one practice that should be ingrained in your head from early on in your development career. In addition to making the code easier for you to navigate in a month or two, effective commenting can also make it much easier for a new developer working with your code to understand what is going on. Any section of code that may require more explanation (functions in particular) should have a comment explaining what is going on there.
It can also be useful in some cases to explain why a particular solution was used instead of another one. If, when developing, you found that one solution resulted in better performance than another, comment about it. A person just trying to understand your code may not realize that there is a performance benefit to your approach, and may ditch it in favor of something he/she is more familiar with.
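As a small, hypothetical illustration (the list-building scenario and variable names here are invented), a comment like this tells the next developer why the code looks the way it does:

// Build the markup as one string and assign innerHTML once at the end;
// appending nodes inside the loop was noticeably slower in our testing.
var html = "";
for (var i = 0; i < items.length; i++){
    html += "<li>" + items[i] + "</li>";
}
listElement.innerHTML = html;

Without that comment, someone may well “clean up” the loop into repeated appendChild() calls and quietly undo the performance gain.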
Use Descriptive Names
Few things are more frustrating to someone trying to work with your code than coming across blocks like this:
var j = 0;
var a = getData();
for (var i=0; i < a.length; i++) {
var x = a[i].getName();
if (x === 'John') {
j++;
}
}
Granted, a simple for loop like the one above is not too difficult to follow, but what exactly are the variables a, j, and x? This may seem to save you some typing initially, but coming back to this in a few months will drive you crazy. Variable names should make some sense.
var counter = 0;
var employees = getData();
for (var i=0; i < employees.length; i++){
var firstName = employees[i].getName();
if (firstName === 'John'){
counter++;
}
}
Just by using better variable names, we have made the code much easier to understand. Even to someone completely unfamiliar with your application, it is easy to tell that we are looping through a bunch of employees, and counting how many of them have a first name of ‘John’. Not so easy to tell in our first example.
Be Consistent
This goes for naming conventions as well as formatting. Come up with a set way of naming variables and stick to it. Don’t have my_variable on one line, and then otherVariable on the next. If you are going to use underscores, stick to underscores. If you want to use camel casing, then use camel casing on each of your variables. It makes it much easier to tell at a glance what values are variables in your code.
When you are declaring functions, decide how you want to display them. Some people like to use the following format:
function getName()
{
...
}
Others will stick the opening bracket of a function on the same line as the initial declaration.
function getName(){
...
}
It doesn’t really matter which method you use, just so long as you continue using it throughout your code.
Utilize Common Design Patterns
Design patterns are documented solutions to specific programming problems, and they allow developers to avoid solving the same problem again and again. They provide us with a way of quickly communicating the method used to resolve a problem. Common design patterns, like the factory pattern or the singleton pattern, have the added benefit of being used by programmers of various different languages. Now anyone who recognizes the pattern used can tell right away what is being done; it’s just a matter of figuring out the exact syntax of the specific language being used.
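For instance, here is a rough sketch of the singleton pattern in JavaScript (the AppConfig name and its properties are made up purely for illustration); the point is simply that every caller gets the same single object:

var AppConfig = (function(){
    var instance; // the one shared instance

    function createInstance(){
        return { theme: "default", debug: false };
    }

    return {
        getInstance: function(){
            if (!instance){
                instance = createInstance();
            }
            return instance;
        }
    };
})();

var configA = AppConfig.getInstance();
var configB = AppConfig.getInstance();
// configA and configB reference the exact same object

Anyone who has seen the singleton pattern before will recognize the intent immediately, regardless of which language they normally work in.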
Be careful with this one though. Don’t just use a design pattern to be using design patterns. If you make sure your code can benefit from the use of a design pattern, then go ahead and implement one. Otherwise, you will just end up with over-engineered code that is more complicated than it may need to be.
Make it Flexible
Make sure your methods are flexible and can be used in a variety of different ways. You never know the different uses that may be required of your application in the future. Make sure your methods are built in a way that the data they return can then be used in various solutions. For example, let’s look at some very simple Javascript that involves getting an employee’s name and outputting it to a div.
function getName(employee){
var myDiv = document.getElementById("divName");
var employeeName = employee.name;
myDiv.innerHTML = employeeName;
}
This works perfectly fine for our solution. What happens though if in 3 months, we decide that we actually want to use the name in an alert instead? Now we have to go back, find our getName function and rework it. Instead, if we make the getName function more flexible, we can allow future developers to use it however they choose.
function getName(employee){
return employee.name;
}
Separate the retrieval of information from the usage of it. It makes the code more flexible, and much easier to adjust in the future.
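For example (the someEmployee variable here is just a placeholder), the same function can now feed either the original div or the new alert without any rework:

var employeeName = getName(someEmployee);
document.getElementById("divName").innerHTML = employeeName; // the original behavior
alert(employeeName); // the new requirement, with no changes to getName needed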
These are just five simple techniques you can use to ensure that your code is easier both to understand and to adapt for the next guy who comes along and has to modify it. It also has the added benefit of making your life a little easier when several months down the road, your boss tells you to change some of the functionality. It is now easy to both understand what the code is doing, and how to make it do what you want.
]]>What Is It?
Version targeting, as proposed by Microsoft, will use an X-UA-Compatible declaration, either via a META tag or as an HTTP header on the server, to determine which rendering engine the page will be displayed with. For example, the META tag below will tell IE to use the IE8 rendering engine to display the page:
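<meta http-equiv="X-UA-Compatible" content="IE=8" />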
If IE8 comes across a site that doesn’t have this declaration either in a META tag or as an HTTP header, then it will render the page using IE7’s rendering engine. This idea is not entirely new. DOCTYPE declarations have been switching IE browsers from ‘quirks mode’ to Web Standards mode since, I believe, IE6. There were some limitations with this. While using a DOCTYPE ensured standards mode, there is a definite difference between what standards mode is in IE6 versus in IE7.
The X-UA-Compatible declaration is meant to be more robust. Here, we can tell the browser exactly which version of IE to render the page in, thereby alleviating us from the headaches that may be caused by a different rendering engine in IE8 than in IE7 for example. We can also use the ‘edge’ keyword (which is apparently not recommended) instead of declaring a specific version. The ‘edge’ keyword is used below:
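<meta http-equiv="X-UA-Compatible" content="IE=edge" />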
By using the ‘edge’ keyword, we are telling IE to always use the most current rendering engine available. This basically gives us the option of ignoring IE’s new feature. However, this seems like a flawed idea, because as Jeremy Keith said “…even if you want to opt out, you have to opt in.”
Some Problems
I agree with Keith in thinking that the idea was implemented wrong. The X-UA-Compatible declaration should be a tool to use, not a required feature. If I want my site rendered in the newest version of IE, I shouldn’t have to tell it that. It should assume that unless I tell it otherwise, I want my site rendered with the most current rendering engine, not the other way around. I guess I understand how this makes sense from a business perspective: this way, everything works at least as well as before. However, for a community that puts so much emphasis on progressive enhancement, this doesn’t seem to fit the mold.
I am also not so sure that this is any better than using conditional comments. If I can develop for standards supporting browsers and then use conditional comments to “fix” the other ones, then what benefit do I really get from using the X-UA-Compatible declaration? Also, what happens years down the road, after IE9 and IE10 are released? If I am one of those people still using IE8 at that time, and I come across a site that declares it should render in IE10, how will IE8 handle that? I would like to assume it would just render it using the highest version it knows (IE8 can only render IE8 or lower, so an IE9 declaration results in IE8). Of course that just brings us back to using hacks again to ensure the older browsers still show our site reasonably well, and then we’re back at the beginning.
I would also be interested to see if this is going to result in substantial code bloat for IE. If IE10 is potentially supporting four different rendering engines (quirks mode, standards mode in IE7, IE8, IE9) how is this going to affect the size of the browser code? I could see this potentially resulting in a pretty hefty amount of disk space being required in the future as more and more engines are being supported.
Not All Bad
The idea is not totally off base. It offers a nice feature: we don’t have to scramble to make sure our sites don’t break in the newest version of IE. I just think that it should be an optional feature…I either use the declaration and therefore ensure that my code will be rendered as always, or I don’t use it and allow progressive enhancement to work its magic.
I still say kudos to IE for trying a new idea out. If nothing else, this has gotten the community discussing the advantages and disadvantages of Microsoft’s proposed solution, as well as talking about other routes we could take. Even after looking at it in more detail though, I just don’t think this is going to help solve much of anything. I don’t know that there is that big of an advantage offered by it, and I just don’t think that other browser vendors will think it is worth their time. Who knows though? Maybe in five years, people will be looking at this post and remarking about how short-sighted I was. I guess time will tell.
Don’t Take My Word For It
This is a very opinionated topic that has generated some great discussion already across the web. I encourage you to check out some of the varying opinions and arguments presented in the posts below:
]]>One of those nice features we can add is to display a link’s href directly after the link on our print-outs. This will allow someone who has printed out the page to still know where the links on the page are pointing. We can do this with CSS by using the :after pseudo-class and some generated content.
a[href]:after{
content: " [" attr(href) "] ";
}
There are really four important parts of the statement above:
a[href]: Here, we use an attribute selector to select all links in our page with an href attribute.
:after: The :after pseudo-class allows us to insert some content after the links and style it if necessary.
content: This is what actually generates the content. We could just insert, for example, the letter “a” with a style call like content: “a”.
attr(href): This gets the href attribute of the link currently being styled. This way, each link will display its own href.
If we put this style in our print stylesheet, all of our links that actually have an href will print out like this: TimKadlec.com [https://www.timkadlec.com]
Obviously, this is a pretty handy enhancement to our print stylesheets. Now, the links printed out actually have some meaning to them. The problem is, Internet Explorer doesn’t support the :after pseudo-class, nor does it support the content style. So if a user is using Internet Explorer and tries to print our page out, they still won’t see any hrefs displayed.
Javascript to the Rescue
We can use a little bit of browser-specific Javascript to fix this problem. Internet Explorer (version 5.0 and up) has a little-known proprietary event called onbeforeprint. Just like it sounds, this event fires right before printing a page or viewing a print preview of the page. Since IE is the only major browser that can’t create the effect using CSS, a proprietary event is the perfect fix. Now, we can draw up a simple function like so:
window.onbeforeprint = function(){
var links = document.getElementsByTagName("a");
for (var i=0; i< links.length; i++){
var theContent = links[i].getAttribute("href");
if (theContent){
links[i].newContent = " [" + theContent + "] ";
links[i].innerHTML = links[i].innerHTML + links[i].newContent;
}
}
}
Our function simply gets all the links on a page, and appends their respective hrefs immediately after them, creating the same effect that we were able to achieve using CSS in other browsers. You might be wondering why we set the new content we created as a property of each link. That’s because right after printing or canceling out of the print preview screen, we would otherwise still be seeing the href on our actual web page. We obviously don’t want this, and it’s simple enough to get rid of with another IE proprietary event, onafterprint.
window.onafterprint = function(){
var links = document.getElementsByTagName("a");
for (var i=0; i< links.length; i++){
var theContent = links[i].innerHTML;
if (links[i].newContent){
var theBracket = theContent.indexOf(links[i].newContent);
var newContent = theContent.substring(0, theBracket);
links[i].innerHTML = newContent;
}
}
}
Here again, we just loop through all the links, find the position of the new content we added, and remove it from the link. This returns the appearance of our site to the original view before printing.
Obviously, it would be ideal if we could simply use CSS to manage this. However, as we’ve seen, there is no need to wait for IE to support this feature before we implement it. Some proprietary Javascript events allow us to replicate the effect until it is supported later on.
The script/css effect has been tested in IE7, Opera, Firefox, and Safari. If you are interested, the complete Javascript to create the effect in IE is here: printlinks.js
]]>A very common check to perform is whether a browser supports the getElementById() method, like so:
if (!document.getElementById) return;
var myContainer = document.getElementById('container');
That is just a very simple verification. We check to see if the browser recognizes the getElementById() method. If it doesn’t, we quit what we are doing and don’t go any further. If it does, we continue on with our code. It can be quite annoying to have to type out document.getElementById each time you have to use it, so let’s create a shorter helper function.
var id = function(attr){
if (!document.getElementById) return undefined;
return document.getElementById(attr);
}
var myContainer = id('container');
Above, we create an id function that checks to see if the browser supports the getElementById() method, and if it does, it returns the value for us. There are two major benefits here. First, our function does the check for us to ensure the method is supported. Secondly, it’s less typing; instead of having to type document.getElementById() each time we want to get an element, we can just type id().
However, let’s say that we have a pretty intensive script here and we have to use the id method, let’s say, 20 times. That means that 20 times over the course of our script, we are running a check to see if the browser supports the method, when we already know the answer after the first time we ran the check. Obviously, that isn’t ideal.
Using branching, we can make the check once on runtime, and then return a function that doesn’t require checking anymore.
var id = function(){
if (!document.getElementById){
return function(){
return undefined;
};
} else {
return function(attr){
return document.getElementById(attr);
};
}
}();
The key here is the parentheses after our function declaration (line 11). This makes the function run right away as soon as the browser sees it.
So while loading the page, the browser comes across this function and runs it. If the getElementById method is supported, it assigns a function that returns the element to the id variable. If the browser does not support the getElementById method, than it assigns a function that returns an undefined value to the id variable.
It may help to look at it this way. By using branching in our function above, we have essentially applied one of two functions to our id variable:
// if getElementById is not supported
id = function(){
return undefined;
}
// if getElementById is supported
id = function(attr){
return document.getElementById(attr);
}
//Example usage
var myContainer = id('container');
So now, when we are getting the element using the id function, it doesn’t run the check to see if it is supported, because it doesn’t need to. If we use our id function 20 times, the browser support check is only performed once: initially as the script is being loaded.
It is important to note that branching is not always going to provide a performance increase. Using branching results in higher memory usage because we are creating multiple objects. So whenever you consider using branching, you need to be able to compare the benefits you will get from not having to run the comparison over and over versus the higher memory usage that branching requires. However, when used properly, branching can be a very handy tool for optimizing your Javascript performance.
]]>While he admits in his comments that he was mainly venting against, as he calls them, “weak build-it-yourselfers”, I thought it brought up an unfortunate opinion that at times seems to rise up in this industry. Now please note, I am not trying to pick on Guy Davis at all. There are some very valid points raised in his post, and as I stated before, he does admit he was mainly venting. I am sure all of us have vented about such things before.
Developers sometimes give a cold shoulder to developers whose code is not up to par. This is an unfair judgment though. Particularly in an industry as fast moving as the web development industry is, you cannot afford to wait until you are an “expert” to code. You have to code using the best ways you know how, and continue to learn. This means that there are undoubtedly some projects you wish you could pretend you never touched. I know I have them.
I think that a better classification is to broaden it out a bit and say there are two kinds of developers. The first kind are those that are content just getting the job done. They are not particularly concerned with how it is taken care of, so long as it functions relatively well. If they have to use inline Javascript to create an effect, then so be it. They don’t push themselves to learn more and find better ways of doing things; they just accept that what they do works well enough and never see a need to progress further.
The second type of developer, however, is never satisfied with where they are at. Yes, they will use the knowledge they have to get the job done, which results in some unseemly coding at times. But they know that they have more to learn, and are constantly pushing themselves to find a better method of doing it. These are the types of developers that push the industry forward. They will take their bumps and bruises along the way, but they will continue to further their understanding and knowledge of whatever skill it is they are using.
This doesn’t mean that they always develop the solution themselves, or never use libraries or frameworks. Sometimes the best solutions are libraries. It simply means that this kind of developer is always wondering: is there a better way to accomplish this?
Therefore, I say if you are new to a language, be it CSS, Javascript, PHP, or whatever else, don’t be ashamed of the code you produce. As long as you are trying to learn more and come up with more fool-proof and efficient ways of developing, there is nothing to be embarrassed about.
If you asked some of the biggest names in web development today if they had ever done a project using coding methods they aren’t exactly proud of, I am sure the answer would be “yes”. If everyone thought that someone smarter out there would develop a better solution, the industry would become quite stagnant.
]]>Every rule in CSS has a specificity value that is calculated by the user agent (the web browser for most web development purposes) and assigned to the declaration. The user agent uses this value to determine which styles should be applied to an element when more than one rule matches that particular element.
This is a basic concept most of us have at least a general understanding of. For example, most developers can tell you that the second declaration below carries more weight than the first:
h1{color: blue;}
h1#title{color: red;}
If both styles are defined in the same stylesheet, any h1 with an id of ‘title’ will of course be red. But just how is this determined?
Calculating Specificity
Specificity in CSS is determined by using four number parts. Each type of value in the selector receives a specificity rating:
Each id attribute value is assigned a specificity of 0,1,0,0.
Each class, attribute, or pseudo-class value is assigned a specificity of 0,0,1,0.
Each element or pseudo-element is assigned a specificity of 0,0,0,1.
Universal selectors are assigned a specificity of 0,0,0,0 and therefore add nothing to the specificity value of a rule.
Combinator selectors have no specificity. You will see how this differs from having a zero specificity later.
So going back to our previous example, the first rule has one element value, so its specificity is 0,0,0,1. The second rule has one element value and an id attribute, so its specificity is 0,1,0,1. Looking at their respective specificity values, it becomes quite clear why the second rule carries more weight.
Just so we are clear on how specificity is calculated, here are some more examples, listed in order of increasing specificity:
h1{color: blue;} //0,0,0,1
body h1{color: silver;} //0,0,0,2
h1.title{color: purple;} //0,0,1,1
h1#title{color: pink;} //0,1,0,1
#wrap h1 em{color: red;} //0,1,0,2
You should also note that the numbers go from left to right in order of importance. So a specificity of 0,0,1,0 wins over a specificity of 0,0,0,13.
At this point, you may be wondering where the fourth value comes into play. Actually, prior to CSS 2.1, there was no fourth value. However, now the value furthest to the left is reserved for inline styles, which carry a specificity of 1,0,0,0. So, obviously, inline styles carry more weight than styles defined elsewhere.
It’s Important
This can be changed, however, by the !important declaration. Important declarations always win out over standard declarations. In fact, they are considered separately from your standard declarations. To use the !important declaration, you simply insert !important directly in front of the semicolon. For example:
h1.title{color:purple !important;}
Now any h1 with a class of ‘title’ will be purple, regardless of what any inline styles may say.
No Specificity
As promised, I said I would explain the difference between no specificity and zero specificity. To see the difference, you need a basic understanding of inheritance in CSS. CSS allows us to define styles on an element, and have that style be picked up by the element’s descendants. For example:
h1.title{color: purple;}
<h1 class="title">This is <em>purple</em></h1>
The em element above is a descendant of the h1 element, so it inherits the purple font color. Inherited values have no specificity, not even a zero specificity. That means that a zero specificity would overrule an inherited property:
*{color: gray} //0,0,0,0
h1.title{color: purple;}
<h1 class="title">This is <em>purple</em></h1>
The em element inherits the purple font color as it is a descendant of h1. But remember, inherited values have no specificity. So even though our universal declaration has a specificity of 0,0,0,0, it will still overrule the inherited property. The result is the text inside of the em element is gray, and the rest of the text is purple.
Hopefully this introduction to specificity will help make your development process go smoother. It is not a new concept, or a terribly difficult one to learn, but understanding it can be very helpful.
]]>Prototypes allow you to easily define methods for all instances of a particular object. The beauty is that the method is applied to the prototype, so it is only stored in memory once, but every instance of the object has access to it. Let’s use the Pet object that we created in the previous post. In case you don’t remember it or didn’t read the article (please do), here is the object again:
function Pet(name, species){
this.name = name;
this.species = species;
}
function view(){
return this.name + " is a " + this.species + "!";
}
Pet.prototype.view = view;
var pet1 = new Pet('Gabriella', 'Dog');
alert(pet1.view()); //Outputs "Gabriella is a Dog!"
As you can see, by simply using prototype when we attached the view method, we have ensured that all Pet objects have access to the view method. You can use the prototype property to create much more robust effects. For example, let’s say we want to have a Dog object. The Dog object should inherit each of the methods and properties utilized in the Pet object, and we also want a special function that only our Dog objects have access to. Prototype makes this possible.
function Pet(name, species){
this.name = name;
this.species = species;
}
function view(){
return this.name + " is a " + this.species + "!";
}
Pet.prototype.view = view;
function Dog(name){
Pet.call(this, name, "dog");
}
Dog.prototype = new Pet();
Dog.prototype.bark = function(){
alert("Woof!");
}
We set up the Dog object, and have it call the Pet function using the call() method. The call method allows us to call a specific target function within an object by passing in the object we want to run the function on (referenced by ‘this’ on line 10) followed by the arguments. Theoretically, we don’t need to do this. We could just create a ‘name’ and ‘species’ property inside of the Dog object instead of calling the Pet function. Our Dog object would still inherit from the Pet object because of line 12. However that would be a little redundant. Why recreate these properties when we already have access to identical properties inside of the Pet object?
Moving on, we then give Dog a custom method called bark that only Dog objects have access to. Keeping this in mind consider the following:
var pet1 = new Pet('Trudy', 'Bird');
var pet2 = new Dog('Gabriella');
alert(pet2.view()); // Outputs "Gabriella is a Dog!"
pet2.bark(); // Outputs "Woof!"
pet1.bark(); // Error
As you can see, the Dog object has inherited the view method from the Pet object, and it has a custom bark method that only Dog objects have access to. Since pet1 is just a Pet, not a Dog, it doesn’t have a bark method and when we try to call it we get an error.
It is important to understand that prototype follows a chain. When we called pet2.view(), it first checked the Dog object (since that is the type of object pet2 is) to see if the Dog object has a view method. In this case it doesn’t, so it moves up a step. Dog inherits from Pet, so it next checks to see if the Pet object has a view method. It does, so that is what runs. The bottommost layer of inheritance is actually Object.prototype itself. Every object inherits from that. So, in theory we could do this:
Object.prototype.whoAmI = function(){
alert("I am an object!");
}
pet1.whoAmI(); //Outputs 'I am an object!'
pet2.whoAmI(); //Outputs 'I am an object!'
Since all objects inherit from the Object.prototype, pet1 and pet2 both can run the whoAmI method. In short, prototype is an immensely powerful tool you can use in your coding. Once you understand how prototype inherits, and the chain of objects it inherits from, you can start to create some really advanced and powerful object combinations. Use the code examples used in this post to play around with and see the different ways you can use prototype to create more robust objects. With something like this, hands-on is definitely the best approach (at least I think so!).
]]>Custom classes make your code more reusable. If many of your applications use similar functionality, you can define a class to help facilitate that functionality. Now you can just use your new class in multiple projects to provide the common functionality. For example, let’s say you create a custom accordion effect. If you use classes to define the effect, you can provide the same effect on another page simply by utilizing the class you created.
Using classes helps to organize your code. If you are using classes, you will see that instead of just one really long piece of code, your code will become broken into smaller pieces of related methods and properties. This will make your coding easier to maintain and troubleshoot.
So what is this terrific-sounding little tool and how do we use it? A class is used to define a common type of object that will be used in a given application. For example, let’s say that we are creating an application to keep track of animals in a pet store. Each animal will have a name and a species. We could do the following:
var pet1 = new Object();
pet1.name = 'Gabriella';
pet1.species = 'Dog';
var pet2 = new Object();
pet2.name = 'Trudy';
pet2.species = 'Bird';
// and so on
As you can hopefully see, that is just going to get long and annoying very quickly. If I have twenty different pets then it takes 60 lines of code just to create the objects. There is also no good organization to this. We have no indication that pet1 and pet2 are actually the same type of object. A much better way is to declare a class.
function Pet(name, species){
this.name = name;
this.species = species;
}
var pet1 = new Pet('Gabriella', 'Dog');
var pet2 = new Pet('Trudy', 'Bird');
We have just created a custom Pet class. Each Pet object has two properties: a name and a species. Now we can tell at first glance that pet1 and pet2 are the same type of object, and our code instantly becomes more readable. It also takes only one line to declare an object, shortening the long code we would have had if we had created the objects each individually without a common class.
What About Methods?
We have seen how to set properties in classes, but we can also use these classes to define common methods to objects. We could do this by simply adding another line inside of our class declaration.
function Pet(name, species){
this.name = name;
this.species = species;
this.view = view;
}
function view(){
return this.name + " is a " + this.species + "!";
}
var pet1 = new Pet('Gabriella', 'Dog');
alert(pet1.view());
We just added a view method to any object that is a Pet. The call above would return “Gabriella is a Dog!”. There is one problem here though. If we have 20 pets, each pet is carrying a view function. That may not seem like much, but as this pet store grows, and we have more and more pet objects, each with the view function, we are going to start running into memory problems.
What we should be doing here instead, is use the prototype keyword. The prototype keyword allows us to have objects inherit the method from the class they are members of. The prototype keyword is a very powerful tool, and I will go into more detail on it in a later post, but for now some basic understanding should suffice. For example, take a look at the code below:
function Pet(name, species){
this.name = name;
this.species = species;
}
function view(){
return this.name + " is a " + this.species + "!";
}
Pet.prototype.view = view;
var pet1 = new Pet('Gabriella', 'Dog');
alert(pet1.view());
We have now dropped the view from the initial construction of our class, saving us some memory space. Now using the prototype keyword, we have set a view method to the Pet object. Since pet1 is a member of Pet, it has access to the function. Essentially, we have created the same effect as before, only now, the view function is only stored once, instead of once for each pet object declared.
As you can see, classes are a very valuable coding tool. They help to provide organization, and help to make our code more reusable. When used in conjunction with the prototype keyword, they can be extremely powerful and provide a lot of flexibility. This article really just touched the tip of the iceberg. There is so much you can do with this combination, and I highly recommend taking a deeper look. Once you start to use prototypes and classes in your applications, you will find them indispensable and wonder how you got along without them.
]]>Know the common bugs
Different browsers will handle CSS differently. This is something every CSS developer learns early on, sometimes painfully. Make sure when you come across a bug you force yourself to take a few minutes to look into it and gain an understanding of what is causing the problem. You will be surprised by how few fancy CSS hacks you will have to resort to if you know how to dodge the problems in the first place.
Check your work often
After every couple of rules you put into your stylesheet, you should be checking each browser you have access to so you can see what effect the rules had on the layout. The worst thing you can do, in my opinion, is to create your CSS entirely and then check it in each browser. Now you have to wade through all your CSS and try to find where the problem is coming from. However, if you are checking your work after every couple of rules, you will have a pretty good idea where the problem lies, and you will be able to fix it that much more quickly.
Know your resources
This may be the most important tip here. Like I said, with so many selectors, properties, bugs, etc. to try and memorize, you will undoubtedly have to turn for help on many occasions. It becomes important for you to know where you can find a solution, and where the solution will be explained in detail enough for you to understand it and be able to avoid it in the future. For example, when I run across a bug that I am not familiar with, the first place I turn to is Position Is Everything. They have wonderful write-ups on various bugs you will find in different browsers. If I just need to lookup a CSS property that I don’t use very often, then I turn to “CSS: The Definitive Guide”, by Eric Meyer. You need to know the places like this that you can turn to for answers.
Know how to troubleshoot
Knowing how to find the problem is half the battle. There are plenty of ways to go about doing this, so you just have to find the techniques that work for you. While I can say that I haven’t ever used diagnostic styling quite to the extent that Eric Meyer posted in his 24ways article, I am a huge fan of using bright colored borders on my block elements to help me locate problem areas. Commenting out blocks of code at a time can also help a lot when trying to find out which elements have the troublesome styles applied to them. And I cannot recommend the Web Developer Toolbar extension for Firefox highly enough. I am so attached to that thing and its many useful troubleshooting features now that it pains me to work on a computer without it.
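If you have never tried the border trick, it can be as simple as a throwaway rule or two like this (pick whichever elements and colors suit the layout you are debugging):

div { border: 2px solid red; }
ul, ol { border: 2px solid lime; }

Suddenly every box’s true edges are visible, and it becomes obvious which element is wider than you thought it was.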
Show patience and have a sense of humor
Don’t worry if it seems like it is taking forever to get to the point where you don’t have to look up every little bug. Patience, young Padawan. There are a lot of bugs out there, and it can take a while before you get to a point where you can recognize one right away.
No matter how much you know, how many books you’ve read, or how many designs you’ve developed, there will still come times where a problem comes up that stumps you for awhile. There is just too much information to digest for you to expect to never run into problems. That’s when you just need to grin and bear it. Keep plugging away and be willing to laugh at simple mistakes you may make along the way. If CSS wasn’t challenging at times, wouldn’t that take some of the fun out of it?
]]>For those of you who may be unaware, Acid 2 is a test page for web browser vendors set up by the Web Standards Project (WASP). The intention was for the Acid 2 test to be a tool for browser vendors to use to make sure their browsers could handle some features that we as web developers would love to use. It’s a pretty intense little test. If you’re curious, the WASP walks you through each of the items that Acid 2 tests for.
The timing for Microsoft couldn’t have been any better. This announcement comes right after Opera announced they were filing a complaint against Microsoft for their lack of standards compliance.
Now, just because a browser passes the test doesn’t guarantee it will be standards compliant, but this is most definitely a step in the right direction. Add to this the rumor going around that hasLayout will finally be dealt with in IE8, and I must say I am getting a little excited here. Of course, with the beta version not coming out until the first half of 2008, it will still be quite some time before IE8 takes over the market share currently held by other versions of the browser. Heck, IE7 still hasn’t passed IE6 as the dominant Microsoft browser.
Not to be outdone, Mozilla’s Firefox 3 Beta has also successfully passed the Acid 2 test, according to BetaNews. Looks like we may have a pretty intense battle for browser supremacy starting up in the new year.
]]>I can see their point, and in some situations, I agree. If you are on a tight deadline for a project, you often don’t have time to develop that functionality from scratch, and it therefore makes more sense to adapt the structure already developed by someone else.
I do feel, however, that web developers need to try to create an effect from scratch when they have the opportunity. There are a couple of reasons why I feel this is the case.
First off, building that layout in CSS, or that form validation script in JavaScript, from scratch forces you to analyze and learn the intricacies of the language you are dealing with. That knowledge will increase your understanding of both the concepts and the techniques involved in arriving at a solution. And as far as I know, more knowledge and understanding is never a bad thing.
The other main reason for creating something yourself is that you never know how another point of view may lead to a superior solution to a common problem. Challenge yourself to see if you can improve on the existing solution. I guess you could call this ‘modifying the wheel’. If you are going to try to develop a better solution, you should study the ones already out there: look at their strengths and weaknesses, and see how you can improve on the weaknesses without losing the strengths.
So overall, I say go ahead and reinvent the wheel. Challenge yourself to create a better solution, and in the process, increase your knowledge. Remember, the first wheels were stone slabs. I tend to think the wheels currently being manufactured for cars, bikes, and so on are a somewhat better solution.
]]>There are some simple styles that will stay consistent throughout the examples:
#wrap{
	border: 1px solid #000; /* the border shows how far #wrap extends */
}
.main{
	float: left;
	width: 70%;
}
.side{
	float: right;
	width: 25%;
}
The problem is that when you float an item, you are taking it out of the normal flow of the document, so other elements on the page act as if the floated element is not there. You can see this below (I am using white on black in my examples so they stand out more):
[Screenshot: #wrap’s border collapses to a thin strip above the two floated columns, as if .main and .side weren’t there]
As you can see, #wrap doesn’t see .main or .side because they are floated, so our border doesn’t extend down. There are numerous proposed solutions to this problem.
Extra Markup
One method that is tried and true is to add another element inside #wrap, after both of the floats. For example, you could use a div with a class of bottomfix. Then you set bottomfix to clear: both, and #wrap will extend to contain the floats and the bottomfix.
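As a quick sketch of how that looks (bottomfix is just the example class name from above):
/* The markup gains one extra, purely presentational element:
   <div id="wrap">
     <div class="main">...</div>
     <div class="side">...</div>
     <div class="bottomfix"></div>
   </div>
*/
.bottomfix{
	clear: both; /* forces #wrap to extend down past both floats */
}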
Obviously, if we are shooting for separation of presentation and content (as we should be), this is not an ideal situation. We now have an element in place simply to create a presentational effect.
Instead, let’s take a look at some ways of creating the same effect using only CSS. To do so, you need a basic understanding of how Internet Explorer handles floats.
IE Floats
So far it may seem that Internet Explorer (IE) handles floats the same as other browsers, but if we look a little closer, we see that is not the case. Internet Explorer has a proprietary property called hasLayout. For the purpose of this article, just understand that for an element to have “layout”, more often than not it will need either a width or a height. hasLayout can only be affected indirectly by your CSS; there is no hasLayout declaration you can set.
Why is this important to know? Because if an element’s hasLayout property is equal to true, that element will automatically contain its floated children. What this means is that to get IE to clear the floats, we really only have to add width: 100% to #wrap. Now #wrap’s hasLayout property is equal to true, and it will automatically extend to contain the two floated elements.
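In other words, the IE-specific piece is nothing more than this (building on the shared styles from earlier):
#wrap{
	border: 1px solid #000;
	width: 100%; /* gives #wrap "layout" in IE, so it contains the floats */
}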
It’s far from being ideal though. While #wrap will now extend properly, we have to be careful about our margins. Elements on the page may respect the containing element (#wrap), but they will not respect the floated elements.
To show this, let’s add another div with an id of next. We’ll give this div a 1 pixel pink border just so it stands out. Let’s also add a 10px bottom margin to the main element. The results in IE are shown below:
[Screenshot: in IE, #wrap now contains both floats, but the 10px bottom margin on .main is ignored and the pink-bordered #next sits directly below #wrap]
As you can see, by adding the width to #wrap, IE now allows #wrap to contain the floats. You can also see that our 10px margin had no effect. In fact, the top margin of our first paragraph and the bottom margin of our last paragraph are also ignored in IE. So, if you want some space here, you need to use padding, not margins. You can also set a margin on #wrap itself; since it is the containing element, its margins are still respected.
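For instance (a sketch, not part of the original demo), moving that 10px of space onto .main as bottom padding gets the gap back without IE ignoring it:
.main{
	float: left;
	width: 70%;
	padding-bottom: 10px; /* padding instead of a bottom margin, which IE ignores here */
}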
Moving On
So now we have cleared the floats in IE, and we understand why. What about the other browsers, though? Most will allow you to use the :after pseudo-element to add some content and have that content clear the floats.
#wrap:after{
content: ".";
display: block;
clear: both;
visibility: hidden;
height: 0;
}
What this does in the browsers that recognize it is add a period after the content of #wrap and have that period clear the floats. We then use the height and visibility properties to make sure the period doesn’t actually show up. Remember, IE still needs to have “layout” on #wrap because it doesn’t recognize the :after pseudo-element.
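Putting the two pieces together, a cross-browser version of #wrap ends up looking roughly like this (the same rules as above, just combined):
#wrap{
	border: 1px solid #000;
	width: 100%; /* "layout" for IE, which ignores :after */
}
#wrap:after{
	content: ".";
	display: block;
	clear: both; /* clears the floats in everything else */
	visibility: hidden;
	height: 0;
}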
One problem: IE/Mac doesn’t auto-contain the floats, and it doesn’t recognize the :after pseudo-element either. So we have to use some hacks to get IE/Mac and IE/Win to play nicely together. I won’t be getting into that here; you can find a really nice article about it at Position Is Everything.
An Easier Way
Thankfully, there is an easier way, one that has been credited to Paul O’Brien. To get most browsers to contain the floats, we simply need to add overflow: hidden to #wrap. Just make sure there is also a width on #wrap so it has “layout” in IE, and you are good to go. Our CSS ends up looking like this:
#wrap{
	border: 1px solid #000;
	width: 100%; /* gives IE "layout" so it contains the floats */
	overflow: hidden; /* makes the other browsers contain them too */
}
.main{
float: left;
width: 70%;
}
.side{
float: right;
width: 25%;
}
No, seriously, that’s it. #wrap will now fully contain both of the floats. Just keep in mind that if you want some space around either of the floated elements, you will want to use padding instead of margins, because otherwise IE will ignore it.
]]>Lucky for us, event delegation is not overly complex, and the jump from using event handlers to using event delegation can be made relatively easily.
Let’s start by creating a simple script using event handlers, and then recreate it using event delegation. What we want from our simple script is this: whenever a link inside a specified list is clicked, the link’s href gets alerted for us.
First, we will set up the markup. Nothing fancy to see here, just a list with an id of ‘links’, which will serve as our hook.
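Assume something as simple as this (the href values are just placeholders):
<!-- assumed example markup; only the id="links" hook matters -->
<ul id="links">
	<li><a href="https://example.com/one">Link one</a></li>
	<li><a href="https://example.com/two">Link two</a></li>
	<li><a href="https://example.com/three">Link three</a></li>
</ul>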
Now we can write a simple script that will go through and add an onclick event handler to each of the links in the list. (Note: for the purpose of simplicity, we will just have our functions below. In a real setting, you would want to do some scoping to protect your variables.)
function prepareAnchors(){
	if (!document.getElementById) return false;
	var theList = document.getElementById("links");
	var anchors = theList.getElementsByTagName("a");
	for (var i = 0; i < anchors.length; i++){
		anchors[i].onclick = function(){
			alert(this.getAttribute("href"));
			return false;
		};
	}
}
Again, like I said, nothing spectacular. We just grab all the links inside the ul, loop through them, and assign a function to each individual link’s onclick event. (Note: if at this point you are not able to follow the function above, you are probably not going to get anything useful out of this article. I would instead recommend DOM Scripting by Jeremy Keith.) Now let’s recreate the effect using event delegation.
function getTarget(x){
	x = x || window.event;
	return x.target || x.srcElement;
}
function prepareAnchors(){
	if (!document.getElementById) return false;
	var theList = document.getElementById("links");
	theList.onclick = function(e){
		var target = getTarget(e);
		if (target.nodeName.toLowerCase() === 'a'){
			alert(target.getAttribute("href"));
		}
		return false;
	};
}
This one probably requires a little more explanation. The getTarget() function simply gets the target of the event or, in Internet Explorer’s terms, the source element of the event.
In prepareAnchors() we get the ‘links’ list and assign an onclick event handler to the list as a whole. Now, when anything inside the list is clicked, we simply use getTarget() to find the element that was clicked. If the clicked element was a link, we alert its ‘href’; if not, we just ignore it.
What are the advantages of using event delegation? Well, for starters, by using one event handler instead of many, there is less memory being used to accomplish the same task. On a script this small, you won’t be able to tell a performance difference, but larger, more intensive apps will most certainly perform better. Also, by using event delegation, we ensure that our script works even if the DOM has been modified since page load. To see an example of how modifying the DOM can alter the performance of a script using event handlers, take a look at the excellent comparison done by Chris Heilmann.
]]>What can you expect? Well, there will be many conversations about what I have learned or come across in the world of web development. In particular, you should eventually see information on things like CSS, XHTML, JavaScript, and so on.
There will be some personal updates mixed in I am sure, but I think it is fair to say that the vast majority of posts will be informative and educational in nature.
This is a custom-built blog system, so if anything is quirky, or some feature you feel is seriously important is missing (other than pretty permalinks; they’ll be coming soon), please feel free to let me know.
So stay tuned and hopefully I will have something a little more interesting to read for you soon.
]]>