Progressive Enhancement: Still Not Dead.

There’s been a lot of focus on JavaScript these last couple of years. Sometimes, the best way of building a web property is described as just slapping a JS-driven app on top of a REST API, and I have some issues with that. More specifically, I think the benefits of progressive enhancement are still misunderstood, and progressive enhancement is Still Not Dead. Layering support across user agents, improving performance and staying robust when JS code breaks or gets blocked are all very good reasons to keep it around.

Some background

A couple of weeks ago, I came across this post by Pamela Fox, where she discusses the pros and cons of various ways to structure application code in terms of the division between front-end and back-end code for logic and rendering. This is not a criticism of that post (which was great, check out the accompanying slides), but rather a discussion of the topic in general. Seeing Nicholas Zakas’s excellent slides from his talk "Enough with the JavaScript already" being tweeted around yesterday prompted me to finish writing this (very long) post that I had started.

Moar architecture!

I think it is great that we are talking about front-end architecture more and more. Gone are the days of the Wild West, "just throw some jQuery/CSS hacks/user agent sniffing/!important/whatever at it ’til it sort of works."

I’m worried about the overarching nature of the discussion on JS-driven architectures, though: sometimes it feels like the question we’re asking is "what kind of JS-for-rendering + API-driven solution should we use?" rather than first asking, and answering, "what are the consequences of going the JS-for-rendering + API-driven route?"

There was a conference last year where the creators of some of the most well-known JS frameworks discussed where they agreed and disagreed on how rich applications on the web should work. I read about it on Steven Sanderson’s blog, and found that all of them were in agreement that "Progressive enhancement isn’t for building real apps".

So… What’s a "real app" then? Tough question.

This is something that Jeremy Keith has written about recently in "By any other name": the question of what is or isn’t an app is very hard to answer. He concludes that something happens as soon as we identify what we’re building as a web app:

Progressive enhancement? Accessibility? Semantic markup? “Oh, we’d love to do that, but this is a web app, you see… that just doesn’t apply to us.”

As soon as something is labeled a web app, getting even a simple readable (not to say usable) page without JS seems like a no-go these days.

JS always works everywhere, right?

There’s something fundamental and robust about being able to request a URL and get back at least an HTML representation of the resource: human-readable, accessible, fault tolerant. If the whole content of the page relies on JS to end up on the user’s device, there is so much more that can go wrong, even if you make sure to only put out pristine, unit-tested code. Just look at the Gawker example. By the way, using JS for the initial rendering of a page turns out to be a big performance bottleneck, too: see the examples from Twitter and Airbnb.

Just the other day, Google Analytics stopped working for me, and I got a blank screen. Sure, I know: it wasn’t their fault, it was AdBlock’s, as I found out after a good while of swearing. But how many users got blank pages that day? And the response wasn’t empty, mind you: it just didn’t contain anything the browser could render.

Like it or not, JS is built into the web as an optional technology. No, no, I know what you’re thinking: "the hugely popular X, Y and Z sites all depend on JS; it’s not feasible any more to support users with JS blocked or turned off". Yes, I know we often end up relying on JS in the products we build, but at a lower level of the architecture the web does not require JS to work. Interactive web applications are fully possible without it. Browsers are still browsing without it.

P.E. is not just for older browsers

I’ve seen posts denouncing progressive enhancement where it’s described as simply "making it work in older browsers". That’s not quite correct: it’s about designing in layers from the bottom up, and making sure your architecture is able to fall back on a simpler but working model when something fails in an enhanced layer. This goes for CSS, HTML and JS alike: in responsive web design, I’d say that using PE for all of these technologies is absolutely crucial. This post, however, focuses on the JS part.

Layering support

Maybe you discover that one particular browser/device has reported support for a feature you need in JS to run a specific part of your app, but it actually performs so abysmally in the "full" version that you’d rather serve a simplified version to those users. If you have a layered architecture with a functioning server-side HTTP+HTML solution at its core, you can just expand the feature test or (gasp!) browser sniff to give those users an acceptable way to perform their task, by falling back to the layer below. (Remember, we are not shutting anyone out in this scenario. User agent sniffing is still evil.)
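To make that concrete, here’s a minimal sketch of such a gate in plain JavaScript. The feature tests are real browser APIs, but bootFullApp and the sniffed device name are hypothetical placeholders:

```js
// Feature detection first, then a narrow (gasp!) sniff for a device
// class that passes the tests but is known to perform too poorly for
// the "full" version. bootFullApp is a hypothetical entry point.
var capable = 'querySelector' in document &&
              'addEventListener' in window &&
              'pushState' in window.history;
var knownSlow = /SomeSlowDevice/.test(navigator.userAgent); // illustrative pattern only

if (capable && !knownSlow) {
  bootFullApp(); // enhanced layer: JS rendering, JSON API calls
}
// No else branch needed: without the enhanced layer, every link and
// form falls through to the working server-side HTTP+HTML core.
```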

Explosions expand in all directions

It used to be a commonly held belief that user agents were only getting more powerful: bigger screens, faster processors, better internet connections. Now we know that’s not the case. There’s an ongoing explosion of different device sizes, capabilities and usage situations: cheap big Android tablets with crap processors on spotty 3G, small but powerful smartphones on wifi, black-and-white e-reader screens with modern browsers but terrible CPU performance, etc. Assuming as little as possible about all these seems like a sound strategy to me: HTTP+HTML seems like a safe bet to start with, and using feature testing we can then dish out the enhancements as appropriate. We don’t need to set any other baseline, where some user agents are shut out.
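Dishing out enhancements individually, rather than as one all-or-nothing bundle, could look something like this sketch. Both tests are real browser APIs; the two enable* functions are hypothetical enhancement modules:

```js
// Each enhancement sits behind its own feature test, so a device that
// qualifies for one but not the other still gets what it can handle.
if ('localStorage' in window) {
  enableDraftAutosave(); // hypothetical: save form drafts locally
}
if (window.matchMedia && matchMedia('(min-width: 48em)').matches) {
  enableTwoPaneLayout(); // hypothetical: richer layout for wide screens
}
// A user agent that passes neither test simply keeps the baseline HTML.
```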

A small example: the modal dialog

Say you have a web site with a modal dialog on it, transitioning into view, with some snazzy CRUD functionality for some content on the page, all done beautifully via JS and communicating changes to the server via a JSON API. Then you ship a piece of bad JS code, and it causes an error in some browsers. Would you rather have it that nothing happens when someone clicks the "edit" link, or that they were taken to a simpler HTML page with a form that allows them to perform the intended action? Then they click "save" and go back to the previous page (preferably with a URL fragment leading them back to the place on the page where they initiated the action).
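As a rough sketch of that pattern, assume the markup is a plain link pointing at the simpler HTML form, something like <a class="js-edit" href="/items/42/edit">Edit</a>. The enhancement script then hijacks it only once it has actually loaded and run (openEditModal and saveViaJsonApi are hypothetical app functions):

```js
// Upgrade every edit link to a modal, but only after this code has
// successfully loaded: the href keeps working as the fallback.
var links = document.querySelectorAll('a.js-edit');

Array.prototype.forEach.call(links, function (link) {
  link.addEventListener('click', function (event) {
    event.preventDefault(); // take over now that we know JS works
    openEditModal(link.href, function onSave(data) {
      saveViaJsonApi(link.href, data); // send JSON to the same resource
    });
  });
});
// If this script 404s, is blocked, or throws before the listeners are
// bound, the click simply follows the href to the fallback HTML form.
```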

The part where I acknowledge it’s not so easy.

Creating progressively enhanced applications that can use either JS or the server to let our users accomplish what they came for does not mean throwing out JS frameworks on the client-side. We should make use of a structured approach to creating applications that shift large parts of the work to the client side in order to deliver a better experience when we can and when it has clear benefits. The new frameworks and libraries at our disposal are great at handling this. But to be truly robust in the extremely shifting/hostile/flexible environment of the web, we need to go further and have the server ready to perform the work when JS cannot. This means more work for developers, more thoughtful application design and some amount of duplication of logic.

Handling the extra work

Whenever our features change, we need to have a workflow where we keep the client and server code in sync.

If you’re using JS templates for rendering, there is also a pain point when the HTML arrives on the client and control of rendering is shifted to the JS side of things: usually it means re-rendering with JS and replacing what the server sent, which can lead to a clunky page-load experience.
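One way around that replace-everything jolt, sketched here under the assumption that the server embeds the initial state in a data attribute, is to adopt the server-rendered DOM and only patch it afterwards:

```js
// Adopt the server-rendered list instead of re-rendering it: read the
// initial state out of the attribute the server put there.
var listEl = document.getElementById('invoice-list'); // server-rendered
var invoices = JSON.parse(listEl.getAttribute('data-initial') || '[]');

// Later changes patch the existing DOM rather than replacing the page.
function addInvoiceRow(invoice) {
  var row = document.createElement('li');
  row.textContent = invoice.number + ': ' + invoice.total;
  listEl.appendChild(row);
}
```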

Hopefully, there can be implementations that alleviate some of these problems to some degree. Server-side JavaScript or language-agnostic template languages might allow us to fetch and reuse some of the same code on the client side as we use on the server. Projects like Airbnb’s "rendr" library seem like one way to do it, though the approach of tying the front-end architecture to the back-end architecture is probably a whole problem set in itself.
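The simplest version of that kind of sharing, assuming a Node.js server, is a template that is just a plain function with no DOM or server dependencies:

```js
// templates/invoice-row.js: usable by Node via require() and by the
// browser via an ordinary script tag (or a bundler).
function invoiceRow(invoice) {
  return '<li>' + invoice.number + ': ' + invoice.total + '</li>';
}

if (typeof module !== 'undefined' && module.exports) {
  module.exports = invoiceRow; // only runs on the server side
}
```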

The bottom line, I think, is that a carefully thought-out architecture should not recreate too much on the client, but rather have a server API that’s flexible enough to handle the work the client throws at it. With a measured approach, it’s a matter of creating different facets of parts of our apps rather than duplicating them across the board: sort of like how responsive web design is not about designing N different versions of your site, but the opposite.
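Part of that flexibility can come from plain content negotiation: one URL, two representations. A minimal sketch, assuming an Express app (the route, loadInvoice and the view name are made up for illustration):

```js
// GET /invoices/42 answers with HTML for the baseline layer and JSON
// for the enhanced client, depending on the request's Accept header.
app.get('/invoices/:id', function (req, res) {
  loadInvoice(req.params.id, function (err, invoice) {
    if (err) return res.sendStatus(500);
    res.format({
      html: function () { res.render('invoice', { invoice: invoice }); },
      json: function () { res.json(invoice); }
    });
  });
});
```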

You don’t need 100% parity

Having an implementation available in HTTP+HTML besides the whizz-bang JS-driven "app" can be a much simpler experience visually: stepping into an edit-flow page as described above needs only a basic form with some clear styling applied, plus the other minimal UI elements that give the user a sense of recognition (logo, navigation). No need to replicate every feature of the enhanced version server-side, just the relevant bits. There is also a point to be made about performance: a slimmed-down version for situations where the browser or the network doesn’t cut the mustard needs to be even faster, to minimize the effect of "ugly" page reloads.

Some things can be left out

There are also, obviously, some things that are practically impossible without JS: browser-based games, image editing in the browser, video chat. There are plenty of cases. But let’s say your "web thang" does invoicing for small businesses. You probably can’t have the feature where they create their own letterhead available when JS fails. But the core of the application (listing, editing, sending and printing invoices)? Surely that’s doable, right?

Wrapping up

So, this is my pipe dream. Web properties robust enough to handle erroneous code, buggy browsers, CDN outages and choppy connections. Maybe that will never happen on a large scale (as I’m pretty sure not very many are doing this today). But I’ll still hold this as a gold standard, until the underlying nature of the web changes.

If I’ve taken the trouble to arrive at your URL, reachable on the open web with an HTML-capable browser, don’t just show me a blank page, OK?