#btconf Berlin, Germany 13 - 16 Nov 2019

Jeremy Keith

Jeremy Keith lives in Brighton, England where he makes websites with the splendid design agency Clearleft. You may know him from such books as DOM Scripting, Bulletproof Ajax, HTML5 For Web Designers, Resilient Web Design, and, most recently, Going Offline.

He curated the dConstruct conference for a number of years as well as Brighton SF, and he organised the world's first Science Hack Day. He also made the website Huffduffer to allow people to make podcasts of found sounds—it's like Instapaper for audio files.

Hailing from Erin's green shores, Jeremy maintains his link to Irish traditional music by running the community site The Session. He also indulges in a darker side of his bouzouki-playing in the band Salter Cane.

Jeremy spends most of his time goofing off on the internet, documenting his time-wasting on adactio.com, where he has been writing for over fifteen years.

Prefer to watch this video on YouTube directly? This way, please.

The Layers of the Web

The World Wide Web turned 30 this year. The web has changed quite a bit over that time. But progress has been evolutionary rather than revolutionary: changes have accreted on top of what has come before. This layered iterative approach also works for building websites. Together we'll uncover how to build resilient, performant, accessible and beautiful structures that work with the grain of the materials of the web.

Transcription

[Music]

Jeremy Keith: Guten Morgen, Berlin!

Audience: Guten Morgen!

Jeremy Keith: Guten Morgen. All right. I’m just going to get started because I’ve got a lot to talk about and I’m very, very excited to be here.

I’m excited to talk about the Web. I’ve been thinking a lot about the Web. You know, I think a lot about the Web all the time, but this year, in particular, thinking about where the Web came from; asking myself where the Web came from, which is kind of a dumb question because it’s pretty obvious where the Web came from.

It came from this guy. This is Tim Berners-Lee and he is the creator of the World Wide Web. It was 30 years ago, March 1989, that he wrote a proposal while he was at CERN, a very dull-looking proposal called “Information Management: A Proposal,” a proposal that had incomprehensible diagrams trying to explain what he had in mind. But a supervisor, Mike Sendall, saw the potential and scrawled across the top, “Vague but exciting.”

[Audience laughter]

Tim Berners-Lee starts working on this idea he has for a global hypertext system, and he creates the world’s first Web browser and the world’s first Web server on this NeXT machine, which is now in the Science Museum in London, a lovely machine, the NeXT box.

I have a great affection for it because, earlier this year, I was very honored to be invited to CERN, along with this bunch of hackers, to take part in a project related to the 30th anniversary of that proposal. I will show you a video that explains the project.

[Video started - CERN Data Centre, 11 to 15 February 2019]

[Music]

Kimberly Blessing: So, we came to CERN this week in order to create some sort of modern-day interpretation of the very first Web browser.

[Indiscernible chatter]

Martin Akolo Chiteri: Well, the project is to restore the first browser which was developed by the inventor of the Web, and the idea is to create an experience for the people who could not use the Web in its early days to have an idea how it felt to use the Web at that time.

Angela Ricci: I think the biggest difficulty was to make the browser work in the NeXT machine that we had.

Kimberly Blessing: We really needed to work with an original NeXT box in order to really understand what that experience was like in order to be able to write some code and replicate that experience.

[Indiscernible chatter]

Remy Sharp: My role is code, so generating the code to create the interactive aspect of the recreated WorldWideWeb browser. It’s very much writing JavaScript to kind of create all the NeXT operating system UI, making requests to servers to go and get the HTML and massage the HTML back into a format that looks good in the WorldWideWeb browser; and making sure we end up with a URL that goes into production that someone can visit and see their own webpages. The tangible software is what I’m responsible for, so I have to make sure it all gets done. Otherwise, we have no browser to look at, basically.

[Indiscernible chatter]

Jeremy Keith: We got together a few years back to do a similar sort of hack project here at CERN which was creating the world’s second-ever Web browser, which was the Line Mode browser. We had a lot of fun with it and it’s a great bunch of people from all over the world. It’s been really great to get back together and it’s always amazing to be here at CERN, to be at not just the birthplace of the Web, but the most important place on the planet for science.

Yeah, it’s been a lot of fun. I kind of don’t want it to be over because we are in our element, hacking away, having fun, and just soaking up the atmosphere, and we are getting to chat with people who were there 30 years ago, Jean-Francois Groff and Robert Cailliau, these people who were involved in the creation of the World Wide Web. To me, that’s amazing to be surrounded by so much World Wide Web history.

[Music]

Jeremy Keith: The plan is that this will go online and anyone will be able to access it because it’s on the Web, and that’s the beautiful thing about the Web is that anyone can visit a Website, and so everyone will have the opportunity to try using the world’s first Web browser and see what modern webpages would look like if they were passed through this first Web browser.

[Music]

[End of video]

Jeremy Keith: Well, spoiler alert. The project was a success and you can, indeed, look at your websites in a recreation of the first-ever Web browser. This is the URL. It’s worldwideweb.cern.ch.

Success, that was good. But as you could probably tell from that video, Remy was the one basically making this all happen. He was the one writing the JavaScript to recreate this in a modern browser. This is the first-ever Web page viewed in the first-ever Web browser.

As you gathered, again, I was really fascinated by the history of the Web, like, where did it come from, and the people who were there at the time and getting to pick their brains. I spent most of my time working on the accompanying website to go with this project. I was creating this timeline.

Because this was to mark the 30th anniversary of this proposal, I thought, well, we could easily look at what has happened in the last 30 years: websites, Web servers, formats, standards - all that stuff. But I thought it would be fascinating to look at the previous 30 years as well and try and figure out the things that were happening that influenced Tim Berners-Lee in terms of hypertext, networks, computing, and all this stuff.

But I’d kind of given myself this arbitrary cut-off point of 30 years to make this nice symmetry of it being the 30th anniversary of the World Wide Web. I could go further back. I could start asking, well, what happened before 30 years ago? What were the biggest influences on Tim Berners-Lee and the World Wide Web?

Now, if you were to ask Tim Berners-Lee himself who his biggest influences were, he would give you a straight-up answer. He will say his biggest influences were Conway Berners-Lee and Mary Lee Woods, his father and mother, which is fair enough. Normally, when you ask people who their influences are, they say, “Oh, my parents. They gave me a loving environment. They kindled my curiosity,” and all that stuff.

I’m sure that’s true but, in this case, it was also a big influence in a practical sense in that both Mary and Conway worked on the Ferranti Mark 1. That’s where they met. They were programmers. Tim Berners-Lee’s parents were programmers on the Ferranti Mark 1, a very early computer. This is in the 1950s in Britain.

Okay, this feels like a good origin story for the Web, right? They were working on this early computer. But it’s an early computer; it’s not the first computer. Maybe I need to go back further. How far back do I go to find the first computer?

Is this the first computer, the Antikythera mechanism? You can see this in a museum in Athens. This was recovered from a shipwreck. It was recovered at the start of the 20th Century, but it dates back thousands of years, a mechanism for predicting the position of stars and planets. It does calculations. It is a calculating device. Not a programmable computer as such, though.

If you’re thinking about the origins of the idea of a programmable computer, I think we could start to look at this gentleman, Charles Babbage. This is half of Charles Babbage’s brain--

[Audience laughter]

--which is in the Science Museum in London along with that original NeXT box that the World Wide Web was created on. The other half is in the Computer History Museum in California.

Charles Babbage lived in the 19th Century, and kind of got a lot of seed funding from the U.K. government to build a device, the Difference Engine, which would do calculations. Later on, he scrapped that and started working on the Analytical Engine which would be even better - a 2.0 version. It never got finished, by the way, but it was a really amazing idea because you could see the architecture of like a central processing unit, but it was still fundamentally a calculator, a calculating machine.

The breakthrough in terms of programming maybe came from Charles Babbage’s collaborator. This is Ada Lovelace. She was translating a document by an Italian mathematician about the Analytical Engine and its calculations. She realized that--hang on--if we’re doing operations on numbers, what if those numbers could stand for other, non-numerical concepts, like words or thoughts? Then we could do operations on things other than numbers, which is exactly what we do today in modern computing.

If you use a word processor, you’re not really processing words; you’re operating on ones and zeros. If you use a graphics program, you’re not actually moving pixels around; you’re operating on ones and zeroes. This idea that ones and zeros could stand in for anything, not just numbers, kind of started with Ada Lovelace. But, as I said, the Difference Engine and the Analytical Engine never got finished, and this was kind of a dead end. It turns out, they weren’t an influence.

Later on, for example, this genius, Alan Turing, who was definitely responsible for the first working computers, wasn’t aware of the work of Babbage and Lovelace, which is a shame. He was kind of working in isolation.

He came up with the idea of the universal machine, the Turing Machine. Give it an infinitely long tape, enough state, and enough time, and you could calculate literally anything, which is pretty much what computers are.

He was working at Bletchley Park breaking the code for the Enigma machines, and that leads to the creation of what I think would be the first programmable computer. This is Colossus at Bletchley Park. This was created by a colleague of Turing, Tommy Flowers.

It is programmable. It’s using valves, but it’s absolutely programmable. It was top secret, so even for years after the war, this was not known about. In the history books, even to this day, you’ll often see ENIAC listed as the first programmable computer, but I think that honor goes to Tommy Flowers and Colossus.

By the way, Alan Turing, after the war, after 1945, went on working in the field of computing. In fact, he worked as a consultant at Ferranti. He was working on the Ferranti Mark 1, the same computer where Tim Berners-Lee’s parents met when they were programmers. As I say, that was after the war ended in 1945.

Now, we can’t say that the work at Bletchley Park was responsible for winning the war, but we could probably say that it’s certainly responsible for shortening the war. If it weren’t for the work done by the codebreakers at Bletchley Park, the war might not have finished in 1945.

1945 is the year that this gentleman wrote a piece that was certainly influential on Tim Berners-Lee. This is Vannevar Bush, a scientist, a thinker. In 1945, in the Atlantic Monthly, under the heading “A Scientist Looks at Tomorrow,” he publishes “As We May Think.”

In this piece, he describes an imaginary device. It’s a mechanical device inside a desk, and the operator can work through reams and reams of microfilm and connect ideas together, making these associative trails. This is kind of like hypertext before the word hypertext had been coined. Vannevar Bush calls this device the Memex. That’s published in 1945.

Also, in 1945, this young man has been drafted into the U.S. Navy and he’s shipping out to the Pacific. His name is Douglas Engelbart. Literally, as the ship is leaving the harbor to head to the Pacific, word comes through that the war is over.

Now, he still gets shipped out to the Pacific. He’s in the Philippines. But now, instead of fighting against the Japanese, he’s lounging around in a hut on stilts reading magazines and that’s where he reads “As We May Think,” by Vannevar Bush.

Fast-forward years later; he’s trying to decide what to do with his life other than settle down, get married, have a job, you know, that kind of thing. He thinks, “No, no, I want to make the world a better place.” He realizes that computers could be the way to do this if they could implement something very much like the Memex. Instead of a mechanical device, what if computers could create the Memex, this hypertext system? He devotes his life to this and effectively invents the field of human computer interaction.

On December 9th, 1968, he demonstrates what he’s been working on. This is in San Francisco, and he demonstrates bitmap screens. He demonstrates real-time collaboration on documents, working hypertext and, for the demo, he’s also invented the mouse.

[Video started]

Douglas Engelbart: We have a pointing device called a mouse, a standard keyboard, and a special key set we have here. And we are going to go for a picture down in our laboratory in Menlo Park and pipe it up. It’ll show you, from another point of view, more about how that mouse works.

Come in, Menlo Park. Okay, there’s Don Anders’ hand in Menlo Park. In a second, we’ll see the screen that he’s working and the way the tracking spot moves in conjunction with movements of that mouse. I don’t know why we call it a mouse sometimes. I apologize; it started that way and we never did change it.

[Video ends]

[Audience laughter]

Jeremy Keith: This was ground-breaking. The mother of all demos, it came to be known as. This was a big influence on Tim Berners-Lee. At this point, we’ve entered the time cone of those 30 years before the proposal that Tim Berners-Lee made, which is good because this is the moment where I like to branch off from this timeline and sort of turn it around.

The question I’m sure nobody is asking, because you saw there was a video link-up there. Douglas Engelbart is in San Francisco, and he has a video link-up with Menlo Park to demonstrate real-time collaboration with computers. The question nobody is asking is, who is operating the video camera in Menlo Park?

Well, I’ll tell you the answer to that question that nobody is asking. The man operating the video camera in Menlo Park is this man. His name is Stewart Brand. Now, Stewart Brand had spent most of the ‘60s doing what you would do in the ‘60s; he was dropping acid. This was all kosher. It was before it was illegal.

He was on the Merry Pranksters bus with Ken Kesey and, on one particular acid trip, he literally saw the Earth curving away and realized that, yeah, we’re all on one planet, man! So he started a campaign with badges that asked, “Why haven’t we seen a photograph of the whole Earth yet?” I like the “yet” part in there, like it’s a conspiracy that we haven’t seen a photograph of the whole Earth.

He was kind of onto something here, realizing that seeing our planet as a whole planet from space could be a consciousness-changing thing, much like LSD is a consciousness-changing thing. Sure enough, people did talk about the effect it had when we got photographs like Earthrise from Apollo 8, and he used those pictures when he published the Whole Earth Catalog, which was a series of books.

The Whole Earth Catalog was basically like Wikipedia before the internet. It was this big manual of how to do everything. The idea was, if you were living in a commune, you needed to know about technology, and agriculture, and weather, and all that stuff, and you could find it in the Whole Earth Catalog.

He was quite an influential guy, Stewart Brand. You’ve probably heard the Steve Jobs commencement speech where he quotes Stewart Brand: “Stay hungry. Stay foolish,” all that stuff.

Stewart Brand also did a lot of writing. After Douglas Engelbart’s demo, he started to see that this computer thing was something else. He literally said computers are the new LSD, so he starts really investigating computing and computers.

He writes this great article in “Rolling Stone” magazine in 1972 about Spacewar, one of the first games you could play on a screen. But he has a wide range of interests. He kind of kicked off the environmental movement in some ways.

At one point, he writes a book about architecture. He writes a book called “How Buildings Learn.” There’s a television series that goes with it as well. This is a classic book. The definition of a classic book being a book that everyone has heard of and nobody has read.

[Audience laughter]

In this book, he starts looking at the work of a British architect called Frank Duffy. Frank Duffy has this idea about architecture he calls shearing layers. The way that Frank Duffy puts it is that a building, properly conceived, consists of several layers of longevity, so kind of different rates of change.

He diagrams this out in terms of a building, and you see that you’ve got the site that the building is on, moving at a geological timescale, right? That should be around for thousands of years, we would hope. Then you’ve got the actual structure, which could stand for centuries. Then you get into the infrastructure inside. You know, the plumbing and all that, you probably want to swap out every few decades. And so on, until you get down to the stuff inside a room, the furniture that you can move around on a daily basis. You’ve got all these timescales moving from slow to fast as you move inwards into the house.

What I find fascinating about this idea of these different layers as well is the way that each layer depends on the layer below. Like, you can’t have the structure of a building without first having a site to put it on. You can’t move furniture around inside a room until you’ve made the room using the walls and the doors, right? This idea of shearing layers is kind of fascinating, and we’re going to get back to it.

Something else that Stewart Brand went on to do; he was one of the co-founders of the Long Now Foundation. Anybody here part of the Long Now Foundation? Any members of the Long Now Foundation?

Ah… It’s a great organization. It’s literally dedicated to long-term thinking. It was founded by Stewart Brand and Danny Hillis, the computer scientist, and Brian Eno, the musician and producer. Like I said, dedicated to long-term thinking. This is my membership card, made out of a durable metal because it’s got to last for thousands of years.

If you go on the Website of the Long Now Foundation, you’ll notice that the years are made up of five digits, so instead of 2019, it will be 02019. Well, you know, you’ve got to solve the Y10K problem. They’re dedicated to long-term thinking, to try to think in the longer now.

One of their most famous projects is the Clock of the Long Now. This is a clock that will tell time for 10,000 years. Brian Eno has done the chimes. They’re generative. It’ll never chime the same way twice. It chimes once a century. This is a scale model that’s in the Science Museum in London along with half of Charles Babbage’s brain and the original NeXT machine that Tim Berners-Lee created the World Wide Web on.

This is just a scale model. The full-sized clock is going to be inside a mountain in west Texas. You’ll be able to visit it. It’ll be like a pilgrimage. Construction is underway. I hope to visit the clock one day.

It’s a really fascinating project when you think about it: how do you design something to last 10,000 years? How do you communicate over 10,000 years? It’s one of those tricky design problems, almost like the Voyager Golden Record or the Yucca Mountain waste disposal site. How do you communicate to future generations? You can’t rely on language. You can’t rely on semiotics.

Anyway, he collected a lot of his thoughts into this book called “The Clock of the Long Now,” subtitled “Time and Responsibility: The Ideas Behind the World’s Slowest Computer.” He’s thinking about time. That’s when he comes back to shearing layers and these different layers of rates of change; different layers of time.

Stewart Brand abstracts the idea of shearing layers into something called “pace layers.” What if it’s not just architecture? What if any kind of system has these different rates of change, these layers?

He diagrams this out in terms of the human species, so think of humans. We have these different layers that we operate at. At the lowest, slowest level, there is our nature, literally, like what makes us human in terms of our DNA. That doesn’t change for tens of thousands of years. Physiologically, there’s no difference between a caveman and an astronaut, right?

Then you’ve got culture, which accumulates over centuries: the tribal identities we have around things like nations, language, and so on. Then governance--models of governance, so not governments but governance, as in the way we choose to run things, whether that’s a feudal society, or a monarchy, or a representative democracy, right? Those things do change, but not too fast, hopefully. Then infrastructure: you’ve got to keep up with the times, you know? This needs to move at a faster pace again. Then commerce: much more fast-moving. With commerce, you’re getting into the faster timescales.

Then he puts fashion at the top. By fashion, he means anything that is supposed to be new and exciting, so that includes pop music, for example. The whole idea with fashion is that it’s there to try stuff out and discard it very quickly.

“What about this?”
“No.”
“What about that?”
“Try this.”
“No, try that.”

The good stuff, the stuff that kind of sticks to the wall, will maybe find its way down to the longer-lasting layers. Maybe a really good pop song from fashion ends up becoming part of culture, over time.

Here’s the way that Stewart Brand describes pace layers. He says fast learns; slow remembers. Fast proposes; slow disposes. Fast is discontinuous; slow is continuous. Fast and small instructs slow and big by accrued innovation and by occasional revolution, but slow and big controls small and fast by constraint and constancy. He says fast gets all of our attention, but slow has all the power.

Pace layers is one of those ideas that, once you see it, you can’t unsee it. You know when you want to make someone’s life a misery, you just teach them about typography. Now they can’t unsee all the terrible kerning in the world. I can’t unsee pace layers. I see them wherever I look.

Does anyone remember this book, UX designers in the room: “The Elements of User Experience” by Jesse James Garrett? It’s old now. We’re going back a ways but, in it, he’s got this diagram of the different layers of a user experience. You’ve got strategy at the bottom, finally ending up with the interface at the top.

I look at this, and I go, “Oh, right. It’s pace layers. It’s literally pace layers.” Each layer depending on the layer below, the slower layers at the bottom, the faster-moving things at the top.

With this mindset that pace layers are everywhere, I thought, “Can I map out the Web in terms of pace layers, the technology stack of the Web?” I’m going to give it a go. At the lowest layer, the slowest moving, I would say there’s the internet itself, as in TCP/IP: the Transmission Control Protocol and the Internet Protocol, created by Bob Kahn and Vint Cerf in the ‘70s and pretty much unchanged since then, deliberately dumb, deliberately simple. All it does is move packets around. Pretty much unchanged.

On top of that, you get the other protocols that use TCP/IP, like in the case of the Web, the hypertext transfer protocol. Now, this has changed over time. We now have HTTP/2. But it hasn’t been rapid change. It’s been gradual. Again, that kind of feels right. It feels good that HTTP isn’t constantly changing underneath us too much.

Then what we serve up over HTTP are URLs. I wish that URLs were down here. I wish that URLs were everlasting, never changing. But, unfortunately, I must acknowledge that that’s not true. Links die. We have to really work hard to keep them alive. I think we should work hard to keep them alive.

What do you put at those URLs? At the simplest level, it’s supposed to be plain text. But this is the Web, so let’s say structured text. This is going to be HTML, the hypertext markup language, which Tim Berners-Lee came up with when he created the World Wide Web. I say, “came up with.” He basically stole it from SGML, which scientists at CERN were already using, and sprinkled in one or two new tags, as they were calling them back then.

There were maybe 20-something tags in HTML when Tim Berners-Lee created the Web. Now we’ve got over 100 elements, as we call them. But I feel like I’ve been able to keep up with the pace of change. I mean, the last big growth spurt with HTML was probably HTML5. That’s been a while back now. It’s definitely change that I can keep on top of.

Then we have CSS, the presentation layer. That feels like it’s been moving at a nice clip lately. I feel like we’ve been getting a lot of cool stuff in CSS, like Flexbox and Grid, and all this new stuff that browsers are shipping. Still, I feel like, yeah, yeah, this is good. It’s right that we get lots of CSS pretty rapidly. It’s not completely overwhelming.

Then there’s the JavaScript ecosystem. I specifically say the “JavaScript ecosystem” as opposed to the “JavaScript language” because the JavaScript language is being developed at a nice pace. I feel like it’s going at a good speed of standardization. But the ecosystem, the frameworks, the libraries, the build tools, all of that stuff, that feels like, “You know what? Try this. No, try that. What about this? What about that? Oh, you’re still using that framework? No, no, we stopped using that last week. Oh, you’re still using that build tool? No, no, no, that’s so … we’ve moved on.”

I find this very overwhelming. Can I get a show of hands of anybody else who feels overwhelmed by this rate of change? All right. Keep your hands up. Keep your hands up and just look around. I want you to see you are not alone. You are not alone.

But I tell you what; after mapping these layers out into the pace layer diagram, I realized, wait a minute. The JavaScript layer, the fashion layer, if you will, it’s supposed to be like that. It’s supposed to be trying stuff out. Throw this at the wall. No, throw that at the wall. How about this? How about that?

It’s true that the good stuff does stick. Like if I think back to the first uses of JavaScript--okay, I’m showing my age, but--when JavaScript first came along, we’d use it for things like image rollovers or form validation, right? These days, if I wanted to do an image rollover--you mouseover something and it changes its appearance--I wouldn’t use JavaScript. I’d use CSS because we’ve got :hover.

If I was doing form validation, like, “Oh, has that field actually been filled in?” because it’s required and, “Does that field actually look like an email address?” because it’s supposed to be an email address, I wouldn’t even use JavaScript. I would use HTML: input type="email" required. Again, the good stuff moves down into the slower layers. Fast learns; slow remembers.
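
To make that concrete, here’s a minimal sketch of both of those moves down the stack. The class name and form fields are made up for illustration; the point is that neither piece needs any JavaScript:

```html
<style>
  /* The old image-rollover effect, now just a matter of CSS */
  .nav-link:hover {
    background-color: #333;
    color: #fff;
  }
</style>

<a class="nav-link" href="/about">About</a>

<form>
  <!-- The browser itself checks that this is filled in
       and that it looks like an email address -->
  <label for="email">Email</label>
  <input type="email" id="email" name="email" required>
  <button>Subscribe</button>
</form>
```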

The other thing I realized when I diagrammed this out was, “Huh, this kind of maps to how I approach building on the Web.” I pretty much take it for granted that it’s going to be on the internet. There’s not much I can do about that. Then I start thinking about URLs: URL-first design, the information architecture of a site. I think it’s underrated. URL design, in general, is a really good place to start if you’re building a product or a service.

Then I think about the content in terms of structure. What is the most important thing on this page? That should be an H1. Is this a paragraph? Is it a list? Thinking about the structure first and then going on to think about the appearance, which is definitely the way you want to go if you’re making something responsive. Think about the structure first and then the appearance across all these different form factors.

Then, finally, add in behavior with JavaScript. Whatever HTML and CSS can’t do, that’s what I’ll use JavaScript for, to enhance it from there.
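
As a rough sketch of that layered approach, with purely illustrative content: structure that works on its own, appearance as an enhancement, behavior as a further enhancement.

```html
<!-- Structure first: meaningful HTML that works by itself -->
<h1>Conference Schedule</h1>
<ul class="schedule">
  <li><a href="/talks/opening-keynote">Opening keynote</a></li>
  <li><a href="/talks/closing-keynote">Closing keynote</a></li>
</ul>

<style>
  /* Appearance second: if this never loads, the list above still works */
  .schedule {
    display: grid;
    gap: 0.5rem;
  }
</style>

<script>
  // Behavior last: prefetch a talk page on hover (may warm the HTTP cache);
  // the links work perfectly well without this
  document.querySelectorAll('.schedule a').forEach(link => {
    link.addEventListener('mouseenter', () => fetch(link.href));
  });
</script>
```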

This maps really nicely to how I personally approach building things on the Web. But, it is a testament to the flexibility of the World Wide Web that, if you don’t want to build in this way, you don’t have to.

If you want, you could build like this. JavaScript is a really powerful language. If you wanted to do navigations and routing in JavaScript, you can. If you want to inject all your content into the page using JavaScript, you can.

CSS in JS? You can. Right? I mean, this is pretty much the architecture of a single-page app. It’s on the internet and everything is in JavaScript. The internet is a delivery mechanism for a chunk of JavaScript that does everything: the markup, the CSS, the routing.

This isn’t how I approach building on the Web. I kind of asked myself why this doesn’t feel quite right to me. I think it’s because of the way it turns everything into a single point of failure, which is the JavaScript, rather than spreading out those points of failure. We’re on the internet and, as long as the JavaScript runs okay, the user gets everything. It turns what you’re building into a binary proposition: either it doesn’t work at all or it works great. Those are your only two options.

Now, I’ll point out that, in another medium, this would make complete sense. Like if you’re building a native app: if you build an iOS app and I’ve got an iOS device, I get 100% of what you’ve designed and built. But if you’re building an iOS app and I have an Android device, I get zero percent of what you’ve designed and built because you can’t install an iOS app on an Android device. Either it works great or it doesn’t work at all; 100% or zero percent.

The Web doesn’t have to be like that. If you build in that layered way on the Web, then maybe I don’t get 100% of what you’ve designed and built, but I don’t get zero percent, either. I’ll get something somewhere along the way, hopefully closer to working great. It goes from not working at all, to just about working, to working fine, to working well, to working great.

You’re building up these layers of experience, the idea being that nobody gets left behind. Everybody gets something regardless of their device, their network, their browser. Everyone is not going to get the same experience, but everybody gets something. That feels very true to the original founding ideas of the Web and it maps so nicely to our technical stack on the Web, the fact that you can start to think about things URL-first, then think about the structure, then the presentation, and then the behavior.

I’m not the only one who likes thinking in this layered kind of way when it comes to the Web. I’m going to quote my friend Ethan Marcotte. He says, “I like designing in layers. I love looking at the design of a page, a pattern, whatever, and thinking about how it will change if, say, fonts aren’t available, or JavaScript doesn’t work, or someone doesn’t see the design as you or I might and is having the page read aloud to them.” That’s a really good point: when you build in that layered way, you’re building in the resilience of being able to fall back to a layer a little further down.

This brings up something I’ve mentioned here before at Beyond Tellerrand, which is that, when we’re evaluating technologies, the question we tend to ask is, how well does it work? That’s an absolutely valid question. You’re about to try a new tool, a new framework, a new standard. You ask yourself; how well does it work?

I think the more important question to ask is, how well does it fail? What happens if that piece of technology fails? That’s why I like this layered approach because this fails really well. JavaScript’s no longer a single point of failure. Neither is CSS, frankly. If the CSS never loaded, the user still gets something.

Now, this brings up an idea, a principle that definitely influenced Tim Berners-Lee. It was at the heart of his design principles for the World Wide Web. It’s called the Principle of Least Power, which states: “Choose the least powerful language suitable for a given purpose.” That sounds really counterintuitive. Why would I choose the least powerful language to do something?

It’s kind of down to the fact that there’s a trade-off. With power, you get fragility, right? Maybe something that is really powerful isn’t as universal as something simpler. It makes sense to figure out the simplest technology you can use to achieve a task.

I’ll give an example from my friend Derek Featherstone. He says, “In the Web front-end stack--HTML, CSS, JavaScript, and ARIA--if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.” Again, he’s talking about the resilience you get by building in a layered way and choosing the least powerful technology.

A classic example is ARIA. The first rule of ARIA is: don’t use ARIA if you don’t have to. Rather than using a div and then adding the event handlers and the ARIA roles to make it look like a button, just use a button. Use the simpler technology lower in the stack.
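
A sketch of that contrast, with a hypothetical save() function standing in for whatever the control actually does:

```html
<!-- The powerful, fragile way: a div has to fake everything a button provides -->
<div role="button" tabindex="0"
     onclick="save()"
     onkeydown="if (event.key === 'Enter' || event.key === ' ') save()">
  Save
</div>

<!-- The simpler technology lower in the stack: focusable, keyboard-operable,
     and announced as a button, all for free -->
<button type="button" onclick="save()">Save</button>
```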

Now, I get pushback on this because people will tell me, like, “Well, that’s fine if you’re building something simple, but I’m not building something simple. I’m building something complex.” Everyone likes to think they’re building something complex, right? Everyone is convinced they’re working on really hard things, which makes sense. That’s human nature.

If you’re at a cocktail party and someone says, “What do you do?” and you describe your work and they say, “Oh, okay. That sounds really easy,” you’d be offended, right?

[Audience laughter]

If you’re at a party and someone says, “What do you do?” you describe your work and they go, “Wow, God, that sounds hard,” you’d be like, “Yeah!”

[Audience laughter]

“Yeah, it is hard. What I do is hard.” I think we gravitate toward that, especially when someone markets something as, “This is a serious tool for serious, complex sites.” I’m like, “That’s me. I’m working on a serious, complex site.”

I don’t think the reality is quite like that. Reality is just messier. Nothing is quite that simple; very few things are really that complex. Everything kind of exists somewhere along this continuum. Even the simplest website has some form of interaction, something appy about it.

And those are the other two terms people use when talking about simple and complex: website and Web app, as if you could divide the entirety of the World Wide Web into two categories, websites and Web apps. Again, that just doesn’t make sense to me. I think the truth is, things are messier and schmooshier along this continuum between websites and Web apps. I don’t get why we even need the separate words. It’s all Web stuff.

Though, there is this newer term, “Progressive Web App,” that I kind of like. Who has heard of Progressive Web Apps? All right.

Who thinks they have a good handle on what a Progressive Web App is? Okay. See, that’s a lot fewer hands, which is totally understandable because, if you start googling, “What is a Progressive Web App?” you get these Zen-like articles. “It’s a state of mind.” “It’s about rich, native-like interactions, man.”

[Audience laughter]

No. No, it’s not. Worse still, you read, “Oh, a Progressive Web App is a single-page app,” and I was like, no, you’ve lost me there. No, it’s not. Or at least it can be, but any website can be a Progressive Web App. You can elevate a website to be a Progressive Web App.

I don’t mean in some sort of Zen-like fashion. I mean using technologies, three particular technologies. You make sure the website is running on HTTPS, you have a Web app manifest (that’s a JSON file with metadata), and you have a service worker that gets installed on the user’s device. That’s it. These three technologies turn a website into a Progressive Web App -- no mystery about it.
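
Of those three, the manifest is the simplest: a JSON file that you link to from your pages with <link rel="manifest" href="/manifest.json">. A minimal sketch, with illustrative values:

```json
{
  "name": "Example Site",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#663399",
  "icons": [
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```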

The tricky bit is that service worker part. It’s kind of a weird thing because it’s JavaScript but it’s JavaScript that gets installed on the user’s device and acts like a proxy. It intercepts network requests and can do different things like grab things from the cache instead of going to the network. I’m not going to go into how it works because I’ve written plenty about that in this book “Going Offline,” so if you want to know the code, you can go read the book.
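
Just to show the shape of the idea (a bare-bones sketch, not the code from the book): a service worker script lives at its own URL, something like /sw.js, and can step in on every request the page makes:

```js
// sw.js -- the service worker intercepts every network request the page makes.
// Here it tries the cache first and falls back to the network.
addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request)
      .then(cachedResponse => cachedResponse || fetch(event.request))
  );
});
```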

I will say that when I first came across service workers, it totally did my head in because this is my mental model of the Web. We’ve got the stack of technologies that we’re building on top of, each layer depending on the layer below. Then service workers come along and say, “Well, actually, you could have a website like this,” where the lowest layer, the network, the Internet goes away and the website still works. [Mind is blown]

[Audience laughter]

It took me a while to get my head around that. The service worker file is on the user’s device and, if they’ve got no internet connection, it can still make decisions and serve up something like a custom offline page.

Here’s a website I run called huffduffer.com. It’s for making your own podcast out of found sounds. If you’re offline or the website is down, which happens, and you visit Huffduffer, you get this offline page saying, “Sorry, you’re offline.” Not very useful, but it’s branded like the site, okay? It’s almost like the way you have a custom 404 page. Now you can have a custom offline page that matches your site. It’s a small thing, but it can be handy.
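
The code for that pattern is pleasingly small. A sketch, assuming an /offline.html page and an illustrative cache name:

```js
// sw.js -- cache a custom offline page at install time...
addEventListener('install', event => {
  event.waitUntil(
    caches.open('static-v1').then(cache => cache.add('/offline.html'))
  );
});

// ...and show it whenever a page navigation fails because the network is gone
addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      fetch(event.request).catch(() => caches.match('/offline.html'))
    );
  }
});
```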

We ran this conference in Brighton for two years, Ampersand. It’s a Web typography conference. That also has a very simple offline page that just says, “Sorry. You’re offline,” but then it has the bare minimum information you need about the conference like where is this conference happening; what time does it start?

You can imagine a restaurant website having this, an offline page that tells you, “Here is the address. Here are the opening hours.” I would like it if restaurant websites had that information when you’re online as well, but--

[Audience laughter]

You can also have fun with this, like Trivago. Their site relies on search, so there’s not much you can do when you’re offline, so they give you a game to play, the offline maze to keep you entertained. That’s kind of at the simplest level of what you could do, a custom offline page.

Then at the other level, I’ve written this book called “Resilient Web Design.” A lot of the ideas I’m talking about here are in this book. The book is a website. You go to the website and you read the book. That’s it. It’s free. You just go to resilientwebdesign.com. I mean free. I don’t ask for your email address and I’m not tracking any information at all.

This is how it looks when you’re online, and then this is how it looks when you’re offline. It is exactly the same. In fact, the moment you visit the website, it basically downloads the whole book.

Now, that’s the extreme example. Most websites, you wouldn’t want to do that because you kind of want the HTML to be fresh. This book is never going to get updated. I’m done with it, so I’m totally fine with going straight to the cache and never even going to the network. It’s absolutely offline first. You’re probably going to want something in between those two extremes.
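
That in-between approach can be sketched in a single fetch handler (illustrative, not the code from the book): network first for HTML you want fresh, cache first for everything else:

```js
addEventListener('fetch', event => {
  const request = event.request;
  if (request.mode === 'navigate') {
    // Network first: fresh HTML when you can get it, cached HTML when you can't
    event.respondWith(
      fetch(request).catch(() => caches.match(request))
    );
  } else {
    // Cache first: static assets straight from the cache, network as a fallback
    event.respondWith(
      caches.match(request).then(cached => cached || fetch(request))
    );
  }
});
```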

On my own website, adactio.com, if you’re browsing around the website and you’re reading things, that’s all fine. Then what if you lose your internet connection? You get the custom offline page that says, “Sorry, you’re offline,” but then it also shows you the things you’ve previously visited.

You can revisit any of these pages. These have all been cached, so you can cache things as people are browsing around the site. That’s a nice little pattern that a lot of websites could benefit from. It only suffers from the fact that all I can show you is stuff you’ve already seen. You have to have already visited these pages for them to show up in this list.
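
A sketch of that cache-as-you-browse pattern, with an illustrative cache name: stash a copy of each page as it’s successfully fetched so it’s available offline later:

```js
addEventListener('fetch', event => {
  const request = event.request;
  if (request.mode === 'navigate') {
    event.respondWith(
      fetch(request)
        .then(response => {
          // A response can only be read once, so put a clone in the cache
          const copy = response.clone();
          event.waitUntil(
            caches.open('pages').then(cache => cache.put(request, copy))
          );
          return response;
        })
        // Offline? Serve the previously cached copy of this page, if we have one
        .catch(() => caches.match(request))
    );
  }
});
```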

Another pattern that I think is maybe better from a user experience point of view is when you put the control in the hands of the user. This website, archive.dconstruct.org, is what it sounds like. It’s an archive. It’s conference talks.

We ran a conference called dConstruct for 10 years, from 2005 to 2015. Breaking news: we’re bringing it back for a one-off event next year, September 2020.

Anyway, all the talks from ten years are online here as audio files. You can browse around and listen to these talks.

You’ll also see that there’s this option to save for offline, exposed in the interface. What that does is it doesn’t just save the page offline; it also saves the audio offline. Then, when you’re on an airplane, or at the bottom of the ocean, or whatever, you can listen to the things you explicitly asked to be saved offline. It’s effectively a podcast player in the browser.
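
That pattern runs in the page itself rather than in the service worker, because the Cache API is available to regular page scripts too. A sketch, where the button ID, cache name, and audio URL are all assumptions:

```js
const saveButton = document.getElementById('save-offline');

saveButton.addEventListener('click', async () => {
  const cache = await caches.open('offline-talks');
  // Explicitly cache this page and its audio file for offline listening
  await cache.addAll([window.location.pathname, '/audio/this-talk.mp3']);
  saveButton.textContent = 'Saved for offline';
});
```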

You see, there are a lot of things you can do, a lot of layers you can build upon once you have a service worker. At the very least, you can do caching because that’s the stuff we do anyway: put this file in the cache - your CSS, your JavaScript, your icons, whatever.

Then think about, well, maybe I should have a custom offline page, even if it’s just for the branding reasons of having that nice page, just like we have a custom 404 page. Then you start thinking, well, I want the add-to-home-screen experience to be good, so you’ve got the Web app manifest. And then maybe you implement one of those patterns allowing users to save things offline.

Also, push notifications are now possible thanks to service workers. It used to be, if you wanted to make someone’s life a misery, you had to build a native app to give them push notifications all day long. Now you can make someone’s life a misery on the Web too.

[Audience laughter]

There are even more advanced APIs, like background sync, where the website can talk to the Web server even when that website isn’t open in the browser, and sync up information -- super powerful stuff. Now, the support for something like service workers and the Cache API is almost universal at this point. The support for stuff like background sync and notifications is, you know, spottier, not universal, and that’s okay because, as long as you’re adding these things in layers, then it’s fine if something doesn’t have universal support, right? It makes something work great but, if someone doesn’t get that, it still works good. It’s all about building in that layered way.
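
And because support is spottier, something like background sync gets added exactly the way you’d expect: feature-detect first, so browsers without it simply carry on. A sketch, with an illustrative tag name:

```js
navigator.serviceWorker.ready.then(registration => {
  // Only register a sync if the browser actually supports background sync
  if ('sync' in registration) {
    registration.sync.register('send-outbox');
  }
});
```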

Now you’re probably thinking, “Ah-ha! I’ve hoisted him by his own petard because service workers use JavaScript. That means they rely on JavaScript. You’ve made JavaScript a single point of failure; exactly what you were complaining about with single-page apps.” Right?

There’s a difference. With a single-page app, you’re relying on JavaScript. The user gets absolutely nothing if JavaScript doesn’t work. In the case of service workers, you literally cannot make a website that relies on a service worker. You have to make a website that works first without a service worker and then add the service worker on top because, think about it: the first time anybody visits the website, even if their browser supports service workers, the service worker is not yet installed. So you have to build in layers.
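
Even registering the service worker follows that pattern. This sketch really is the whole enhancement: feature-detect, register, and browsers without support just keep using the already-working site:

```js
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}
```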

I think this is why it appeals to me so much. The design of service workers is a layered design. You have to have something that works first, and then you elevate it. You improve the user experience using these technologies, but you don’t rely on them. They’re not a single point of failure. They’re an enhancement.

That means you can take any website--somebody’s homepage, a book online, this archive stuff, something that is more appy, sure--and make it work pretty much like a native app. It can appear full screen, be added to the home screen, and be indistinguishable from native apps, so that the latest and greatest browsers and devices get the best experience. They’re making full use of the newest technologies.

But, as well as these things working in the latest and greatest browsers, they still work in the first Web browser ever created. You can still look at these things in that very first Web browser that Tim Berners-Lee created at worldwideweb.cern.ch. It’s an unbroken line of over 30 years on the Web. We’re not talking about the Long Now when we’re talking about 30 years but, in terms of technology, that does feel special.

You can also look at the world’s first webpage in the first-ever Web browser but, almost more amazingly, you can look at the world’s first webpage at its original URL in a modern Web browser and it still works. We’ve managed to make the Web so much better with new APIs, new technologies, without breaking it, without breaking that backward compatibility. There’s something special about that. There’s something special about the Web if you build in layers.

I’m encouraging you to think in terms of layers and use the layers of the Web. Thank you.

[Applause]
