Paul Bakaus: Everybody, good morning. I’m very happy to be here, and I’m very honored that I can open the day and hopefully keep up the energy. So let’s do this.
Yes, so progressive Web AMPs, that’s not a typo. I actually work on AMP most of the time right now. Previously I worked on Chrome DevTools for Google, and I did a few other things before that, but right now I’m spending most of my time fixing the mobile Web, making it fast again. This talk is about both PWA and AMP, so I’m going to go into all of those topics.
First, how many of you have heard about progressive Web apps? Can I see some hands? Okay. I would say maybe 60%, something like that. Okay, so I’m going to do just a quick intro.
They’re there to help you turn your site into a reliable, fast, and engaging experience. They close the gap to native by adding a bunch of missing device capabilities, like, on the screen you’ll see Web payments and, yeah, actually only Web payments. Yes, but there’s a bunch of other things. One of the most exciting pieces of tech to come under the umbrella term of progressive Web apps is a way to fix this nasty problem: the downasaur, as we call him. You might not like to see him that often.
Now we fixed that with a feature we call the service worker. The service worker is sort of a proxy, a Web worker that sits on the browser level. It’s on the client, and it sits between your website and the actual backend of your page. What it can do is intercept any request the Web page makes to your backend. It can say, “Hey, I want to handle that request instead of the browser. I want to decide what I’m going to offer to the user when they request that image.”
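To make that interception concrete, here is a minimal sketch of a service worker fetch handler. The file name (`sw.js`) and the cache-first rule for images are assumptions for illustration; the routing decision lives in a plain function so it is easy to follow.

```javascript
// sw.js (hypothetical file name): a service worker deciding, per request,
// whether the browser or the worker should answer.

// Pure routing decision: serve images cache-first, everything else normally.
function pickStrategy(url) {
  return /\.(png|jpe?g|webp|svg)$/.test(new URL(url).pathname)
    ? 'cache-first'
    : 'network';
}

// Browser-only wiring: `self` exists inside a service worker, not in Node.
if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('fetch', (event) => {
    if (pickStrategy(event.request.url) === 'cache-first') {
      // Answer from the cache if we have it, fall back to the network.
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    }
    // If respondWith() is not called, the browser handles the request as usual.
  });
}
```

The key point is the last comment: the worker only takes over the requests it explicitly claims; everything else flows through untouched.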
And so it allows you to build a number of cool things. One of them, for instance, is the application shell pattern, where you cache your application shell, the actual UI and layout of your page, lazily during the first page load, and then on repeat visits serve that shell instantly. It works similarly to the native application model. The only thing you then continue to load on demand is the content, not the actual UI. That dramatically speeds up pages.
But what if you build the most amazing progressive Web app and nobody discovers it? Or the user doesn’t wait long enough for the service worker to install your app shell and make subsequent loads snappy? Keep in mind that the service worker is quite awesome, but it doesn’t help at all with the first load.
Initially, again, you visit the page. The service worker installs. Only after you’ve seen that first page can the service worker cache all sorts of things.
Now even though the service worker API allows you to cache away all of your site assets for almost instant subsequent loads, like when meeting someone new, it’s the first impression that counts. Our latest DoubleClick study has shown that if the first load takes more than 3 seconds, 53%, that’s half of your audience, will drop off. Imagine that, three seconds, right?
Now three seconds, let’s be real, is already a brutal target. On mobile connections, you often get latency averaging around 300 milliseconds, and it comes with other constraints, as you can see on the screen, such as limited bandwidth and establishing a signal, so you might be left with a total load performance budget of less than a second to actually do the things you need to do to initialize the app. Oh, by the way, you really want to load in under a second, says RAIL, which derives that number from a book on usability by Jakob Nielsen. If your site does more than three round trips to the server, well, yes, you probably failed.
Okay, so that sounds pretty grim, but it gets worse. Don’t feel bad. The overall landscape of today’s Web actually looks a lot grimmer. The average mobile page loads in about 19 seconds, with 77% taking longer than 10 seconds, and does 214 server requests, of which over 50% are ad related. Imagine that, 214 server requests on a mobile connection. The bad thing about those 77% of pages is that, it turns out, the bounce rate reaches 100% at 10 seconds. What that means is that in 77% of those cases, nobody would ever wait to see your site.
Is it simply that most developers are stupid? Not really. No. Of course, you as a developer would probably never do this, but your manager needs positive, short-term results to keep their job, so they force you.
You end up with a perfect conflict of interest: content versus ads and then user experience versus monetization. Publishers are desperate to attract and monetize and prioritize it above all and the user experience suffers. It’s actually a quite tragic downward spiral of doom.
And the always great Brad Frost has created this page. If you don’t know it yet, it’s called Death to Bullshit. It’s basically a perfectly fine page until you turn the bullshit on, and then it becomes a normal Web page. And so you’re probably used to things like that.
Paul Bakaus: This is the Internet today. And so what are the solutions for that problem? Well, one: ad blockers. Right? Now they kind of work. They sort of fix the problem on the user side. They definitely don’t fix the problem for the publisher because most publishers are dying if they have too many people using ad blockers. Then the site shuts down, and then big bold letters say, “This is why you can’t have nice things.”
The other concept is walled content distribution platforms that lock you into a specific platform. They require contracts. They’re quite hard to navigate. They don’t really use the qualities of the open Web. They don’t really have URLs; there’s usually no way to link to them.
They really don’t seem like legit solutions, so we’ve decided to attack the problems that plague today’s Web pages. The ones on this slide, please don’t do this, by the way.
Paul Bakaus: The ones common to most Web pages, in two different ways. It was time to look beyond the tellerrand. Option A was what we were already doing: advocate that perf matters, convince publishers that faster websites make happier users, and drive standardization of more performant APIs with the W3C.
Now the idealist in me still prefers that option, right? But that’s quite an elusive unicorn, and here’s why. The only people we could convince that perf matters were you right here in this audience. That’s great, but these developers often aren’t given the opportunity by their managers to fix their shitty websites. There’s always something more important to do.
Then building an extensive set of faster APIs with browsers and the W3C would have taken years. But, frankly, we didn’t feel we had years. The mobile Web already looked like the Death to Bullshit site, and no one would want to use it any more. So we saw a big rise of native apps for things that really are of ephemeral quality, things you only want to visit twice a year, where it doesn’t really make sense. The mobile Web was really shutting down.
We didn’t want to get to that, so we went with Option B. Option B was to iterate as fast as we can on a stopgap measure that would fix the performance and usability issues of the mobile Web today before it’s too late and, when the patient is stable, we can still continue with Option A. Because we love the mobile Web, we chose to go with Option B.
We call it Accelerated Mobile Pages, or AMP for short. Now AMP, in fact, has a few more goals than just fixing the load time of a page. It also addresses runtime performance and usability. This is how the ecosystem looks. It’s an ecosystem consisting of a Web components library that allows you to declaratively write what we call AMP HTML, because it’s both a superset and a subset of HTML. Then there are AMP caches, which are basically CDNs or, more technically correct, reverse proxies that accelerate the delivery. And that turns author pages into highly portable, fast, and user friendly units that platforms like Google, Bing, or Pinterest can safely and quickly embed.
Just to highlight a few performance optimizations AMP is doing here, free to boot. One is that it only requires a single HTTP request to fetch an entire document, really a single HTTP request. That’s because, for instance, we force all CSS to be inline and at a maximum of 50 kilobytes. Then we only allow GPU-optimizable animations, and this is really important for the runtime of the page, not just the load time.
We have a pretty complex resource prioritization model in AMP. This is why, on this HTML screen previously, you’ve probably seen that we replaced the image tag with AMP image. You might think, you know, I mean the image tag works. It displays images. Why are you going to reinvent the wheel? That’s a fair question. The answer to that question is because, with AMP image, we can control the load pipeline much more efficiently. We can understand what assets are on a page and can prioritize them accordingly.
For instance, ads. And this is kind of curious, because obviously Google makes a lot of money with ads. But in the AMP project itself, ads are de-prioritized, because the number one requirement for AMP is that it makes the mobile Web awesome, usable, and user friendly. And so ads load with a lower priority than the content. The content is always king.
On top of this, and this is probably the most misunderstood part of AMP, is the AMP cache. A lot of the baked-in performance comes from that cache. If you’re an experienced developer, you could probably do a lot of the optimizations that are part of the AMP library yourself. But the AMP cache is actually a very important component, not just because it’s a free, super fast CDN. In the case of the Google AMP cache, it’s using the edge server infrastructure of Google Search, so those servers are pretty good.
The AMP cache works tightly together with the prioritized loading and static layout system of AMP. Documents served from the AMP cache are much cheaper to pre-render because AMP knows where each page element is positioned even before any assets are loaded, allowing you to load just the first viewport without any low priority third party stuff. The actual site owner won’t ever know about the preload. That’s super important for privacy reasons, as the site could otherwise write cookies and mark the page as seen. If you’re on Google Search and you’re searching for diarrhea, you might not want each of the preloaded pages to know about it, because you might never click them. You don’t want them to write cookies on your behalf before you actually look at them.
Then on the other side you have the open source library. It’s the same everywhere, always the same URL, highly cacheable and always evergreen. It defines behaviors for custom elements and manages rendering and resource loading to optimize performance.
Then we have all the ecosystem around it, so the ecosystem is growing, the platforms that include AMP on pages. A lot of them are jumping on the train, and it’s super exciting to see companies like Bing on this slide. As a Google employee, frankly, I never thought I’d have the opportunity to give a shout out to Bing at a conference talk. I think that’s pretty amazing.
All right, this was a little bit of context around AMP and PWA as well. But now let’s come to the exciting part here. AMP or PWA?
In order to be reliably fast, you need to live with some constraints when implementing AMP pages. And you won’t get the biggest progressive Web app benefits on that first click, as your AMPs are usually loaded from an AMP cache, as I just explained.
Now some case studies here. For instance with the Washington Post, AMP has been really great for retention for them. Traditionally, 51% of mobile search users return to the Washington Post within 7 days. For users who read stories published in AMP, this number jumps to 63%, so they had some great turnaround with AMP.
Now with progressive Web apps, here you go, we’ve seen similar results. Some people have built really amazing progressive Web app experiences, and Housing.com, for instance, saw a pretty decent increase in conversions and improvement in page load time.
Then if you compare the two, on one side you have AMP with instant delivery, because of the preload mechanism of the cache, and optimized discovery; no user scripts and static content are the downsides. Then on the progressive Web app front: advanced platform features, everything you want to do, really. It’s highly dynamic, so it allows you to build single page applications. On the other side, slower delivery because of that first hop, and it’s not easily embedded in platforms.
What if there was a way to reap the benefits from both? In the end, what I think matters is the user journey and not really the technology that we use. The first hop to your site should always feel almost instant, and the browsing experience should get more and more engaging afterwards. AMP and progressive Web apps are both critical components to make this happen. AMP for the first navigation and then the PWA for the onward journey.
Let’s first talk about AMP as a progressive Web app. Now I won’t go into detail in this talk, but it’s important to know that many sites won’t ever need things outside the boundaries of AMP. AMPByExample.com uses both AMP and progressive Web app features. It has a service worker, which allows offline access and more, and it has a manifest prompting the “add to home screen” banner.
And when a user visits AMPByExample.com from search, then clicks on another link on that site, you navigate away from the AMP cache to the origin. The site still uses the AMP library, of course. But since it now lives on the origin, it can use a service worker, prompt to install, and so on. On the second click you can do all of those things.
AMPByExample uses that technique to do just that, but I had some fun on AMPproject.org, the actual homepage of AMP, and thought, “What if we have the service worker intercept requests and just insert more stuff into the page?” Because a service worker can do that. With a service worker you can really decide, hey, I’m going to take that thing from the server and I’m going to modify it.
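A sketch of that response rewriting looks like this. The transform is a plain string function so you can see exactly what changes; the injected `/mousetrail.js` path is a made-up example standing in for the DHTML toy.

```javascript
// Pure transform: splice a snippet in before </body> (or append if absent).
function injectSnippet(html, snippet) {
  return html.includes('</body>')
    ? html.replace('</body>', snippet + '</body>')
    : html + snippet;
}

// Browser-only wiring inside the service worker.
if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('fetch', (event) => {
    // Only touch top-level page navigations, not images, CSS, etc.
    if (event.request.mode !== 'navigate') return;
    event.respondWith(
      fetch(event.request)
        .then((res) => res.text())
        .then((html) =>
          new Response(
            injectSnippet(html, '<script src="/mousetrail.js"></script>'),
            { headers: { 'Content-Type': 'text/html' } }
          ))
    );
  });
}
```

Because the AMP cache serves its own copy of the page, it never sees this worker, which is why the cached version still validates.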
I’ve been feeling a little nostalgic, so I thought, hell, why not throw in some ‘90s DHTML magic? And I found this brilliant animated mouse trail code that, like a good wine complements a meal, would probably complement my website completely.
And so here we go. This is the page with the service worker in place. Now the page still validates, since the AMP cache doesn’t see the service worker. A click out reveals this majestic upgraded experience, which I think is a huge improvement.
First of all, don’t do that at home. It’s not good. Second, this covers how to go from AMP to PWA by basically just using AMP as a PWA.
But now the more interesting bit: transitioning a user smoothly from AMP to progressive Web app. There are two ways of combining the two, steps I personally call AMP up and AMP down. AMP up is the background bootstrapping of your progressive Web app shell while the user is enjoying your AMP page. Then AMP down describes reusing AMP as a data source for your progressive Web app.
Now let’s start with AMP up. The basics with AMP up are that the first click will always be an AMP, usually served from the AMP cache, but any links on that page will navigate to your progressive Web app. The concept is relatively simple. Normally that second click would still be considerably slower than the instant feeling of a preloaded first click to your AMP page, but there’s a powerful component baked into AMP for this, and it’s called amp-install-serviceworker.
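Using the component looks roughly like this in your AMP HTML. The URLs are placeholders; `src` installs the worker when the page is served from your origin, and `data-iframe-src` points to a small installer page for when the AMP is served from a cache, where the worker can’t be registered directly.

```html
<!-- In the <head>: load the amp-install-serviceworker component. -->
<script async custom-element="amp-install-serviceworker"
  src="https://cdn.ampproject.org/v0/amp-install-serviceworker-0.1.js"></script>

<!-- In the <body>: install /sw.js in the background while the user reads.
     example.com URLs are placeholders for your own origin. -->
<amp-install-serviceworker
  src="https://www.example.com/sw.js"
  data-iframe-src="https://www.example.com/install-sw.html"
  layout="nodisplay">
</amp-install-serviceworker>
```

By the time the user clicks through to your origin, the service worker has ideally already pre-cached the PWA shell, so the second click feels fast too.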
This is what I call AMP up. But now you’re in the progressive Web app, and chances are you’re using some Ajax-driven navigation that fetches content via JSON. You can certainly do that, but now you have this crazy infrastructure need for two totally different content backends: one generating AMP pages and one offering a JSON-based API for your progressive Web app.
Think for a second about what AMP pages really are. They’re not just websites. They’re ultra-portable content units. It’s a data format.
The AMP team asked itself the logical next question. What if we could dramatically simplify backend complexity by ditching the additional JSON API that you would use for the progressive Web app and instead reuse AMP as the data format? We started with a proof of concept many months ago and iterated on it for quite a while, rewriting many parts of AMP to make this a reality. This is actually the first time we’re truly talking about it.
How did we do it? Well, one easy model would be to simply load AMP pages into iframes in your progressive Web app. But iframes are really slow, and the browser needs to recompile the AMP library every time you load a new iframe, over and over and over.
Today’s cutting edge Web technology offers a better way, and that’s Shadow DOM. In the old world, the AMP library’s world view was actually pretty simple. You had one window, one AMP library instance per window, and one document, so one, one, one. If you go to another AMP page, you get the same thing over again.
In the new world there’s one window, one instance of the AMP library, and multiple documents. This results in super fast transitions between AMP documents, and the library only has to be compiled once. The process looks about like this: the progressive Web app hijacks navigation clicks, does an Ajax request to fetch the requested AMP page, puts the content into a new shadow root, and then calls attachShadowDoc on the AMP runtime for that AMP instance. This is how that experience could then look.
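Those steps can be sketched as follows. This assumes the Shadow AMP runtime (`https://cdn.ampproject.org/shadow-v0.js`) is loaded and exposes `AMP.attachShadowDoc`; the `#content` container id and the link-hijacking rule are illustrative placeholders, not the team’s exact implementation.

```javascript
// Pure helper: only hijack same-origin links; external links navigate normally.
function shouldHijack(href, origin) {
  try {
    return new URL(href, origin).origin === origin;
  } catch (e) {
    return false; // unparseable href: let the browser handle it
  }
}

// Browser-only wiring (skipped outside a browser).
if (typeof document !== 'undefined' && typeof window !== 'undefined') {
  document.addEventListener('click', (event) => {
    const link = event.target.closest('a');
    if (!link || !shouldHijack(link.href, location.origin)) return;
    event.preventDefault();

    // 1. Fetch the requested AMP page via Ajax.
    fetch(link.href)
      .then((res) => res.text())
      .then((html) => {
        const doc = new DOMParser().parseFromString(html, 'text/html');

        // 2. Swap in a fresh host element for the new document.
        const host = document.getElementById('content');
        host.innerHTML = '';
        const shadowHost = document.createElement('div');
        host.appendChild(shadowHost);

        // 3. Compile once, attach many: the already-loaded AMP runtime
        //    renders the fetched document into the shadow root.
        window.AMP.attachShadowDoc(shadowHost, doc, link.href);
      });
  });
}
```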
Even cooler, to keep it super simple, we’ve added a conditional CSS class on shadowed AMP documents. The inner AMP document gets that class on its body, so you can automatically hide stuff like headers in that embedded mode.
Now here’s something I call the AMP Konami code, maybe. It’s a combination of AMP up, AMP down, left, right, all the things; a more advanced pattern that is kind of a wrap-up of the technique. We have a pretty good experience now, but if you’re in the progressive Web app, copy a link, and share it on Twitter, that link will open the progressive Web app directly. For a new user who doesn’t have a warmed up service worker cache, it obviously won’t feel instant, because the app shell is not cached yet on that other user’s device.
That too is a problem we can solve in the final step of that development journey. In the setup you see here, you have your AMP pages on a domain, and then you link to other pages in a separate URL space for your progressive Web app. Pretty straightforward.
Instead of creating that separate URL space like we did before, we reuse the existing AMP URLs to load the progressive Web app on your site’s origin. The first request goes to the AMP cache, but the second request actually happens on the origin. What this means is that we can intercept that second request during navigation. The service worker can see that request and say, “Hey, I’m just going to replace the entire AMP page with the progressive Web app.”
To actually look at some code, all we need to do is listen for navigation requests in the service worker. Then, instead of serving the cached AMP page, we serve the cached progressive Web app shell, which then does an XHR to fetch the requested AMP doc. As soon as the page gets the progressive Web app shell, it instantly starts that XHR for the AMP document the link pointed to. Once the whole progressive Web app shell is initialized on the client, it almost works like HTTP push: chances are the browser has already finished loading the AMP page.
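A minimal sketch of that service worker logic, assuming the shell is pre-cached at a hypothetical `/shell.html`:

```javascript
// Pure helper: is this request a top-level page navigation?
function isNavigation(request) {
  return request.mode === 'navigate';
}

if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('fetch', (event) => {
    if (isNavigation(event.request)) {
      // Serve the PWA shell instead of the AMP page itself. The shell then
      // reads location.href and fetches that same AMP doc as its content
      // (via XHR, attaching it as a shadow doc as shown earlier).
      event.respondWith(
        caches.match('/shell.html').then((shell) => shell || fetch(event.request))
      );
    }
  });
}
```

The fallback to `fetch(event.request)` matters: if the shell somehow isn’t cached yet, the user still gets the plain AMP page.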
What does this mean? It means we have one AMP, one progressive Web app, and a single request to get to everything: a single request to get to the AMP cache version of your page, and a single request to get to the progressive Web app with the new AMP content already loaded. I really like this.
Best of all, we now progressively enhance our AMP pages with our progressive Web app, ensuring that no matter what, your users will get a super fast experience. Browsers that don’t support service worker will simply see AMP pages. Now you might think, okay, this is kind of cool in this part of the Web, but what does this mean for mobile Safari? Does that mean my progressive Web app will never be seen on mobile Safari?
Well, we actually have a solution for that as well. It’s called fallback URL rewriting. What this means is that you land on an AMP page and then, if we detect that there’s no service worker available in that browser, we rewrite all the links on the page that would originally point to the same URL space to a fallback, legacy URL, something like pwa.mydomain.com. On that page you won’t get the same benefits of the service worker, but we will still iframe the progressive Web app in advance in a hidden iframe, just using the browser cache to cache the application shell. Again, you won’t get the same performance benefits, but it’s way better than nothing, and any subsequent click on that page will still lead to the progressive Web app.
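The rewriting idea can be sketched like this. The `pwa.example.com` origin and the `data-pwa-link` attribute are made-up placeholders; the real AMP component will have its own conventions.

```javascript
// Pure helper: map an origin URL onto the fallback PWA origin,
// keeping path and query string.
function rewriteForFallback(href, pwaOrigin) {
  const url = new URL(href);
  return pwaOrigin + url.pathname + url.search;
}

// Browser-only: only rewrite when service worker support is missing.
if (typeof document !== 'undefined' &&
    typeof navigator !== 'undefined' &&
    !('serviceWorker' in navigator)) {
  for (const link of document.querySelectorAll('a[data-pwa-link]')) {
    link.href = rewriteForFallback(link.href, 'https://pwa.example.com');
  }
}
```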
Now this says “Coming Soon,” but the cool thing is that I literally just saw this morning that one of our engineers has created the pull request for this, so it’s coming very soon.
Now I’ve shown this before on a screen, and this is actually a pretty smooth React-based demo that one of the AMP team members has built. I want you all to try it out and just give it a spin to get inspired. But also look at the code, really. Look at how the whole shadow DOM is used in that case. Grab a photo if you’d like and check it out later.
All right, so to wrap up, PWAMP is pretty great, I think. That was the original name I came up with for it. No one was able to pronounce it, so I now call it Progressive Web AMP, but I like PWAMP. It has a nice ring to it.
We’ve successfully combined AMP with a progressive Web app, and now the user always gets a fast experience no matter what. Your site is progressively enhanced. You have less backend complexity, because you only need one backend, one data source, and you profit from the built-in performance of AMP everywhere, even in a progressive Web app.
If you want to learn more, here’s another screen to take a photo of, if you’d like. We have that React sample app. You can learn a lot more about everything in progressive Web apps at developers.google.com/web, where we write about all this stuff, and MDN is a pretty great resource as well, and then AMPproject.org for everything AMP specific.
Please, and this is actually important to say. I didn’t have this in any of my slides, but there’s a common misconception that AMP is a Google project. It’s effectively Google led right now, because most core contributors are Googlers, but it’s completely open source, and we’ll listen to your feedback. Unlike progressive Web apps, which are a standardized approach, AMP is really just a library, and everyone can engage, so please file an issue, a bug report, or a feature request, anything you have to improve it. Even the documentation is open source.
I really can’t wait to see what you build. This is the first time we talked about this pattern, and this was kind of the world premiere of it. I really hope it sticks. I really hope it works. I want to hear from all of you if you can make use of it, if it actually works in practice, so thank you and stay in touch.