#btconf Munich, Germany 15 - 17 Jan 2018

Vitaly Friedman

Vitaly Friedman loves beautiful content and does not give up easily. From Minsk in Belarus, he studied computer science and mathematics in Germany and discovered a passion for typography, writing and design. After working as a freelance designer and developer for 6 years, he co-founded Smashing Magazine, a leading online magazine dedicated to design and web development. Vitaly is the author, co-author and editor of all Smashing books. He currently works as editor-in-chief of Smashing Magazine in the lovely city of Vilnius, Lithuania.

Want to watch this video on YouTube directly? This way, please.

New Adventures in Responsive Web Design

With HTTP/2, Service Workers, Responsive Images, Flexbox, SVG and the Font Loading API now available in browsers, we are all still trying to figure out just the right strategy for designing and building responsive websites just in time. We want to use all of these technologies, but how can we use them efficiently, and how do we achieve it within a reasonable amount of time?

In this talk, Vitaly will be looking into a strategy for crafting fast, resilient and flexible responsive design systems by utilising all of those wonderful shiny web technologies we have available today. We'll also talk about dealing with legacy browsers and will cover a few dirty little techniques that might ensure that your responsive websites will stay relevant, flexible and accessible in the years to come.

Transcription

[Music]

Audience: [Applause]

**Vitaly Friedman:** Good morning, everyone. Oh, you have no idea how intimidated I feel right now. Toby is awesome. Marc is awesome. The speakers are awesome. You are awesome. I feel weird, to be honest with you.

What Marc didn’t mention is that the two events I spoke at beyond tellerrand were the worst speaking engagements I had; not because Marc is horrible, but because these were the worst talks I’ve ever given. Right?

Male: I don’t….

Vitaly: Yes, they were pretty bad, so we can get only better from there. It’s not because I try to be worse and worse every time. But, for some reason - things. It doesn’t matter, but let’s do it better this time. I really want to wake you up.

Today, we’re going to look into some of the interesting new adventures in responsive design. More specifically, I want to cover some of the things that many of us might be doing already and many of us will be doing for sure by the end of this year, maybe even tomorrow. All right?

Some of the things that I’m going to look into are actually coming from the work that we did when redesigning Smashing Magazine. It was a very long and painful, painful and very long, and very painful journey, but we actually got there. Again, just in case you don’t like the red, you can always turn it off. Just saying. Just saying. But, today it’s not about that. It’s about some of the lessons that we learned along the way.

I wanted it to be a little bit more fun, so I decided maybe it would be interesting to make it a game. Who is up for playing a game in the morning? Anybody?

Audience: Whoo!

Vitaly: Very exciting audience. I get that.

Audience: [Laughter]

Vitaly: That’s okay because I brought some stuff. I brought chocolate.

Audience: Whoo!

Vitaly: Oh, now we’re talking. [Laughter] That’s way better already.

Audience: [Laughter]

Vitaly: I’m going to pose some challenges. The game is going to be called Responsive Adventures.

[Jeopardy theme music]

Vitaly: I’m going to pose some challenges or questions. And, if you think that you know the answer, you can just shout. Right? If you shout, you have a chance, a really good chance of getting a chocolate from Lithuania. They’re not particularly great, to be honest, but they’re exotic. All right? How often do you eat a chocolate from Lithuania? Yeah. Or, I can also throw a book at you, but that’s really up to you at this point. All right?

Now, are you ready to play?

Audience: Yes.

Vitaly: Yes. Now, perfect timing, isn’t it? I really did time it. Let’s choose how dirty we want to play.

[Stranger Things theme music]

Audience member: Awesome!

Audience member: Awesome!

Vitaly: I know it’s morning and we had a party yesterday. Who wants to take it easy? It’s okay. It’s a safe place.

Audience: [Laughter]

Vitaly: Easy? Anyone? Nobody wants to take it easy. All right. Everybody wants a chocolate now. Medium? Cool. Really, this is. Hardcore? Wow, all right, so let’s go hardcore. We want to go all the way. You know there is no way back, and it’s not really I can press buttons and stuff. It’s the video, you know. But, that’s okay. It gives me an input of where we’re going to go.

Let’s start. Let’s start with something very simple. What can be more simple than text, right?

[Rocks falling]

Vitaly: Plain text. If you think about text, what can you do with text on the Web today? There is not much you can do. When you think about text, what would be the best way of compressing it? What would be the ultimate way? Like, if you had the task to optimize a landing page like hell, making sure it’s as fast as humanly possible, or machine possible, what would you do?

Audience: Subset….

Vitaly: Subset. Well, we’re not talking about fonts here; just plain text. Let’s just talk about simple, plain text.

Audience member: GZIP.

Vitaly: GZIP. What else? Let’s go deeper, like way deeper.

Audience: (Indiscernible)

Vitaly: Sorry?

Audience: Zopfli.

Vitaly: Zopfli. All right. This is -- we’re getting somewhere. Who was that? All right. Ready?

Audience: [Laughter]

Audience member: Aim….

Vitaly: I don’t take responsibility here.

Audience: Whoa!

Vitaly: Whoa! [Laughter] Anybody else wants a chocolate now just like that? No? Okay. Well, maybe next time.

All right, so there aren’t many things we can do. One of them is, of course, we can use reliable things that we used to all use in the past like GZIP, but we can also go a little bit deeper.

In fact, GZIP is the most common compression format, and it’s pretty much everywhere. We all use it, I’m sure. Its most common implementation is zlib, and it uses a combination of LZ77, which was invented in 1977, and Huffman encoding. Of course, each compression library like zlib also has preset quality settings, so we could have the highest level of compression, let’s say nine, or level one, which is going to be faster but is not going to compress as well.
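
As a rough sketch of that speed/size trade-off, here is what the preset levels look like with the stock gzip CLI (the file name is made up for the example; Zopfli’s `zopfli` CLI produces gzip-compatible output and could be swapped in for the level-9 step, at a much higher compression cost):

```shell
# Create a sample text asset (repetitive text compresses well)
seq 1 5000 > asset.txt

# Level 1: fastest, weaker compression; level 9: slowest, strongest
gzip -1 -c asset.txt > asset.l1.gz
gzip -9 -c asset.txt > asset.l9.gz

# Compare the sizes: level 9 should be the smallest
wc -c asset.txt asset.l1.gz asset.l9.gz
```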

Now, as developers, we care about two things. We don’t care just about the size. We care about the compression/decompression speed as well. Obviously, the transferred file size is the critical part. But, if something is taking too much time to compress and we need dynamic resources to be loaded on the fly, this is not going to work, right?

There are two new options: one of them is Zopfli and the other one is Brotli, conveniently presented by our dear friends at Google in Zurich, who like cheese and bread. Zopfli can be thought of as a way to do a very good, but slow, deflate or zlib compression. Backwards? Yes, it’s backwards compatible, which means, if you’re using GZIP, you can just turn it on or use Zopfli instead, and it’s going to be backwards compatible, so nothing else is needed. If you take one thing away from this in terms of text, it’s probably that we can actually go ahead and turn on Zopfli today.

But, if you want to go a little bit further than that, you can also use Brotli, which is a whole new compression method or format. For that, we need browser support as well. It’s future compatible, but it’s not backwards compatible. If you want to support Brotli, you can’t just flip it on and leave it; you also need to fall back to GZIP. If you think about browser support, it’s getting there. It’s not like, wow, it’s somewhere out there. It’s probably something that most of us will be using by the end of the year.

If we look into the benefits of these two things, we’ll find out that very often anything that is plain text--HTML, CSS, JavaScript, SVG--can only benefit from using both of them. It’s really quite simple, actually, if you think about it. You just advertise that you’re supporting Brotli. Then you need to compress with Brotli. Then you need to provide a fallback with GZIP. For example, for CSS, we’d have a GZIP version and a Brotli version.
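
As a sketch, this is roughly how that negotiation could look in an nginx configuration, assuming the third-party ngx_brotli module is compiled in (directive names per that module):

```nginx
# Serve pre-compressed .br files to browsers that advertise
# "Accept-Encoding: br", and fall back to gzip otherwise.
brotli            on;
brotli_comp_level 4;     # fast enough for on-the-fly HTML
brotli_static     on;    # prefer pre-built .br files for static assets

gzip              on;
gzip_static       on;    # prefer pre-built .gz files as the fallback
```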

In terms of strategy, because not everybody can just go ahead and turn it on because we need to support all the browsers as well, we need to know one thing. Namely that, of course, Brotli and Zopfli are better in terms of compression, but they’re much, much slower in compression speed. That’s something that we always have to care about.

If we are looking at something like jQuery or any kind of application, we will find out that the savings are quite fundamental. We can sometimes save up to 30%, 40% in terms of transferred file size, but we will also find out that a lot of time will be spent on the actual compression. If you have dynamic assets, this is not going to fly. But, overall, Zopfli and Brotli are definitely much, much better.

In terms of strategy, what can we do? Well, we have two kinds of assets: the static assets and the dynamic assets. We can pre-compress static assets with Brotli and GZIP at the highest level because we have time. When we serve them, we’re going to compress HTML on the fly with Brotli at level one to four, so it’s going to be quite fast. Of course, the main area where we can benefit from this is CDNs as well. We need to check if Brotli is supported at the CDN level, too. Then we need to use some sort of content negotiation: if Brotli is not supported, we fall back to GZIP.

That’s pretty much the best we can do unless we want to go dirty because what we also can do, of course, is start to compress the text by maybe replacing class names with random emoji or some sort of random, slightly shorter class names - things like that. We can also use the Save-Data client hint header to say we want to serve less data -- maybe not necessarily for text, but for images -- to browsers, let’s say, on low-end devices. This is kind of boring. This is the best we can do today.
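
For reference, the Save-Data client hint is just a request header the browser sends when the user has opted into data savings; the host name here is a placeholder. A request and a lightweight response might look like this:

```http
GET /hero.jpg HTTP/1.1
Host: example.com
Save-Data: on

HTTP/1.1 200 OK
Content-Type: image/jpeg
Vary: Save-Data
```

The `Vary: Save-Data` response header matters so that caches keep the lightweight and full variants apart.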

[Stranger Things theme music]

Vitaly: Now, images. All right, so we’re going to flip through for the first two levels or three levels quite quickly because I think at this point it’s not that exciting.

What can we do with images today? What if you had to design or to build a landing page that has to perform so fast, that it would be blazingly fast, and nobody would be able to even blink when the page starts loading? What if you need to serve that particular hero image as fast as you humanly could? How deep are you willing to go? What can you do?

Audience: (Indiscernible)

Vitaly: Reload? Sorry? Inline it. Well, it depends if it’s a one-megabyte JPEG. Source set.

One of the things that all of us, I think, will be using, of course, is responsive images. In fact, if we look at the state of the art, we’ll find out that, at the 90th percentile, websites will have 5.4 megabytes of data sent, and 70% of it will be images. Of course, responsive images are wonderful, wonderful work; the Responsive Images Community Group has got us somewhere. In fact, there are tools now, for example the responsive image breakpoints generator, where we can basically upload an image and it’s going to generate meaningful breakpoints and different images for you. You can even define a size step and things like that. You can download all the images in the appropriate sizes, and it also generates the picture markup for you. That’s pretty wonderful.
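
The markup such a generator produces looks roughly like this (file names and breakpoints are made up for the example):

```html
<picture>
  <source media="(min-width: 1024px)" srcset="hero-1600.jpg">
  <source media="(min-width: 600px)"  srcset="hero-1000.jpg">
  <img src="hero-600.jpg" alt="Hero image">
</picture>
```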

They even have a really fancy API where you can have gravity defined so it can automatically crop an image, for example, focusing on a face, focusing on a building, or something of your choice, which is getting us somewhere, but we still need to go a bit deeper. We can because, what if you have a problem like this? Some of you might have had it before: you have an image, and it’s a big, fancy, wide, huge image, and it has a big drop shadow. If you need to display it on a landing page, you have a couple of choices. You can use WebP. You can use JPEG. You can use PNG. You can use GIF if you feel like it, but please don’t. [Laughter]

Audience: [Laughter]

Vitaly: But, what can you do then to really make sure it loads super-fast? Because, if you’re thinking about PNG and JPEG, we have a problem. With JPEG, because of the drop shadow, which you might not even notice, the image will be quite blurry. The gradient in the back will be quite blurry because JPEG is not designed for this kind of effect; it’s designed for photography. With PNG, yeah, you can make it work, but it’s going to be a really big image. What do you do?

Audience member: (Indiscernible)

Vitaly: Yes. Who said it first? I don’t know. Who wants a chocolate? All right. Here we go. [Laughter]

Oh, sorry. I’m not that good at that.

Audience member: …and I still missed it.

Vitaly: Oh, that’s okay. One thing that we all probably would do here at this point is just separate the responsibilities. We can say: JPEG for what JPEG is good at, and PNG for what PNG is good at. The transparency and the shadow will be PNG, and the image itself will be JPEG. Then we’ll put both of them inside an SVG container, with one of them as a mask on top of the other, and serve that. Why? Because it’s going to be smaller. These two images combined would be about 271 kilobytes against potentially one megabyte if you choose to serve PNG. But, that doesn’t go deep enough. Everybody can figure that out.
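
A minimal sketch of that SVG container, with hypothetical file names: the JPEG carries the photo, and a greyscale PNG acts as the alpha mask.

```html
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     viewBox="0 0 600 400">
  <defs>
    <mask id="shadow-mask">
      <!-- greyscale PNG: white areas stay visible, black become transparent -->
      <image width="600" height="400" xlink:href="alpha-mask.png"/>
    </mask>
  </defs>
  <!-- the photo itself, compressed as JPEG -->
  <image mask="url(#shadow-mask)" width="600" height="400"
         xlink:href="photo.jpg"/>
</svg>
```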

Sarah mentioned the contrast swap technique yesterday, which goes in this area where we compromise the quality of an image for the sake of performance. We can say some things compress fairly well. For example, if we remove the contrast and then increase it with CSS, we can actually save on the bandwidth. Basically, we just take an image. We remove the contrast. We save it with the proper compression. Then we serve it. Then we increase the contrast via a filter. This will help us shave off potentially--it depends on the image, of course--300, 400, 500 kilobytes. Of course, if somebody chooses to save that image, they will end up not with what they expect.
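
The contrast-swap idea can be sketched in CSS like this (the selector and the filter value are assumptions; in practice you would tune the value per image to match the contrast you removed before export):

```css
/* The image file was saved with reduced contrast so it compresses better.
   Restore the contrast at render time with a CSS filter. */
.hero-image {
  filter: contrast(1.3); /* assumed value; tune per image */
}
```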

Of course, you can also go a little bit deeper with that. There is another technique, which is also quite similar. If you take an image that is larger in dimensions but saved at a much lower quality, and display it at the same size as a smaller, higher-quality image, the larger image can actually end up with a smaller file size than the “properly” sized one. What does that mean?

If you take this image here, which is beautiful, pixelated, and horrible to look at--I hope, at least--it’s 600x400 pixels. It has pretty much the worst quality you can get. But, I display it at 600x400 here. The file size is 7K. Now, we could go ahead and shrink it down to 300x200, and we can compare it against an image which is natively 300x200, saved in Photoshop with a decent quality, let’s say 60%, 70%, 80%.

You’ll find out that most people, if they just look at the images on their mobile screen or anywhere else, will not be able to tell the difference. But, this one on the left is 21K and the one on the right is 7K. You can’t go ahead and do it for everything because, basically, you’re misusing the browser’s memory at this point. If somebody chooses to save that image, of course, they will not be particularly happy with it either. But again, if you have a landing page and you want to deliver that image fast, this is how far you can go.
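
In markup, the trick is simply to display the oversized, low-quality file at a fraction of its intrinsic size (file name hypothetical):

```html
<!-- A 600x400 JPEG exported at very low quality, displayed at 300x200.
     The browser's downscaling hides most of the compression artifacts. -->
<img src="photo-600x400-lowq.jpg" width="300" height="200"
     alt="Landing page photo">
```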

It can go much deeper because there are other things that also compress very well. If you compare this image against this image, you’ll find out that the other one, the second one, is approximately 75 kilobytes smaller. Why is that? What’s the difference?

The difference is blurring. Blurring compresses really well. If you want to really optimize that image, you might blur out unnecessary details. But, it can go deeper because there are two kinds of JPEGs on the Web, right? There are sequential JPEGs and progressive JPEGs.

If you think about sequential JPEGs, these are the ones when you start loading an image and it comes from top to bottom like dit-dit-dit-dit - dit - dit - dit -- done. But then, you also have progressive ones where you just see an image right away and then it gets better over time. This happens because we have scan levels encoded inside that image.

Tobias Baldauf actually spoke at btconf, too, like three or four years ago. He said, “Well, we can actually go ahead and try to play with the scan levels for that particular image that you want to ship.” Instead of just using default scan levels, which might not be optimal for that particular image -- because, essentially, we want to ship that image fast just to make sure it’s recognizable fast enough -- we can just go ahead and have a little weekend fun and play with matrices. Everybody likes playing with matrices over the weekend.

We can say, let’s take a look at the coefficients and figure out what would be the right choice for that particular image because, basically, we don’t want to show just anything first; we want to show something that’s meaningful. And so, we can play with the scan levels. We can make sure that even on the second scan level -- not the fifth, not the sixth -- we can already see the structure. Of course, the colors are a bit off at first, and the only difference between the scans that follow is that the colors get clearer and sharper. This can also save us, depending on the image, 30, 40 kilobytes.

There are two options that we can use for compressing JPEGs these days. There is Adept and there is mozjpeg, which would be my default choice when it comes to compression. Another one that Google has come out with recently, because they like bread and cheese, as we found out, is Guetzli, which is again an open-source JPEG encoder that you can go ahead and use, which is wonderful.

I have good news. We have a bonus level coming up. Who is excited about the bonus level?

[Rocks falling]

Vitaly: Now, we all love design systems, we all love pattern libraries, and we all love looking at these beautiful artifacts like that. We can create beautiful, dynamic systems where things just scale up beautifully and scale down beautifully. As we are working on Smashing, this has become one of the most crucial things that we had to deal with. If you think about building something like this where you want to make sure that every single component grows naturally and scales up and down beautifully without you having to do too much work, how would you make it work?

Now, there is this common thing that many of you, if you’re doing CSS and JavaScript stuff, will know. We can just sneak in a Trojan horse at the root of the component. Let’s imagine we want the component to grow and shrink. How would we do that? Let’s say we just set a font size on the root of that component, and every single subcomponent within it is defined using em units. Once you change the font size in a media query, or in any other way, for example with calc, it will actually just grow beautifully.

This is how it’s going to work, right? You define that component once. You define the border-radius, the padding, and everything else in em units, and then it’s going to scale up if you just change the font size of the root. There’s nothing groundbreaking here.
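
A minimal sketch of that Trojan-horse pattern (class names and values are made up for the example):

```css
/* One knob at the root; everything inside is sized in em,
   so it scales with the component's root font size. */
.component {
  font-size: 16px;
  padding: 1em 1.5em;
  border-radius: 0.25em;
}
.component h2     { font-size: 1.5em; margin-bottom: 0.5em; }
.component .label { font-size: 0.875em; }

@media (min-width: 800px) {
  .component { font-size: 20px; } /* the whole component scales up */
}
```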

Please talk to me.

Audience: Yes.

Vitaly: Yes, right? But I thought this is not good enough because, I mean, yeah, but we still need to change it all the time. Wouldn’t it be beautiful to create some kind of self-scoped component, which would just grow naturally like with element media queries, but without them - in a way?

Audience member: Tell us.

Audience: [Laughter]

Vitaly: You get the chocolate. You know that?

Audience: [Laughter]

Vitaly: I like you. Another one.

Audience member: In the middle; throw it in the middle - somewhere else.

Vitaly: Anybody wants a chocolate? All right. Here we go. Gees. Okay. That’s okay. I have more later.

But, the point is this has become part of my journey for the last one and a half years. What if we could define some sort of a function for a component to make sure it grows and shrinks beautifully on its own? But, we never want an uncontrolled growth. We don’t want, like I say, a font size of a heading to become one pixel on a small screen and then grow to 500 pixels on a large screen. We want it to be a bit more controlled, more like this, right? You start growing somewhere, and then you stop growing somewhere too.

If you wanted to kind of map font sizes per breakpoint against the viewport width, you’ll find out that, for every component, you can define a certain curve that would actually define that experience. For example, you could have a heading which would be 22 pixels at a 600-pixel viewport, 24 pixels at an 800-pixel viewport, and things like that. If you wanted to define that experience and define that behavior, you would need to do something like this. This is a curve that describes this behavior. This could be expressed like this.

There is no frickin’ way you can do it with CSS. That’s -- I think--

Audience: [Laughter]

Vitaly: Well -- hmm. There should be a way, but we don’t want to go there. It’s just probably crazy. Well, we’re getting there.

You could potentially go ahead and say, “Okay, I can’t describe that particular perfect curve, but I could try to approximate it by breaking down that line into intercepts, or into paths which are close enough when put together, basically by doing this.” Right? Because, why not? Right? It makes perfect sense.

Again, the idea being you define the behavior of a component once and then it just does what it’s supposed to do. It would take a little bit of work to make it work, but you could, using these kinds of strange things. But, we don’t want to go there, I think.

But, we thought maybe we could do something similar after all. Particularly, I wanted to avoid media queries as much as I could. Like this, right? [Laughter]

Why? You might be wondering, “What the hell? What did media queries do now?” There is nothing wrong with media queries, but every single time you have a media query, that also means you have to maintain that media query, because we don’t just design a few snapshots; we look at the whole spectrum of experiences. As we’re working with designers, or a designer is working with us--depends on who we are--we have these states where we have to interpolate between ideal states: maybe an ideal ratio where everything is perfect on the small screen and everything is perfect on a large mockup screen as well, but then we have to interpolate the in-between. Basically, whichever website we take, every time we add a breakpoint it’s because this is where things break. That means we will need to add more maintenance long-term.

As you see here, there’s this area where things are just a little bit off; here things start dropping, so what do we do? We add a breakpoint, and then we add another breakpoint. Depending on how perfectionist we are, we add more breakpoints. Then it becomes really difficult to maintain.

Now, Mike Riethmuller from Australia came up with this formula that actually describes the behavior of a component that grows between two different viewport widths. It’s maybe not straightforward, but it actually does the job because you’re using, basically, calc on font size. You say, “Well, this is my minimum, 16 pixels, so even on a small screen I’m going to have 16 pixels. But then, I want to define the growth, the perfect growth that I want to have.” The rest of the expression describes the difference between maximum and minimum: if we have no space left to grow, at the small end of the range, we stay at the minimum, 16; and if we have all the space to grow, we reach the maximum, so it’s going to be 24.

We are basically spreading the growth over the viewport range that is left to grow over. This is what 100vw - 400px stands for, divided by the entire viewport range that we have. You don’t have to get it right away, but it actually does make sense once you start playing with it.
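
The formula, written out as CSS (using the 16px-to-24px example over a 400px-to-800px viewport range; selector and numbers are illustrative):

```css
/* 16px at a 400px-wide viewport, growing linearly to 24px at 800px */
h1 {
  font-size: calc(16px + (24 - 16) * ((100vw - 400px) / (800 - 400)));
}

/* stop the growth outside that range */
@media (max-width: 400px) { h1 { font-size: 16px; } }
@media (min-width: 800px) { h1 { font-size: 24px; } }
```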

I thought, “Okay, so we can do crazy things like this, for example.” Sorry. Let me go back. We can do crazy things like this. That’s, well, not even crazy. You just start growing somewhere; you stop growing somewhere, too. That’s pretty cool, and that doesn’t require that much effort.

What if we take it to the next level? Of course, you can apply it to everything. Just in case you’re wondering, calc -- of course calc is well supported, except you know who. We’re used to it at this point, I guess.

We can also apply it to other things -- line height, obviously, of course -- with a technique called CSS locks. We could say, “Okay, I don’t want this uncontrolled difference in terms of line height, being very tall and then being very small. I want it to be a bit more controlled.”

You can say, “All right, I want to grow automatically, naturally between 1.3 and 1.5, and specifically in terms of viewports, between 21 and 35 ems, using exactly the same formula,” and you do the same thing. You end up with this, where things just grow and shrink beautifully. Most importantly, the line height grows and shrinks beautifully.
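
That CSS lock, spelled out with the numbers from the example (selector is illustrative):

```css
/* Line height grows from 1.3 at a 21em-wide viewport to 1.5 at 35em */
p {
  line-height: calc(1.3em + (1.5 - 1.3) * ((100vw - 21em) / (35 - 21)));
}
@media (max-width: 21em) { p { line-height: 1.3; } }
@media (min-width: 35em) { p { line-height: 1.5; } }
```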

You can take it to the next level because you can say, “What if we use everything, use it for everything?” You could have some sort of dynamic layout, which of course has its downside because it’s not controlled anymore. Every single component kind of grows a little bit on its own.

There are two use cases we had to consider. One of them is fluid behavior: when we want to turn it off and when we want to turn it on. You could, of course, go ahead and say, “Let’s just use calc on the body font size or the HTML font size. Then everything is going to be defined in rem units, and so it’s going to be scalable.” Then, if you need a fixed container, you can, of course, set the font size of that fixed container in pixels, for example, and everything inside it can be defined in em units. And so you end up with this, where you have a fixed container, which doesn’t change, and everything else just scales down and up beautifully.

That’s great, but what if you want it the other way around? Because this is kind of risky: you basically have everything scalable then. You can say, “Okay, I’m going to put font-size: 100% on the HTML element, and then I’m going to create this fluid container in here with this calc madness, if you like. Then everything inside of it is going to be defined in em units.” You end up with another scenario, which actually works just fine too.
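
Both scenarios can be sketched side by side (class names and values are assumptions):

```css
/* Scenario 1: fluid by default; opt out with a fixed container.
   Components sized in rem/em scale with the root. */
html   { font-size: calc(16px + 8 * ((100vw - 400px) / 400)); }
.fixed { font-size: 16px; } /* children sized in em stop scaling here */

/* Scenario 2: fixed by default; opt in with a fluid container.
html   { font-size: 100%; }
.fluid { font-size: calc(16px + 8 * ((100vw - 400px) / 400)); }
*/
```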

I thought, what if we take it and use it everywhere, and I mean everywhere? If we look into this layout, every single thing in here is defined using calc, every single thing: the border-radius, the image width, the image height.

Also -- oh, this is the wrong video, I think, to play. I’m sorry. It must be here somewhere.

Basically, if you look into this layout and how it grows, every single thing, including even the size of the SVG icon on the border at the top, and the spacing and margins in headings, is all defined using it. If you’re looking into fluid components, you don’t have to do it at this large a scale, but you could go ahead and use this technique as well.

[Glass breaking]

Vitaly: Everybody is awake, I hope. Web fonts, all right, everybody is using Web fonts, or most are using it.

[Loud metal clashing]

Vitaly: It’s a bit too much, maybe. I’m sorry.

We all are using Web fonts, I think, or have used them at some point. The question is, how can we optimize the experience of serving fonts? There are so many things we can do today with Web fonts because it used to be simple, and it turned into this, just to be able to put a link.

Audience: [Laughter]

Vitaly: That’s it. It was that easy. Now, you really need to care about how you’re actually serving fonts. These are the options that we have. Now, what if you really had to take it to the next level, going all the way to make sure that the fonts are available or that the content is displayed right away? In Web font; in a Web font, of course.

Asleep?

Audience member: Sub-setting.

Vitaly: Sub-setting, but I think that many of us will be doing sub-setting anyway. Let’s go deeper.

Okay, so there are in fact quite a number of options. First of all, we need to see what’s actually happening behind the scenes, so we will define the @font-face. We used to have this bulletproof @font-face syntax, which we can probably drop now because we don’t really need to serve Web fonts to IE8, I hope, please. We can really drop EOT for sure. We also had this IE fix, which was just disgusting, but we can just go ahead and, of course, use WOFF2. That’s probably something that we all will be doing anyway. Then we need to figure out, of course, how we’re going to load the fonts.

Now, if you look at the experience of how things are, we’ll find out that we have two kinds of experiences by default: the FOIT experience and the FOUT experience. FOIT is the flash of invisible text, where you just sit there and see nothing, and then the fonts kick in. FOUT is the flash of unstyled text, where you see the content right away in a fallback font and then it switches.

Basically, if we slow it down, we’ll find out that there is a timeout that some browsers implement and others just don’t. Safari, of course, just -- you know. While Chrome and Firefox wait for three seconds for the Web font to be downloaded and, if it doesn’t arrive, display a fallback font, Safari is like, “I’ve got time.”

Audience: [Laughter]

Vitaly: [Laughter] I have nowhere to go. I mean, right? You have nowhere to go, so you just wait and see.

In fact, when it comes to Safari, we just need to wait for the font to download. But, if you slow it down, this is kind of the experience. Three seconds pass and then we actually see the fallback font, and then eventually things start showing up: the first Web font, the second, the third, and so on.

This can be quite weird because this feels so horrible to me because I see the content. I want to see the content. Then I go into view source to read an article because I can’t wait for the Web fonts. Three seconds is a lot of time. Right? Okay, that’s just me. But, that’s fine.

If we want to take control of this and make it a bit better, there are a couple of options we can go ahead and use. One of them is, of course, to use the Font Loading API, where we can define new FontFace objects and then listen for some events. We can say, “If the fonts have loaded, then we’re going to add a class. If they haven’t loaded, or failed to load, then we’re going to add a class as well.” All right?
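
A minimal sketch of that load-then-add-a-class pattern (the family name “Mija” and the class names are assumptions; the dependencies are passed in as arguments so the logic can be exercised outside a browser):

```javascript
// fontsApi is expected to look like document.fonts (a FontFaceSet),
// rootEl like document.documentElement.
function flagFontsLoaded(fontsApi, rootEl) {
  return Promise.all([
    fontsApi.load('1em Mija'),
    fontsApi.load('bold 1em Mija')
  ]).then(
    function () { rootEl.classList.add('fonts-loaded'); },
    function () { rootEl.classList.add('fonts-failed'); }
  );
}

// In the browser you would call:
//   flagFontsLoaded(document.fonts, document.documentElement);
// and style against .fonts-loaded / .fonts-failed in your CSS.
```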

Then we also have these new properties, which are supported to different degrees, but they give us some sort of control over how fonts are going to be rendered. With the font-display property, we can mitigate or eliminate FOIT or FOUT, depending on what you want. Essentially, what we want, of course, is to display the content right away and then switch to the Web font when the font is available, but not block rendering.
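
In CSS, that looks like this (family name and file name are assumptions):

```css
@font-face {
  font-family: "Mija";
  src: url("mija.woff2") format("woff2");
  font-display: swap; /* show fallback text immediately,
                         swap in the web font once it arrives */
}
```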

In fact, it’s really, really well supported, but we also can do something else. Now, one of the options, which has been used for quite some time, is this critical FOFT with data URI, where you have a two-stage render. The first render happens with a subsetted version of the font: maybe the minimum, A to Z in lower case and upper case, numbers, and punctuation. That’s pretty much it.

Then we use session storage. You may be using service workers to actually make sure that the fonts are properly cached. Then we’re going to serve them on the second visit. On the first visit, we show the content right away. This way it’s critical. It’s like critical CSS, but for fonts.

But, I thought, “That’s not good enough.” Let’s create a new acronym because everybody likes acronyms, right? What about C2SFOFTRWDURISW? That sounds like a memorable thing to have, right? It’s a critical, two-stage FOFT render with data URI using service workers because you can, right? [Laughter]

We have this two-stage render. I think, if you really want to go deep, I don’t think you can do better than that. Again, this is something that we developed together with Zach Leatherman. Well, he actually worked on it for Smashing. We have the roman first, and then we load the rest later. We subset everything to the minimum. Then we load the subset font inline first, and then we load the full font and put it in a service worker. Then on the next visit, you can actually get the font straight from the service worker, so you don’t have a delay.
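The service-worker half of that idea might look roughly like this -- a sketch, assuming fonts live under a `/fonts/` path and a cache name of `fonts-v1`, not the actual Smashing implementation:

```javascript
// Assumed cache name for this sketch.
const FONT_CACHE = 'fonts-v1';

// Pure helper: treat woff/woff2 files under /fonts/ as cacheable fonts.
function isFontRequest(url) {
  return /\/fonts\/.+\.woff2?$/.test(url);
}

// Only register the handler inside a worker scope with the Cache API.
if (typeof self !== 'undefined' && 'caches' in self) {
  self.addEventListener('fetch', (event) => {
    if (!isFontRequest(event.request.url)) return;
    event.respondWith(
      caches.open(FONT_CACHE).then((cache) =>
        cache.match(event.request).then(
          (hit) =>
            // Cache-first: repeat visits get the font with no network delay.
            hit ||
            fetch(event.request).then((response) => {
              cache.put(event.request, response.clone());
              return response;
            })
        )
      )
    );
  });
}
```

Stage one still inlines the subset font; this worker only keeps the full fonts around so the second visit skips the network entirely.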

Of course, this will require you to run on HTTPS and actually use a service worker. In fact, it really does help to eliminate that first waiting thing. But, of course, things get even better. You might ask yourself at this point, “Why do we worry so much about fonts? What’s the big deal? They are stored in the HTTP cache, and so that’s good.”

Well, how many of you remember this article? Okay. I’m really old. I’m sorry.

This is an article from 2007. When I read it, I was really kind of shocked in a way because it tells us that caches don’t stay populated very long. It’s an article from Yahoo in 2007. It said that 40% to 60% of Yahoo’s users have an empty cache experience, even though assets have been cached, because users are accessing many Web pages and we know, when we access a page, something gets out of the cache while something gets added to the cache.

Facebook repeated that exercise in 2016, I believe, in late 2016. They found out that nothing changed. The things that we put in the HTTP cache don’t stay populated in the cache for a long time. What they found out is that, on average, 44.6% of users are getting an empty cache. That’s one thing.

Another thing is that the cache hit rate goes to 84.1%. However, caches don’t stay populated for a long time. So, if somebody accesses a page, yes, the assets will be in the HTTP cache. But, it’s very unlikely that they’re going to stay there for the second visit. Very unlikely, meaning that 60% of the time they’re not going to stay there. That’s the state of caching. We need to figure out how to cache better.

If you look into Chrome’s cache hit rates, you’ll find out that some things are staying in cache longer than others. When it comes to CSS, for example, there’s a good chance it is going to stay in cache a bit longer. But, one of the first things that are going to be dropped are fonts, which means that if somebody comes to your site quite a lot, they will see the flash of the fallback font and then the switch to the Web font quite a lot.

Luckily, we have options today. One of them is that we can use the new, the shiny new--just a second--font-display property. These slides are from Monica Dinculescu. Her talk is incredible. She’s basically speaking about everything you can do with Web font loading performance today.

And so, when it comes to font-display, this is a new property that allows us to control granularly how exactly we want the fonts to load in CSS. Normally, if you just use auto, it will fall back to whatever the browser decides to do. But, we can also use font-display: block, which basically imitates FOIT: we have nothing for three seconds and then we have a fallback. Then eventually, when the Web font kicks in -- when it gets in the cache or it’s downloaded -- we are going to have a switch.

But also, we can have swap, which basically imitates FOUT: don’t wait at all. Just display the content right away in the fallback. Then, when the font kicks in, go ahead and switch.

But, you can also have fallback, which makes that blocking period very short -- like 100 milliseconds or so -- giving us a chance to actually get the font from the cache. After that, we fall back to the fallback font and, if the font does arrive within about three seconds, we actually display the Web font. If it doesn’t, we stay with the fallback.

Then, finally, there is optional, which basically says: we keep it invisible again for 100 milliseconds but, if the Web font hasn’t downloaded yet, we’re just going to display the fallback. We’re not going to switch to the Web font at all. The Web font is still going to be downloaded into the cache, so the content is going to be displayed in the Web font on the next visit. We have control of all these behaviors. Of course, optional, in many ways, would be the one to prioritize.
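The four behaviors map directly onto values of the `font-display` descriptor; a minimal `@font-face` sketch, with the family and file names as placeholders:

```css
@font-face {
  font-family: 'Mija';
  src: url('/fonts/mija.woff2') format('woff2');
  /* swap = FOUT-like; block = FOIT-like; fallback = short block, short swap;
     optional = short block, no swap (the web font shows on the next visit) */
  font-display: optional;
}
```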

But, another thing you can also do there to minimize this effect of switching from fallback to Web font -- because this could be quite jarring if you see it a lot -- would be to go ahead and use Monica’s Font Style Matcher, which allows you to make sure that when you display the content in the fallback font and then the Web font, the difference is not that huge. You can play with letter spacing and things like that to make sure it’s a bit less jarring.

Of course, the big thing that everybody is speaking about these days is this monster. Everybody talking about variable fonts loves this idea of cubes. Whenever you look into variable fonts, everybody is talking about cubes. With this in mind, we can, of course, make sure that the font itself doesn’t need, let’s say, four, five, six weights that will be downloaded separately. It all lives in one single file. And then we can use everything between regular and extra, extra bold, which is really wonderful.

Cubes, many cubes, and this is something that’s really looking fantastic. Variable fonts are getting there. It’s not like it’s out there. We will be seeing them this year as well.

[Loud metal crashing]

Vitaly: When it comes to this kind of stuff of what we can do today, there is one thing that always goes missing, and it’s third-party scripts.

[Loud clashing]

Vitaly: Okay, that was a bit loud.

Third party scripts: who likes third party scripts? It’s like friends you invite who trash your house.

Audience: [Laughter]

Vitaly: Right? You invite them, and you say, “Do whatever you want. Feel welcome.” After all, they also paid to get to you. They bring booze and stuff.

But, in the end, they also are mean, aren’t they? Because they’re not just coming along and saying, “All right. I’m going to sit here and display a little advertising.” They also bring their drunk friends with them. They bring a lot of friends with them and, usually, they bring even people you don’t know and resources you don’t trust.

Third-party scripts can be so damaging on so many levels because, even if you have a team which is responsible and has a dedicated performance effort, third-party scripts can just ruin it. Their metrics aren’t really influenced by the user experience, so it’s not like they care about the actual time the user spent on a page. They care about whether it’s visible and whether it’s clicked. That’s pretty much it.

There is a problem, though, where you often can’t really control it. There are a couple of things we can do, of course. We can defer loading with either async or defer -- and it’s probably best to prioritize defer in most cases. We also can use resource hints, like preload to preload a resource, or preconnect to warm up some of those hosts. But, the problem is, because they bring along many friends we don’t know, we can’t really warm up the connection because we don’t know what resources are going to be downloaded.

We need to understand the impact of scripts first, and this is where we can create, first, a request map to find out what is actually happening on the page, because JavaScript is actually really, really costly and expensive -- not only in terms of downloading, parsing, and executing everything, but also in terms of memory use and also blocking rendering, obviously.

There are tools. We can create maps like this to really see what is happening and what exactly is actually affecting page load. If you look into a regular, average website like AutoGuide, where you will find quite a lot of drunk friends coming along, and look into what is actually happening behind the scenes, you can, of course, use dev tools to find out what’s happening in the network. But, it’s not really clear at this point because I need to go through all of this. Of course, as you can see, some of them are loaded with high priority, which is just a no-go.

And so, the request map allows you to dive into it, where you can actually see what is happening. You can also look on the left and filter things, which is really great. This is a tool at requestmap.webperf.tools, which is a really cool thing to just generate it for your site.

Of course, you can use dev tools for it. Harry Roberts actually wrote a really brilliant article on third-party scripts and how to deal with them. He also mentioned this bottom-up view in Chrome dev tools where you can say, okay, I want to really look at what is happening and what scripts are actually causing trouble. As you can find out here, DoubleClick costs one second -- one full second goes on DoubleClick here. This could help you make a case about why performance is important and limit the scope of third-party scripts.

But, your hands are often tied. There is a new business requirement coming in, and you have to deal with it somehow. What are you going to do with it? Well, we need to know one thing first. We need to know what happens if that script doesn’t work for some reason, or that server doesn’t respond for some reason and, also, we need to limit the scope of what happens when it does work.

Luckily, some browsers help us with that or some tools help us with it. I’m really happy to see that Chrome is going to introduce a native ad blocker starting from February 15th. It’s horrible for our business, by the way, because we still have advertising. But, from February 15th, Chrome is going to have its own ad blocker, which means that many of the ads that kind of appear and do strange things -- [clears throat] -- strange things--

Audience: [Laughter]

Vitaly: Okay, got there. All right? [Laughter] Strange things will just be disabled by default. In fact, it’s all, you know, “Just be cool, chief,” and you don’t know what exactly is going to happen. But, the native ad blocker will block ads deemed unacceptable by the Coalition for Better Ads, an industry group that counts Google and Facebook among its members, so you probably will see Google ads. I’m pretty sure about that.

Audience: [Laughter]

Vitaly: That sounds plausible, right? It will be a default starting from February. Okay, but that doesn’t really help us because we still will have to deal with some third-party scripts and widgets like Fantastic Weather you have to display on a publishing site and stuff like that.

Now, the first thing we can do is to say, okay, what happens if that thing fails? But, not meaning 404 fails, but what if it just hangs, it’s just out there? We can block some requests, of course, to see the impact of things, but we can also use a black hole. Everybody likes a good black hole.

There is, in fact, an endpoint that makes requests disappear. It’s 72.66.115.13, which was set up by Patrick Meenan, so you can just route all the fancy things on your manager’s computer--that would be a fun exercise--to that black hole, and so everything is going to just time out forever. It’s going to take 100 seconds to reply. You can actually see what happens. Are those third parties a single point of failure for you or not?
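One simple way to route a third party into that endpoint for testing is a hosts-file entry -- the hostname here is just an example:

```
# /etc/hosts -- send a third-party host to Pat Meenan's blackhole endpoint
# so its requests hang, simulating a provider that never responds
72.66.115.13  ads.example-provider.com
```

With that in place, you can load your site and watch whether the page blocks on the hanging request or degrades gracefully.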

All right, so that’s good for testing, but what can we actually do to limit the scope of what third-party scripts can do? We can use our good old friend, and we should be using our good old friend, iframe. Who likes a good iframe? Anybody? I will give you a chocolate.

Audience: [Laughter]

Vitaly: Here we go. Ready? [Deep breath] [Laughter] Oh--

Audience: [Laughter]

Vitaly: I have more.

Audience member: (Indiscernible)

Audience: [Laughter]

Vitaly: I really want to--

Audience: [Laughter]

Vitaly: Okay. Anybody else? I have so many that I want to get rid of them, to be honest. Oh -- ooh, this is tough.

Audience: [Laughter] Ooh!

Vitaly: Not bad, huh?

Audience: [Laughter]

Vitaly: Just -- I know. [Laughter] Anybody else? No. Okay. We have books. No, you don’t?

Audience: [Laughter]

Vitaly: That would be scary. Probably not books. All right.

We can use iframe. Iframe is the best option because the scripts then will run in the context of that iframe and have no access to the DOM. There are a couple of things we can do. One of them, if you are using a service worker already -- I highly encourage you to experiment with this -- is that you can race resource downloads with a timeout. You could say: if something hasn’t responded within a certain timeframe, I’m going to just drop it, and I’m going to log it and analyze what happened for the next time. You can just define that timeout and see how it’s going to work.
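A minimal sketch of racing a fetch against a timeout inside a service worker -- the three-second cutoff and the empty fallback response are assumptions, not the actual Smashing code:

```javascript
// Resolve with the request if it wins the race, else with the fallback.
function withTimeout(promise, ms, fallback) {
  const timer = new Promise((resolve) => setTimeout(() => resolve(fallback), ms));
  return Promise.race([promise, timer]);
}

// Only wire this up inside a worker scope with the Fetch API available.
if (typeof self !== 'undefined' && 'caches' in self) {
  self.addEventListener('fetch', (event) => {
    // Drop slow third-party responses after 3s instead of blocking the page;
    // a 408 response here stands in for whatever fallback suits your site.
    event.respondWith(
      withTimeout(fetch(event.request), 3000, new Response('', { status: 408 }))
    );
  });
}
```

You would typically also log which requests hit the timeout, so you can decide which third parties to cut.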

You can also use, of course, the sandbox attribute on the iframe. Everybody who is involved in serving ads will probably use this as well. You can actually sandbox the privileges that those external scripts can have. You can prevent scripts from running or, rather, allow only those things that you really want to happen -- alerts, form submission, plugins, and things like that. You can constrain it to the bare minimum.
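In markup, that might look like this -- the src is a placeholder, and the idea is to grant only the capabilities the widget genuinely needs:

```html
<!-- Third-party widget isolated in its own browsing context. Without
     allow-same-origin it cannot read cookies or storage for its origin. -->
<iframe src="https://widget.example.com/embed.html"
        sandbox="allow-scripts allow-forms"
        title="Third-party widget"></iframe>
```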

It’s not just us fighting that war, to be honest. SafeFrame is one of these initiatives that came out, I think, three or four years ago. It’s a specification, or kind of a standard, that the ad industry came up with to provide an alternative to obtrusive ads. They even created a sample, a reference implementation, that all of you can go ahead and use today. The primary use for SafeFrame is to encapsulate external HTML content to really limit what those third-party scripts can do, while protecting the host from content that could otherwise inadvertently or purposefully affect the host site in unexpected ways.

If you look into the Interactive Advertising Bureau, which obviously also has a GitHub page because, well, GitHub, you will find a reference implementation that you can go ahead and use. It essentially specifies an API that provides a communication protocol between the host site and the external content -- it’s always limited -- and it tries to make sure that the publisher is safeguarded. That’s pretty cool.

But, another thing you can also do is to try to kill all of those things like OpenX and others because, in many ways, you really don’t need them. You don’t need to rely on them. Of course, you still have to serve advertising somehow, but you can use Intersection Observer for it, because the one main thing that advertisers want to know is the number of views -- and to display that ad only if it’s actually within the part of the page people are seeing.

You can use Intersection Observer for it. You can say: I can look into whether that component or that element that I have is actually in the viewport, whether it’s visible right now. If yes, then count it as a view. That’s essentially what we did to move away from OpenX, which I really hated for four frickin’ years, to just a simple Intersection Observer. Intersection Observer, observer -- it doesn’t matter.

All right. It was designed specifically for this problem, and that’s really great. It’s specifically designed to deal with ads and third-party things. I’m not talking about only ads being the thing here. If you’re serving CSS, for example, from a third party, or anything related to a third party where you just really don’t know what will happen--analytics, anything--you can actually control it way better. It allows us to observe changes in the intersection of a target element with an ancestor element or with the top-level document viewport.

You can say, if it’s visible, how much do I see? You can say 50%, 20%, whatever. You can have a very granular control about what’s going to happen. It’s super simple. Even I understand it. [Laughter]

You have a root here, which basically defines the window you’re looking at. Then you have a root margin, which gives you an opportunity to say: if the user is, let’s say, within 200 pixels of the ad, I can start loading some scripts or I can start doing something so that, when the ad is visible, something actually starts happening. The same can be used, for example, for animations, or for triggering some CSS classes, or anything on the client.

Then you also have a threshold where you can say, “Well, at this point the threshold is 1.0. It means that 100% of the entire thing is visible. If it’s 0.5, then 50% is visible.” You have good control over what is going to happen. Once you define that observer, you also have a callback where you can say, “Okay. If it’s visible, then do something.”

Then you, of course, will be targeting that particular thing. It could be a button. It could be an element. It could be anything. That’s pretty much it. You have those three properties that you can use, and you can really make it work quite quickly. It took us maybe four or five days to really move away from OpenX, which I’m really, really happy about.
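Putting root, rootMargin, threshold, and the callback together, an ad-viewability sketch might look like this -- the selector, the 200-pixel margin, and the 50% rule are example choices, not the actual Smashing code:

```javascript
// Pure helper: an impression counts once at least minRatio of it is visible.
function isViewable(entry, minRatio) {
  return entry.isIntersecting && entry.intersectionRatio >= minRatio;
}

// Guard so the sketch is inert outside a supporting browser.
if (typeof IntersectionObserver !== 'undefined') {
  const observer = new IntersectionObserver(
    (entries, obs) => {
      entries.forEach((entry) => {
        if (isViewable(entry, 0.5)) {
          // Report the impression here, then stop observing this slot
          // so we don't keep firing for an ad that was already counted.
          obs.unobserve(entry.target);
        }
      });
    },
    {
      root: null,            // observe against the viewport
      rootMargin: '200px',   // start work slightly before the slot appears
      threshold: [0, 0.5, 1] // fire as visibility crosses these ratios
    }
  );
  document.querySelectorAll('.ad-slot').forEach((el) => observer.observe(el));
}
```

The same shape works for lazy loading and animation triggers; only the callback body changes.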

Of course, one thing is important because we want to serve it within an iframe. If an iframe observes one of its elements, both scrolling the iframe and scrolling the window containing the iframe will trigger the callback. So, you don’t have to worry about that either.

There are many different use cases. You can have native lazy loading. You don’t have to worry about triggering -- like looking into, okay, where am I on the page -- and really measuring exactly what is happening. You can use it for infinite scroll. You can use it to report visibility of ads, to conditionally load assets, to trigger animations and, if you don’t need it anymore -- because the ad is already visible or, if you lazy-load images, the image is already there -- you also can unobserve. Just stop observing so you don’t keep looking into what’s happening on the page.

If you really want to polyfill it for other browsers, all the browsers, you can also use a polyfill for it. There are actually three or four available, and they all do the job. Browser support is actually, you know, there is no reason not to use it except, again, you know who, and you know who number two. [Laughter] But, we are used to them as well at this point, I think.

One more thing you can use, which is also available in CSS, is the contain property. If you have elements which are not visible but are kind of resource intense -- for example, you might have navigation, like an off-canvas navigation, but it’s going to be visible only once you click on a hamburger icon -- no, no, no -- on the menu button. Sorry. [Laughter] Of course. Then you can say: okay, isolate the scope. I don’t want the browser to even try to paint it at all. Paint it when you need it.

For anything like third-party widgets or off-canvas modules -- and container queries, hopefully, in the future -- we can actually contain the painting, not just the loading, with contain: strict. But, be ready: if it’s really intense, if it requires 400, 500 milliseconds to run, then you probably shouldn’t do it. But, if you don’t really need to display it, then you don’t really have to worry about painting it. You can actually contain it as well. But, at this point, it’s only Blink browsers supporting it, as it often is.

But, it’s still getting there, and there is a nice article by Michael Scharnagl--I don’t know if he’s here by any chance--explaining it as well. If you really want to learn a bit more about third-party scripts and how to deal with them, there are two great resources -- one article and one talk. One article here on SOASTA, and the other one is by Yoav Weiss on taking back control over third-party content, which is really, really cool for this kind of stuff. That really, really helped us a lot.

I’m told I’m running out of time, Marc. Do I have, like--? No.

Audience: [Laughter]

Vitaly: Okay. I have time. Let’s go.

Male: (Indiscernible)

Vitaly: No, I need maybe like five, seven minutes.

Male: Five minutes.

Vitaly: Five. Okay, because the one thing that I really wanted to cover is….

[Loud metal clanging]

Vitaly: And, I’m going to rush through it quickly, and I’m going to distribute the sweets later. But, I really want to kind of bring away or give you a couple of things that really worked for us and some things that really didn’t work for us.

Some of you might have seen this article a while back. Cutting the mustard used to be the performance technique that we all used at some point. Maybe many of us are still using it, right? No? Who doesn’t care anymore at this point?

Audience: [Laughter]

Vitaly: Okay, some people don’t care. That’s okay. This is the cutting the mustard technique I’ve been using for maybe four or five years. In fact, it helped us do something very, very simple. It helped us do feature detection to separate browsers into two groups--smart browsers and stupid browsers--which the BBC called HTML4 browsers and HTML5 browsers. It’s just a few lines of JavaScript. You can say: if 'querySelector' in document, and 'localStorage' in window, and 'addEventListener' in window, then we’ve got a modern browser. That’s fine. We can load advanced stuff and things like that. If it’s not the case, then we have a legacy browser. We could replace it at some point with just one line, which is checking for 'visibilityState' in document, because visibilityState often has exactly that cut that we need -- except, of course, IE9, potentially.
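The test itself, written so the check is a plain function that can be exercised outside a browser -- the bundle path is a placeholder:

```javascript
// The BBC-style "cutting the mustard" feature test over a window/document pair.
function cutsTheMustard(win, doc) {
  return (
    'querySelector' in doc &&
    'localStorage' in win &&
    'addEventListener' in win
  );
}

// In a real page: load the enhanced bundle only for "smart" browsers.
if (typeof window !== 'undefined') {
  if (cutsTheMustard(window, document)) {
    const s = document.createElement('script');
    s.src = '/js/enhanced.js'; // hypothetical path to the enhanced bundle
    document.head.appendChild(s);
  }
  // Legacy browsers get the core HTML/CSS experience only.
}
```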

The problem with cutting the mustard, which many of us are still using, is it doesn’t really reflect the state of performance as we need it to be understood. The cutting the mustard technique infers device capability from browser version, which is no longer something we can do reliably these days. Why? Because there are many cheap Android phones that run Chrome. They totally support pretty much everything that you were playing with but, because of reduced or limited memory or limited CPU capabilities, they are still not able to perform as well as high-end devices.

There are a couple of things we can do there as well. We can target low-end phones with the Device Memory JavaScript API. We can say, “If you have low memory, I don’t want to load this. I don’t want to load that.” For a while, we had a battery level API, which I was a big fan of, and then it was killed. There are good reasons for it, security reasons. But, it would also give us access to say, “If you are on a low battery, just maybe remove autoplay for video, remove parallax, remove maybe even Web fonts and things like that.” We can’t do it now with battery level, but we can do it with the Device Memory JavaScript API.
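A sketch of that idea with navigator.deviceMemory -- the 1 GB threshold and the 'lite'/'full' profile names are assumptions for illustration:

```javascript
// deviceMemory reports a coarse value in GB (e.g. 0.5, 1, 2, 4, 8).
function loadingProfile(memoryGB) {
  if (memoryGB === undefined) return 'full'; // API unsupported: assume capable
  return memoryGB <= 1 ? 'lite' : 'full';
}

// In a browser, pick the profile from the actual device.
if (typeof navigator !== 'undefined') {
  const profile = loadingProfile(navigator.deviceMemory);
  if (profile === 'lite') {
    // e.g. skip web fonts, video autoplay and parallax on low-memory devices
  }
}
```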

We need to look out for the devices that people are actually using because many of our customers will be using this kind of device, which is a Moto G4. This has become the testing device that many developers will be using as an average these days. The problem is not necessarily just the bandwidth -- even though we have to be really saving on bandwidth -- the problem is also the parsing time and the executing time. If you look at an average phone, like the Moto G4 here, it’s approximately, I think, 17 or 18 times slower in terms of parsing one megabyte of JavaScript compared to an iPhone 8. That is a significant difference. If you go even further down to the low end, you will see the difference is really noticeable.

If you look into a site like CNN, you will find out that there are significant disparities compared to, maybe, your experience on your laptop. Even if you emulate throttling on your machine, it is not the same as if you actually experience it on the phone. It’s really always better to have that phone, or one of those phones.

What can we do there? Now, one important thing to know -- and this is kind of the article I’ve been working on for, like, a month; there are many interesting things coming up now -- specifically, it’s important to know, if you’re looking at optimizing the experience: we’ve been looking at load events for a long time. We’ve been looking at speed index for a long time. Speed index is still important and first meaningful paint is still important, but what really matters, on a phone that is not optimal, is getting time to interactive within five seconds.

On a 3G connection -- a slow 3G connection with 400 milliseconds round-trip time -- this is something that Google has been advocating for over the last half year or so. Even within that timeframe of five seconds, you will need approximately 1.6 seconds for the DNS lookup, the TCP handshake, and the TLS negotiation if you’re using HTTPS, which you probably do because of HTTP/2. At 400 kilobits per second, which is, again, the slow 3G network, we can send about 170 kilobytes. That’s what fits within five seconds.

We talk about two kinds of critical budgets. One of them is the 14 to 15 kilobytes that we want to serve in the first round trip when the user accesses the page. But the other one is this: for the entire thing that you’re serving for the first initial page load, your entire file size budget is 170 kilobytes. That includes the framework. That includes the router. That includes state management. It includes all the utilities. It includes the app.

This is what we are looking into at this point because the problem, again, is not necessarily just downloading over the bandwidth. The problem is parsing time. There are many different things we could look into but, if you want to take something away from this, maybe take a picture of this, because this is, for me, a summary of things that I would probably do when we start looking into performance. One of them is, again, test on a device. That would be a Moto G4, a midrange Samsung, or a Nexus 5X. Throttling obviously matters. It’s a good idea not just to say, “Let’s throttle to 3G and we’re fine.” Also throttle the CPU -- namely, five times slowdown -- because this is exactly what you should be expecting on an average phone.

Then you have a speed index of 1,250 as a baseline, which is pretty decent. Then again, time to interactive on a slow 3G under 5 seconds, which means that you also have to keep in mind 14 kilobytes of critical CSS, like the critical first request. Then you have 170 kilobytes for pretty much everything to be sure that you are fast.

Male: (Indiscernible)

Vitaly: Yeah, I’m done, almost. Yes.

The one thing that is also important right now is that we should be caring not just, again, about the download time and this kind of stuff, but also about the CPU hit and the memory hit that we will have, because we have to deal with all these devices which are not iPhones. This is why all those fancy things that webpack has -- like tree-shaking, scope hoisting, code splitting, and all this stuff -- and even JSON tree-shaking recently with the release of webpack 4, are really, really important, obviously.

Audience: [Laughter]

Vitaly: We can use things like that, right?

Audience: [Laughter]

Vitaly: Yeah. That’s beautiful.

Audience: [Laughter]

Vitaly: Okay, so I’m being dragged, so I’m going to end with this. This is the story of 2018. This is what you should be experiencing. I think that we all will feel better after it.

There’s so much stuff happening, and I have trouble following every single Friday. It’s a nightmare. You don’t have to be perfect. It’s okay to be okay.

[Music, crane]

Vitaly: Fifty seconds.

[Music]

[Sizzle]

[Rattle]

[Music]

[Clicking]

[Crackle]

Vitaly: Yeah, so this is our world of frontend in 2018. You will never get things done just right. It will be okay if it’s not like that.

Audience: [Applause]

Vitaly: This in mind--

[Jeopardy theme music]

Vitaly: Thank you.

Audience: [Applause]
