#btconf Berlin, Germany 07 - 09 Nov 2016

Una Kravets

Una Kravets is a UI Engineer on the Creative Engineering team at DigitalOcean. She’s a technical writer, having written for various online publications such as A List Apart, Smashing Magazine, and Sitepoint. Una also co-hosts the Toolsday podcast and started both the DC and Austin Sass Meetups. She's a performance nerd, loves the open source community and listens to way too many audio books.


The Joy of Optimizing

Images are by far the greatest bottleneck to performance on the web, and with the average web page size now about 2.5MB large (images taking up 65% of that), we need to tame the beast. This talk will dive deep into image rendering and performance optimization techniques, and demonstrate how to implement them in your own work with practical examples to make your web projects noticeably faster.

Transcription

Una Kravets: This morning it was really hard for me to sort of justify coming here and talking to you all about the Web when there’s so much shit going on in this world. But I recognize that the only way that we can make a change is by starting with something small and with something that we have control over. If we all work on this, that, compounded, can make a really big, big change.

Who here works with images? Designers, developers, that’s pretty much all of you. That’s what this talk is about. It’s about not only making information available, but making the Web better and faster so people can access it, access your data. I think that we can all use a little bit of Bob Ross in our life right now. I know I can. I love him.

Welcome to the Joy of Optimizing. Let’s take a moment to relax and make some happy little pixels. The littler the better.

Bob Ross is the inspiration for this talk. He’s a famous oil painter and TV show host from the ‘80s. When preparing for this presentation, I tried to put myself in his shoes. It didn’t really work.

Audience: [Laughter]

Una Kravets: I had like a weird neck beard. There’s a little neck beard going on, so here’s a better version of who I am.

I’m Una. You can find me on the Internet, @Una: GitHub, Twitter, anything, really. I’m a UI engineer at DigitalOcean. I’m also living nowhere. Currently I’m a digital nomad and really avoiding checking into my flight back to the U.S.

Images on the Web. Images on the Web are great. This talk will go through a couple of things. We’re going to start with the problem, so how did we get here? How did we get to the state of the Web? Then we’re going to talk about the medium, so what’s coming next on the Web, the solutions, things that are automatic that you can write once and then forget about them, to manual optimizations for your larger images, then example time. Yay!

How did we get here, first of all? It all has to do with the technology and how it’s evolved. Here’s a graph of camera ownership over time. As you can see, it has increased exponentially. That yellow area is smartphones. It’s gotten cheaper to take photos, keep data, and share large files. If you’ve ever heard that quote, “The best camera is the one that you have with you,” that’s why we all have our cameras with us today.

This is where we are now. Now the cameras that we have with us take 12-megapixel photos. That’s the iPhone 7 stack. In fact, since you all have your cameras with you, let’s do this together. Let’s take a photo of anything. Take a photo of your friend. Take a photo of this amazing DJ. Let’s do it. Let’s do this together. I’m going to take a picture with you all. Let’s get some lights.

All right, get ready for my selfie. I’ve actually never done this, and I’ve spoken a few times. Yay!

Does everyone have a picture? Okay. So just one. If you took just one picture, you’ve now introduced two more megabytes of photos into this world. Times 500 of you, that’s a gigabyte of photos that we just produced together – if you took one, which I guarantee most of you took more than one because that’s just how photos work.

It’s like free. But is this free? Is this really free? What we’re doing is this. We’re introducing irresponsible imaging into this world.

If you put that photo on the Web today, unsized and uncompressed, it’s twice the size of the recommended first load of that website - twice the size. We’re doing this to people. And, more importantly, we’re doing this.

You might be thinking, “So what? Devices are getting better. Network speeds are improving.” And the classic, right, “It works on my system.”

Last month, for the first time ever, mobile and tablet usage eclipsed desktop usage. That’s a huge thing to recognize. There was a Google study from September of this year that Paul sort of talked about, and it surveyed over 10,000 mobile sites on 3G. Let’s play a game to see how well you have retained that content.

Does anybody remember what the average load time for mobile sites is over 3G based on that study? Just shout it.

Audience Member: Eighteen seconds.

Una Kravets: Eighteen? That was really, really close. It’s 19 seconds. That’s a long time.

Mobile sites that loaded within 5 seconds versus that average of 19 were observed to have what percent longer average sessions? Shout it out loud. I’ve got my captive audience here. What do you think it is?

Audience Member: (Indiscernible)

Audience Member: Thirty?

Una Kravets: Thirty. That’s what I heard. It’s 70% longer.

And then what percent of visits to mobile sites are abandoned after just three seconds? We didn’t get to 19. We didn’t get to five. We got to three.

Audience Member: Fifty.

Una Kravets: Fifty. That’s close. Fifty-three. You all did great. But this is still how I feel.

Here’s another fun fact for you. As of 2014, India was the third largest online market with more than 198 million Internet users, ranked only behind the U.S. and China. That is only projected to grow, grow exponentially over the years.

It’s interesting when you compare it to this graph. The average connection speed in the U.S. is 12.6 megabits per second, and the average peak is 57.3. Here in Germany it’s pretty close to that, maybe a little bit slower. There are other countries that are faster, other ones that are slower.

But if you compare that to India, this is so much slower than anything that we’ve ever experienced. Like Tim was saying the other day, we live in the easiest place in the world to access the Internet. Think about that.

Access to information should be a basic human right. A lot of the data that I talk about in this presentation is about e-commerce because it’s easy to get that kind of data funded. But by refusing to pay attention to performance, you’re restricting access to information for people who don’t have the fast connections that we are used to.

Back to the brilliance of Bob Ross: “You can have anything you want in this world – once you help everyone around you get what they want.”

The HTTP archive provides a service where they archive website data, and they do this bimonthly. The last one was on October 15th, and it showed that images take up the majority of the website, 64%. The average single JPG is 2.3 times bigger than the average JavaScript file. While people are arguing about including jQuery or not, you know, reduce one image request and have all the jQuery you want. I’m not saying to do that, but I’m just saying it as an example.

These images are the biggest bottleneck we have to fast Web performance, so let’s start to understand our tools, our paints. There are so many image formats. This is a list of 23 out of, like, over 70 that I found in my research for this talk, and every platform uses these formats differently.

Just look at emoji: Android uses BMP. On a Mac, you have SBIX, which is this extended BMP format. And on Windows you have COLR, which is a layered vector. So no one agrees on anything when it comes to image formats these days.

Now let’s look at some of my own happy little trees. I drew that. Yeah, I can’t compete with all the artists that are speaking here. But what this represents is that different images work better in different format types. What works well here is a large block of color. The reason is that different format types compress and store data differently. Here you can see the SVG and the GIF are the smallest produced when you have these solid areas of color. But when you have a lot of rasterization and gradation in your image, the JPEG wins.

Let’s briefly talk about these image formats and introduce some new ones. We’re going to talk about the GIF, the PNG, the JPEG. Then we’re going to talk about WebP and then two new formats, the BPG and the F-L-I-F, the FLIF, which I will probably mispronounce a few times.

The GIF: the GIF is something that we are probably all familiar with. The GIF stores 256 colors in addition to the raw image data. It doesn’t support alpha transparency, but it does allow for dithering. Dithering is added noise to reduce distortion, and it looks like this.

Now there are actually two types of dithering, and these different dithering types affect the size of your image. This is called a noise dither. This image comes in around 77 kilobytes. There’s also this pattern dither. Pattern dither is a little different. As you can see, it lines up in an even row. This one is 67 kilobytes, so you’re actually saving 10 kilobytes by changing the type of dithering that you’re using on your image. I also really like pattern dither because it reminds me of cross-stitching a little bit.
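A rough sketch of both dither styles, in Python: this is an illustrative toy, not any encoder's real implementation. Pattern (ordered) dithering compares each pixel against a small repeating threshold matrix, which is why it lines up in even rows; noise dithering compares each pixel against a random threshold instead.

```python
import random

# Classic 2x2 ordered-dither (Bayer) threshold matrix.
BAYER_2X2 = [[0, 2],
             [3, 1]]

def pattern_dither(pixels):
    """Threshold a 2D grayscale image (0-255) against a repeating pattern."""
    out = []
    for y, row in enumerate(pixels):
        new_row = []
        for x, v in enumerate(row):
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) / 4 * 255
            new_row.append(255 if v > threshold else 0)
        out.append(new_row)
    return out

def noise_dither(pixels, seed=1):
    """Threshold each pixel against random noise instead of a pattern."""
    rng = random.Random(seed)
    return [[255 if v > rng.uniform(0, 255) else 0 for v in row]
            for row in pixels]

# A flat mid-gray block becomes a regular checkerboard under the pattern:
print(pattern_dither([[128, 128], [128, 128]]))  # → [[255, 0], [0, 255]]
```

The regularity of the ordered pattern is also why it tends to compress smaller than random noise.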

After the GIF came the PNG. The PNG is this open source reaction to the GIF’s threatened licensing. The PNG comes in an eight-bit variety. It also comes in a true color variety, which is 24-bit or 16 million colors. Because of the better compression of the PNG, an eight-bit PNG is generally better than using a GIF in the first place. It supports alpha transparency, and that type of compression is called delta encoding.

PNG is also lossless, so it doesn’t lose quality when you save the file over, because it uses that delta compression. What it does is look at the difference between one pixel and the next, because most images have large areas of similar pixel values, like skies or, say, a black coat or black pants. This is all dark, right? It compares each pixel to its neighbor, hoping the difference will be zero, and then it saves just that difference. That decreases the overall file size.
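Delta encoding is easy to see in miniature. Here is a toy Python version of the idea behind PNG's "Sub" filter (the real filter works per byte per channel and has several variants): each value is stored as the difference from its left neighbor, so flat areas like skies become long runs of zeros that the compressor then squeezes down.

```python
def sub_filter(row):
    """Store each value as the difference from its left neighbor (mod 256)."""
    return [row[0]] + [(row[i] - row[i - 1]) % 256 for i in range(1, len(row))]

def unfilter(filtered):
    """Reverse the filter by adding each stored difference back up."""
    out = [filtered[0]]
    for delta in filtered[1:]:
        out.append((out[-1] + delta) % 256)
    return out

sky = [200, 200, 201, 201, 200]          # near-uniform pixel values
print(sub_filter(sky))                   # → [200, 0, 1, 0, 255]
assert unfilter(sub_filter(sky)) == sky  # lossless round trip
```

Nothing is discarded, which is why PNG survives repeated resaves.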

The JPEG also looks at the differences from one pixel to another, but then it discards the excess, so it rounds, and it will always resave that image and lose some data. Even if you’re saving a JPEG at 100%, you will lose detail. Sometimes when you save it at 100%, it actually increases the file size. Just keep that in mind.

If you look at the underlying data like me, then you can see a difference. With the PNG, you can see where those pixels are similar; there are going to be blank spaces, so you’re going to have less data in that file, whereas a JPEG will always rewrite all of the pixels. That’s where you can see how that large block of data is better compressed as a PNG than a JPEG.

JPEG is really, really smart. It goes through a several-step process for compression, and it starts with breaking up your RGB channels. Once you have those, you break the image up into luminance, your red chroma, and your blue chroma. Then you’re always going to be using an 8x8 pixel block. You go through a cosine transform. There are a bunch of algorithms involved here.

There’s a quantization table. The quantization table uses the quality that you’re setting when you save that JPEG to figure out what values it should be comparing each 8x8 pixel block to as it does that zigzag scan of those blocks. Then it will rewrite that based on the quantization that you set. It goes through a couple of other algorithms, and then it saves your new values, your new compressed values. This saves space on the server. Lossy isn’t a bad thing.
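The quantization step is where the quality setting bites. A minimal Python sketch, using the first row of the standard JPEG luminance quantization table as the divisors (the example coefficients here are made up): each DCT coefficient is divided by its table entry and rounded, which zeroes out the small high-frequency terms, and that rounding is exactly the data JPEG throws away.

```python
def quantize(coeffs, table):
    """Divide each DCT coefficient by its quantization divisor and round."""
    return [round(c / q) for c, q in zip(coeffs, table)]

def dequantize(quantized, table):
    """Multiply back out; the rounded-off detail is gone for good."""
    return [v * q for v, q in zip(quantized, table)]

coeffs = [520, 37, -29, 8, 3, -2, 1, 0]   # example DCT coefficients
table = [16, 11, 10, 16, 24, 40, 51, 61]  # standard JPEG luminance table, row 1

q = quantize(coeffs, table)          # → [32, 3, -3, 0, 0, 0, 0, 0]
restored = dequantize(q, table)      # → [512, 33, -30, 0, 0, 0, 0, 0]
```

A lower quality setting scales the divisors up, so even more coefficients round to zero and the file gets smaller.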

At smaller sizes, you can’t even really tell the difference between these images. You see pretty much the same thing at 10% as 100%. But the file is so much smaller, so you’re saving a lot of data being sent to your users.

The difference that you can see here, though, is in the histograms. If you look at those histograms, you can see that when you’re doing this sort of optimization, this really lossy JPEG, you eliminate a lot of those in-between shades, the transition colors. You keep those colors on the ends, so that bright red or that bright blue spike remains in there, but you’re losing a lot of the colors in between, so you have these really jagged lines between the histogram colors that are presented in this photo.

If you look a little closely, you can see the distortion here up close. As you’re rounding, you’re going to get some of this banding or artifacting, as it’s called. You can see it, like the peak of the mountain. But that’s really if you zoom in. This is sort of up to interpretation. Eighty percent quality is probably a good medium before you start to notice changes. Honestly, 60% to 70% is usually fine for Web projects.

As I was doing my research about JPEG, I kept hearing a couple of additional names: JPEG 2000, JPEG XR. The problem is nobody can really agree on a better JPEG implementation than we currently have. According to JPEG.org, there are, like, seven JPEG types out there. While people are complaining about JavaScript frameworks, just keep this in mind.

I realized the big difference between them is that JPEG 2000 is supported by Safari and only Safari. JPEG XR is supported by Microsoft, really: Edge and IE, and only those two browsers. Is this what we’re saying: use JPEG XR in IE and Edge, use JPEG 2000 in Safari, and for Chrome just screw JPEG, use WebP? Welcome to the Web in 2016. Chrome is just done with JPEG.

What’s wrong with JPEG? Nothing. It’s brilliant. It goes through a lot of awesome optimizations and transforms. But it’s old, and we can do better. This is the Web in 2016, and what we do is rewrite things.

WebP is the most promising new image format out there, and it was originally developed by a company called On2, which is a video codec company that Google then purchased and took over the project. That’s also where we get WebM from. They put a lot of work into this since 2010. It also supports transparency, like the alpha channel, so that makes it not just a better JPEG, but also a better version of PNGs.

You usually get around 30% savings, so like 10% to 30%, which is pretty good. The way that WebP works is by something called predictive coding. Predictive coding uses the values in neighboring blocks of pixels to predict the values of that block and then encodes only the difference, and that’s how video works.
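Here's a toy Python sketch of that idea (WebP's real codec has a whole set of predictor modes; this one just averages the left and above neighbors): predict each block from its already-decoded neighbors and store only the residual, because small residuals encode in far fewer bits than raw pixel values.

```python
def predict(left, above):
    """A simple predictor: average of the neighboring blocks."""
    return (left + above) // 2

def encode(grid):
    """Store each value as the difference from its predicted value."""
    residuals = []
    for y, row in enumerate(grid):
        r = []
        for x, v in enumerate(row):
            left = row[x - 1] if x else 0
            above = grid[y - 1][x] if y else 0
            r.append(v - predict(left, above))
        residuals.append(r)
    return residuals

def decode(residuals):
    """Rebuild each value from its neighbors plus the stored residual."""
    grid = []
    for y, row in enumerate(residuals):
        g = []
        for x, d in enumerate(row):
            left = g[x - 1] if x else 0
            above = grid[y - 1][x] if y else 0
            g.append(predict(left, above) + d)
        grid.append(g)
    return grid
```

For a smooth gradient, most residuals come out as tiny numbers like 1 or 2 instead of full pixel values, and that is where the savings come from.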

Take a look at those histograms. You can see here we have a TIFF versus an RGB histogram of the WebP, this lossy WebP. Even if we compare it to the JPEGs, it’s different. You see that there’s an even transition of these lines throughout. That’s kind of where you can see the difference of how these things are compressing.

Support isn’t super strong, but Safari and Firefox are both experimenting with it right now. That’s actually quite a big sign in the world of image formats because, as we saw earlier, literally nobody agrees on anything. The fact that they’re starting to experiment with this is really, really promising.

To convert to WebP, you have a variety of options here. You don’t need to take a picture. You could take a picture if you want. I’ll share these links after this talk. But, yeah, there’s a Photoshop plugin if you’re comfortable with that. I personally like using the ImageMagick CLI tool because all you have to do is this: Have Homebrew installed and then use that to install ImageMagick and WebP. Then in one magical line of code in my terminal, I can convert all of my JPEGs into WebPs inside the folder that I’m within.

Now I have a copy of JPEG and a copy of WebP. What I can do then is I could use a Polyfill if I want to. This Polyfill is called WebP JS. It supports IE6 and up. There was a study done by Andrian James and Matt Shull in a post for Smashing Magazine that determined this to be the most lightweight solution that works in all browsers.

However, it might be kind of weird to require JavaScript for your Firefox and Safari users to see your images, so you could do the conversion server side, or you can use the picture element, which is some HTML magic that we can use instead. With picture, it just looks like this. You can send your WebP source first in case the browser supports it. If it doesn’t, then you can send your image as a JPEG or PNG.

Picture support is pretty good. But if you do need to support Internet Explorer, there is a Polyfill called Picturefill, which is only eight kilobytes, that you can then implement and have picture work. Picture is really great. You can use it for a variety of things, as well as image sizing.

Just like WebP, WebM is a Google initiative now after they bought that video company, On2. It has a little bit more support than WebP right now. Edge is supporting it. Firefox and Chrome support it. You can convert it online like this.

This is what it looks like. It saves you almost half the file size from MP4 to WebM, which is huge, massive savings when you have larger videos. The best part of WebM is that you can use the native video element, which works in all browsers. With video, we don’t even need a Polyfill because we can send multiple sources. The browser will then pick a source that it supports, so more options, more power.

Now we’re going to talk about the experimental formats. I love this GIF. Also, that’s pattern dithering. Did you recognize that? It’s like the more you know.

These are some really promising experimental formats that may well be the future of image types. The first one is the BPG. Better Portable Graphics is what it’s called. This is an open source format that is said to be better than JPEG, JPEG XR, and WebP itself.

The purpose of this was originally to replace JPEG, and there’s a high compression ratio. I think where it really shines is at smaller file sizes. It’s a bit smoother. It doesn’t have that same artifacting. But it does have like a more jelly sort of look to it when it’s compressed. This is also based on that interframe encoding of the HEVC video format. That’s kind of the trend these days. We’re taking inspiration from video to compress our images with.

Alpha channels are also supported, so you do have that transparency option. But, unfortunately, there appear to be no plans to support it in a browser as of yet, so you can’t even find it on the “can I use” website. But there is an encoder. There is a Polyfill if you want to start playing with it. The possibilities are out there. It’s open source, so if you want to work on it and you’re a developer, there are people you can contact and be the future of images.

Then there’s the FLIF, which is the Free Lossless Image Format. This is like the newest kid on the block. On their website they claim to be smaller than literally all of the things that I mentioned in this presentation up to date.

They’ve done a lot of studies on image corpuses. The bottom two, the brown and the red, are the FLIF and how it compares to – here we have PNGs with a few different kinds of conversions and compressions. There’s a WebP in there. It seems to outperform all of them at basically every compression ratio.

The coolest thing about FLIF is its use for responsive images. You can encode a FLIF to have different variations in the same file, so a browser can start downloading the beginning of the file immediately, even before it knows exactly how much detail is needed. Then it can make adjustments based on that browser, so you don’t need to send multiple sources like you would with the picture element. You can just encode it within a single picture. That’s so cool. I think it’s cool.

On the Polyfill site it says, “Beta. Not tested on all browsers. Performance and size is still being optimized.”

Back to Bob Ross. [“Find freedom in this canvas.”]

That’s so many things. But as designers and developers, we’re used to all of the things in 2016. I think that’s great because the best thing will win.

Let’s talk about what we should definitely be doing now, how we can improve our websites now. This is the solutions and protips section.

The first thing that we can do is automate. When I think of automation, I think of ImageMagick. ImageMagick is this wizard. You can use it in a variety of ways. If you’re using Webpack in your build process, you can use the magic loader. There’s one for gulp. There’s one for grunt. If you just want to write straight NPM scripts, there’s a wrapper called node-imagemagick.

It looks like this. You can take an image, source data. You can format it. You can change the quality. You can change the size. There are a lot of options that you have there. This is sort of just write it once and then forget it, and you’re automatically sizing the files inside of your build system.

There’s also semi-automation. This is all about tools. There are three tools that I recommend in terms of semi-automatic image sizing.

The first one is ImageAlpha. In ImageAlpha you can pull your images in. This is a GUI. Then make adjustments based on the individual image to see what it looks like on different backgrounds, play with the color formatting. Also Photoshop Studio for Web is a really, really awesome image optimization tool.

You can use this in concert with ImageOptim, which is literally my favorite one because it’s magic. You just take a folder with images, drag it in there, and just watch the savings happen. This does a lot of optimization. It strips out metadata and makes adjustments for you, and you can’t really see any difference in your images before and after.

There’s a project that just came out called Yarn, and they have a website. On that website, I don’t know, I was feeling like I wanted to contribute to something. I took a look at their images, did a little audit, and just ran them through ImageOptim. Doing that alone saved over 200 kilobytes of image data after they were already optimized. It makes a really big difference just running it through. It takes a few minutes, even seconds.

I say ImageOptim always and often, IOAO. I’m going to try to make this happen. Can we make this happen? Okay, maybe not.

The next one is called SVGOMG. This is for SVG files. You can drag them into your browser. There are a few format options on the side that you can select, and then just save it out. Yeah, SVGOMG is great for all of your SVG needs.

Finally, I take us to the manual image optimization. This is when, you know, let’s design a website, right? You’ve been commissioned, so you have this option or you have this option. That yellow area is just a huge buzz kill to your page performance. This is how we can improve that performance. We’re going to take a look at these individually.

This is a quote about human communication, but I think it really relates to the Web as well in terms of perceived performance. It’s not what you send; it’s how people receive it. We hear about perceived performance a lot. It’s the idea of: does this thing feel like it’s loaded? Does it feel fast? That is super important in terms of performance optimization.

Continuing our little game, let’s play spot the difference. These are the same size, but they are different in one little way.

Audience Member: The fence….

Una Kravets: The fence isn’t the same.

Audience Member: It’s blurry.

Una Kravets: It’s blurry. Yes, and you can see that if we overlay it. There’s a little bit of change in that area. Blurring an image makes that image a lot smaller, and it’s because of the way that JPEG is compressed, which we kind of just went over.

Medium renders small, blurred versions of images and includes them as placeholders to show until the image actually loads. You have just kind of this blurry area until the image loads in and then animates in. It provides a ton of savings.

If we start with this image, 409 kilobytes, we can save that down to 30 pixels big. That’s now less than one kilobyte. Then we can scale it up to fit the container and use something like a CSS blur filter, and we can load in the full image when it finishes.
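The placeholder trick in miniature, as a Python sketch (illustrative only, not Medium's actual pipeline): shrink the image to a handful of pixels, ship that tiny version first, and let CSS scale it back up with a blur while the real image loads.

```python
def downscale(pixels, factor):
    """Naive nearest-neighbor downscale of a 2D grayscale image."""
    return [row[::factor] for row in pixels[::factor]]

image = [[y * 10 + x for x in range(6)] for y in range(6)]  # a 6x6 "photo"
thumb = downscale(image, 3)  # 2x2 placeholder: 36 values down to just 4
print(thumb)                 # → [[0, 3], [30, 33]]
```

The placeholder carries a tiny fraction of the data, and the blur hides how little detail it actually has.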

Facebook also does this with our headers. They actually have a 200-byte sort of limit here because of how they get their images from their GraphQL databases. If a file is less than 200 bytes, they kind of trick the database into not realizing that it’s an image, and they don’t have to send an extra image request. They use this technique. They load it on the page. It feels like it’s fast. You have that perceived performance, and it saves time and data for the users. They can access their content immediately.

This is similar to something called Progressive JPEGs, if you’ve heard of Progressive JPEGs in terms of how images will load in. With Progressive JPEGs, you kind of get it for free. Things load in quality layers. But you are really getting those same savings in terms of initial render time because these are still going to be pretty big. But Progressive JPEGs are great for optimization. You can do like a Photoshop task when you save for Web. There’s a little checkbox for Progressive. This is also a really good thing to do. So the protip: embrace the blur.

Protip: Embrace Blur

What’s the difference here? Just kidding. This is pretty obvious. No color data. If you are sending a black and white image, it is significantly smaller than sending a full color image.

Black and white doesn’t have to be boring. It saves on image sizes, but you can play with it, so hello blend modes. I have a blend mode of hard light here with this pink layer applied. You can play around with these things and dress up your black and white images while saving in terms of performance.

I have this open source library called CSSgram that I built. I had my Wi-Fi turned off, so it’s not loading here, but go to CSSgram. I just pulled in the CDN link. You can see that even though you have a black and white image, you can apply filters on top of it, different layers, before and after elements, and sort of recreate filters on the Web.

I wanted to see what the impact of that additional paint would be in comparison to the image savings that we’re getting, right? I did a little test, and I sent a colored image, a black and white image, and then I sent that same black and white image with the blend mode applied to it to create this pink effect. I found this. The black and white image was the fastest. Then the blend mode image, the black and white image with the blend mode on top of it came in pretty close at 2.2 seconds versus 2 seconds. The full color image loaded in at 2.5 seconds, so we’re still getting that savings despite that additional time to repaint the blend mode.

I thought, okay, this isn’t a fair comparison. Let’s look at the images alone. No live blend mode. Let’s benchmark that again with the actual color data that you would get if you saved this in Photoshop as an image with a pink overlay on it. You’re sending 115 kilobytes, not 190 kilobytes. What happens there? It still renders faster to send a black and white image with the blend mode than an equivalent colored image pre-designed and saved in Photoshop. I thought that was pretty cool. Less colors, less problems.

Protip: Less Colors, Less Problems

Let’s talk about animated images. I work at DigitalOcean. Recently we rebuilt our website with a focus on accessibility and performance. I was really focused on this homepage. We have this big header image, and that header image is a little GIF. It looks like this. Even after optimization, it was still 600 kilobytes. But I was able to get it 200 kilobytes smaller, over 30% smaller, to get that site to load in less than one megabyte, which was sort of the goal here.

What did I do? I opened it in Photoshop. I looked at those individual frames. This reminds me of Jasc Animation Shop. I used to do Jasc Paint Shop Pro 2 back in the day. But what you can do is look for those frames that aren’t moving. Sometimes things sit still. If they’re not moving, you can delete those frames and then lengthen the identical frame before them to take up that space, so you’re not actually changing the length of that GIF, and resave it. Doing this along with color data transformation made a huge difference in that GIF’s size, so format the frames.
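That frame trick can be sketched in a few lines of Python (a toy model: real GIF frames are pixel grids, and a tool like Photoshop does the editing for you): drop any frame identical to the one before it and add its duration to the survivor, so fewer frames are stored but the animation runs for exactly as long.

```python
def dedupe_frames(frames):
    """frames: list of (pixels, duration_ms). Merge identical neighbors."""
    out = []
    for pixels, duration in frames:
        if out and out[-1][0] == pixels:
            # Same picture as the previous frame: lengthen it instead.
            out[-1] = (pixels, out[-1][1] + duration)
        else:
            out.append((pixels, duration))
    return out

frames = [("A", 100), ("A", 100), ("A", 100), ("B", 100)]
print(dedupe_frames(frames))  # → [('A', 300), ('B', 100)]
```

Four stored frames become two, and the total duration (400 ms) is unchanged.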

Protip: Format the Frames

Now, GIFs are just so big. There have got to be alternatives, right? There kind of are alternatives, but none of them are really catching on. The one that I found really interesting is called GIFV by Imgur. GIFV is pretty cool. It’s actually converting that GIF to a video. Based on your browser, you’ll get a WebM or an MP4 video, but you keep that GIFV extension. Imgur is going to automatically do that transformation for you, and it’s greatly going to decrease that file size.

This is even better because silent video autoplay is back in iOS 10. I don’t know why no one is excited about this, but it’s great news for GIFV because this means that you can have those autoplaying videos without your user having to click play. This is also really great for websites that have videos in their backgrounds. Remember that big, yellow area? Now if they’re silent videos, they will autoplay as well.

Why is video so much better than a GIF? It comes down to that compression. Like I was saying, interframe compression is what videos use. That’s where you store the difference between the frames. If you have a lot of movement in a video, it actually makes that video file bigger, because it stores those differences; like a formula, it has to look those up and render them at speed. Intraframe compression, which a GIF uses, stores every single frame, so there are no lookups or preloading. It just stores every single frame and shows a picture.
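The size difference is easy to demonstrate with a toy Python model that counts stored "pixels" for a mostly-static animation (real codecs are vastly more sophisticated, but the shape of the saving is the same): intraframe storage pays for every frame in full, while interframe storage pays for one keyframe plus only the pixels that changed.

```python
def intraframe_cost(frames):
    """GIF-style: every frame is stored whole."""
    return sum(len(f) for f in frames)

def interframe_cost(frames):
    """Video-style: one keyframe, then only the changed pixels."""
    cost = len(frames[0])
    for prev, cur in zip(frames, frames[1:]):
        cost += sum(a != b for a, b in zip(prev, cur))
    return cost

frames = ["XXXXXXXX", "XXXXXXXY", "XXXXXXYY"]  # 8-"pixel" frames, tiny changes
print(intraframe_cost(frames), interframe_cost(frames))  # → 24 10
```

With lots of movement the per-frame diffs grow, which is exactly why high-motion video files get bigger.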

We talked about a lot of things. Let’s do a little example here. I was at a conference the other week called Mixin, and they have a beautiful website. There’s a lot of awesome animation, lots of images, and they gave me permission to sort of audit their site for image optimizations. They’re great. It was awesome.

Step one in this process: Determine the greatest common denominator image size. This is sort of like the lazy way, the fast way to do this, if you remember that term from, like, middle school math. What we’re going to do is look at the site, look at the images that we have, and look at what size we’re sending them.

This is the homepage. I was looking at the speaker images. This speaker image displays at 540 pixels, but we’re sending it at 1560. It’s a little bigger than 540. But, you know, sometimes when you move around the site or resize it, it might change. In this case it didn’t change. 540 was still the maximum display size for all of the images that were sent that way.

I looked at some other pages and found, okay, 661. That’s bigger than what we saw originally. Now the max, the greatest common denominator is 661.

Then I found this one, which was 780 pixels, and this is only if one of the speakers wrote a blog post; they would have a wider image on a spread there. We’re still sending it out at 1560.

You can do some math to see what the difference is and then just batch save everything down by 50%. You can do it in Photoshop. You can do it with the ImageMagick CLI, one line of code, super fast. Mogrify is a really great command that you can use for that. This is kind of the lazy way to make some overall, overarching improvements.
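The audit math is just a couple of lines. A Python sketch using the sizes from this walkthrough: take the largest width any layout actually displays, and derive the single batch resize factor you would then feed to a tool like mogrify.

```python
sent_width = 1560                   # width of the images actually being served
displayed_widths = [540, 661, 780]  # widths observed across the site's pages
target = max(displayed_widths)      # the greatest common denominator: 780
scale = target / sent_width         # 0.5, i.e. batch save everything down 50%
print(target, scale)                # → 780 0.5
```

One factor for the whole batch is cruder than per-image resizing, but it is a one-liner and captures most of the win.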

You might be thinking, like 50%, that doesn’t sound random. We hear this quote, “Save retina images at two times the size.” I kind of want to challenge that. Yes, retina devices allow you to present more pixels in a more condensed space, but can you see the difference here? You really can’t tell the difference on your screen. If you can’t tell the difference on your screen, then your users sure as hell won’t be able to tell the difference.

If we save it at that greatest common denominator size, we save a lot of data here. Then if we save it at the actual size the image is being sent, it’s even more. It’s over half. It’s about half. We really don’t need to send all of our images at two times. We really shouldn’t be doing that.

Just in one image, we’re saving 195%. That’s 195% improvement in the image sizing. There are lots of places all over the site that things can be improved. Just from saving the speaker images alone, we can be saving 600 kilobytes from what our users see on the initial load of that site. That’s 220% savings.

Think about that. That’s data that you don’t have to add to the Web. We don’t have to make that line grow of what websites are in terms of size. We don’t have to send that to our users and make them pay for it, actually pay for it because they have data plans that they’re paying for.

Step two is then to consider the image format, right? If we save those as JPEGs, we’re at 500 kilobytes. If we convert that to WebP, even with the Picturefill polyfill, it comes out to 348 kilobytes. Now we’re at a 316% savings in image sizes. That’s just on the initial load of the website.

Step three then is to choose the best image quality. If you have smaller icons or smaller areas, use lower quality. You won’t be able to notice the difference. Think about that. Think about how it interacts with your user. If you need a nice quality homepage image, then please use the picture element. Again, you can send different images at different sizes by using different sources. When your user is on their mobile device, they don’t have to load that big 1560 pixel image. They can get something that’s much more appropriate.
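A minimal sketch of what that picture element might look like, with hypothetical file names, serving WebP where the browser supports it and a smaller crop on narrow viewports:

```html
<picture>
  <!-- Narrow viewports get a smaller file, in WebP where supported -->
  <source media="(max-width: 600px)" type="image/webp" srcset="speaker-390.webp">
  <source media="(max-width: 600px)" srcset="speaker-390.jpg">
  <!-- Everyone else gets the full-size WebP if the browser can decode it -->
  <source type="image/webp" srcset="speaker-780.webp">
  <!-- JPEG fallback; also what browsers without picture support will load -->
  <img src="speaker-780.jpg" alt="Speaker headshot" width="780">
</picture>
```

The browser picks the first source whose media query and type both match, so the mobile user never downloads the 780 pixel file at all.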

Maybe you’re not concerned about performance, or you don’t care about humans around the world getting access to your data, even though a lot of people around the world are exclusively on mobile devices. Even if your audience is entirely 24-year-old tech bros with MacBooks, you still need to consider mobile performance. The reason why is this.

I learned this yesterday. I was getting lunch with somebody I just met. His name is Alex. Hi, Alex. I don’t know where you are. But he works with e-commerce, and he told me about this company Criteo, and they do this research.

They found that mobile site speed was even more important for the high-end and luxury market, because those users are more likely to shop on their phones and make decisions without having to get on their desktop, do the research, and make sure it’s the best choice. This is really important to realize. Thinking about performance is good for all users.

Step four is, like, literally, seriously, profit. Amazon calculated that a page speed slowdown of just one second could cost it $1.6 billion in sales each year. That’s a figure from one of the big companies, but every company saves when it improves performance. Even if you don’t care about page speed and information access, improving performance is still worth it.

Some tips overall:

  • Probably don’t send an image larger than it needs to be.
  • Be aware of the emerging formats, their pros and cons, how to implement them.
  • Picture element is your friend. Video is also great. You get that for free.
  • Compress your smaller images to low qualities because you’re not going to see a difference there.
  • Always run all your images through an optimizer. ImageOptim always and often.

I don’t know. I messed that up. Whatever is fine.

  • Small improvements are still improvements.
  • Educate your entire team.

You’re here at this conference. You’re learning this information. It’s really important to take this back and multiply your knowledge. Be a multiplier and make that change happen from the ground up.

It’s really about, like, finding balance. I know I’m super heavy-handed with performance optimization in this talk, and a lot of you are designers, you know, like, “But I want really nice quality photos.” I get that. It’s about finding balance. It’s about finding a rhythm where you can send your images without overloading your users.

Back to Bob Ross. This is one of my favorite quotes by him. It’s so good. Just brilliant. [“Remember how free clouds are. They just lay around in the sky all day long.”]

This is sort of how the Web should be. People shouldn’t be obstructed by heavy media when they use it. The Web should be a free place for users to ebb and flow and access the data they want, when they want. Build happy little pixels to make happy little websites and happy users.

It just comes down to a single principle: just build a better Web. Information shouldn’t be a privilege. Build it to be faster. Build it to be more accessible. And consider that small, incremental changes can lead to a large impact overall.

Build something that people love. But more importantly, build something that people don’t hate. Thank you.

Audience: [Applause]
