Transcription
[Music]
Leonie Watson: Hello, everyone. I’m really sorry I can’t be there in person today, but I hope you’re all having a fabulous time, nonetheless.
In the UK, we have a phrase that something can be like a bag of spanners. It means that something can look really well put together and very organized from the outside, but when we open it up and look on the inside, things are not so well put together and organized. In fact, it looks like someone has thrown a bunch of spanners into a bag, given it a good shake, and what’s on the inside is a big, old, jumbled mess.
Accessibility, I think, can sometimes feel a little bit like that. We throw things together, we try things, we test things out, often things that may be unfamiliar to us, so we’re experimenting as we go. And it can often feel a bit disorganized and a bit chaotic, but it really doesn’t have to be like this. With a little bit of knowledge and a bit of effort, we can turn that bag of spanners into an effective development toolkit.
Before we get into that, I want to take a moment, though, to stress that accessibility is everyone’s responsibility. This talk focuses on code and development. But actually, accessibility is something that needs to be thought out at every step through the production lifecycle (from the moment we start thinking about requirements, feature requests, design, interaction, user research, development, right through to QA testing and launch).
Everybody has to be involved in thinking about accessibility if we’re going to get it right. Although that’s the big message, for now, we’re going to focus on some of the code aspects of accessibility.
What do I mean that accessibility is part of your toolkit? Well, it’s like privacy, performance, security - all the other things that you think of because you’re a good and effective front-end developer. Accessibility is just one of the things that you should have in your toolkit so that when you produce products and services, when you script and render code, you know you’re doing the best job that you can and creating things that can be used by, well, anybody who wants to use them.
Cause and effect: Who knew you would come to a Web development conference and be talking about Newtonian mechanics? Well, here we are. But it’s very true. There is a cause and effect.
When you make choices about the JavaScript frameworks you use, the libraries, the code you write, the code that gets rendered on screen, those choices have a very direct effect on the accessibility of the interface and, from there, the accessibility of the user experience (for someone like me who can’t see and uses an assistive technology known as a screen reader). Think very carefully about the choices that you make when you’re coding, how you’re coding, and what the end result will be.
To do that, there are some key things that we can take away. The first is an understanding of mechanics. Well, what else would we think about? If we’re thinking about toolkits, then the mechanics of how things actually work in the browser are an important piece of that toolkit.
There are things called platform accessibility APIs, and these are used by assistive technologies like screen readers to query information from the browser. Every platform has one of these accessibility APIs: Windows, macOS, iOS, Android, Linux.
They’re not APIs in the sense that you’ll be most used to. They’re not JavaScript APIs. They’re not available to us as developers. They are there at the system, the platform level, and they’re used exclusively by assistive technologies.
But they’re extremely powerful things, and it means that my screen reader can query information from the browser. The browser makes that information available in something called the accessibility tree.
When a document gets loaded into the browser, the DOM gets created. The graphics engine kicks into action. And if there’s an assistive technology running on the platform, a separate structure is created known as the accessibility tree.
Like the DOM, it contains all sorts of information about the content that’s shown on the screen. But unlike the DOM, the accessibility tree has all the accessibility information that an assistive technology like a screen reader needs. There’s lots of information that goes into making an accessible interface.
This information collectively is known as semantics. As we go through the rest of this talk, I hope you’ll start to understand how vitally important semantic information is to someone who isn’t able to see the screen.
We have two sorts of semantic information. Some semantics are implicit. Every HTML element and attribute has implied semantics. Basically, that means that they have accessibility information stored up in them without you having to do anything other than cause the particular elements and attributes to be used or to be rendered. There’s lots of accessibility information available for you for free.
But sometimes we need to explicitly apply some semantic information, and we can do that using something called ARIA (or Accessible Rich Internet Applications if you’ve not come across it before). Explicit and implicit are two things we’ll come across later in another context. But let’s look at some good examples now.
The first part of semantic information is something called a role. An element’s role is what describes its purpose. When you look at something on screen, you will understand, from the way it appears, that it’s a checkbox, a radio button, a button, a table of information. You’ll get that understanding because you’re looking at the way it’s styled and presented on screen. The role, the semantic role of an element, is what tells me exactly the same thing but without me having to look at it because, of course, I can’t see what’s on the screen.
So, if we take our old favorite, the button element, it has an implicit role of -- this will come as no surprise -- button. So, when I cause my screen reader to focus on a button element on screen, my screen reader uses the platform accessibility API to query the browser and ask for the accessible information about this particular element.
The browser says, “Well, that particular element has a role of button,” and so my screen reader will very helpfully tell me--
Screen reader: Button.
Leonie: It’s a button. So now, I have access to the first piece of information that you have visually by looking at the button on screen. I know what it is, and that tells me what I can do with it. I can activate it and something presumably will happen.
So far, so good.
The next piece of semantic information is the name or accessible name. This is what tells me what the button is for.
So, in the case of a button element, the accessible name comes from the text that’s inside the button element. Let’s say it’s a show password button. Again, the same thing happens. Now when my screen reader queries the browser for information, the information about this element in the accessibility tree says it has a role of button and it has an accessible name of show password, so I get to hear this.
Screen reader: Show password button.
Leonie: So far, so good. Two pieces of information coming to tell me what this thing is and what I can do with it.
The last piece of information that’s useful is an element’s state or its accessible state, the condition that the element happens to be in at the time (if indeed it has a state). Of course, not all elements do all of the time.
Now, we have no implicit way in HTML to describe the state of a button. The button element’s role is implicit in the button element. The accessible name is there because we’ve put text inside the button. But if you remember, when we want explicit semantics, we need to use ARIA, and we’ll use the aria-pressed attribute to explicitly say the state of this button is either on (pressed) or off (not pressed).
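A minimal sketch of the markup being described (the button text comes from the show password example above; anything beyond aria-pressed is illustrative):

```html
<!-- The role (button) and accessible name ("Show password") are implicit;
     the pressed / not pressed state is applied explicitly with ARIA -->
<button type="button" aria-pressed="false">Show password</button>
```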
Interestingly, this also changes the role. My screen reader will now get the information back that this is not just a standard button, but it’s a toggle button that tells me that this is a button that can be pressed or not pressed, turned on or off. It sounds like this:
Screen reader: Show password toggle button pressed.
Leonie: So, we’ve got three pieces of information now that have come together: the semantics of the role that tells me what it is and what I can do with it, the accessible name that tells me what’s going to happen when I do use it, and the state. In this case, that the show password is either pressed or not pressed. It’s building up those layers of information that to you will probably be very apparent visually.
Let’s take another example, the nav element. This also has an implicit role, this time of navigation. My screen reader will tell me:
Screen reader: Navigation region. Navigation region ends.
Leonie: I know I’ve got a region of the page, and the purpose of it. The role is navigation. So far, so good.
Actually, this is one of a number of elements that came along with HTML5, including the main element, header, and footer. They describe the key areas of content on a page. Most screen readers have shortcuts that let you jump between these types of elements as well, so they’re a really convenient way of quickly understanding what the big blocks of content on a page are. It’s a good technique if you can’t take all of the page content in at a glance.
We’ve got an implicit role of navigation for the nav element, so we know what this part of the page is for. But it’s not uncommon on a page to have multiple navigation blocks, so we can explicitly apply an accessible name this time using the aria-label attribute.
Let’s imagine that this particular block of navigation is just for the website as a whole, so we’ll just put the word website inside the aria-label. Now, my screen reader gives me the two pieces of information: the role and the accessible name.
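The markup being described might look something like this:

```html
<!-- Implicit role of navigation from the nav element,
     explicit accessible name from aria-label -->
<nav aria-label="website">
  <!-- navigation links go here -->
</nav>
```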
Screen reader: Website navigation region. Website navigation region ends.
Leonie: But there’s a bit more we can do. Let’s look at another example, this time a list.
Here we’ve got a few bits of information that come together. We’ve got the implicit role of the UL element, which is list, the implicit role of the LI element, which is list item. But the browser does something even better. It counts up the number of list items inside the parent list and it makes that information available. So, I know that the state of the list is that it has three things inside it, and the accessible name for each list item comes (as with the button element) from the text inside the list item elements.
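As a sketch, a list like the one being described, using the three items heard in the announcement below:

```html
<!-- The list and list item roles are implicit,
     and the browser counts the items up by itself -->
<ul>
  <li>Role</li>
  <li>Name</li>
  <li>State</li>
</ul>
```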
Screen reader: List of three items: bullet role, bullet name, bullet state. List ends.
Leonie: Again, I’ve got those layers of information, but this time I get the fact that there are three things inside the list. This is useful from a navigational point of view because three items in a list are pretty quick and easy to navigate through, so I might choose to do that.
If I had heard that the list had 96 elements in it or 96 list items in it, I might not have chosen to navigate through all of them but might instead have used a keyboard shortcut or a screen reader shortcut to jump straight over the whole list and move on to the next thing. Again, it’s about making that information that’s apparent visually available to me through the semantics of the code that you’re writing.
If we put this together into our nav example, we get a really nice package of information. We’re also going to throw some links inside the list items, so we’ve got a new element there with an implicit role of link and the accessible name this time goes inside the link rather than the list item itself, but it all comes together to give me some really useful information about what’s on screen.
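Put together, a hedged sketch of that navigation block (the link targets are placeholders):

```html
<nav aria-label="website">
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/about/">About</a></li>
  </ul>
</nav>
```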
Screen reader: Website navigation region. List of two items: home link, about link. List ends. Website navigation region ends.
Leonie: Again, I now know the part of the page I’m in, what it’s for, what it contains, how much is inside the navigation block, and I know that these are links, so I can understand that if I use one of them, I will navigate to the page in question.
But there’s something that’s missing from this. It’s a very common visual pattern to highlight the page that is currently the one displayed in the browser. For a long time, there was no way to make that same information available to someone who couldn’t see that in the browser.
Fortunately, now we can do it with a little bit of ARIA in the form of the aria-current attribute. It lets us express the state of one of those links to say the state of this link is that it represents the current page in the set or in the navigation block.
Now, that information too gets added into the overall package.
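The same sketch with the current page marked up (again, the link targets are placeholders):

```html
<nav aria-label="website">
  <ul>
    <!-- aria-current="page" identifies the link for the page on display -->
    <li><a href="/" aria-current="page">Home</a></li>
    <li><a href="/about/">About</a></li>
  </ul>
</nav>
```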
Screen reader: Website navigation region. List of two items: home link, current page, about link. List ends. Website navigation region ends.
Leonie: There’s so much information there coming together, but it’s all incredibly useful and, collectively, it tells me exactly what I need to know and lets me make choices now about what I do next in terms of navigating and interacting with the content.
Let’s take another example. Here we’ve got a couple of radio buttons. We’ve got the input elements and they each have an implicit role of radio. This time it comes from the type attribute with the same value: radio.
We’ve also got accessible names for each of those radio buttons. But this time it comes from the associated label element, and the association comes because the for attribute on the label element and the ID attribute on the input have the same value. This is really important.
Without those two matching attribute values, the browser does not make an association between the two elements. And when my screen reader queries the browser for information in the accessibility tree, that association is not known. In other words, the radio element will have no accessible name as far as the browser or my screen reader is concerned. But when we have this association, we get two radio buttons with accessible names. Again, we start layering that role and name information.
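A sketch of the pattern being described (the id and name values are illustrative):

```html
<!-- type="radio" provides the implicit role; the matching for/id values give
     each input its accessible name from its label; the shared name attribute
     groups the two radio buttons into one set -->
<input type="radio" name="color" id="purple">
<label for="purple">Purple</label>

<input type="radio" name="color" id="red">
<label for="red">Red</label>
```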
Screen reader: Purple radio button not checked. One of two. Red radio button not checked. Two of two.
Leonie: In fact, we got some state information in there too. Two bits of it, in fact. We heard that neither of the radio buttons was checked, and we also heard that there were a set of two radio buttons because of the name attribute sharing the same value. Again, all of that information is made available in the accessibility tree and my screen reader comes along and queries for it and tells me what I’m looking at on screen. Again, I get to make choices about what to do next.
But again, there’s a little bit of information missing. Why am I being asked to choose between two colors? We can help with this too by again using role and accessible name, but this time for a group rather than for an individual radio button. We do this by wrapping the whole lot in a fieldset element.
A fieldset element has an implicit role of group, so that will tell me that there is a group of things to come. We use the legend element as the first child of the fieldset, and that provides an accessible name for the group of radio buttons. Again, lots of information, but now I understand the context of the choice I’m being asked to make with the radio buttons.
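A sketch with the group added:

```html
<fieldset>
  <!-- The legend is the first child of the fieldset
       and provides the accessible name for the group -->
  <legend>Choose a color</legend>

  <input type="radio" name="color" id="purple">
  <label for="purple">Purple</label>

  <input type="radio" name="color" id="red">
  <label for="red">Red</label>
</fieldset>
```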
Screen reader: Group start. Choose a color. Purple radio button not checked. One of two. Red radio button not checked. Two of two. Group end.
Leonie: Remember, this is all coming because of implied or implicit semantics. It’s just there in the HTML for free.
Right now, other than perhaps kicking your chosen JavaScript framework into spitting out really good, quality HTML, you don’t have to do anything at all. This is all happening for free courtesy of the HTML, the browser, the accessibility tree, the screen reader, and the platform accessibility APIs.
Let’s look at a little more complex example now, a data table, this time showing some different people and the numbers of cups of tea and coffee that they might drink in a day. We’ve got a table element that has an implicit role of table. We have a caption element that gives the table its accessible name, so it describes what the table is for.
Then we have TR and TD elements to make up the body of the table, the rows, and the columns. Finally, we mark up the column headers using the TH element. Now, these are really important when it comes to screen reader accessibility and the semantic information that’s made available.
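A sketch of the table being described (the figures for Njoki and the coffee figure for Leonie are made up for illustration; the rest follows the demo below):

```html
<table>
  <caption>Average daily tea and coffee consumption</caption>
  <tr>
    <th>Person</th>
    <th>Coffee</th>
    <th>Tea</th>
  </tr>
  <tr>
    <td>Njoki</td>
    <td>2 cups</td>
    <td>3 cups</td>
  </tr>
  <tr>
    <td>Iesha</td>
    <td>1 cup</td>
    <td>2 cups</td>
  </tr>
  <tr>
    <td>Leonie</td>
    <td>1 cup</td>
    <td>25 cups</td>
  </tr>
</table>
```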
I can best show you how this works, though, through a demo. So, what I’m going to play now is a video of a screen reader user navigating through this table. Listen very carefully because you will hear a lot of information coming your way now. Once you understand how useful it is, you’ll understand the importance of the semantic information that’s available in a properly constructed data table.
[Video played]
Screen reader: Table with three columns and four rows. Average daily tea and coffee consumption. Column one, row one, person. Njoki, row two. Iesha, row three. Coffee, one cup, column two. Tea, two cups, column three. Leonie, 25 cups, row four.
[Video stops]
Leonie: Like I say, lots of information. We heard that it was a table, that it’s a table of tea and coffee consumption and, as we moved through the table, we heard the different coordinates, that it was row one or column one. As we moved down through the column of names, we moved from the row for Njoki to the row for Iesha. We then moved right through that row, heard how much coffee and tea Iesha drinks, and then dropped down into the bottom row to hear about the truly ridiculous amount of tea that I drink in the course of a day.
It helps to understand that amongst the many shortcuts that screen reader users have at their disposal, there are several for navigating tables. As this demo has just shown you, we don’t read tables one cell at a time; we can go up and down through columns or left and right through rows. This is where the importance of the TH elements comes into effect.
You might have heard there, again as we moved right through the row for Iesha, that it announced coffee and then the number of cups, and then tea and a number of cups. It’s doing this because the browser has created an association between the TH element that represents the column header and the cell that’s currently being focused on by the screen reader. Those two things come together to tell me the purpose of the column, the reason for the cell I’m moving into, and then the data as well.
Again, lots of information coming together to create a really effective package.
We’ve looked at semantics, the implicit and explicit role, name, and state that’s so valuable when you can’t see what’s on screen but still want to know what’s there and to be able to make choices based on that information. Once we know those things, the next thing we need to be able to do is make sure that people can actually use what’s on screen. This, of course, is particularly relevant for interactive elements.
Again, I said we would come back to this idea of implicit and explicit, and here we are.
HTML interactive elements like links and buttons and form fields have implicit keyboard support. In other words, the browser will make sure that all of these things work with a keyboard just because you’ve used those elements. However, we quite often create custom components or we change the behaviors of other HTML elements, and so there will be times when we need to explicitly provide keyboard support to make sure that everything works as expected.
Let's go back to our button for an easy example. When we use a button element, we just need to apply some functionality for the click interaction, and the browser quite happily adds all the keyboard support in for us, so all we have to do from a development point of view is provide the mouse functionality and the browser will automatically make that same functionality happen when someone uses a keyboard to activate the button.
Very importantly, two keystrokes can be used. All browsers support button activation using space or enter. Those two keystrokes are important to remember because what we quite often see is that we start off with an anchor and we repurpose it to be a button.
This happens in a lot of frameworks. It happens under a lot of circumstances. It’s a really common pattern. It’s a dreadful pattern, but it happens all the time, so it’s important to understand how we can improve it.
Well, the browser makes the link work with the enter key, but not the space key. So, if you’re going to repurpose a link to be a button, this is where you start to need to think about explicitly providing keyboard support. You need to make sure that your fake button also works when the space key is pressed. So, we’ll let the browser handle the enter key interaction and, in our script, we just need to listen for keycode 32, the space key, and make sure that the same functionality is executed when that key code or key press is detected.
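A sketch of that pattern, again assuming a hypothetical togglePassword() function and a placeholder href:

```html
<a href="#" id="fake-button">Show password</a>

<script>
  const fakeButton = document.getElementById('fake-button');

  // Mouse users, and keyboard users pressing Enter, both trigger the click
  // event because the link still has an href
  fakeButton.addEventListener('click', (event) => {
    event.preventDefault();
    togglePassword(); // hypothetical function
  });

  // The browser does nothing for Space on a link, so we handle it ourselves
  // (keyCode 32, or event.key === ' ' in current browsers)
  fakeButton.addEventListener('keydown', (event) => {
    if (event.keyCode === 32 || event.key === ' ') {
      event.preventDefault(); // stop the page scrolling
      togglePassword();
    }
  });
</script>
```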
There are a few more things we need to think about, though. The href attribute is incredibly important to keyboard functionality of links or links as our fake buttons. It’s the href attribute that makes it possible for a keyboard user to focus on the button. By focus, I mean use the tab key to move their focus onto the link so that they can then use the enter or space keys to use it.
If the href attribute is not present, then everything starts to break down pretty quickly. It becomes inert. There is no functionality, no mouse functionality, no keyboard functionality, and it can’t be focused on. So, we really now need to start thinking about providing explicit keyboard support along with everything else.
We have to apply the mouse functionality. We have to provide the keyboard support. But this time, we have to listen for both the enter and space keys, so 13 and 32. And we have to also make sure that the fake button can be focused on. We do that by setting the tabindex attribute on the link with a value of zero.
Now, a keyboard user can focus on the link. A mouse user can click on it. And a keyboard user can use either space or enter keys to make the link or fake button actually work in practice.
There’s one more thing we need to do, though, in this particular pattern. That comes back to the semantic information. At the moment, this may visually look like a button. We’ve scripted it so that it behaves like a button. But a screen reader user will still hear that it’s a link, so we need to make sure that a screen reader user is told that it is a button. To do that, we use the role attribute to explicitly apply the button role to it. And together, now we have a properly functional button, albeit one that started out life as a link.
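Putting the href-less case together, a sketch (still assuming the hypothetical togglePassword() function, and not a recommendation to prefer this over a real button element):

```html
<!-- A link with no href, fully repurposed as a button -->
<a id="fake-button" role="button" tabindex="0">Show password</a>

<script>
  const fakeButton = document.getElementById('fake-button');

  // Mouse support
  fakeButton.addEventListener('click', () => togglePassword()); // hypothetical function

  // Keyboard support: with no href, the browser gives us nothing,
  // so we handle both Enter (13) and Space (32) ourselves
  fakeButton.addEventListener('keydown', (event) => {
    if (event.key === 'Enter' || event.key === ' ' ||
        event.keyCode === 13 || event.keyCode === 32) {
      event.preventDefault();
      togglePassword();
    }
  });
</script>
```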
We’ve now looked at three different things: the mechanics, how they work in the browser, the semantics, the information that is made available through the browser to the screen reader, and the interaction, making sure that everything works in practice.
The final thing is to bring it all together into the construction to make sure that we use all of those things to construct really good, accessible interfaces. To do this, let’s look at a slightly more complex example.
We’re going to build a menu bar. Actually, it’s the menu bar for the categories of posts on my blog at tink.uk.
We’re going to start off with some good old-fashioned HTML because progressive enhancement is important. It’s just a simple list with some nested list items, another nested child list, and you’ll remember that these elements all have implied semantics.
The UL element has an implied role of list, the list items have implied roles of list item, and the links inside have implicit roles of link. Finally, the accessible name for each of those links comes from the text inside of it.
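A sketch of that starting point (the link targets are placeholders, and only a couple of the category links are shown):

```html
<ul>
  <li>
    <a href="/categories/">Categories</a>
    <ul>
      <li><a href="/category/code-things/">Code things</a></li>
      <li><a href="/category/web-life/">Web life</a></li>
      <!-- ...remaining category links... -->
    </ul>
  </li>
  <li>
    <a href="/tags/">Tags</a>
    <!-- Tags child list here -->
  </li>
</ul>
```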
We have a good fallback if, for any reason, our scripting fails: network problems, whatever that may be. But let’s start enhancing it now into a more complex, custom component. We’ll start off with the first piece of semantic information: roles.
Because we actually want to create a menu bar, we don’t want this to be presented as a set of nested lists. We’ll start overwriting the implicit semantics with the ones we actually want to use. We’ll put a role of menu bar on the parent list. That’s the overall container for the element.
Then we’re going to do something interesting to the LI elements. We’re actually going to say to the browser, “In the accessibility tree, say that these elements have no role.” We’ll apply a role of none, and that’s because the list items are important for our fallback HTML code, but actually, in our complex component, in our menu bar, those semantic bits of information are not important. Actually, they’re not really helpful at all, so we’ll just say for now, “Ignore the role on those elements.”
Then we’ll change the roles of some of the other elements inside. For the anchor element that’s inside, we’re going to apply a role of menu item. Like a list item inside a list, a menu item inside a menu bar says that that’s just one of the things inside it. The child menu inside has a simplified version. It’s just a role of menu, so we have a parent menu bar. It contains a child menu, and the child menu contains a bunch of menu items.
We can also do a little bit more to help in terms of accessible names. I mentioned that the text inside the anchor elements, or now the menu items, gives them their accessible name, and that’s fine. We can leave that alone. But we can add a little bit more information. For that child menu, we can use the aria-label attribute again and just say that the purpose of this submenu is to list the categories that are available on this particular blog.
Then we can add in some state. When you’re navigating through a menu bar, you want to know when things are open and closed. Again, information that’s obvious visually when you can see things appearing and disappearing, but when you can’t see that, we need to make that state clear in the code, and we can use the aria-expanded attribute to explicitly apply that information.
Again, there is no equivalent or no implied semantics for the state of open or closed, or expanded or collapsed, at least not that works in this context, so aria-expanded lets us do that. We can set it to true to indicate that something is open or expanded, and false when it’s not.
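Brought together, a sketch of the enhanced markup. The attribute values and link targets are illustrative, and aria-haspopup is an assumption beyond what is described above (it is part of the usual menu bar pattern and is typically what produces the “submenu” announcement heard later):

```html
<ul role="menubar">
  <li role="none">
    <a href="/categories/" role="menuitem" aria-haspopup="true" aria-expanded="false">Categories</a>
    <ul role="menu" aria-label="categories">
      <li role="none"><a href="/category/code-things/" role="menuitem">Code things</a></li>
      <li role="none"><a href="/category/web-life/" role="menuitem">Web life</a></li>
      <!-- ...remaining category links... -->
    </ul>
  </li>
  <li role="none">
    <a href="/tags/" role="menuitem" aria-haspopup="true" aria-expanded="false">Tags</a>
    <!-- Tags submenu here -->
  </li>
</ul>
```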
We’ve started from our nice, clear HTML just in case it all goes wrong and we need good quality fallback code. But then we have enhanced it with role, name, and state information. Collectively, this comes together to give a screen reader user an experience like this.
Screen reader: Menu. Categories, submenu, one of two.
Leonie: That’s great. We know that it was a menu. We heard it was a category submenu, and that was one of two.
But there’s a problem, and that problem is that’s all the information that we can get to. The reason is because there’s no keyboard functionality available at the moment.
That thwacking noise you heard is the noise the screen reader makes when it basically says, “Okay. I’m handing all the keyboard support, all the keyboard behavior back to the browser.”
A lot of the time, screen reader users, as I mentioned, will use shortcuts for navigating around. The keys to do that are pretty much every key that’s available on the keyboard. For example, I can use H to move between headings, B to move between buttons, T between tables, G between graphics, L between lists, and so on.
But there are times when those keyboard commands are not appropriate or they’re not useful to the situation. Navigating through a custom menu bar like this is one of them. So, when the role of menu bar is applied, it says to the browser, “Okay, at this point the screen reader is not going to handle any kind of navigation. You have to do it all.” Because this is not a native component, it doesn’t exist in HTML by default, it’s something we’re creating as custom, that means the responsibility for providing the keyboard functionality rests on us as developers.
We have to make sure that in our script we have done a whole bunch of stuff to support keyboard interaction. We want to make sure that, first of all, it can be opened and closed. The child menus can be opened and closed using space or enter.
We want to make sure that you can navigate around the menu, up and down, left and right, by using those particular arrow keys. Then it’s also good practice to be able to support someone closing the whole menu and returning to the parent level just using the escape key.
There’s actually a lot more to it than this, but you get the idea. We need to make sure that that interaction support is there too.
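A condensed sketch of that keyboard handling, assuming hypothetical helper functions (toggleSubmenu, moveAcrossMenubar, moveWithinSubmenu, closeSubmenus) that do the focus management and flip aria-expanded; a production menu bar needs more than this (roving tabindex, Home and End, wrapping, and so on):

```html
<script>
  const menubar = document.querySelector('[role="menubar"]');

  menubar.addEventListener('keydown', (event) => {
    const item = event.target.closest('[role="menuitem"]');
    if (!item) return;

    switch (event.key) {
      case 'Enter':
      case ' ':
        toggleSubmenu(item); // hypothetical: open or close the child menu
        event.preventDefault();
        break;
      case 'ArrowLeft':
      case 'ArrowRight':
        moveAcrossMenubar(item, event.key === 'ArrowRight' ? 1 : -1); // hypothetical
        event.preventDefault();
        break;
      case 'ArrowUp':
      case 'ArrowDown':
        moveWithinSubmenu(item, event.key === 'ArrowDown' ? 1 : -1); // hypothetical
        event.preventDefault();
        break;
      case 'Escape':
        closeSubmenus(); // hypothetical: collapse and return focus to the parent item
        event.preventDefault();
        break;
    }
  });
</script>
```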
When we do, what we get is a really well-functioning, well-put-together, interactive menu bar. From my point of view as a screen reader user, I know exactly what I’m dealing with, exactly what’s in it, what I can do with it, and how it will work.
Screen reader: Menu. Categories, submenu. One of two. Tags, submenu. Two of two. Categories, submenu, one of two. Submenu expanded. Code things, one of three. Web life, two of three.
Leonie: Isn’t that wonderful? All of those things come together: the mechanics of the way the browser makes the information available in the accessibility tree and my screen reader queries it using the platform accessibility APIs; all the implicit and explicitly applied role information coming through, so I know it’s a menu bar full of submenus and menu items; the names and state telling me what the different menu items are for, things like Web life and Code things, and whether the menu is expanded or collapsed. It all comes together, underwritten by the keyboard functionality that we’ve provided in the JavaScript, to create something that’s just so beautifully easy to use and to understand. It’s a really great experience. I can’t describe to you how lovely it is to see things like that and be able to use them on the Web.
I hope this has given you an idea of how these things, the mechanics, the semantics, the interaction, and overall construction can turn what sometimes may feel like a bag of spanners into, as I said at the beginning, a useful toolkit that you can use.
Thank you.