#btconf Berlin, Germany 07 - 09 Nov 2016

Erika Hall

Erika Hall is the author of Just Enough Research. In 2001, she co-founded Mule Design Studio in San Francisco where she is the Director of Strategy. Erika speaks and writes frequently about cross-disciplinary collaboration and the importance of natural language in user interfaces. In her spare time, she battles empty corporate jargon at Unsuck It.


Beyond Measure

Site analytics. The quantified self. Big data. We can track, measure, and store more than ever before. This is naturally exciting to designers and technologists who want to make better-informed decisions. But more data doesn’t necessarily create more meaning, and might even make it harder to see what matters. Human experience does not reduce to an engineering problem, and what we can’t count still counts in an increasingly quantified world.

Transcription

Erika Hall: I am the filling in this afternoon’s sad American sandwich, so yeah. We’ll see how this goes because my talk is tangentially related to things that we’re thinking about. It kind of starts with a cute story and then maybe there’ll be sadness and anger. But there’ll also be learning. That’s what we can hope for, right? Learning after the sadness and anger.

I’ll start by telling a little story. Long ago, way out on the other side of the galaxy, there was a species of hyper-intelligent, pan-dimensional beings. Despite their astronomical intelligence, or perhaps because of it, they bickered constantly about the meaning of life. They couldn’t agree, and it was really important to them, and they just went back and forth and back and forth. They got so fed up with this constant arguing that they found their brightest and best and said, build us a super smart supercomputer to calculate the answer and put an end to this constant arguing.

And so they did. They created this computer, and it was called Deep Thought because the marketing department got involved at some point. You know how that goes.

They said, “Deep Thought, tell us the answer.”
“What answer?” said Deep Thought.
They’re like, “You know, the answer to everything, to stop our arguing. The answer to life and the universe and everything.”
Deep Thought said, “Okay. I’ll need a little bit of time.”
They’re like, “Okay. Fine. How much time?”
He’s like, “Come back in seven and a half billion years,” or roughly the length that 2016 has seemed.

And so they waited. They didn’t really have another choice. They said, okay, this is really important to us to settle this argument. Then they waited and generations lived and died or warped across dimensions, whatever they did. Then after seven and a half billion years, they came back, and they were so excited. They were so excited to get the answer that would end all of their arguing.

They booted Deep Thought back up, and they were like, “Okay. We’re ready for it. What’s the answer?”
Deep Thought said, “The answer is 42. You’re welcome. That’s the answer to life and everything else that you were talking about.”

The architects of Deep Thought were monumentally unsatisfied. They’re like, “What are we supposed to do with that? That solves nothing. That will end no arguments.”

Deep Thought replied, “Well, you know, you never actually asked a question. You just demanded an answer. It would have been much simpler to know the actual question because only when you know your question will you know what the answer means.”

Some of you might be familiar with this parable. It’s from The Hitchhiker’s Guide to the Galaxy, a book near and dear to my heart. It began as a BBC series way back in 1978. This was the very dawn of the personal computing age, and the author, Douglas Adams, had a really good handle on the issues. He knew that humans craved certainty, especially mathematical certainty in this crazy, messy, complex universe.

Now that we had computers, we thought, hey, we can just foist all our hard questions onto them and not have to think for ourselves ever again. He also knew that literally all the computing power in the galaxy would not let us off the hook for the really hard questions, the questions of what anything at all actually really means. On the whole, it does an amazing job of capturing this truth about the human condition because, you see, we humans, we have the terrible misfortune to be born self-aware in an enormous and uncaring universe. This is the existential bumming out part.

And so we want to feel special. Since the dawn of western philosophy, we have made ourselves feel better by separating ourselves from other living creatures based on our capacity for reason, you know. We’re so smart. Sorry, bats and warthogs. I can do a syllogism. I’m a logical being.

Even as more and more studies and experience show that maybe we’re not so rational after all, and a lot of animals are actually smarter than we give them credit for, we cling to this myth of our rationality because it makes us feel better. But as a consequence of this consciousness that we’re so proud of, we have to make choices. Choices are scary. We have to commit to things.

And because we’re social creatures, we have to communicate, negotiate, and agree on our choices. We have to decide things, and we have to do this to get anything done at all. We’re not always great communicators, and we hate conflict. We’re really uncomfortable with this constant bickering. We just want to settle it.

Then because we’re so smart, we made computers. Those of us who are comfortable with technology tend to think, hey, maybe there’s a mathematical solution to these hard problems. Maybe we can math our way out of deciding and agreeing with all these other messy humans.

But guess what? We can’t. Math won’t fix it for us. To design and develop these complex, interactive products and services, we have to work together really well to make complex decisions. And we want to feel, we hope that we feel, like we’re making the very best decisions.

We need to resolve these disagreements. We need to resolve this constant bickering to get anything done. Counting, measuring, calculating, computing gives us that feeling of certainty and objectivity that we really crave, but we have to face the fact that our measurements, and how we manage them, and how we count things are just as subjective as everything else, because people, on the whole, are terrible at dealing with numbers.

Even though we created computers, and we compare ourselves to computers all the time, we’re not very much like them at all. We need to use these big brains of ours that we think are so smart to understand a fundamental truth. Even as we get so excited about the possibilities of using technology to improve our lives, we need to kind of temper that with a little humility because we’re obviously imperfect humans designing things for other imperfect humans, and we always will be.

I left this slide in. [Don’t Panic] Yeah. We can do this. We can still do this. The answer is not to just give up on data and rely totally on intuition and think we can’t figure anything out. We just need to understand how we use our brains to make decisions. We need to better understand our decision-making process. This means we just need to take a more logical approach to the fact that we humans are highly illogical.

I’ll give you an example. This one is a happy example. An important part of being human is having red blood cells. The red in those red blood cells is iron, and that’s a key part of the hemoglobin. It helps carry oxygen throughout the body. Most of us get plenty of iron from red meat or dark, leafy greens or cooking in cast iron pots. But people who don’t get enough iron can develop anemia. That’s still pretty common in many places around the world.

In Cambodia, 60% of the population suffers from iron deficiency anemia. This affects women and children and the whole economy, really, because when that many people are affected by something that affects cognition and your energy level, it makes it difficult for people to work. Seventy percent of people in Cambodia live on less than $1 per day, which means it’s hard for them to get everything they need in their diet to be healthy. They just can’t have access to maybe as much red meat or good vegetables or those cast iron pots.

A few years ago a group of scientists went to work in Cambodia on this particular issue to see if they could help with anemia. They thought, well, adding iron to food is really, really simple. You just have to add it to the pots while they’re cooking. And they thought, you know, iron is a really inexpensive material, so maybe the best way to get iron into the food is just to give people pieces of iron to put in with their food. Simple. Totally simple. Totally accessible. Really inexpensive.

This all sounded great, so there was a Canadian grad student. Chris Charles, take me with you. They look so nice. They all dress like that too. He took a summer away from Canada to go work with these scientists in Cambodia to help distribute iron to see if they could help with the anemia problem. Because he was the new guy, he got the job of going door-to-door handing out the pieces of iron.

He was really good. He was a very, very lovely Canadian. He learned some Khmer, the local language, and he would knock on the door of each house and say, “Hi, I’m Chris Charles. I’m from Canada, and I have rock for you to put in your dinner,” or, you know, something equally convincing.

The families would nod and smile and say, “Oh, what a nice red suit you have,” and usually take the piece of iron and use it for a doorstop or something else that seemed more appropriate because not many of them wanted to put what looked like a rock in with their food. I mean I don’t think you would, no matter how lovely, charming, and friendly the Canadian at your door with a rock was.

Chris Charles went back to the rest of the scientists with what I assume was a very heavy backpack full of little rocks with a Canadian flag on the back. But he thought, you know, I’m not going to give up because this seems like it could work, but I have to figure out why it’s not working. He thought, well, maybe it’s the shape, because he looked at it and he’s like, hmm. You know maybe it makes sense that they wouldn’t want to put this in with their food when they’re cooking dinner. Maybe if I made it a slightly different shape, maybe a circle might be nice. Maybe a flower.

Then he went into the villages where he’d be working, and he started talking to people. He talked to some of the village elders, and eventually he learned that there was a river fish that was eaten pretty commonly, and it was considered lucky. Maybe not so lucky for the fish, but the people thought it was lucky. He thought, hey, this might work. I’m going to go back, and I’m going to make little pieces of iron that look like happy little fish.

And it worked. It worked. He went back, and he went to each home. He said, “Hi, I have a little fish for you to put in with your food while you’re cooking. It will make you feel better.” All of a sudden that made sense, and people started to take them and use them because it looked like something that would reasonably belong in with your food. They’re like, oh, it’s like a little good luck charm. It looks like a fish. It looks like something I cook. That’s great.

In the months that followed, iron levels began to rise, and people suffered less from this iron deficiency anemia. Families reported feeling better after six months. Then after about nine months, in the villages where they were working, the incidence of anemia was down by half. These families started recommending it to one another because they felt that the fish was actually bringing them luck and health, which it was.

This is a really nice story, and the thing about it is that there was no significant difference in the chemical composition of that ugly little rock and that happy little fish. From a quantitative perspective, the inputs were identical. But the measurable success of the outcome could not be more different. Biochemical facts were no match for a story, for folklore. What this shows is that when we start with a true understanding of the human experience rather than abstract data or wishful thinking, we have a better chance of creating things that actually fit into people’s lives and, therefore, change people’s lives.

Of course, this wouldn’t have been possible without the hard science. You couldn’t treat anemia just with a good luck charm. If Dr. Charles had made those little fish out of lead instead of iron, it would have been a much different story. But rational analysis alone would never have produced this irrational outcome. You can measure the quantity of iron that would best contribute to health, but no computation, no mathematical equation, no proof or argument they could have made would have convinced those families to start cooking with that little rock. It took a little ethnography and a very human kind of insight. The data only became useful once a story gave it meaning.

Despite this and other evidence of the power of storytelling and the importance of finding meaning, many people working in technology and business are just as skeptical of qualitative approaches as those families were of Dr. Charles and the door-to-door iron business he was selling. Our clients ask us all the time, how can you draw conclusions from just talking to a handful of people? Wouldn’t data from a thousand or ten thousand people be more useful? The answer is no, not necessarily.

This question comes up more and more as more of us live our lives online. The Internet of information becomes the Internet of Things. Those things are gathering, creating, and sharing data about us.

A couple of years ago IBM released this sort of splashy PR announcement that humans now create 2.5 quintillion bytes of data per day, so much that 90% of all the data in existence has been created in just the past couple of years. This is so much data that we have data sets we can’t even analyze with our current computing technology. We have to break them up into little random segments and analyze those as a sample. So this is big data.
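
A minimal back-of-the-envelope sketch, with the growth rate as my own assumption rather than anything IBM published, shows how a claim like that can stay true year after year:

```python
# Back-of-the-envelope (assumed figures, not IBM's): how fast must the
# total amount of data grow for 90% of it to be under two years old?
# If the cumulative total grows by a factor g per year, the share
# created in the last n years is 1 - g**(-n).
g = 10 ** 0.5        # annual growth factor chosen so that g**2 == 10
share = 1 - g ** -2  # share of all data created in the last 2 years
print(f"annual growth: {g:.2f}x -> {share:.0%} of all data is under 2 years old")
# annual growth: 3.16x -> 90% of all data is under 2 years old
# As long as growth stays that fast, "90% created in the last two
# years" stays true every single year; it describes the shape of the
# curve, not a one-time milestone.
```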

Soak that in for a minute.

We even have data about how much we talk about data. We’re so excited about all of this data. But I have to ask, what does that 90% represent? Does 90% of the data ever created represent 90% of our knowledge, 90% of our culture, or 90% of the things that actually influence us? Ideas that change the course of history come from human insights about data. But leaving that trail of data and acquiring the ability to analyze it has changed the nature of what it means to be human and to interact with other humans.

We have to ask, is this a good thing? Because now electronic sensors and processors are in our cars and in our watches. We carry around super-powerful supercomputers disguised as mobile phones. We share information on social media, back it up into the cloud, and this is all particularly concerning right now when so much data is being captured about us.

And, you know, in America, we have the Amazon Echo, which is super cool and great, but it’s a listening device created by the company run by Jeff Bezos, the man who owns a major newspaper, the Washington Post. It all connects in a way that I’m increasingly concerned about because this is pretty creepy when you stop and think about it. But we put up with this creepiness because of the convenience that comes with it, the convenience that we get in return for sharing this data, because of the positive possibilities.

People who work with technology love the idea of data and collecting data. We use it to understand the world and make decisions. If a little data helps us make decisions that are a little bit better, then 2.5 quintillion bytes of data should help us make the best decisions that humanity has ever made. But somehow this doesn’t seem to be the case. This is really interesting to me: we’re collecting all of this information and, yet, it doesn’t seem like we’re making better decisions.

Let’s talk about the limits of our rational understanding. We’re not computers. Our reason is tied to our sense perception. We can’t get away from our brains. They’re always trying to trick us. We’re not just these objective processing things. We take sensory information, and we process it. The most insidious trick that our brains play on us is convincing us that we’re making logical decisions based on the data. We want to think of ourselves like computers. But this is just another story that we tell ourselves.

We love telling ourselves stories. Humans are not naturally statistical thinkers. Unless we’re highly trained and disciplined, when we look at the data, we see what we want to see. We fit the data, the information we get, into our preexisting stories because we’re pattern matchers, sometimes overactive pattern matchers.

All of us, every single one of us, no matter how smart or how educated, is a highly forgetful, lazy creature of habit. And we’re afflicted with all of these cognitive biases that we can’t sense. Your brain is lazy. Your brain loves having it easy. All of our brains do. Thought processes that feel easy feel more true. Your brain just fast tracks anything that’s easier to think about.

This is Daniel Kahneman’s diagram of cognitive ease from his book Thinking, Fast and Slow, which is a really interesting and humbling book about our thought processes and how much goes on that we don’t really control or even sense while we still think we’re being totally objective. If you see information that’s presented clearly, you’ll be more likely to believe it’s true. This is why design is so important for the credibility of products and services.

If you’re in a good mood or if something has already primed you for the idea, you’ll believe it easier. This has nothing to do with the merits of the information itself. You’ll just rationalize. You’ll tell yourself. You’ll give yourself a reason why you should believe the information that’s easier to believe. That’s just how our brains work.

A simple statistic is often easier to read and easier to remember than a narrative description, which is lengthy and complicated. And so a lot of times we’ll believe statistics more easily. If the story feels coherent, it feels right. In fact, having less information can make it easier to fit everything into a coherent pattern and feel good about that. If information spoils the story, people will just reject it.

It doesn’t matter. Facts don’t matter if they don’t fit the story because our brains will just reject contradictory information. The presence of all of this data and all of this information is making us feel like we’re capable of making logical decisions, capable of being objective, but probably not. We’re still reading the nutritional information on the back of the bag of Doritos and eating the whole thing anyway. Right? We’re still combing through Google Analytics and then making the changes to the design that we want to make or that our CEO wants to make. We’re still electing people we maybe shouldn’t elect.

Last year the National Science Foundation did a survey in America, because they like to depress themselves. It was just a simple, one-question sort of science test. They asked, does the earth go around the sun or does the sun go around the earth? Twenty-six percent of people in America got it wrong.

Okay, and it’s really -- okay, it’s easy. Give me a second. It’s easy to mock these pre-Copernican thinkers, except -- except, I want you to think: could you prove to me that the earth travels around the sun? I couldn’t prove it to you. Most of my scientific knowledge comes down to “Neil deGrasse Tyson says so, so it must be true.” You know? Think about how little of our knowledge is something that we could personally prove or verify.

Our knowledge comes from trusting our sources, and we don’t really admit that to ourselves very often. You know? We have cognitive limitations. Our pattern-loving brains evolved to help us spot lions in the tall grass or mammoths against the rocks, not statistically significant survey data.

Ironically, these cognitive limitations prevent us from fully grasping their extent. If you sprain your ankle, you will know. You will have sense data that tells you, oh, I can’t run for the bus as fast because my ankle hurts. But there is nothing in your brain or your mind that tells you when you have a cognitive bias that’s preventing you from correctly interpreting the information. You know? We can’t tell when something is preventing us from seeing the true facts. This requires a lot of deliberative thinking to really think, am I thinking about this correctly?

It’s usually something that we need somebody else to help us do, so you can all look around and find your metacognition buddy. This will be the person who will correct your biases because you just can’t do it for yourself. None of us can, and it doesn’t matter how well educated we are. This is just how our brains work.

As a species, we have this new ability we’re so excited to use. We can track and measure things that we could never track and measure before. But data alone doesn’t change minds or improve the world. When we say that we’re just going to rely on the data, we are evading our responsibility. Without a clear point of view and a clear set of questions as a frame of reference, any data point has exactly as much meaning as the number 42. That is to say none at all. That means it’s easy to make any piece of information mean what you want it to mean.

Here we are. Here we have the essential paradox of so-called data-driven decision-making. Right? This is our situation as humans and as people who work with technology. As more and more of our lives go digital, increasingly our life experiences involve and depend on these interactive systems. These interactions are tracked and captured in databases, and we end up with more information than we know what to do with.

This actually leads to us making worse decisions because we like stories, so we want things to fit into a story, and we’re actually really terrible at math. Math is really hard. The hard decisions we have to make, they’re not math problems anyway. They’re usually judgment calls.

Keep not panicking. I’m trying not to. Deep breathing. Deep breathing.

When you’re trying to make a decision based on data, the good news is that there are only two kinds that you have to worry about. You only have to deal with two types of data to be better informed, but you need to be able to tell them apart. You know what qualities and quantities are. Your cats are furry and mean. Those are qualities. You have five of them. That’s a quantity. I think you should rethink your life choices. That’s a judgment.

Qualitative data is something that can be observed and described in words; quantitative data is described in numbers. But you have to keep in mind that all of the quantitative data we deal with is based on qualitative judgment. We have to understand the underlying assumptions. We humans form the queries.

Whether you should look at qualitative or quantitative data depends on your questions. You can’t use one type of data to answer a different type of question. You can’t use quantitative data to tell you why something is happening or what it means. Then once you know what kind of data you need, once you know what your questions are, then you can choose activities to answer those questions.

I hate this guy. This is Nate Silver, statistician, FiveThirtyEight.com. Led us all astray. He is like the best quantitative mind in America, and he had what he thought was good data and made a bad judgment call. But to even begin to analyze quantitative data, you need to know statistics. It’s quantitative reasoning. That’s what it is.

To get good statistics, you have to very tightly control conditions. You have to hold the other things equal and control for distracting information. This desire to seem scientific, to seem like a logical computer the way so many decision-makers want to, gets in the way of doing good qualitative research, because people try to tightly control something that you should really be observing in the wild. This is why there’s a desire for focus groups and bringing people into the lab for usability testing: you think, oh, I’m doing science. I need to tightly control everything. But really, in qualitative research, you need to understand the real world.

Sometimes you’re dealing with people who get really hung up on size. They’re like, ah, but what about a big data set? If I survey people, I could get so many data points. The way to counter that is to say, well, with qualitative research you can get a very thick description. That’s for the size queens out there. But you have to be really careful, because biases creep in and, no matter what kind of information we’re gathering, we’re going to be biased.

One of the most famous instances in which a type of bias was discovered happened back in the 1920s at the Hawthorne Works in America. This was during a time when people in industry had a mania for science. It was sort of like the big data of their time. They said, oh, if we can measure things, if we can measure how workers work and treat it like a science experiment, then we can make our businesses very scientific.

They sent these researchers into this factory to observe people working because they wanted to see, oh, can we make them more productive by changing the light levels, opening the windows, closing the windows, moving the workstations around. They had these researchers in with the white coats and the clipboards. After some time, after doing a few studies, they noticed that they were having some real success. Worker productivity was going up by something like 25%. They said, “This is fantastic. Science. Business. We’re so smart.”

Then the researchers left. Of course, what happened was productivity dropped back down, because once there wasn’t a guy with a clipboard over your shoulder making you think you were going to get laid off, you’re like, ah, I can do my job normally again. To this day this is known as the Hawthorne Effect or the Observer Effect. It reminds you that when you’re gathering any kind of information, you have to think about your presence having an effect on what you’re studying.

When you’re dealing with quantitative data, it’s really tempting to not even have a question, to just say, oh, let’s look at the data. Let’s look at the numbers and see what they tell us. But that’s just a fishing trip, because if you’re looking at quantitative data without a clear question in mind, your pattern-loving brain is just going to see what you want to see, and somehow you’ll convince yourself that that pattern is actually there.
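
Here’s a minimal sketch of that fishing trip, using invented pure-noise data: scan enough metrics with no question in mind, and a few will cross the usual significance line by chance alone.

```python
# A minimal sketch of the fishing trip: 40 metrics where NOTHING is
# actually happening (both groups drawn from the same distribution),
# yet some still look "significant" purely by chance.
import random

random.seed(7)
METRICS, N = 40, 500

hits = 0
for _ in range(METRICS):
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    # z-score for the difference in means; its standard error is sqrt(2/N)
    z = abs(sum(a) / N - sum(b) / N) / (2 / N) ** 0.5
    if z > 1.96:  # nominally significant at the 5% level
        hits += 1

print(f"{hits} of {METRICS} pure-noise metrics look 'significant'")
# Expect about 2 -- patterns a pattern-loving brain will happily turn
# into a story, even though there is nothing there.
```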

Data doesn’t change minds, and new data can actually contribute to further disagreement when people with different underlying beliefs come into conflict, so we’re in this tough situation. And one of the promises of quantitative data is that we can optimize a system. That’s such a nice word: optimize. We just heard about optimizing images, and that’s great and fantastic because we kind of know what we’re going for. We know what it means to make an image the best it can be. Who doesn’t want to optimize a system? But optimizing through something like split testing means you’ve already decided what the best solution is, and you’re just trying to make incremental improvements to get there.

When you’re trying to make a system work as well as it can, you’ll never discover a different way to solve the same problem, or an even more profitable problem to solve. It’s like when you’re fitted for glasses and they ask, is this better or is this better? That will never tell you that you should get LASIK instead. We can’t split test our way to a better world.
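
A toy sketch, on a completely made-up revenue curve, of why incremental optimization behaves like that eye exam: it climbs the hill it starts on and never finds the taller hill next door.

```python
# Toy illustration (invented curve): step-by-step optimization, the
# split-test move, on a "revenue" function with two peaks. Starting
# near the small peak, hill climbing converges there and never sees
# the much bigger peak, because every single step toward it looks worse.
def revenue(x):
    local_peak = max(0.0, 1 - (x - 2) ** 2 / 4)   # small hill near x=2
    global_peak = max(0.0, 3 - (x - 8) ** 2)      # big hill near x=8
    return local_peak + global_peak

x, step = 1.0, 0.1
while max(revenue(x - step), revenue(x + step)) > revenue(x):
    x += step if revenue(x + step) >= revenue(x - step) else -step

print(f"optimized to x={x:.1f} with revenue {revenue(x):.2f}")  # stuck near x=2
print(f"the redesign at x=8 would earn {revenue(8.0):.2f}")     # three times more
```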

When you set out to optimize anything, you come up against an old philosophical problem, that of the good. What is good? How do you know it’s good? You want to believe there’s one answer, there’s one good goal that you should be going after. But there are often 10,000 worthwhile goals, and they’re often at odds with each other.

When there’s a disagreement about a design, a lot of times people say, oh, let’s just A/B test that. The numbers will make the decision for us. It’s easy when you have something like a landing page and you want somebody to buy something. You’re like, oh, which button is better, the green one or the yellow one? That’s easy. It’s whichever one makes you more money when people click on it.
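
In that easy case the arithmetic really is trivial. A minimal sketch, with invented figures:

```python
# The easy case: one clear goal (money), two variants. All numbers invented.
def revenue_per_visitor(visitors, purchases, avg_order):
    return purchases * avg_order / visitors

green = revenue_per_visitor(visitors=10_000, purchases=320, avg_order=25.0)
yellow = revenue_per_visitor(visitors=10_000, purchases=290, avg_order=25.0)
winner = "green" if green > yellow else "yellow"
print(f"green: ${green:.2f}, yellow: ${yellow:.2f} -> ship the {winner} button")
# The numbers can settle WHICH button earns more. They can't tell you
# whether button color was ever the question worth asking.
```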

But should you optimize for more traffic or higher quality journalism? Should you optimize for stability or innovation? Getting people to spend more time using your product or more money? You know? Novelty or convention? Autonomy or collaboration? Growth or sustainability?

The answer is that there will always be tradeoffs, and we need to find a synthesis, because the fundamental question of design is how to get from the current state, from what is, to what we think ought to be. Optimizing the current system is often beneficial in the short term, but it’s always a form of short-term thinking. You know? And the universe of things that we can’t measure or even control is always changing. It’s impossible to hold anything constant. You know, anything at all, really.

If data-driven decision-making were all you needed to make something great, then online advertising would be amazing because it’s the most measured thing ever. But it’s not. It still sucks because measuring is not enough, you know, even though it’s the most measured thing ever.

There’s a lot of confusion about asking questions. When you ask questions, you need to ask specific, practical questions and know what to do with them. A bad question -- this is America’s favorite bad question right now. It’s bad because it’s not specific. You want to know the answer, but just asking it isn’t going to get you a good answer.

You want to know how people behave. That’s a better question, because asking people things like their opinions will never tell you how they behave. And you need to change behavior. The best question gets at the thing you don’t know that carries the most risk. When it comes to research, you just need to consider that all these research activities, all the things that you could do to get information, are simply ways to answer questions.

There are so many different things you could do depending on what your question is, depending on what you want to know, depending on how much time or money you have. And people really focus on the activities because they don’t want to do the hard, intellectual work. We really want to be dogmatic. We want to find that one right answer. Even about research itself, we want to find the thing we can just do so we can stop thinking and stop debating and stop disagreeing. We just want to settle on it.

I have a special comment about surveys because those are the most dangerous research tool of all. They’re misunderstood, and they frequently straddle the quantitative and the qualitative. And, at their worst, they represent the worst of both because they’re so easy to create and so easy to distribute, and the results are so easy to count. And if something is easy, it feels right, it feels true, and it feels more valid.

Here’s the problem posed by this ease: ease is good if you’re doing the right thing, but a lot of times people run surveys just because it’s the easiest thing to do, even if the results are meaningless. There is such a thing as a good survey, but it’s actually harder to create and harder to analyze than just doing good qualitative research. It’s much harder to create a good survey because there’s no way to tell that you’ve created a bad one.

The answers will seem fine. You’ll say, I asked the questions. People answered the questions. The data must be right.

Bad design will fail a usability test. Bad code will have bugs. But there is no way to tell that you have bad survey data.

Most seductively, surveys yield responses that are easy to count and counting feels objective, true, and certain even when what you’re counting doesn’t count at all. It just feels true.

People email me their sad stories about what’s going on when they’re trying to make good decisions at work. Somebody sent me this message. It made me really sad because usability is about behavior. The best way to identify usability issues is a usability test. A survey is a survey. It’s not a good substitute for actually testing your system.

When I hear things like this, I think of a really silly story. It’s like a child in a fairytale who is sent out to gather mushrooms for dinner. But the mushrooms are way on the other side of the river, and the rocks are right here, so I’ll just gather rocks because it’s easier. Right? That makes sense. Then this is what dinner looks like. It sounds dumb, but this is how people think: oh, it’s easier, so it must be the right thing to do, even if the information is useless.

Here’s another sad message I got. This person’s boss is “a very analytical person.” What this means is that she has a bias toward quantitative information, even though there’s no reason why one type of data is necessarily more valid than the other. He wrote to me concerned about popup surveys ruining the user experience. He was very concerned because he had no way to convince her that the numbers weren’t the right thing.

Customer satisfaction is this widely used metric even though it’s meaningless. It’s a useless abstraction, so you end up with surveys like this, and they’re all over. This is a company that makes a tremendous amount of money convincing people that these questions are answerable and useful. How do you rate the options available for navigating on a ten-point scale? How do you rate whether the information is good? What’s the quality of the information? A six? What does that mean?

If you ask somebody something like, rate the number of clicks it took, right? If it was good, they might say, oh, it took me two clicks. This is ridiculous. But when you start looking for these surveys, they’re everywhere, and people are making decisions based on them. Surveys are dangerous because they feel quantitative. They feel objective and true. But they’re often qualitative.

Never ask people what they like. Right? That’s a self-reported, meaningless mental state that doesn’t correspond to any behavior. I always say I like horses. They’re a ten on a one-to-ten “like it” scale. I have no monetizable horse-related behaviors whatsoever.

Never ask people to remember because, again, we’re really forgetful, and we can just fill in our memories. Especially never ask anybody to predict anything.

Audience: [Laughter and applause]

Erika Hall: Thanks. Yeah, it’s sad. This is unchanged. This is the standard version, right? Got to get this message out there, because how often do surveys contain things like, how likely are you to buy this, to do this, to think this? The way people hear it is, how concerned are you about looking smart? I want to look smart. How likely is it that you’re telling the truth? Not very. It’s an impossible question to answer.

Only when you know the question will you know what the answer means, and it has to be a good question. It has to be an answerable question because the reason you gather information at all is to make decisions. Ultimately, you want those to be good decisions. Making bets based on actual insights from human behavior can be much more effective than basing bets on surveys, abstractions, and bad data that you’re counting just because it’s quantifiable. You know?

You have to measure the things that really matter. You have to define what success is and then measure things against that. Just because something is measurable doesn’t mean it matters. It doesn’t mean it’s useful. It doesn’t mean it’s true.

There are all these metrics that people count, like, oh, we got a lot of visits. We got a lot of hits. We got a lot of downloads. We have a lot of people telling us they like something. Those aren’t necessarily meaningful, but they’re big numbers, and they’re attractive, so people start paying more attention to them than to what matters. What you have to do is count what matters and turn it into something you can actually use.

I’ll tell you the thing that people tell high school students in America all the time: Make better choices. I’m talking about this because I want you to ask better questions, choose better metrics, and choose to do the things that help you meet your goals. Please, I want people to make better decisions in this world because we’re up against this.

This is what we are. We are not perfect data processing machines. We are irrational, irrational creatures. We do not reduce to an engineering problem as much as Google wants us to.

I’ll say this again because it is a difficult thing for people to accept because those numbers are so seductive because they seem so true. And I want all of us to acknowledge that humans will not reduce to an engineering problem to solve. We need to approach the data that we generate, gather, and interpret in context, in a real world context. We need to do this with humility, respect, and a true understanding of our limitations.

This is an understanding that begins with what we can observe but we cannot measure, and that’s the value and experience of our shared humanity. We each need to find our own reason and our own true goal before we can ask the right questions. By describing, measuring, using the best quantitative and qualitative data, and having really true, good goals and keeping each other honest, we can make good choices with real meaning. That’s what’ll give us a little bit of luck. Thank you.

Audience: [Applause]
