Transcription
(Music Playing)
Thank you.
(Applause)
I will say I had a few calls with Marc. And every time, I’m like, your house plants are magnificent. I just have an obsession with healthy house plants, and he has these beautiful plants behind him in video calls. So do a video call with him if you like plants.
So I’m going to be talking about creating your personal design principles with systems thinking. And I’m going to make the joke that speakers always make at conferences. I’m the only thing standing between you and lunch. So I’m going to do this as quickly as possible. But also take time and do it calmly and hopefully keep you somewhat engaged and hope nobody is hangry.
So for those of you who don’t know me, my name is Sheryl Cababa. I am the chief strategy officer at a firm called Optimistic Design. And we do equity-centered design and systems thinking, mostly in the realm of education. That’s my team on that side of the slide there.
And yeah, we’re a small team: all women, women of color.
And yeah, I’m realizing we’re also all short. But we’re mighty, though. We’re small but mighty. And I’m also a design educator. I teach in the University of Washington’s Human Centered Design and Engineering program, so I get to kind of keep up with what the kids are doing. And I spend too much time on TikTok to be able to relate to students and whatnot. Not a good idea, by the way. It’s a time suck.
So the other thing that I’ve done recently is last year, I published a book-- ooh, I don’t have the physical copy here--
Closing the Loop: Systems Thinking for Designers. And I’ll talk a little bit about systems thinking, because that’s part of what this talk is about, and how I find it really valuable as a designer myself.
So my career as a designer has adapted and shifted over the years. And I just want to give a little bit of context as to what is a design strategist, and how do you do what you do. I had started out as a UI designer, basically working on mostly web-based things.
A couple of decades ago-- I won’t say how long ago, but very long ago-- I definitely recognized the browsers that Miriam was showing this morning as part of what I was doing in the early days. And I moved on to become a product designer at companies like Microsoft, and then eventually Philips.
After that, I was an experience designer working as a consultant, designing everything from digital products to services. And then now, I’ve transformed--
or adapted my career into being a design researcher and strategist. And I’ve been doing that for about the past 10 years or so. I worked at a firm called Adaptive Path. I don’t know if any of you have heard of it.
Also frog, and a consultancy called Artefact, before I ended up where I am now, in my own firm, my own studio, Optimistic Design.
So what this meant was that I have slowly moved upstream in the business decision-making process. So earlier in my career, I was in the downstream of business decision-making. So questions from clients or from my team or company would be, can you design this product feature?
Now, I’m much farther upstream in the business decision-making process, using design as a tool to answer things like, can you ensure that end users’ voices are prioritized in business decisions?
And on top of that, I’m not describing business in a way that prioritizes profitability. So my firm is focused on equity-centered design. And what that means is that in North America, I’m working primarily in education. And that means involving and giving power to the most marginalized users within that space. And oftentimes, that means not only students at large, but the students who are the most marginalized in the system. So where I’m from, it’s Black and Latino students. It’s children of immigrants.
It’s English as a second language learners. And it’s children who are living in poverty. And so what we do is we elevate their voices throughout the process of philanthropies, nonprofits, et cetera, investing in education and in ed tech. So we learn about their context, but also ideate with them. And what does this look like? What you’re seeing here is an example of a toolkit that my team worked on that we designed with students. And it’s a personas toolkit. So rather than as a design team going off and designing personas after interviewing people, we ideated with students on this. And the idea is that ed tech developers, funders, et cetera, would use this with students to kind of understand their context better, understand why they’re being pushed out of school, or what it’s like to have the kinds of frustrations and hopes that the most marginalized students have.
And we also do a lot of work with philanthropies like the Gates Foundation, working with parents and students and teachers as well to define what the design principles are for things that organizations like the Gates Foundation want to invest in. And that could be ed tech products that are aimed at historically under-resourced and marginalized students. So the way I think about my job is I’ve gone from being a designer as a producer to a designer as a facilitator in terms of facilitating other people’s expertise and knowledge and teaching them how to engage in design.
I just added this as a shout out to Miriam, because I liked how Miriam was giving a shout out to so many talks yesterday. She said, it takes craft to set up the circumstances that are simple and yet contain the ambiguities and incongruity of human experience. And I’ll go a little bit further and say, we should also be distributing that craft to end users themselves, to the most marginalized of end users, including them throughout the process and making it a disruption of typical power structures.
So I love what I do. I am so excited to work every day. I definitely feel like I’m in a space where I’m a change maker and that I’m contributing to bettering the things that I really care about. I have kids myself, and so working in the space in education is really rewarding.
And I oftentimes get a question from my students who feel like they’re being pushed into tech companies whose values they don’t necessarily agree with, or who are wondering, what are the values at these tech companies? And they often ask me, how do you get to do what you do? So I can talk a little bit about my journey to where I am now, and why personal and professional design principles are so important to articulating where you want your career to go.
So rewind a few years. I was a design consultant working on some very diverse projects. On one side, I was doing work with very large tech companies-- speculative design work, thinking about how emerging technologies that they were developing would be used in 10 to 20 years. And what are the ethics around that? And what should they be considering when it comes to things like unintended consequences? And then on the other side, I was doing a lot of work in global health and development, and specifically in health care. And I also had some civic design projects. So these all seem like they maybe don’t have anything to do with each other, but I saw some common problems in each of these spaces. So let’s talk about the work I was doing in emerging tech.
One of the things that I noticed is there’s this real flavor of techno-optimism that really infects-- infects, I guess is the right word-- infects teams, like a virus--
infects teams at big technology companies, where if they’re working on something like new technology-- we’re seeing this with AI, for example, Gen AI-- it’s like, oh my god, this is so amazing. It’s going to change people’s lives. There’s so many possible use cases for this. So a few years back, I was working on a lot of these kinds of projects. But trying to get them to think ethically about it, I was oftentimes called a buzzkill. So I developed a toolkit, which I’ll share a little bit later, that was kind of a Trojan horse. It was really fun and had fun little characters on it, but we were having serious discussions about ethics and inclusion and things like that.
And what I found was that many of these technology teams I was working with, they oftentimes weren’t thinking about things like how marginalized populations can be harmed by products, how not knowing anything about a culture can lead to tragedy, and then also how personal use of technology can connect to negative societal outcomes.
I think I have this anecdote in my book. I was once working with a team, and we were doing a workshop around imagining the consequences of this new technology they were developing. After we had been ideating on features and things like that, one of my team members showed them a clip from Black Mirror. This was meant to be a conversation about unintended consequences. And one of the team members was like, ooh, that’s a good one. Can you put that up on the feature ideation wall? And we’re like, put what up? And he’s like, the thing about avatars. And we’re like, we showed you a Black Mirror clip as a cautionary tale, not as a source of inspiration. But they were so blinded by this idea-- ooh, look how they’re using that technology-- that they could not even see the harm that was happening in Black Mirror. It was just beyond. My whole team was so perplexed. And I think that led us to creating the toolkit that I’ll share with you a little bit later.
On the other side of my projects at that time, I was doing work in global health. And in this space, design thinking-- or what is sometimes called human-centered design in that space-- is meant to create positive change and impact. But where it stopped short for me was that they’re oftentimes still designing things for individual users. It would be a piece of technology that’s meant to be a silver bullet to solve something-- it could be a little malaria device that somebody puts in their home. But then they weren’t thinking about the landscape of implementation, the landscape of maintenance, the number of stakeholders involved in getting people to use these devices. And so oftentimes in global health, there are so many failed interventions, especially technology ones, which is just a waste of time and money for a lot of philanthropies and non-governmental organizations. I had a friend who worked for an NGO. And they were like, oh, did you know all these field offices-- they’re in places like Kenya and India-- all have warehouses full of failed interventions? And so I felt like, oh, this human-centered design stuff is kind of falling short.
So even when we as designers were maybe thinking more broadly about solutions, oftentimes our clients were not. And that made me think a lot about this quote from Khoi Vinh, a design leader at Adobe. He said, “We can’t think of ourselves merely as disruptors. When we take on a challenge, we have to commit ourselves to producing complete, viable, sustainable solutions. We have to finish the whole job.” He was talking about Uber and the impact Uber has had globally and on individual cities, and how their focus on disruption was so strong that they almost didn’t care that they were having such a negative impact in different places. And you see this with a lot of big technology companies that scale up, like Airbnb, et cetera.
As a result, I felt like the HCD approach was falling short in all these ways. And these are various models of human-centered design. You can see they’re all linear, and that’s really problematic from a systems perspective. Because Peter Senge, who wrote The Fifth Discipline, which is a classic in systems thinking, said-- or, sorry, I’m going to paraphrase it completely wrong; I always do-- “Today’s solutions are tomorrow’s problems.”
Because we think we’re solving things and we’re done. But what happens is the solutions generate problems of their own, and we need to be able to anticipate them, or think about their potential impact if they go wrong. And so that’s something that we need to consider all the time.
So I’m going to talk a little bit about why systems thinking is actually a good approach for correcting for some of the ills with design thinking. I’ll start with design thinking, just like I showed all those frameworks. I made it a circle here, even though it’s oftentimes not depicted circularly. But how many of you are familiar with this design thinking framework? Yeah, quite a few of you. You start with empathizing, which is you interview end users. Then you define what the problem space is and where you might need to solve things. You then ideate, go off and create 100 different ideas and narrow it down to three. And then you prototype and test, and you’re supposed to kind of do this all along. The problem is this is oftentimes focused on an individual user in the moment of use. So it falls short in many ways, as I was describing earlier.
And what is missing is that broader context. So I love this quote from the designer Eliel Saarinen-- I’m sure I butchered his name, by the way; any Finnish people in the room, you can come at me later-- who said, “Always design a thing, thinking of its next larger context: a chair in a room, a room in a house, a house in an environment, an environment in a city plan.” And the drawing on the other side is from Charles Eames, who was also a modernist designer. It’s hard to read, but he was saying there are these different stakeholders you need to take into account as a designer: yourself as the designer, your client, and then society at large. A couple of those are very micro, and the other one is very large-- society at large. And I think that’s the one I’m very interested in as a designer, as a problem solver, and as somebody who wants to be involved in making change in the world.

So the way that I think about the concepts of systems thinking that are useful for practitioners is that you shift your mindset to understand that everything is interconnected, even the things that you can’t see. There’s causality that goes beyond just one level: second-order effects, third-order effects, et cetera. We sometimes also talk about this in terms of unintended consequences, or as cascading effects or ripple effects. And then lastly, there’s wholeness: the stakeholders that you ought to be thinking about probably go beyond who you’re thinking about now. So let’s say you’re designing something for patient care.
And in health care, I think as technologists, we’re tempted to think about the experience as, let’s say, an app that a patient uses. But you might actually need to think about stakeholders from hospital administrators all the way up to health care policymakers. And this helps you to better position your solutions-- to think about what are actually good solutions and what aren’t. It’s essentially making the invisible visible.
And I think that’s an important lesson for practitioners, particularly in technology. So what does this look like when I say I’m a systems thinker and a designer? What it oftentimes looks like is systems mapping, because I’m engaging the visual skills of my practice, along with using that for things like alignment with stakeholders-- doing this work with stakeholders, as well as articulating the system beyond just the problem solving that we’re used to.
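For readers who like to see things as code, a systems map like the ones described can be sketched as a simple directed graph, and tracing what is reachable from one node is one way to surface second- and third-order effects. This is only an illustrative Python sketch; the node names are my own inventions, not taken from any actual map.

```python
from collections import deque

# A toy systems map as a directed graph: each factor lists the factors
# it influences. These nodes are illustrative, not from a real map.
influences = {
    "engagement-driven feed": ["outrage content", "time on site"],
    "outrage content": ["polarization"],
    "time on site": ["ad revenue"],
    "ad revenue": ["engagement-driven feed"],  # a reinforcing loop
}

def downstream(graph, start):
    """Collect everything a change at `start` could ripple out to."""
    seen = set()
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

ripples = downstream(influences, "engagement-driven feed")
```

Note that the feed itself shows up in its own ripple effects: the reinforcing loop through ad revenue is exactly the kind of structure a systems map makes visible.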
And there’s some examples of this. This is a systems map that I created about the impact of Facebook, to think about where the intervention points are to prevent the harm that was being caused by not just this product, but this company. I did this a few years ago. I think a lot of it actually still holds true, and is maybe even crazier now, to be honest. One thing is you’re identifying all these different points, which may go way beyond a product, and you’re thinking about these things as a designer.

We also do things like comparative mapping. This is comparing the different types of hospital systems in India and the different types of stakeholders within them. And then oftentimes I’m using and teaching frameworks-- the one on the far right is the iceberg model. I use that a lot to talk about root cause. And here’s another simple stakeholder mapping exercise that I oftentimes do with clients and partners. That helps us get on the same page about where the intervention points are. Again, a lot of my work is around where to direct money so that things can be more equitable, and these are good starting points for doing that.

So now that we have a grip on systems thinking, I want to talk about design principles. How many of you have design principles within your work or for your product?
Wow-- not a few.
So I want to talk-- first I’ll explain what design principles are. And then I’ll talk about the importance of creating your own design principles. So design principles are described as value statements that frame design decisions and support consistency in decision making across teams working on the same product or service. So you have seen some of those in the various talks.
It’ll be things like, buttons should behave consistently, and that sort of thing. That one’s maybe a little too obvious, but it should be things that maybe can be contradicted.
So for example, I worked on this toolkit called Modernizing Math Toolkit. And these were the technology principles for people designing with AI in mind. So how do you design AI with a sense of co-design and transparency? How do you emphasize creativity with AI and not just efficiency?
How do you emphasize active learning? How do you emphasize human relationships or prioritize human relationships? So these are some examples of what design principles are.
And last year I was speaking at a conference and I saw Ovetta Sampson, who is the director of UX for AI and Compute at Google. She gave the keynote address, and she talked about how she felt like designers should always have an ethics statement. That statement would act as a guide for you to make the personal and professional decisions that you want to make, and to direct your career towards what drives you ethically.
And I thought that’s a really good place to take this concept of design principles and think about how I want it to shape my career. And as I was working on my systems thinking book, I was like, oh, this is a really good intersection of these things. How does systems thinking inform design principles?
So when I talk about using systems thinking to inform your own personal design principles, you might have some things in mind like creativity, autonomy, independence. These are all things you want to have professionally. Those are all great. You should have those.
Those are kind of inward facing, right? What kind of qualities will make my life better? I do think there is also this idea of using systems thinking to think outward. What is the change I want to make? Or there’s that saying: be the change you want to see in the world. And I think systems thinking is a good tool for that. So I have a few tips for using systems thinking to create your own design principles.
There’s three of them. So one is: remember that everything is connected. You guys, this is my favorite movie. It’s even a Halloween costume-- I just crushed it at Halloween last year because I dressed as Jobu Tupaki with a bagel.
And yeah, it’s just my favorite movie, because I feel like it’s a systems thinking movie. It shows that everything is connected. What’s really nice is we take that into our work with kids. Not from this movie, but from Spider-Man: Into the Spider-Verse, in that we’re like, imagine the futures you want to see and create that multiverse. And doing that in workshops with kids is so good. They just really connect to this idea that you can shape your future by doing these small interventions. You can create systems maps that help you do that.
So I shared with you the three concepts before: interconnectedness, causality, and wholeness. And I think the important aspect of this is figuring out root cause. So what is the root cause of the problems you see in the world that you really care about? And how can you think about, how can I as a practitioner be principled about that in a way that shapes my career in that direction? And I’m not just talking about systems-level things. If you’re very uncomfortable with AI-- with Gen AI, for example-- what is the root cause of that? What are the things that make you sit uneasy?
And my colleagues used to tease me, because whenever I’d answer a question about how things were connected, if I wrote it on a panel or something like that, I’d be like, blah, blah, blah, blah, blah, because capitalism.
And they’re just like, oh, you’re always going on about capitalism. But that’s the way the world works-- all roads lead to Rome. We’re at a point of late-stage capitalism, so it’s just like, how can it not lead to that?
That said, the follow-up question I get, oftentimes from students, is like, well, I can’t fix capitalism, so what can I do? And I’m like, you don’t necessarily have to be the one to fix capitalism. But you can understand that things connect to that. And the work I’m doing right now at the intersection of AI and education, helping organizations shape their ethics and their principles around that, reminds me of this quote from Ted Chiang, the science fiction writer. He says, “I tend to think that most fears about AI are best understood as fears about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.” And it’s just something that has drilled its way into the back of my mind. So anytime I’m looking at something and it’s making me feel unsure or uneasy about the trajectory of the tech sector, I’m always a little bit like, what late-stage capitalism thing is happening here?
I really was like, I’m not going to talk about the election. I’m not going to talk about the election. But as I was doomscrolling on Tuesday, I saw these two little headlines. And I was like, I have to screenshot this. They were the tiniest of headlines too, as you can see. And they said, “Stocks rise as Trump has strong showing in swing states. And Bitcoin surges to a record as crypto investors root for a Trump win.”
This feels so distressing to me, because I do feel like the technology industry and its leadership are just surging to the right and being really obvious about it. They used to at least have a face of neutrality. And you know what? Now this is their face.
Oh my god. Look, I looked at this picture yesterday, but I haven’t seen it this big. He looks even more like a dipshit when I see it huge like this. Like, what’s wrong with him?
(Applause)
There’s a huge difference between seeing it on my little laptop screen and being like, oh, there he is.
And he’s a bad jumper. OK? He’s a bad jumper. And you’d think a billionaire could have a better jumping coach or something. But no, no.
But it’s not just him. I saw that Nvidia, who makes all of the chips for Gen AI, surged-- their market value is more than 3 trillion now, maybe; I might be making that up. Because of the election results, which is crazy to me. It just shows me that the market itself is a system based on emotional decision making, because the person who just got elected-- his economic decision making sucks. So it’s just kind of like, what are you guys doing? What’s going on? Anyway, that’s the richest man in the world. He looks like a complete dipshit. And we’ll just have to replace him at some point.
So, back to the iceberg. Now I’ll get off my little soapbox.
OK, so the iceberg model is something I use a lot, just like in workshops. And it’s something you can use to do a little bit of self-reflection. Or you can use it with your teams, too. So how do you understand root cause? What’s at the core of what matters? And what might you want to solve for?
And it’s really nice, because the events are the things that you see-- these are kind of like the symptoms. And then underneath that are patterns and trends, then structures, and then mental models. So this is a very individualistic example, but it’s a good one to explain the model itself. The event might be: you caught a cold. The patterns underneath that are: you’ve been catching more colds and sleeping less. The underlying structures are oftentimes things like infrastructure. And then mental models are where it gets really interesting: career is the most important piece of our identity; healthy food is too expensive; rest is for the unmotivated. This is a really meaty problem space for solving.
And it takes a lot to dig into what is going on there and what should be corrected. Because catching a cold is just a symptom. What’s down there is societal and cultural-- socio-cultural information that a systems thinker can use to think about where change needs to happen.
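The iceberg's layers are regular enough that they can be sketched as plain data. This is a minimal, illustrative Python sketch using the catching-a-cold example from the talk; the layer names follow the model, but the structure and helper function are my own additions.

```python
# The four layers of the iceberg model, from visible to hidden.
ICEBERG_LAYERS = ["event", "patterns", "structures", "mental models"]

# The cold example from the talk, one entry per layer.
cold_example = {
    "event": "You caught a cold",
    "patterns": "You've been catching more colds and sleeping less",
    "structures": "Long work hours; healthy food is hard to access",
    "mental models": "Career is identity; rest is for the unmotivated",
}

def descend(iceberg):
    """Walk from the visible symptom down toward root-cause beliefs."""
    return [f"{layer}: {iceberg[layer]}" for layer in ICEBERG_LAYERS]

layers = descend(cold_example)
```

The point of the exercise is the last entry in that walk: the mental models are where the root causes, and the most interesting interventions, live.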
So, a few prompts for seeing those connections as you think about your own professional design principles. What are the kinds of root-cause problems that keep you up at night, personally or professionally? And what kind of approach do you as a practitioner need to take to help solve for them?
And just kind of thinking about those things. The second tip for creating your systems level design principles is think long-term. So beyond understanding of root cause,
what do you want the future to look like? And how can you as an individual embody principles that could lead to that future?
Every single talk I give, I include this cartoon. I’m always like, when’s the talk going to happen where I don’t include this cartoon? But it says, yes, the planet got destroyed, but for a beautiful moment in time, we created a lot of value for shareholders.
Anyone who’s worked for a big company has felt this way. Or you’re like, I don’t know about the things we’re creating. What about like the data we’re collecting on people and like there’s been a hack and whatever? Oh yeah, but our stock is through the roof. And I think it’s like there’s that tension for a lot of people,
and we oftentimes feel really disempowered as teams who are working on things like products and features. So how do you as an individual lead toward long-term thinking and long-term decision-making, rather than short-term, quarterly-earnings thinking?
So that’s kind of one question. The other is something I think about a lot in my line of work in the space of inclusion and equity. It’s this quote that many of you might be familiar with, from William Gibson: “The future is already here. It’s just not very evenly distributed.” And as somebody who works in education and sees the disparity between how children learn, not just in my country but throughout the world, it’s just very distressing. We have in the West access to the most amazing technologies, yet in some rural places in the United States, some schools don’t even have wifi. And so when the pandemic hit, it was such a disaster, and it actually made education even more inequitable. And I don’t think that’s just a problem in my country.
It’s a problem globally. So it makes me ask, how do we design technology now first and foremost with accessibility in mind-- accessibility in the broad sense, meaning accessible to everyone?
During the pandemic, Yuval Noah Harari, who you all know wrote “Sapiens,” wrote a piece about the alarming use of surveillance technologies during the pandemic, just to track how COVID was spreading, who was getting vaccinated, that sort of thing. And he said, “Temporary measures have a nasty habit of outlasting emergencies, especially as there is always a new emergency lurking on the horizon.” And almost immediately, those same surveillance technologies were being used to track protesters-- for example, during the massive protests in 2020. And I think as technologists, it’s worth keeping that in mind: what are the long-term unintended consequences of the short-term decisions that we might be making, even in responding to a crisis?
And guess what? I have another framework for you to use. I love frameworks. Systems thinking is like built on the backs of frameworks that you can use in workshops.
And it’s called the Futures Wheel.
So you can articulate an event-- maybe something that happens, or an action that you’re taking-- and then think about the first-order effects, second-order effects, third-order effects. It’s the simplest of frameworks. But it gets to some really interesting spaces, because even when I do this with, as I mentioned before, techno-optimists, they oftentimes get into spaces where they start thinking about and talking about unintended consequences. Because if you do this broadly enough, things will come up. This is just a really simple example where I did this with a team around creating a remote work policy. And there are some positive outcomes-- if you look at the orange third-order effects, it’s like smaller, more practical office space. But then you also have employee burnout. And like with the iceberg, you end up having deeper conversations, deeper reflections, about what can possibly happen. And this form of speculation is actually really useful.
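A Futures Wheel is really just a tree: an event at the center, with consequences branching out by order. As a sketch, here is one illustrative way to model it in Python; the effects below echo the remote-work example from the talk, but the class and helper are my own invention, not part of the framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class Effect:
    """One node of a Futures Wheel: an event or a downstream consequence."""
    description: str
    consequences: list = field(default_factory=list)

def effects_by_order(event):
    """Group every downstream effect by its order (1st, 2nd, 3rd...)."""
    orders = {}
    def walk(node, depth):
        for child in node.consequences:
            orders.setdefault(depth + 1, []).append(child.description)
            walk(child, depth + 1)
    walk(event, 0)
    return orders

# The remote-work example: one central event, effects rippling outward.
wheel = Effect("Adopt a remote-work policy", [
    Effect("Fewer people commute to the office", [
        Effect("Smaller, more practical office space"),
    ]),
    Effect("Work hours blur into home life", [
        Effect("Employee burnout"),
    ]),
])

by_order = effects_by_order(wheel)
```

Grouping by order makes the point of the exercise visible: the positive outcome (smaller office space) and the negative one (burnout) sit at the same depth, which is exactly the kind of tension the framework is meant to surface.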
So how do you continue to think long-term? What kind of future do you want to see?
Do you think we are on a trajectory towards that? And what are you doing to contribute to the future you want to see? Like how are you individually contributing to that? And if you feel like you’re not, it’s like, oh, how can I kind of direct my activities as somebody with skills towards that?
And lastly, articulate your systems-level values. We all know we have personal values. And again, I want to emphasize: personal values are really good. If you want to emphasize creativity in your own life, that’s really important. But also mix it up with some systems-level values-- ones that are outward facing, like the examples I mentioned earlier. So a couple of years ago, I worked on a couple of technology ethics toolkits. One is called the Tarot Cards of Tech, which I worked on at my former firm, Artefact.
The other is called the Ethical Explorer, and I worked on that with the Omidyar Network, the big philanthropy. These are both the kinds of things I described earlier-- cute artifacts that are meant to help teams have serious conversations. You can find both of these online still, and they can be good at helping you articulate those systems-level values. So for example, if data privacy is really important to you, the problems now are things like surveillance. How might your technology be used to discriminate, oppress, or target specific groups? And you can ask this of yourself: if you’re a designer, how might the technology you’re working on be used for that? How might you as a technologist respond to law enforcement or government requests for user information, and what policies might be important to you? Another example is exclusion. If you care about inclusion and equity, you could ask: how would society be impacted if marginalized groups couldn’t use your products?
And there’s a whole series of these, and I think they can help you articulate what’s most important to you when it comes to systems-level values. These are Microsoft’s responsible AI principles, and I think they’re pretty good categories too. Do you have transparency toward the people who deserve transparency from you as a technologist? Do you have accountability? The ones on top are pretty self-explanatory: fairness, reliability and safety, privacy and security, and inclusiveness.
So in terms of articulating your systems-level values, what is most important to you at the macro level?
Is it how people interact with each other? Is it fairness, justice, equality?
Is it climate change? Is it sustainability, circularity, and protection of the environment?
What do you want to change in that regard?
So just to sum up: remember everything is connected, think long-term, and articulate your systems-level values. And to serve as a little example, a case study maybe, I’ll share with you my own personal design principles. I didn’t really fully bake these until last year, which is a long time coming, considering how old I am and how long my career has been.
But I think the purposefulness throughout my career had been leading up to these. I have three of them. My first one is: always stand with the egg. What does this mean? Several years ago, the novelist Haruki Murakami said, “Between a high, solid wall and an egg that breaks against it, I will always stand on the side of the egg.” He left his words open to interpretation, but the way I apply it is as a metaphor for power. Who has power and who doesn’t? And how do you move that power toward the egg? So when I work on projects in education, I think about all of the adults involved: teachers, parents, administrators, policy makers, governments, investors and funders,
ed tech developers, assessment developers. They’re all trying to determine what’s best for students, and even with the best of intentions, they’re the wall. They hold a lot of power. And I think of students, with their lack of involvement, as the egg. Among students, there are even more fragile eggs: the marginalized students whose voices typically aren’t heard, like refugee students, Black and brown students, undocumented students, students with disabilities. They’re the most fragile of eggs. And I want my life’s work to move from reinforcing the wall to cradling the egg and reinforcing the power it has, helping it grow into something. I don’t know, a chicken? Maybe not a chicken. Helping it grow into something beautiful.
And so when we talk about design, we just don’t talk about power that much. And I feel like it’s a worthy discussion to have.
My second personal principle is be a good ancestor.
And there’s this book I really like called “Lo-TEK: Design by Radical Indigenism” by Julia Watson. She discusses an example from the indigenous Khasi people in India.
So what they do is, the community builds bridges and structures by shaping the direction of trees and vegetation in the forest. A bridge might take 50 years to form, so the people who start the project will probably never even see it finished. It’s an example of not only designing long-term, but designing in community with your descendants. There’s a concept in indigenous philosophy called seven generations thinking, which means the decisions you make today, and the leadership you hold today, should create positive change for seven generations. I know that’s really hard to think about; seven generations is literally hundreds of years. But just think about the plastic in our environment and things like that. That’s bad for seven generations. We weren’t even thinking about what would be happening seven generations later and what those people would be experiencing. And it’s not just about environmental sustainability. It’s about forging harmonious relationships that will last seven generations. So we can think about the technology we’re creating, the impact we’re having on the environment, and how we’re shaping culture. Is this way of doing things sustainable for seven generations? Is late-stage capitalism sustainable for seven generations? And yeah, there I go again, blah, blah, blah, because capitalism. But I think there’s a lot to learn from alternative ways of doing things. I often apply seven generations thinking even at the scale of just 10 years: what am I working on now that works toward the future I want to see? I heard a story recently about tsunami stones in Japan.
I just became obsessed and fascinated with this. There’s a town called Murohama that had a massive earthquake in the year 869, the Jōgan earthquake. Yes, 869. Don’t ask me what it was like there at that time; it’s more than a thousand years ago. Survivors of that earthquake erected a shrine so that no one would ever forget that two tsunamis came through and killed a whole lot of people. And so in 2011, when the earthquake in Japan hit,
in this town, everybody in that town knew to go up to this one hill.
They had a tsunami siren, but it had fallen down during the earthquake, so it didn’t go off. But everybody was up on that hill as they watched the tsunami hit the shrine. That is something like 50 generations of thinking, right? They benefited their descendants by erecting these shrines. They knew they were doing it for many generations to come. So as we design the technologies we’re working on today, can we say we’re good ancestors? I’m always thinking about the things we design for efficiency,
and how the way we all write now is so different. How do you keep this philosophical lens on it: I need to be designing with my future descendants in mind.
And my last principle is: make design an act of freedom dreaming. This is a term from abolitionist education, from the scholarship of Dr. Bettina Love, who is an activist in this space. The way she describes it is that you have to understand things like oppression, and you can get really overwhelmed with everything that’s wrong in the world. But you can also think about how to solve for that, and that’s the best kind of problem solving. Rather than ignoring the problems in the world and shutting them out, how do you look those problems in the face and think about how to make things better? She says understanding the mechanisms that reproduce structural inequality is an essential component of freedom dreaming. So you start in that place: you start by acknowledging the problems of the status quo. And I really love this. Working with minority students is so meaningful, because they know the things that are wrong. As Linda said yesterday, and I hope I’m paraphrasing this right, “Children are the R&D department of humanity.” They have so many hopes and dreams. They haven’t been jaded yet; they’re not cynical yet. And we can extend something like freedom dreaming to others. My team recently worked on a speculative design project with students to imagine the future of math education; you can see it at modernizingmath.com. One of the things we worked on with students was creating vignettes: you’re living 20 years in the future, and what are the kinds of experiences you want to have? What are the kinds of communities you want to have? And the students came up with these really beautiful visions, like combining technology with food cultivation, or virtual labs for hands-on learning.
And I know there’s like bright spots that are doing that right now, but they were imagining a world where every kid has this. And it’s like, that’s not even like a question to them. It’s like, yeah, every kid has this.
And so we worked with them to generate these Midjourney images. By the way, if anyone here works on Midjourney: do you know how hard it is to ask for a future classroom that does all this and actually has Black and brown students in it? It was all white students until you’re really four prompts down. Insane. But this is also what I mean when I say you have to understand history to understand the future: that’s the training data, right? The training data is based on history, and on what and who we think is culturally important. So if you work on Midjourney, take note of that; hopefully you can fix it. It took us a lot of prompts to get these images.
But it’s a reminder that others, and not just those with profit in mind, can freedom dream. We can dream about things that are more equitable, that serve all of us, and not just make a shit ton of money.
And I share this in workshops, especially when we’re talking about tough topics like equity, exclusion, ethics, and problems in technology.
You might learn something one day, but then devote yourself to doing something about it tomorrow. Design is so powerful, and we can use that power to dream. Think about using this power not just to dream of a more equitable future, but to make your way toward it.
And that reminds me of this Angela Davis quote, especially in times like we have right now: “I am no longer accepting the things I cannot change. I am changing the things I cannot accept.” So I hope this inspires you to create your own personal design principles and work toward the change you want to see.
Thank you.