Thorsten Jonas
Thorsten is a digital Sustainability Activist, responsible AI advocate and Sustainable UX Trailblazer. He is the founder of the non-profit initiative and global community SUX Network and of the SUX Academy, co-leads the UX chapter of the W3C Sustainable Web Design Guidelines and is the host of the SUX Podcast.
Thorsten guides, advises and teaches designers and product managers as well as design and product teams or entire organisations in crafting sustainable, responsible and ethical digital products while using tech and AI in a responsible way. His passion for the outdoors and mountaineering drives his commitment to making sustainability and responsibility a fundamental aspect of digital design and the digital world in general.
Talk: Each Design Is a Manifesto for the Future
The world is on the edge right now. Climate crises, rising inequalities on many levels, the return of realities we thought we had overcome. The digital products we design play a role in all this, which leaves us with the power but also obligation to act.
Each design decision is more than a functional choice; it embodies a vision for the future, reflecting and influencing societal values and realities, whether intentionally or inadvertently. Choices, even small ones, can serve as pivotal inflection points that dictate trends, shape technology adoption, and impact social norms.
Join Thorsten on a journey that shows how design and everyday design decisions are shaping the future and how we can utilise this flow of decisions to make a positive impact.
Transcription
(Applause)
Thank you so much for this warm welcome. Actually, it’s my first time here at beyond tellerrand, and it’s a huge, huge honour to be here on stage and to speak to all you wonderful people.
Yeah, I brought a topic. It reflects my work, but it also reflects a little bit the past months and weeks. So there are many thoughts in there that I want to share with you, and why I think that everything that we do, every design that we make, actually is some sort of a manifesto for the future and has an impact on the future, which is maybe even more important these days.
Though there was some hope yesterday, I can tell.
I have a tradition. When I do a talk, I usually bring a picture from one of my non-digital activities.
I’m from Hamburg, but I love going to the mountains, which is a little bit funny because the highest mountain we have in Hamburg is the Wiese-der Berg, and that’s 169 meters high.
But at least once a year, I try to go to the mountains, do some mountaineering, climbing up some mountains. And I do it because it’s so beautiful there, right? It’s these beautiful views. It’s where I find peace and calm actually from the stress of these days.
But there is another reason why I bring these pictures, and I brought a second picture. So this is from this year, when I did the Watzmann crossing, a famous one in Berchtesgaden.
I have another picture that’s from last year, when I went to the Großglockner in Austria. That’s the highest mountain in Austria.
And so this is looking down the southeast side. And you see this little lake here, right? And the interesting thing about this lake is, it’s pretty new.
A few years ago, the glacier tongue of the Pasterze that comes from here went all the way down there. So going to the mountains is a heavy reminder, actually, of the big mess we are in. And I think sometimes we do not sense this enough in our beautiful cities, right? But if I go to the mountains and see the glaciers melting, meter by meter, if I see mountains crashing down because the permafrost ground is melting, that’s another sense of urgency. And that’s also one of the reasons why I’m doing what I’m doing. I’m working in digital sustainability and try to bring my little piece to the table so that the whole digital world that we are creating, that we are using, becomes more sustainable. So I’m traveling a lot. I’m doing a lot of talks and workshops and things to, well, not only spread the message, but also share ideas of what we can do and what we can do better.
And when I speak about digital sustainability,
it’s also a lot about AI, right? Because that’s also a huge thing, and we will come back to that later. Part of my work also is the Sustainable UX network, or SUX network, that I founded with some people some years ago. It’s a nonprofit initiative and community where we bring together people. We have a huge collection of resources available for free. We have a podcast and a Slack community, and you are more than invited to come by. There’s nothing you have to buy. Well, we do courses, that’s the only thing that’s not for free, but everything else is. And the one learning that I had over the past years is that coming together, and that’s why I like events like this so much, is so important. We need to gather, actually. We need to join forces, because as we can see, the world is on the edge right now, right?
I just brought some news headlines from earlier this year.
The climate crisis is getting worse, and everywhere in the world we are questioning that. Even here in Germany, I don’t have to tell you. And it’s not only the environmental crisis we are in, it’s also the inequality crisis, right? Inequalities are growing. We are not making the world better at the moment. And I think we are in danger of turning back the wheel of time when I look at politics, when I look at things that are going on at the moment.
Well, and the bad news is, GenAI and agents will not solve this problem, I’m afraid.
So, what is the role of design in all this?
And very important, this is not about making us responsible as designers or as creators of everything. It’s rather exploring opportunities for how we can make an impact in all this. Because I think it’s the design decisions that we take that shape the world every day, right? It’s the small decisions that we make that can have a huge impact, the butterfly effect you might have heard of. So, what do I mean by that? Well, design makes a carbon impact, for example. I showed you the images of the glacier. And there is an interesting fact that 80% of the emissions of a product, not only digital, any sort of product...
...are usually determined in the design phase, and design phase here also means engineering. Right? So, all the way from someone having an idea to a ready-to-ship product or ready-to-use service, most of the emissions of this service are determined there. And that shows us the huge potential that we have there, right? But I want to show you a very concrete, small example from the digital world.
This is from the University of Copenhagen. I worked with them last year.
So, they are rebuilding their whole web environment at the moment. And they came to me and said, “Hey, can you help us? We want to consider sustainability from the very beginning.” So, in the whole process. I’m one of the co-authors of the Web Sustainability Guidelines at the W3C. So, I said, “Yeah, sure, I’m really happy to help.” And so, we looked at things, etc. And I want to show you this example. So, this is the page where you can see all the master’s programs of the university. Pretty important page. Many people go there to see what they can study.
And the interesting thing is, this image in the background is not an image. It’s a background video.
It’s not transporting any information. It’s just there for decoration. So, now there are tools that let you at least estimate pretty well the potential carbon impact of a website. When you do it with this site, you see that four to five grams of carbon are emitted with every single page view of this page. And that’s mainly because of the video. So, up to four grams actually come only from the video. So, if you got rid of that, how much could we save? Just with this very simple change. So, assuming they have 200,000 visits per year, that results in one tonne of CO2 we can save just by getting rid of that video. So, I want to show you the potential of the small things that we can change.
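As a sanity check, the arithmetic behind that claim can be written out; a minimal sketch in Python, where the per-view figure is the talk’s tool-based estimate, not a measured value:

```python
# Figures from the talk: ~4-5 g CO2 per page view of this page,
# almost all of it caused by the decorative background video.
visits_per_year = 200_000        # assumed traffic from the talk
grams_saved_per_view = 5         # rounded per-view saving if the video goes

tonnes_saved = visits_per_year * grams_saved_per_view / 1_000_000
print(tonnes_saved)  # 1.0 tonne of CO2 per year
```

Dropping one decorative asset really does add up to a tonne a year at this traffic level.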
So, why is that such a problem? Or how is that happening? Well, the problem is, behind every great digital experience, behind every great digital service, website, whatever, is the data center pulling a lot of electricity.
I have another example. I’m in Denmark a lot, actually, so there are some Danish examples.
That’s from the Danish tax administration, skat.dk, so that’s like the Elster website here in Germany.
All right, so these are the numbers from when I visited them last year.
They have usually around 2.5 million visits per month, 10 page views per visit. The homepage has a carbon impact of 0.8 gram of CO2 per page view.
That’s 20 tons of CO2 per month just because of the website. Imagine we could just halve that, then we would save 10 tons per month.
That’s the potential.
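The same back-of-the-envelope math works for skat.dk, using the figures from the talk:

```python
# skat.dk figures as stated in the talk.
visits_per_month = 2_500_000
views_per_visit = 10
grams_per_view = 0.8   # estimated CO2 per homepage view

tonnes_per_month = visits_per_month * views_per_visit * grams_per_view / 1_000_000
print(tonnes_per_month)      # 20.0 tonnes of CO2 per month
print(tonnes_per_month / 2)  # 10.0 tonnes saved if the per-view impact were halved
```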
And I have one more example, from the University of Edinburgh. So, I did not work with them, but I know them pretty well, and I really love this one because they made some changes on their website. They just changed two things. First, the format of the images: usually we still use PNG or JPEG all the time, but if you use WebP instead, for example, you get a much smaller file size with the same quality.
By this change alone, they are able to save 25 tons of CO2 per year. They also installed an upload size limiter, limiting the upload of images to one megabyte. But again, it’s two small changes that make such a huge impact.
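A sketch of what such a pair of changes could look like on the server side. This is my illustration, not Edinburgh’s actual code; `accept_upload` and `webp_name` are hypothetical names:

```python
MAX_UPLOAD_BYTES = 1_000_000  # 1 MB cap, as in the Edinburgh example

def accept_upload(size_bytes: int) -> bool:
    """Reject image uploads above the configured size limit."""
    return size_bytes <= MAX_UPLOAD_BYTES

def webp_name(filename: str) -> str:
    """Map a legacy-format image name to its WebP equivalent."""
    stem, dot, ext = filename.rpartition(".")
    if dot and ext.lower() in {"png", "jpg", "jpeg"}:
        return f"{stem}.webp"
    return filename
```

The actual pixel conversion would be done by an image library or CDN; the point is that both rules are a few lines of policy, not a rebuild.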
And it’s not only the carbon, it’s not only the CO2.
All these data centers also need a lot of water for cooling.
And the problem with that is, the water is not lost in the global context. I mean, it’s still on Earth, but it goes into the air, in the form of steam for example, and then it’s very often lost in the local context. So, in a region with water scarcity, if you build a data center there, that can be, and in different regions of the world already is, a problem.
And here’s an interesting number from a great person, Gerry McGovern. He wrote with Sue Branford about all these things, and they are saying: we estimate that by 2030, every one of us in the Western developed world will have a digital doppelganger, meaning all the digital data related to us on the Internet needs the same amount of water as our body needs to live for one year. So, that’s the dimension we are talking about. And the interesting thing here is, actually, most of that stuff, most of the data, most of that content, is not used.
It’s there on actively running servers, using energy, emitting carbon, using water, but nobody is using it. Imagine we could get rid of just half of these things.
And here is one more. So, I tried to bring in some practical examples as well. Here’s another example from the University of Copenhagen, because with that in mind, we built a very nice, I think, little process there. A university has hundreds of content editors creating sub-pages, right? So, there is a course next semester, and someone creates a sub-page for this course. They have hundreds, thousands of these sub-pages. Usually, after one semester, they are outdated.
So, what we built there actually is a very simple mechanism that automatically looks at the usage numbers of each single page, and when the numbers are very low, below 10 visits or something like that, the creator of this page, the owner of this page, gets an email that says, “Hey, nobody is looking at your page. We will de-publish and delete it if you do not interfere.” So, the default option is to delete the content, right? Giving content, giving data, a lifetime is crucial, I think, to make the whole digital world more sustainable, or less harmful.
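A minimal sketch of that mechanism. The names (`Page`, `review`) and the 30-day grace period are my assumptions, not the university’s actual implementation:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

VIEW_THRESHOLD = 10                 # below this, a page counts as unused
GRACE_PERIOD = timedelta(days=30)   # assumed time the owner has to object

@dataclass
class Page:
    url: str
    owner_email: str
    monthly_views: int
    flagged_on: Optional[date] = None

def review(pages: list[Page], today: date) -> list[str]:
    """Flag unused pages (the owner email would go out here) and return
    the URLs whose grace period has expired, i.e. the default action:
    de-publish and delete."""
    to_delete = []
    for p in pages:
        if p.monthly_views >= VIEW_THRESHOLD:
            p.flagged_on = None          # page is in use again, unflag it
        elif p.flagged_on is None:
            p.flagged_on = today         # first strike: notify the owner
        elif today - p.flagged_on > GRACE_PERIOD:
            to_delete.append(p.url)      # nobody interfered: delete
    return to_delete
```

Run periodically, this gives every page a lifetime by default, with deletion as the default outcome and keeping it as the exception.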
And now you could say, and that’s a conversation I have very often, “Well, sustainability is important. I understand that, but still, it’s not a business case.”
And good news: it already is a business case. I was talking a lot about data, right, and file sizes and things. So, if I optimize, for example, my images, if I optimize my data, I need fewer servers. Fewer servers cost less money. Here’s an example from Vitaly Friedman. He did that some years ago for one page of Smashing Magazine and wrote about it. So, optimizing the images in terms of image format and image size. What we still very often do is deliver images in full size to the client and then resize them on the client, instead of delivering them in the necessary size.
And he stated that with that change on one single page, actually, he was able to save 5.2 terabytes of wasted traffic, 1,000 to 1,650 euros in cost for one single page, right? So, especially when we operate in environments with many users, with many page views, there is a huge potential in cost savings.
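The fix described here is essentially to ship the smallest pre-rendered variant that still covers the client’s viewport, rather than the full-size original. A hypothetical helper sketching that selection; the breakpoint widths are made up for illustration:

```python
# Pre-rendered image widths, smallest first (illustrative breakpoints).
AVAILABLE_WIDTHS = [320, 640, 1280, 2560]

def pick_width(viewport_px: int) -> int:
    """Return the smallest rendition that still covers the viewport,
    instead of always shipping the full-size original."""
    for w in AVAILABLE_WIDTHS:
        if w >= viewport_px:
            return w
    return AVAILABLE_WIDTHS[-1]  # viewport wider than the largest rendition
```

In the browser this is what `srcset`/`sizes` does for you; the savings scale with every page view, which is why high-traffic pages benefit most.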
Regulation.
We have the new Green Deal. Well, let’s hope we don’t get rid of that, but it’s still there. We have something called the CSRD, a reporting directive. I’m not an expert in that, but the interesting thing here is: so far in reporting, all the ESG reporting people in these big corporations do not care about digital. Digital emissions, right? They usually care about scope one or scope two emissions. Digital emissions are usually in scope three, so they just don’t care. This changes with the CSRD. Scope three emissions become relevant. Digital emissions will become relevant for the reporting people. I’m super convinced that this will give the whole thing a huge push. I sometimes tell people, “Hey, think about the pain you had from not thinking about accessibility early enough, when you then had to fulfill the regulation. The same will happen with digital emissions in the next years, at least in the EU.”
So, as I said, I will also talk a little bit about AI, and yes, now we have to talk about AI.
Here’s one interesting number.
230 ChatGPT queries or prompts emit approximately one kilogram of CO2. That’s much, much more than a classic web search, right? Not a Google search, because Google fires an AI search at you anyway, and we should use Ecosia or something like that anyway.
So, it’s much, much, much, much more.
And maybe the results are not even better, but that’s another story.
The interesting thing about this one kilogram of CO2 is, and I want to get back to my image from the beginning, from the mountains: one kilogram of CO2 costs us 15 kilograms of glacier ice.
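Put together, the two figures from the talk connect prompts to glacier ice directly; a back-of-the-envelope sketch:

```python
PROMPTS_PER_KG_CO2 = 230   # the talk's estimate for ChatGPT queries
ICE_KG_PER_KG_CO2 = 15     # ~15 kg of glacier ice lost per kg of CO2

def glacier_ice_lost(prompts: int) -> float:
    """Rough kilograms of glacier ice melted for a number of prompts."""
    return prompts / PROMPTS_PER_KG_CO2 * ICE_KG_PER_KG_CO2

print(glacier_ice_lost(230))  # 15.0 kg of ice for 230 prompts
```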
It’s not AI’s fault only, but AI is definitely making it worse at the moment.
Or, I was speaking about the water consumption, and you might remember the flood of Ghibli-style images we saw some months ago when OpenAI introduced a new iteration of their model. Well, here’s a nice calculation saying one million of these images consume approximately 40,000 liters of water. That could sustain 2,000 people for a day.
And now you could say, well, but Google just said in August they made huge improvements on that end, right? So, their LLM Gemini gets much more efficient. So, they stated, well, we see efficiency improvements of 33 to 44 times, actually.
Sounds good? Well, there’s another side of the story.
Because Google still saw an increase in their own energy consumption of 27%.
Same with water: they saw an increase in water usage of 28%.
So, my point is the rise of AI, and especially GenAI that we see at the moment, comes at enormous environmental consequences.
And whatever Big Tech is telling us at the moment, the efficiency gains are still outpaced by the growth rate of the new data centers we are building all the time.
I don’t want to even judge here. That’s just the reality we have to face.
So, the question for us actually is, how big is the environmental impact of my creation and of my doing? That should be a standard question for all of us with everything that we work on or that we build.
But there’s more that we can do or consider. Because we shape user behavior with our design. And that could be a whole talk on its own, so I just brought one example here to show you what I mean. We all order stuff, right? Hopefully not at Amazon, but sometimes also at Amazon, whatever. And we get lots of stuff delivered to our home door. And even at Amazon, you can say, “Hey, please do not deliver it to my home door, but to the next hub.” I’m from Hamburg; the next hub is like five minutes’ walking distance. It’s the more sustainable option to deliver it to a hub instead of to my home door, at least when I live in a big city.
Why isn’t that the default option? Right? There is an example. Our cat.com, they do that. If you order something there, the default option is “delivery to the next hub.”
So, they’re not taking anything away from the user. We are just nudging the user by switching the defaults. And it’s these simple things that we can do to make users behave more sustainably.
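As a sketch, the nudge is nothing more than which option comes preselected; the option names here are illustrative, not any shop’s real API:

```python
from typing import Optional

DELIVERY_OPTIONS = {"pickup_hub", "home_door"}
DEFAULT_DELIVERY = "pickup_hub"   # green default: hub delivery is preselected

def delivery_choice(user_choice: Optional[str] = None) -> str:
    """Honour an explicit choice; otherwise fall back to the green default.
    Nothing is taken away from the user; only the default is switched."""
    if user_choice in DELIVERY_OPTIONS:
        return user_choice
    return DEFAULT_DELIVERY
```

The entire design decision lives in one constant, which is exactly why defaults are such cheap, high-leverage interventions.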
And there are a lot more things we can do there. So, very interesting field to dive deeper in there.
And it’s not about judging the user, right? It’s about helping them to make more sustainable decisions. So, the question is, how do we want our users to behave? And that’s a question I think we should ask ourselves.
But design also excludes.
And is there someone from Hamburg here?
Okay, then you probably know what that is, right? That’s MOIA. We have MOIA in Hamburg and in Hanover.
It’s run by Volkswagen. And it’s a pretty nice service. They have these MOIA cars, I think we have 200 or 300 in Hamburg, circling around the city, and you have an app on your smartphone. And then you can say, “Hey, I want to go from here to my friend’s house on the other side of the city.” And then the app tells you, “Yeah, go to the next virtual MOIA stop.” In my case, that’s in front of my door.
And then the MOIA will pick you up in maybe 20 minutes. And on the way from my stop to my friend’s house, it might take a detour to pick up someone else. So, it’s a ride-sharing service, electric, different cars. A pretty nice service if I want fewer cars in a city, for example, right?
But who has difficulties using this service? Well, you need a smartphone, and there are still a lot of elderly people who are not very familiar with using one. There is no way to use the service without one.
MOIA has now existed for, is it eight years? I think eight, or seven, or nine years, something like this. It took them until, I think, the beginning of last year to make it wheelchair-ready. And the interesting thing is, you only need to equip ten of these 200 MOIA cars to take wheelchair riders to make the service available over the whole city. It took them six or seven years. Blind people: how does a blind person find the virtual MOIA stop? It’s not marked on the ground. Again, it took them until last year to build navigation for blind people to find the virtual stops. And they did it because they were pushed by the local government, because they’re working together with the local transportation authority there.
Right? So my point is, and it’s just an example: I think we all know the Silicon Valley mantra of fail fast, fail often. Let’s build MVPs, let’s learn, and then make the MVP better.
That’s very often very exclusive. Because we are building it for the typical users, or even for ourselves, but not for the people that maybe need the service much, much more. In this case, the people I just named need the service much more. I can take my bike and ride to my friend’s house. The people I named cannot do that. Why do we exclude them? So the question is, and that’s an interesting conversation I had actually two days ago when I was in Aarhus in Denmark,
we should ask ourselves, when does the edge case need to be the default?
Maybe edge case is even a bad word for that.
We also, as designers, prioritize users, user needs, and also other actors.
Here’s an example.
You probably all know Uber Eats, or Flink, or whatever.
You could say: pretty nice user experience for me as a user, right? They have a nice app on your phone, super convenient to use. I can just sit on my couch, and the only thing I have to do is get up and go to my door when the delivery person arrives with my stuff. But what’s the downside of the service? Well, these companies are pretty good at finding any loophole to undermine workers’ rights. And I think the Delivery Hero riders are still protesting at the moment, right? They work under pretty bad conditions. At least they are now mostly employed; for a long time, they were self-employed, which is just another word for not socially secured. So they pay the price for the great user experience for me. Or the small grocery stores, the small supermarkets we have in our big cities: they cannot match the, not disruptive, but destructive business model of these companies. They are backed by so much venture capital that they do not have to make money in the first place, but by doing this, they are destroying the nice infrastructure we already have in our cities. And my point is, I think that’s a fundamental problem of user experience and user experience design in general, because most often someone or something else pays the price for the perfect or great user experience that we create.
Like in this example, the environmental price also belongs in this equation, right? There is always a price that is paid somewhere else, by someone. Here is a nice quote by Kevin Slavin. I saw him many, many years ago at a nice conference in Malmö, and it’s still sort of a mantra for me, because back then he already said: “When designers center around the user, where do the needs and desires of the other actors in the system go?” The lens of the user obscures the view of the ecosystem it affects. And that’s a general problem of user experience design, I think, and one that we need to change. That’s the work we do at the SUX network, for example, with the tools that we developed to make these things better: to understand, in the first place, the existing or potential negative impacts of whatever I’m building here.

And just very briefly, I invite you to have a look. These are some of the tools that we built. I won’t go through them in detail, because we don’t have so much time. The idea is to understand what the negative impacts are, and which other actors, for example, are impacted by what I’m building here. We always look at user needs and try to bring them together with business needs, but what are the potential negative consequences of that? Maybe here’s a super crucial user need, but it has a handful of negative consequences. That’s what we try to do with these tools. And then we have a second thing: we use user journeys very often, because that’s how we can break down what we found at this very high-level view of problems to the user journey level, where it’s much easier to tackle. If I know my product has a huge carbon impact, how do I tackle that? But if I look at one page in my user journey, one step, I can say, “Okay, why is the carbon impact so high here? Maybe I have lots of images.”
The solution could be, “Let’s change the format, the file format of the image.” Just a very simplified example. But that’s what we always try to do, right? Understanding what’s going on and then breaking it down to a level where we can tackle it in our day-to-day processes.
And yes, the same issues pop up with AI as well. And again, that would be a whole talk on its own, so just a few examples. Not only because of AI, but because of the rise of AI and all the new data centers we are building: all these data centers need servers, which need server chips. And to build these server chips, you need rare raw materials, and someone needs to mine them somewhere in the world. And that’s happening under pretty bad working conditions.
So these people actually are fueling the growth of our data center world.
And even when we have built all these nice data centers, there’s another workforce of people, and it’s the same people that were moderating the social networks ten years ago and saw all the pretty bad things so we didn’t have to. They now look at the outputs of LLMs so we don’t see the potentially bad things that can come out of them.
And now you could say that’s maybe pretty far away, so here’s a third very interesting thing that I just recently found.
So I said data centers need a lot of energy.
And what you can see in some regions of the US is that the closer you live to a data center, the higher your price for energy probably is.
So there, in the end, we ourselves pay the price.
Real quick, I want to show you one more tool that we built. It’s a very simple canvas where we do something similar that we did with the other tools, looking at potential negative impacts of AI use cases and comparing them to the potential positive outcomes.
Very often it’s about creating an understanding in the first place of how big the mess is, because very often we don’t know.
And this is not about blaming anybody for that; that’s just the situation as it is, and it’s also something we can act on. Because I think we speak way too much about opportunities all the time, but not about the consequences that follow from them.
So the question for us is: what is the true price for the great experience, the great product, the great service that we design or build, and who has to pay for it?
Can I justify that? And I know it’s not an easy question and it’s not an easy answer. And sometimes we are not going to have the power to change all this, but the first step would be to think about this and then to see, okay, where can we start?
But we as designers, and now I’m coming back to AI once more, we also decide how we create, what tools do we actually use.
And you might all remember when Figma introduced their new AI feature last year: now with one prompt I can create wireframes, designs, everything, and that’s so cool.
Well, just some weeks later a nice thing popped up from Andy Allen, because he used it to create a weather app.
Turned out it always looked like the Apple weather app every single time.
I’ll come back to that in a minute. The thing here is, when Figma introduced that feature, they forgot to tell us that by default, they had turned on the option that all of our work can be used for training their model.
So what that means is that our creation becomes the future result of the prompt of someone else.
That’s the reality we see at the moment.
And these things do not happen accidentally. We know from Meta that they used one of the biggest piracy databases to train their Llama AI. LibGen includes 7.5 million books and 80 million research papers, copied illegally; they used that to train their AI. They knew what they were doing. They needed all the data. I mean, that’s a huge battle we see at the moment, right? Big Tech is arguing it’s fair use.
I’m not so sure about that, because what happens in the end is, and we had a nice conversation yesterday about that as well, GenAI creates value based on the creative work of humans.
But without compensating them. And there is another interesting fact, because there is some research that shows that if you trained today’s LLMs not with human-made work anymore, but only with AI-created work, then the model might collapse at some point.
Interesting philosophical perspective, right? The machine needs the human work, actually. But what happens if we get replaced?
And the question I then always ask in my work and in my workshops is: what do we really get for this? I mean, there is a high price we pay. What do we get for it? A little thought experiment: let’s ask an AI to create an Italian video game. Any guesses what we will get?
Exactly.
And it still works. I tried it again some weeks ago with the latest ChatGPT version. It still works, right? How creative is that? I mean, it’s the logical result in the end, because it’s the most probable result. But is it creative? I think what’s really important to understand when we use GenAI is that whatever prompt I use, one prompt always creates average, never excellence. And I’m not saying that we should not work with AI at all. I also work with it sometimes, but it’s always part of a process. It’s never this one prompt.
The narrative of the efficiency gain is just not true, I think.
But I have another example, another thought experiment, from a friend of mine. He did this some time ago: asking for great images of a basketball player, and great images of a depressed person.
So Greg did this, and what he got was this.
The interesting thing here is, for the basketball player we see only men. Well, there is one woman. But “basketball player” does not have a gender in the real world, right? And for the depressed person, there are only women, though statistics actually say more men are depressed than women. So it’s not only that we get average from one prompt; it also amplifies the problems that we already have instead of solving them. And why is this happening? Well, in the end, GenAI is a brute-force machine calculating the highest probability.
It’s always about the most probable result, with some variation. But in the end, you train it with a dataset, it learns from the dataset, it looks at what exists very often in the dataset and what does not, and that way it learns strong signals and weak signals. And then, over time, it reproduces the strong signals again and again, but not the weak signals. The strong signals will prevail; the weak signals will vanish.
I mean, from a creative point of view, we already have the problem that things are getting more and more similar, and with AI, this problem gets worse.
If we always get the highest-probability result, how varied will the results be in the end?
I’m a UX person, so I see a lot of conversation about virtual personas and how cool it is to do research virtually. How diverse will these personas be, if the most probable result is always recreated? How creative will our output with GenAI be in the end?
The thing is, GenAI is pretty good at recombining; it’s not good at creating something substantially new. By default, that’s how the technology works. It’s a window to the past; it’s trained on our data; it cannot look into the future. That’s maybe the biggest lie we always hear. Now you could say, well, we also recombine in our creative work. Yes, there’s also a sort of creativity in recombining things, and you can use GenAI in that process. But if we are honest, progress, creative or societal, always comes from doing things differently. At some point, someone says, hey, let’s make a new sort of phone, let’s get rid of all the keys and make a touchscreen.
GenAI, as it works, would not have been able to make this step. It would have made the screen bigger and put more keys on the phone. But it would not have been able to take this substantially different step. So I think how we use GenAI today defines our future work, as well as the future of design in general.
And as I said, it’s important, it’s not about denying GenAI, it’s about making more informed decisions when and how to use it. And I personally think we need much more of this conversation, because honestly, most people out there do not even really understand how the technology works. And what they can expect from it and what not.
But we as designers, especially as UX designers, we love problems. We love to understand problems and then create solutions for them. And here is an example that I saw two weeks ago when I was at a conference in Munich. I’m getting back to Uber Eats once more. In a very nice talk about agentic AI, someone brought up the example of, “Hey, let’s make ordering food agentic.” And that’s so much cooler, because I do not need to click through the whole app. I can just say to the agent, “Hey, please order, I don’t know, three pizzas, blah, blah, blah. Done.”
Well, making an unfair system agentic just lifts inequality to the next level. And I think we should be very careful. And that’s not only an AI problem, that’s a technology problem in general.
But AI makes it much worse.
And speaking about it not being only an AI problem: as I said, I was in Aarhus, I actually came back from Aarhus yesterday, from an event of a friend. And I got this nice example from the train. What happened there is they asked, "Hey, we're doing a little survey. Can you answer some questions?" Yes.
And I took a screenshot of that, because I think it shows so well what we do when we digitise things, right? We always need to have a list of options, and people have to choose from those options. I mean, in this case it still somehow works. It took me some minutes to figure out that every option is there, but it's super complicated to understand. But I think we all know the cases where we have to fill out a form and there's a select field, but the option that feels like the right one for me is just not there. And there is no free-form field where I can type in that option, right?
Tech in general wants to create a simple world of zeros and ones.
And it’s the same with AI. In the end, it’s about zero or one.
Human is always something in between.
And that's really interesting, I think, because we digitise all these things, but always from the tech perspective, always from the zero-and-one limitation of tech, instead of thinking about how we can make it more human. And here is a nice quote by Charlie Chaplin that I really like and that I use pretty often, because many, many years ago he said it so well: more than machinery, we need humanity. And I think we are in huge danger of losing that at the moment. We already were without AI, but AI makes it worse, right? So we as designers are very good at understanding and solving problems, but we should carefully decide which problems we solve. And I think too often it's probably the wrong problems.
And that's a general paradigm shift that we probably need, because I think too often what we do is design for the tech world as it is. We take it as given and design for it. When I see all the conversations and discussions about agentic AI, it feels like it's God-given and now we have to design for it.
Why don’t we design how it should be?
That's the great power of us designers. And even if we are not designers, we all can imagine how it should be. And I personally think that's what we're missing very often: how should things be, instead of okay, this is how it is and we have to work with that. So the question for us is: what is the real problem?
I mean, that's the core question for UX designers, but I think we need to be careful, or actually need to dig deeper, with today's problems.
And that brings me to the last point in my presentation because design shapes narratives.
And well, you all might know this mantra of "data is the new oil", right? I just looked up one quote out of countless quotes: "Information is the oil of the 21st century, and analytics is the combustion engine." We live in a data-driven world. And I have worked in an agency for a long time in the past, so I know it: we need to collect more data, we are data-driven. But do we really live in a data-driven world?
I don’t think so. I think we live in a narrative driven world because in the end the narrative beats the data. Otherwise, we wouldn’t discuss the climate crisis because data is pretty clear on that. Otherwise, I don’t know, people would not vote for fascists and Nazis because numbers are pretty clear about that, right?
A friend of mine, Monika Bielskytė, she's a futures designer, and she said it very well: the one who controls the fantasy controls the future.
And maybe that is the biggest, and I use the word "battle" on purpose, I think it's the biggest battle we have to fight at the moment: the battle over narratives, right?
So we are dominated by all these tech narratives. We are dominated by the false narratives of the fascist and Nazi parties. And that's interesting, because where is the societal future vision, the societal future narrative? We are lacking that.
We should create that, right? I met a friend on the train from Hamburg to here yesterday, and we had a nice conversation about this whole narrative thing. And one thing we so much agreed on is this very important question: what story do we want to tell?
Right? We should not follow the stories others give us. We should create the better stories.
And I have one very simple example here coming back to AI.
What do you think is the most used emoji for AI at the moment? Any guesses?
Sparkles. Exactly.
Generative AI is magic.
That's the narrative behind this. Sometimes it's also the magic wand. And I totally get that. I mean, I feel like Harry Potter when I can create something with just one prompt, right?
So it's a narrative chosen by big tech.
Generative AI is magic. Is it really? I think it's pretty good at creating the illusion of magic.
Someone once said, "ChatGPT is now better than ever at pretending to be something that it's not." Because it's not magic. It's a limited machine. It's a powerful machine, for sure. But it is a machine with limitations, and it also comes at a pretty high external price.
So coming back to the icon. Instead of this icon, we should maybe choose another one: the tool icon.
And hey, a hammer is a fantastic tool, right?
But not every problem is a nail. And that's the important thing here: using AI for the right things and also shaping the narrative around generative AI. And as I said before, it does not stop at the digital world that we are creating. I think it's a general thing: we need to shape the future through the narratives that we create.
And why us? Well, because I think design is storytelling, right? The user journey is nothing else than a great storyline. We are all storytellers.
And how can we use this skill to change the narratives that are dominating our society at the moment? So, what I wanted to say in my talk: we make decisions every day. Small, big, direct, indirect. And these are some of the things I was talking about, right? Very small decisions. Do I use a video here? What's the default choice for the user? How do I prioritise user needs over negative impacts? Can I justify the true price? When and how do I use AI? What is the real problem here?
What products do I design and what products do I maybe not design? And what stories do I want to tell?
And what's very important: we are not responsible for everything. So I'm not blaming you or myself for that.
I rather see the huge opportunity to shape the world and the future through our decisions, right? Our decisions can make products more sustainable, drive sustainable behaviour in users, create UX in balance with surrounding ecosystems. We can do that with our work. But our decisions also make problems visible that have been invisible before. They spark and drive necessary discussions and can shape the way technology is used or not used.
And in the end, our decisions shape not only the designs and products, but in the end ourselves, our users, the world and the future, no matter how small they seem.
Each design is a decision for one possible future and each decision designs one possible future.
And I want to end with a quote by Jane Goodall, who unfortunately passed away some weeks ago, but who said it so well: just remember that every day you live, you make an impact on the planet.
And I think we must not forget that, right? We all can have an impact. And in my work, that's one of the most common conversations I have. People ask me, "Hey, but what can I do? I'm so small. I'm a junior designer." Whatever.
Every small decision counts, right? We all can be the small rock that sets the whole mountain sliding. And I really see huge potential there. And this is my invitation to you: follow me on that. Thank you very much.
(Applause)