Transcription
[Music]
Audience: [Applause]
Carla Diana: Hi, everyone. It is very exciting to be here. It’s been an exciting day so far. I am going to be talking about robots and how robotics is going to be influencing product design very soon. It’s already happening.
A little bit of background on me. I am trained as a product designer. My original degree is in mechanical engineering. I studied industrial design because I really wanted to know the human side of what happens when we design a product. I’ve worked for many years at a number of firms, most recently a place called Smart Design. My focus was always on physical things that had a digital component to them because I always knew that it is really the coded behavior that we give to products that brings them to life.
I left Smart Design about five years ago and have been doing my own consulting work, and some of the most exciting work has been just sharing what I see coming down the pipeline. I’ve been doing a number of talks and essays in the past few years, and I also teach. I have created a number of courses around the topic of smart object design and taught these at the University of Pennsylvania and, more recently, have joined the faculty at Parsons School of Design.
At the moment, I’m in the process of writing a book called The Social Lives of Products in collaboration with a researcher named Dr. Wendy Ju. I’m consulting for a robotics company, and I’ve also just joined as a co-host of a podcast that many of you might enjoy called the RoboPsych Podcast, which is hosted by a fellow named Tom Guarriello, who has a background in psychology.
What I wanted to talk about today really is this field of social robotics, which I didn’t know existed until a few years ago, and how it really influences product design. This is a project that I had the benefit of working on about ten years ago at this point, and I’ve continued to work with this researcher. While my practice is designing everyday objects, I have had the pleasure of having a client who was a university researcher doing social robotics.
I said, “Social robotics, what is that?” It’s actually the study of how we might design machines so that our interaction with them is primarily social. Instead of learning a mouse and a keyboard or clicking this button twice and then double-clicking here, all we have to do is interact the way we interact with another social being. That means through gesture. It means through sound, through motion and light.
This is Simon. Simon was developed by a team at the Georgia Institute of Technology led by Dr. Andrea Thomaz. I was brought in as part of the core team to be the more human-centered designer, the creative design part of it. What I really focused on was: does it have ears or not? Is it hair? Is it a helmet? What features should it have? How do people know that they can talk to it? How should it react?
We developed this robot that would have this set of expectations so that people knew that you could interact with it socially, but they also knew it was a machine. We weren’t trying to give people the illusion that it knew everything because the core of it is that it’s a learning robot. I’m going to play a video from its first unveiling at the Computer-Human Interaction conference.
What’s happening here is, there is a person who can talk to Simon and says, “Where does this go?” Simon knows to parse the sentence, and he knows to gesture; he’ll say, “I don’t know where it goes.” When she says it goes in the green bin, Simon is able to take that sentence and parse the word “green” and then map green to the average pixels that it sees when it holds the object up. Basically, what we have is a camera and a microphone, as well as other sensors such as pads in the robot’s hands.
Here, it’s doing an exercise where it’s learning to sort objects into a number of bins. There is a team of artificial intelligence specialists that continue to work on projects like this doing different kinds of tasks so that the robot continuously builds its knowledge of the world. That was Simon.
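To make that color-grounding step concrete, here is a minimal sketch of the kind of mapping just described: a color word parsed from the sentence gets associated with the average pixel color of the held object, and later objects are sorted into the bin with the nearest learned color. All of the names and structure here are hypothetical illustrations, not code from the actual Simon project.

```python
import numpy as np

COLOR_WORDS = {"green", "red", "blue", "yellow"}
learned_colors = {}  # color word -> mean RGB seen while holding an object

def extract_color_word(sentence):
    """Pull a known color word out of the parsed sentence, if one is present."""
    for word in sentence.lower().split():
        if word in COLOR_WORDS:
            return word
    return None

def learn_from_utterance(sentence, object_pixels):
    """Associate the spoken color word with the held object's average color."""
    word = extract_color_word(sentence)
    if word is not None:
        learned_colors[word] = object_pixels.reshape(-1, 3).mean(axis=0)

def classify(object_pixels):
    """Sort a new object into the bin whose learned color is closest."""
    mean_rgb = object_pixels.reshape(-1, 3).mean(axis=0)
    return min(learned_colors,
               key=lambda w: np.linalg.norm(learned_colors[w] - mean_rgb))
```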
In addition to the tasks, it interacts with people through these various expressions. These are just some of the expressions that I have captured. Simon can look shy. He can look coy. The first time I met him, he gave me goosebumps even though I knew very well what was happening behind the scenes because I’d been working on it for a year or more. Still, this illusion of understanding, of life, of intuition, of, “Oh, my gosh! It knows what I’m thinking!” was this remarkable experience.
Whenever these projects have come up, I’ve continued to work on them. This is Simon’s cousin. It’s a great day when you get a phone call that says, “Can you design the cousin?” What does that mean? Are they going to be in the same room together? Do they have the same upbringing? This is Curi going through some testing patterns, actually, but I love this because I feel like it looks like robot tai chi.
The next video is a little bit of some of the benefit that we get from robots because, certainly, we can do a lot with screens, but there’s a lot that can happen as a shortcut in three-dimensional, real-world space that something like a robot can do and other things can’t. This is from TEDxPeachtree. That’s Dr. Thomaz, and she has just gone through an explanation of how, if the robot were not able to gesture, it might have to say, “Is the object 14 inches from the edge of the table a flowerpot?” But instead, it does this.
[Video played]
Dr. Thomaz: Now, instead of a complicated object description, she can just point.
Curi: Is this the flowerpot?
Dr. Thomaz: Yes, it is. And it was a much more reasonable way to ask that question.
Carla: That’s a project from several years ago. The more recent project that Dr. Thomaz has been working on that I have been able to work on in a creative capacity has been Poli, who is a robot that was intended for multiple situations but is finding a huge application in hospital settings. This was a starting point. It’s basically a Kinect camera on a stick with an arm. That’s a pretty exciting moment for me.
She called and said, “I don’t know. It’s quite different,” but my vision has always been to look for where we can simplify and abstract the robot, so I thought this was exciting. We went through a lot of discussions around what is happening there. What needs to be social? What doesn’t need to be social? We have hundreds of sketches of things like: if we have this head, is the head part of a larger body? Is the head actually a smaller entity that’s controlling a larger body? All of those kinds of questions.
The other thing that I really like to think about is, how do we soften the social impression? We don’t want something that feels like a box lurching toward you; we want something that, maybe, if it brushes against you, is a little bit softer, or at least gives that illusion. So this was a really fun project in terms of, how do we have an entity that isn’t exactly like anything you’ve seen before, but yet is familiar? There is some sense of torso and body. There’s a sense of a neck and, certainly, a head because, again, this is going to be a social robot and people are able to train it to do new tasks. People can guide the arm and tell it where to go.
One of the things that I really focus on, and that I’ve been teaching a course on at Parsons, is Design Semantics. How do we let people know? If you simply had a box on a stick, would you talk to it? Maybe not. But, if you saw that it had ears, and if you saw that it had a face, and you knew where it was looking, you might.
One thing that a lot of social roboticists are looking at is this idea of gaze: is the robot actually engaged with me and talking to me? And the idea of attention: is the robot looking there while I’m looking there, so that we’re both engaged with that object at the same time? Then there are many, many other nuances that are currently being developed in terms of social interactions.
This is a rendering of our final design for Poli. What was really exciting is that I collaborated with a fashion designer named Amy Lynn Stoltzfus. I knew I was interested in a few materials, and I thought, “Wow! What if I could find a fashion designer who has worked with neoprene?” I was really interested in the idea that neoprene was soft and felt like a fabric, like something that an entity would wear, but also had kind of a sheen to it, felt technological, and would hold a shape. I actually found someone who had done her entire thesis in neoprene and was able to come onboard.
This is me and Poli. This is a bit of what the final version looked like. This is a short clip. She hasn’t been fully developed with all her behaviors yet.
Female: How are you today?
Poli: Just peachy.
Reporter: Meet Poli, the kind of assistant that would be a welcome gift for anyone this holiday season.
Female: It frees up time for the nursing staff to have more time with their patients.
Reporter: Something that supervisor Jessica Meinhardt is looking forward to having in her unit at Seton Medical Center Austin.
Carla: That is currently in progress, and the bottom line is that we want humans doing the work where humans add value, helping other people, rather than being in the background doing repetitive tasks. Robots are really good at repetitive tasks. One exciting thing for me, as a designer, is that the hospital needed to set aside some time in the schedule for selfies with Poli, so this is one of the nurses who needed to take a selfie with Poli, which I really loved.
Really, when I’ve talked to my colleagues who are product designers, they say, “Oh, Carla, you and these crazy robots. Those are really few and far between. Why so much focus?”
I’ve always felt like, you’ve got to be kidding me! Everything that I learn from these robots can be applied to the products that we design every day. What I was thinking about were just subtle things. Of course, we don’t need eyeballs, a head, and moving arms and fingers. But, there are things that we can learn from there that we can then apply to other things.
For example, when Simon was being trained to learn the colors, I had this amazing moment that happened in a split second where Simon’s ears turned red. Now, I knew that LEDs were in there, but I didn’t know all the details of how Simon would be programmed or how the expressions would be used. I felt like [gasp] he knows; he knows what I wanted him to do. He saw right away. That was the kind of intuitive, split-second, at-a-glance interaction that we can have with a machine that can make it feel like the machine understands us just by us being the humans that we are.
I felt like if we’re designing a camera, a microphone, or anything else interactive, even an automatic door, can we have more of those split second, intuitive interactions? I wrote an essay about it for the New York Times, and then they asked me to do a slideshow, which was, “Really, what do you mean about these everyday objects?” I’ll show you a piece of it.
[Music]
Carla: I’m Carla, and this is A Day in the Near Future. Another Monday morning and my lamp has just gone from dim to bright. I try to turn over, but it rotates to follow me.
I stumble into the bathroom. I brush one side for a while. [Buzz] The toothbrush vibrates, so I know it’s time to switch to the other side.
Now I’m really awake. The bathroom mirror says, “Nice job on your weight, but your heart rate is a little higher than yesterday.”
As I head to work, my bike confirms my appointment. “On our way to Ted’s office?” Yes! When it’s time to turn, my handlebar vibrates, and my jacket lights up to signal a left turn.
At the meeting, I draw some sketches with my memory pen. When we’re done, it emails my notes to my colleagues.
Back at home, the door recognizes me. The lights turn on. The stereo starts playing, and my 3D printer whistles [whistle] to let me know the dog toy I downloaded earlier for Roo is ready.
Carla: So, let me just tell you that’s a fun day when you leave the offices of the New York Times and you go, “Oh, my gosh! They just let me sit there and make silly robot voices for an hour and a half.” One of my favorite days.
The bottom line in terms of designing products is thinking about how expression can happen and how understanding can happen. Understanding happens through how we code the input of sensors. Expression happens through three main means: light, sound, and motion.
When we talk about light, just some examples. A lot of us have experienced the version of the MacBook that does the breathing light. We intuitively read it as a breathing light. It’s not an interaction that we focus on, but it is one that is significant and communicates with us. If you look up the patent for this, it is actually called the Breathing Light, and it is programmed to happen 12 times a minute. It matches the rate of human breathing.
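As a rough illustration of that kind of light behavior, here is a minimal sketch of a breathing indicator pulsing 12 times a minute. The set_brightness() function is a placeholder for whatever LED driver a real product would use, and the curve shape is an assumption rather than Apple’s actual implementation.

```python
import math
import time

BREATHS_PER_MINUTE = 12
PERIOD = 60.0 / BREATHS_PER_MINUTE  # one full "breath" every 5 seconds

def set_brightness(level):
    print(f"LED brightness: {level:.2f}")  # stand-in for real PWM hardware

while True:
    phase = (time.time() % PERIOD) / PERIOD
    # A raised-cosine curve eases in and out, like an inhale and exhale.
    set_brightness(0.5 - 0.5 * math.cos(2 * math.pi * phase))
    time.sleep(0.05)
```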
In one of the projects when I was at Smart Design, I had started an interaction lab where we could pursue concept projects and really study how our ideas might be made manifest in products so that, when we were meeting with clients, we weren’t just handwaving. We actually had some hands-on experience. One of the projects that the lab produced was called What’s Up Smart. Ours was a big, open-style office setting, and we knew that those settings can sometimes be distracting when people are heads down and want to work. But sometimes, you do want people to come over.
What was developed by the team--Nick Remis, Mark Breneman, and Gordon Hui--was a beacon that people could have at their desks, and they could turn it to a different light. There are four sides on the cube that have the light exposed, and there are four different settings: please leave me alone, I’m really working on something; please come over and talk to me; I’m going to be out of the office; and I am taking a break.
This allows the light to communicate in many different ways because we not only want to think about what it’s like at two inches for the person who is immediately using it, but we also want to think about how things happen within the setting of a room. This allows you to have an at-a-glance beacon across the room of what people’s statuses are. Inside is an Arduino Yun that is connected to the Internet so that it can also broadcast that state online, since we had a number of offices in different cities, so it functioned on many levels. We could see, oh, if everyone in the London office was heads down, maybe we knew that that wasn’t the best day to interrupt them.
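A minimal sketch of that beacon logic might look like the following, assuming (hypothetically) that an orientation sensor reports which face of the cube is up and that the state is published over HTTP; the real device used an Arduino Yun, and none of these names or endpoints come from the actual project.

```python
import json
import urllib.request

FACE_TO_STATUS = {
    "face_1": "please leave me alone, I'm really working on something",
    "face_2": "please come over and talk to me",
    "face_3": "I'm going to be out of the office",
    "face_4": "I am taking a break",
}

def read_face_up():
    """Placeholder for reading the orientation sensor."""
    return "face_1"

def broadcast(status, url="http://example.com/status"):
    """Publish the status so remote offices can see it at a glance."""
    body = json.dumps({"status": status}).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

broadcast(FACE_TO_STATUS[read_face_up()])
```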
Another project I had the benefit of working on when I was at Smart was a floor-cleaning robot by a company that was new at the time called Neato. One of the things we wanted to focus on, again with the Neato, was being social because we had studied the Roomba, and we knew that people attributed human expression to the Roomba even though it doesn’t necessarily seem like it should be that way. People were giving them names. People were forgiving it if it got stuck under the couch and saying, “Oh, poor thing.” As the head of interaction design for the project, I really wanted to amplify that.
We focused, again, a lot on light, as I always do, and ergonomics. It’s a product that you don’t just use two inches in front of you; you may actually interact with it across the room. So, being able to see an LED matrix, let’s say, from underneath could be beneficial.
Then, one of the things that it really had to communicate well was its status. We did that through sound. In our process, we broke down all of the messages into human words like, “Oops, I’ve hit a bump. I’m docking now. I’m going to sleep now. I’m going to start cleaning now.”
Then I worked with a composer, whose name is Skooby Laposky, to actually translate those into a musical version of human-ese. This is just a variety of some of the sounds. This is what it will say when it’s starting cleaning. It’ll go, “Off to work.”
[Melodies]
Carla: Another benefit of the Neato is that it actually has LIDAR and can know if a person is stepping in front of it, so it can say, “Oh, hello.”
[Melodies]
Carla: Then we have a few alerts that are different from one another, but very short so that they don’t continually play.
[Beeps]
Carla: Then there is a wake-up sound.
[Melodies]
Carla: As well as a goodbye, going to sleep sound.
[Melodies]
Carla: It was very fun to develop this musical language.
Another project that I was involved in, only in the very early stages, was Jibo, but I wanted to talk about motion, which is something I’m just starting to explore in further depth. The Jibo team did a really good job of focusing on that. This is from a YouTube video with one of the lead designers. He’s going through some of the motions.
Jibo: Hmm. Don’t miss me too much.
Male: Jibo, show me happiness.
Jibo: [Melodies] Whoo-hoo!
Audience: [Laughter]
Male: That’s funny, Jibo.
Jibo: [Giggle]
Male: Jibo, let’s think about something.
Jibo: [Clicking]
Carla: It goes on. You can certainly see many more videos of the Jibo, but the team has done a brilliant job working with animators to actually guide the motion.
My co-author on The Social Lives of Products book is Dr. Wendy Ju, and she is now at Cornell Tech in New York. Yay! We’re in the same city together, which is really exciting because she was at Stanford before. This is from a project that she and her team--Stephen Yang, Brian Mok, and David Sirkin--worked on, thinking about how we apply robotic motion to everyday products and have it be a meaningful interaction of the kind I just described to you. Let’s look at this video.
Audience: [Laughter]
Carla: He knows what to do.
Audience: [Laughter]
Carla: She’s not quite so sure.
[Pause]
Audience: [Laughter]
Carla: [Laughter]
[Pause]
Audience: [Laughter]
[Pause]
Audience: [Laughter]
Carla: [Laughter]
[Pause]
Carla: So, this is just part of a series of what we call Wizard of Oz experiments where we can work with research participants and actually change the behavior of the robot on the fly based on how they’re reacting. We do this so that we’re not putting a lot of programming into the project. It’s really going to be exciting to continue working with Wendy and thinking, again, about how we have robots that aren’t arms and limbs and faces, but yet are social entities.
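For a sense of how little programming a Wizard of Oz study needs, here is a minimal sketch of one: a hidden operator watches the participant and types commands that drive the robot in real time. The command set and send_command() are hypothetical placeholders, not the research team’s actual interface.

```python
COMMANDS = {
    "a": "approach",
    "r": "retreat",
    "h": "hesitate",  # a tentative half-move, timed to the person's reaction
    "q": "quit",
}

def send_command(action):
    print(f"robot -> {action}")  # stand-in for the real motor interface

print("wizard keys:", COMMANDS)
while True:
    action = COMMANDS.get(input("wizard> ").strip().lower())
    if action == "quit":
        break
    if action:
        send_command(action)
```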
In terms of applying this to products in our everyday lives, one of the key things that I try to come back to is the idea of intuitive interactions and then creating forms that, again, express. When I talked about Poli, I talked about how I wanted you to know where she’s looking. I wanted you to know that you could speak to her because she has ears, even if the actual microphone is hidden in some other location. What I’m really thinking about is the overall context: designing products so that they’re appropriate for exactly when and where they’re being used, as well as how the shape of the thing really tells us how to interact with it in a social way.
This was a very fun project that I worked on that is a bar that is part of the Museum of Sex in New York City. They had approached me because they had this beautiful, old, wooden bar, but they knew that they wanted to create something that was going to be in that clock space in the top that would be a way for the servers to communicate with the back kitchen. If we think about old movies, we see diners. Somebody rings a bell - ding-ding-ding. We were going to do that in a different way.
What I had proposed was that we would have an LED matrix. I really loved this interaction between the old-fashioned bar and the clock, but I was thinking about how we could have that be something that communicates kind of in secret. What happens is that we have the icons of the bar--a nose, lips, a tongue, a kiss--that were developed by Emilie Baltz, and those go through this crazy animation when the person in the kitchen flips a switch to let the server know that the dish is ready. Then, when the dish has been picked up, he signals that it’s been picked up.
What happens is the people who are eating at the café might see, from the corner of their eye, like, “Did you see that? I think it just kissed, but now it’s still.” But, the servers will know, so they know to look for the frenzy of activity, and it’s kind of the silent communication between them and the kitchen.
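A minimal sketch of that kitchen-to-server signal: a switch in the kitchen starts the icon animation, and a second signal marks the dish as picked up and stills the display. The frame names and hardware calls are placeholder assumptions, not the installation’s actual code.

```python
import itertools
import time

ICON_FRAMES = ["nose", "lips", "tongue", "kiss"]  # the bar's icon set

def dish_ready():
    return False  # stand-in for reading the kitchen's switch

def show_frame(frame):
    print(f"matrix: {frame}")  # stand-in for driving the LED matrix

frames = itertools.cycle(ICON_FRAMES)
while True:
    if dish_ready():
        show_frame(next(frames))  # the frenzy of animation the servers watch for
        time.sleep(0.1)
    else:
        show_frame("idle")  # picked up: the matrix goes still again
        time.sleep(0.5)
```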
This is a recent project from my studio. What I’m continually trying to do, again, is develop products that examine these issues that I’m looking at, like this idea of context. How do we have only the information we need? We have a temptation to feel like, “Oh, it’ll have my Facebook feed, it’ll have this, and it will have text.”
I felt like, no, let’s just have weather data at the moment that you leave the house. Right at your coat rack, let’s say, let’s have weather data appear so that you make that split-second decision. Here’s a video of how it works. It has a Particle board inside--
[Music]
Carla: That is connected to the Internet--
[Music]
Carla: When you approach, the lights come on. Otherwise, they’re hidden behind the wooden surface.
[Wind]
Carla: It gives you the high, the low, the current temperature, and conditions.
[Birds chirping]
[Music]
[Rain]
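The display logic behind it can be sketched roughly as follows: when someone approaches, fetch the forecast and light up the high, the low, the current temperature, and the conditions. The weather URL and the sensor and display helpers are hypothetical placeholders for the actual build.

```python
import json
import urllib.request

def person_nearby():
    return True  # stand-in for a motion/proximity sensor

def show(high, low, current, conditions):
    # Stand-in for lighting the LEDs hidden behind the wooden surface.
    print(f"{conditions}: now {current}, high {high}, low {low}")

def update_display(url="http://example.com/weather.json"):
    if not person_nearby():
        return  # otherwise the lights stay hidden
    data = json.loads(urllib.request.urlopen(url).read())
    show(data["high"], data["low"], data["current"], data["conditions"])
```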
Carla: That’s a pretty recent experiment. One of the things I’m also interested in is how do we blend electronics and natural materials.
This is the ClikBrik, which is a project that was developed by a drummer named Konrad Meissner and his neighbor Ted Booth. Then I worked on it with designer Mike Glaser. What Konrad wanted was to create a metronome that was specifically appropriate for drummers. He said that the ones that exist now are fussy: you have to press buttons, and you have to kind of put your drumsticks somewhere.
This is a metronome that you control by drumming on it. It becomes part of the drumkit. There is Konrad. You can do everything. You can even adjust the tempo with your drumstick. You turn it on or off. This is currently in development, actually, and being shown in musician conferences. They’ve been developing a small quantity to start and are continuing to grow the product.
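The core of a tap-tempo metronome like this can be sketched in a few lines: the interval between recent stick hits sets the beat. Detecting the hit itself (say, with a piezo sensor) is left as a placeholder, and none of this is the ClikBrik’s actual firmware.

```python
import time

class TapTempo:
    def __init__(self, window=4):
        self.taps = []
        self.window = window  # average over the last few hits

    def register_hit(self):
        """Call this whenever the drumstick strikes the device."""
        self.taps.append(time.time())
        self.taps = self.taps[-self.window:]

    def bpm(self):
        """Return the current tempo, once there are at least two hits."""
        if len(self.taps) < 2:
            return None
        intervals = [b - a for a, b in zip(self.taps, self.taps[1:])]
        return 60.0 / (sum(intervals) / len(intervals))
```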
The next few are some projects from students of mine from both the School of Visual Arts and the University of Pennsylvania. This project is by Anke Stohlmann and Richard Clarkson. They were thinking about, again, not being fussy with an interface when you want to, say, have a guest over and select music, so they developed The Cube.
Female: This step is to set up the different sides of The Cube. So, now you’re all set. So, if you take The Cube from the station, you can just roll it. So, to demonstrate the cube….
Carla: This is a large, functional prototype.
Female: So, here you have the color corresponding, and it’s playing your Square playlist.
[Music]
Female: If you want to change anything, you just change the sides.
[Music]
Male: Want to shuffle, you simply roll the dice.
[Music]
Carla: What you see there, and what I always encourage my students to do, is create a functional prototype, but don’t force yourself to squeeze it into the small final form. Then create an aesthetic prototype that actually reflects the interaction and experience that you want to have. That’s what’s going on there.
This next project was also from the group at School of Visual Arts by Sam Wander, Leroy Tellez, and Lucy Knaps. This is an interesting project because it showcases how we can use projection mapping to make our physical world smart, so to speak, in terms of interactivity.
[Music]
Male: Enlight is a smart desk light that enhances the reading experience for researchers, students, and book lovers. It brings features previously limited to ebooks to all books, however old they may be.
[Music]
Male: Definitions.
[Music]
Male: It highlights the spot and shows contextual encyclopedia definitions and other data.
[Music]
Male: Image scanning.
[Music]
Male: And social highlights from other users.
[Music]
Carla: Those guys did a really beautiful job with harnessing this idea of gesture as interaction, as well as really finessing the demonstration.
The last of the student projects that I’m going to show is something called !@N, and it was by Elan Kiderman and John Johnson from my Smart Objects course at the University of Pennsylvania. They wanted to create a robot of sorts that will chastise you if you say a certain annoying word too many times, like “awesome.”
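A minimal sketch of that trigger logic: count occurrences of each “learned” word in a stream of speech-to-text transcripts and chastise the speaker once a threshold is crossed. The recognizer and the motor that turns the device toward the offender are placeholders, not the students’ actual code.

```python
from collections import Counter

class AnnoyingWordWatcher:
    def __init__(self, threshold=3):
        self.learned = set()
        self.counts = Counter()
        self.threshold = threshold

    def learn(self, word):
        self.learned.add(word.lower())

    def hear(self, transcript, speaker_direction):
        """Feed in each speech-to-text result with the speaker's bearing."""
        for word in transcript.lower().split():
            if word in self.learned:
                self.counts[word] += 1
                if self.counts[word] >= self.threshold:
                    self.chastise(word, speaker_direction)
                    self.counts[word] = 0

    def chastise(self, word, direction):
        # Stand-in for the weighted motor tilting toward the offender.
        print(f"turning to {direction} degrees and repeating {word!r}")
```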
[Music]
Carla: [Laughter] That was for you.
Male: (Indiscernible)
Carla: [Laughter]
[Music]
Male: !@N, learn “fuck.”
[Laughter]
Female: !@N, learn “sleep.”
[Laughter]
Male: !@N, learn “synergy.”
[Music]
Male: !@N, learn “cooties.”
[Music]
[Pop]
!@N: Fuck, fuck, fuck, fuck, fuck, fuck, fuck, fuck--
[Pop]
!@N: Fuck, freak, fuck, freak, fuck, freak, fuck, freak, fuck, freak, fuck, freak--
[Pop]
!@N: Fuck, freak (indiscernible)
[Pop]
!@N: Fuck, freak (indiscernible) cooties, fuck, freak, cooties, fuck, freak, cooties--
[Music]
Carla: The idea with this is that it would turn to face the offending person. These guys did a beautiful job with a prototype that has a weighted motor inside that will actually tilt the device in different directions. They were able to find an open-source speech analyzer, and this was a few years ago. Now we’re starting to see a lot more of those kinds of technologies. But, even a few years ago, there were so many tools that we had access to that we could explore as designers before they actually became part of everyday products, which gets me to my next topic, which is this idea of giving ourselves permission to play.
This morning, we heard Chris talk about how, if we can automate some of the work, we can actually give value to allowing ourselves time to explore. Something that I have found really important as a designer, in being able to continually advance in my career, is giving myself permission to play. It’s keeping my head above water, looking at what is coming next. What are the emerging technologies, and how might we be able to apply them? The way that I play with them isn’t necessarily what the final outcome or the application would be, but I do always have a constant list of things I know I want to explore.
A few years ago, I felt like, oh, capacitive sensing; we are going to see a lot of capacitive sensing. I knew that we were starting to see it in the surfaces of our smartphone screens and that we would see a lot more of it. I felt like I really want to play with capacitive sensing.
I was awarded an artist residency at the School of Visual Arts at the same time as an artist and designer named Emilie Baltz, who focuses on food. We put our heads together and asked each other, “What is it that you want to explore?” and started running experiments with people coming to visit the studio, basically around sensors and food and what that experience was like. We worked with marzipan, and we did peanut butter and fruits.
What we found really exciting was that there is one way that we don’t really interact with products very much, and that is through licking. [Laughter] We found that ice cream is one of those things that you have to use the gesture of licking to eat. Yes, you can have a spoon and a cup, but the way that we love it, the way that we grow up, the way that we are kids eating ice cream is through licking.
We developed this project that was an orchestra where we had volunteers. They were actually inside of a box where they weren’t able to use their hands. This is what the result was. We worked with a composer whose name is Arone Dyer.
[Cranking]
[Clinking]
Carla: [Laughter]
[Clanking, clinking]
Carla: She created four-part music that we were able to use. We had sensors on the sides of the box so that if a head popped up, a background sound would be activated, and we would actually play the instrument like that.
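A rough sketch of how such a capacitive-sensing instrument could be wired up in software: each performer’s surface is one sensor channel, and a lick (or a head popping up past a side sensor) triggers that channel’s part. The sensor reads and the sampler are placeholder assumptions, not the installation’s actual code.

```python
PARTS = {
    0: "part_one.wav",
    1: "part_two.wav",
    2: "part_three.wav",
    3: "background.wav",  # fired by the head-height sensor on the box's side
}

def touched(channel):
    return False  # stand-in for a capacitive sensor read

def play(sample):
    print("playing", sample)  # stand-in for an audio sampler

def scan():
    for channel, sample in PARTS.items():
        if touched(channel):
            play(sample)
```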
Another thing that I was really excited about a few years ago was the emergence of desktop 3D printers. As professional designers, we’ve used really expensive, big machines, but I said to myself, “This idea that it could be accessible to the average consumer is really exciting,” and I wanted to do a project. I wanted to experience it.
What I like to do is try to come up with a project that will allow me to really swim in the sea of an emerging technology. In the case of desktop 3D printing, I thought, “What would that be? Should I write an essay, or should I do a big, serious project?” I interviewed a lot of experts about how we see this affecting daily life. This was around 2012.
What I did see was that it was really exciting to talk to kids because kids seem to not edit themselves. A lot of the adults said things like, “Oh, the material is too expensive. It’s too slow. It’s too this.” Now, a few years later, it’s clear that 3D printing has been changing almost every aspect of society.
This is a screenshot from a school in New York. What I decided to do was create a children’s book. It is called LEO the Maker Prince, and that is LEO, the robot. It also gave me permission to draw robots all day, which is something that I like to do.
It’s a story of a girl named Carla, and she meets a robot on the streets of Brooklyn. She’s never seen anything like him before. He asks her to draw a sheep, which is my nod to The Little Prince, and then he 3D prints it.
What I tried to do was have illustrations that were both hand-drawn as well as photographed. Any of the photographed objects can be downloaded and 3D printed by people who read the book. At the end of the book, there’s a URL. If you’re first learning about 3D printing, you have this collection of objects that can give you your first experience with it.
What I tried to do is develop the character so that his anatomy was something that necessarily illustrated the technology. He’s got a heated nozzle on a tail that can move in two directions, and he’s got a spool of plastic on his back. He’s got a scanner. He’s got this tray that can move up and down and actually present the final 3D-printed object.
What I had distilled from my interviews with experts were possible futures. At the time, prototyping was already happening. A lot of people predicted the micro-factory. If you were an independent jewelry designer, you might be able to do a small run of products. What if everybody had a 3D printer at home? Would we just browse products and actually print them, or go to the local store? What would be available for customization if we could just scan our feet and have shoes made, which we’re now starting to see happen with a number of startups?
What about 3D-printed food? What’s the potential there? Finally, going to the most fantastic, I had read an article that researchers were looking at the ability to use solar power on the moon to laser-sinter the lunar dust and create architecture. Those were all part of the story.
Also, the experiment for me was, what would it feel like as a designer to be able to create a product and not depend on one factory to make 100,000 of them, warehouse them, and distribute them, but instead be able to design something and have it, overnight, appear as a physical object in, say, Japan? I was so anxious to do this that, really, I had to create this project. Lo and behold, it happened.
Now it’s a few years later, and every week I get at least a few hits on social media from products that have materialized in different parts of the world, all over the world. These are just a few shots of some of the readers’ prints. The sheep is very popular, but Carla has been very popular lately.
What’s exciting is that people have then started to create fan fiction. This is from a teacher in Virginia who goes by DesignMakeTeach. He used it to talk about 3D printing in different materials: LEO not only prints in plastic, but maybe he prints in snow.
Then there are also other things that show up all the time. This is from a Thingiverse user named Manorius, who said, “Aria builds a house for Carla and the sheep,” and I felt like, there isn’t even a house in the story! That’s fabulous. They’re really taking these physical things and then going someplace else with them.
Then there’s the nefarious sheep. I saw this one. This is from DanDoubell, a Thingiverse user in the Netherlands. I said, “What is going on in the background there?”
Audience: [Laughter]
Carla: It’s fabulous. They 3D printed this bloody pool. The one that really impressed me was a letter I got from a central library in Scotland. They said, “Dear Carla. We love your book, and we have been printing the objects and then using the combination of the book and the objects in our sessions with visually impaired children so that, when we read the book out loud, although they can’t see the illustrations, they can feel the characters and the objects.” This impressed me so much because it was not something I had even planned or thought about. It made me really excited about, again, the permission to play and put things out into the world.
I encourage all of you to think about that and give yourself a moment tonight to say, “If I had permission to play, what would I play with?” Thank you.
Audience: [Applause]
[Music]