Jo Franchetti
Jo is a DevRel Engineer at Deno. She is passionate about improving developer experience, teaching good use of JS and TS and building sparkly, wearable, internet connected tech. She mentors junior developers, advocates for mental health awareness and is devoted to improving the diversity and inclusivity of the tech industry.
Talk: RoleplAI
In 2025 it’s easy to think of “yet another chatbot” when you think about AI, but gaming has long been at the forefront of artificial intelligence and content generation. From the earliest applications of randomisation in roguelikes, to procedurally generated universes in games like Elite and No Man’s Sky, players are used to algorithmic content adding challenge, variety and fun to their games.
In this talk Jo experiments with using hybrid Procedural Generation and modern transformer models to create narrative storytelling for games that not only plays well, but still feels intentional, human, and engaging.
She’ll cover techniques for content generation that both use and constrain our models to make worlds that make sense – that you could, of course, shift to other non-gaming mediums.
Transcription
(Applause)
Hello, everyone. I am Jo, I am a developer advocate, and I’m also a D&D enthusiast, just in case there were any... Yeah! Just in case there were any questions about my nerd credentials.
And I fell in love with D&D this year, and I wanted to get into GMing. If you don’t know what that is, when you play D&D, you have a person who kind of leads the game, the game master, and they’re the ones who are telling the story as the game goes. But GMing is quite daunting when you’re in front of a bunch of people and you have to invent a story on the spot. So I thought, well, maybe I can use AI: I can write some AI players, and I can create a little world for them. I can tell them the story, they can react, and I can see whether or not I can keep the story going well enough for some AI players. And after making that little tool, I thought, well, perhaps we can use AI for a few more things. And I know, obviously, generative AI is quite a contentious topic. There’s a lot of noise around AI at the moment, some of it justified, some of it less so. So before we get swept up in the backlash and you start throwing rotten tomatoes at me, I want to try and ground us in something a bit more familiar, because generative AI might feel like a very new and disruptive force. But the idea of machines helping us to create game experiences has actually been around for a long time. This isn’t new at all in gaming. The games industry has been doing it for literally decades.
So what I’m talking about here is procedural generation rather than generative AI. And I’ll talk a little bit about both of them later.
But procedural generation has been shaping games for, well, since the early days of the medium.
And procedural generation, you could argue, is the early form of game AI. It’s this sort of rules-based, deterministic, often quite elegant output. You feed it a seed, you give it a set of constraints, and usually some sort of algorithm that produces output based upon them.
So be it, you know, I want to create a dungeon, I want to create a landscape, I want to create a table of random loot.
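For illustration, not code from the talk, here is a minimal sketch of that rules-based approach in TypeScript: a seeded random number generator driving a loot table, so the same seed always produces the same “random” result.

```ts
// mulberry32: a tiny deterministic PRNG. Same seed in, same sequence out.
function mulberry32(seed: number): () => number {
  let a = seed;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const lootTable = ["rusty sword", "healing potion", "cursed amulet", "bag of gold"];

// Roll `count` items from the loot table, deterministically, from a seed.
function rollLoot(seed: number, count: number): string[] {
  const rand = mulberry32(seed);
  return Array.from({ length: count }, () => lootTable[Math.floor(rand() * lootTable.length)]);
}

console.log(rollLoot(42, 3)); // same seed, same loot, every time
```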
And yeah, like I say, we’ve been doing this since the early days of gaming. So this talk isn’t about replacing designers or creative people with AI. It’s about understanding how these tools work, how we can use them in our own projects, how we can use them responsibly, and how we can create these sort of richer, more responsive experiences. So let’s start by looking at what we’ve already seen in the past. If we go way back into the 80s, Rogue was a game that absolutely relied on procedural generation, as did Elite.
Elite had these sort of vast universes that needed to be created, but it was running on really low-memory devices, devices that have less computational power than your watch does these days.
And then of course we had Dwarf Fortress as well, which took it a lot further, created much more complex, interconnected systems within the game to create this sort of unique, emergent gameplay.
And procedural generation is interesting because it’s not trying to be clever. It doesn’t understand your narrative, it doesn’t understand your aesthetics, it’s literally just algorithms following rules and spitting out the things that you’ve asked for. And yet it has managed to create these excellent and memorable game experiences.
And it’s also doing the boring bits, the unglamorous work, like placing blades of grass on the ground or placing trees in a forest.
And if we move forward to the mid 2000s, we’ve got games like Spelunky, I don’t know if any of you played this, where the mines are all procedurally generated, or of course Minecraft, where the whole world is procedurally generated. And it almost becomes a design choice, like there’s an aesthetic to Minecraft. You know what to expect when you walk into a Minecraft world, but you still get surprise, you still get exciting emergent gameplay, but without having to hand-place every single tree and every single block in the game.
And when I talk about procedural generation in games, I’m not just talking about content, of course I’m talking about the full infrastructure of the game, it’s this sort of scaffolding that allows us to build cool things on top. And it allows us as designers and developers to focus on the things that actually matter. And the last game I want to talk about that already exists is Wildermyth, I don’t know if anyone has played Wildermyth.
So this is a really fun procedurally generated game that came out a few years ago, in fact it came out before generative AI was really a thing that people were talking about.
And what it does is instead of a fixed narrative, it builds stories around the characters that you create.
So it has these sort of template stories, and as you play, your character changes the myth of the land that you’re in. So your characters will age, they’ll form relationships, they’ll suffer injuries, they might retire, they might even die. And the game doesn’t just track those changes, it weaves them back into the storyline. So the next time you play, maybe a character who you retired in a previous playthrough will turn up as a wise old sage in the next one. And it feels like it’s all hand-authored, it feels like it’s telling the story of your playthrough, but it isn’t, it’s all generated.
It’s all stitched together dynamically. And what makes it super interesting is how they’ve done this. They’re not generating a new story from scratch every time. They have this library of story outlines, and a branching tree of choices that you can make as a player, and as you make those choices it weaves them back into the story. And this is a good example of what I would call structured generative design. The output is flexible, but there’s enough of a framework that it actually feels like a playable game.
But there are constraints with procedural generation.
You know, I’ve played Wildermyth a lot. I’ve played it many, many times. And eventually you start to see the repetitive nature of the templating. You’ll see the same bits of storyline coming up, you’ll see the same possible injuries for your character happening, and it starts to get a bit repetitive. And that’s the thing about procedural generation: it’s very rule-bound. It can only do what you tell it to do, it can only make what you tell it to make, it can’t invent things. And eventually the system will start to repeat itself, and as a human you’ll notice that as you’re playing, you’ll start to see the seams in the game. And this is where generative AI comes in as a compelling tool to help us build upon a procedurally generated base.
And generative AI doesn’t rely on a predefined set of rules.
It doesn’t need a library of authored content, it can generate new material on demand.
It’s non-deterministic: if you give it the same input twice you’ll likely get a different output, which won’t happen with your proc gen.
And sometimes the output is brilliant, and sometimes the output is nonsense, and that’s sort of the trade-off with generative AI. The models are pattern matchers, as we heard earlier; we’re just talking about statistical likelihood.
That’s why sometimes generative AI can delightfully surprise you, and sometimes it can give you a load of hallucinated mulch.
And in game design, that sort of unpredictability can actually be quite powerful. We can make worlds feel really alive, make characters feel spontaneous, and make our stories unique, but it does also add risk. Like I mentioned, we need to be able to keep it on rails, because we’ve all seen the output of bad generative AI; these worlds could just mean absolutely nothing.
So we need some sort of validation for what kind of output we would expect from it.
So the output of generative AI, not only is it sometimes surprising, it’s also quite computationally expensive, as we heard in the previous talk.
Each prediction that we ask it to make takes billions of matrix operations, because the model doesn’t know what it’s going to say next. Every single word that it outputs is a prediction based on what came before. And if you’re running a large model, something GPT-sized, then you’re looking at a huge amount of compute, and not only that, literal seconds to get a response, which isn’t going to make for a very real-time-feeling game. You’re not going to feel like you’re having real dialogue with a character if they’re just sat there for seconds before responding.
And as I mentioned, it’s not just a speed issue, there’s also memory requirements, there’s GPU load.
And these models are super powerful, but they’re not lightweight. And to use them in games, we need to think carefully about how and when and why we generate content. Because not everything needs to be live, some things can be pre-baked in, some things can be cached, and some things can be procedurally generated instead.
So my thinking for this game that I made was: could I take the structured approach of proc gen and the more compelling, imaginative side of generative AI, and kind of smoosh them together to make something like Wildermyth that doesn’t have that repetition, that doesn’t get boring after a few plays? So instead of relying solely on pre-authored templates, we can use generative models to fill in the gaps in our story, to add flavour, to add variation, and to add nuance. And in this game, I managed to pre-generate thousands of outlines and situations and worlds.
But things like the dialogue and the character creation needed to be done with gen AI instead of proc gen, so that they could be generated on the fly and the characters can talk to you. But I still have control of the story arc, I still have control of the pacing and the emotional beats of the game. And then each playthrough feels different, because the generative AI is sort of sat on top, making different decisions as it goes.
And that’s kind of the key. Generative AI is not a replacement for design. It’s absolutely not going to replace any of us and our imagination and our creative abilities. But it is a tool for enrichment, and when used well, it can make a system feel kind of alive.
But when used poorly, it can make a system feel incoherent. We’ve all used bad chat apps on websites trying to get tech support or something, and it just can’t answer your question. The trick is knowing how we keep the generative AI on the rails.
And this is sort of where it gets a bit messy. Like I said, generative AI is super powerful, but it’s unreliable. I don’t know, did everybody see the Willy Wonka experience that happened in Scotland?
Generative AI created a script that could not be followed by humans and that made very little sense. The whole thing was a mess, and generative AI can and does often get things wrong.
And it isn’t just a sort of philosophical problem of it getting things wrong, it’s also a practical one. If we’re using generative output in our game, whether it’s for dialogue or quest logic, we can’t just drop it in raw. We need to validate it, we need to double check that what it’s given us isn’t just the nothing, or whatever it was called, from the Willy Wonka event. And that’s the thing that I want to show you from the code in this game: how I’ve kept gen AI on the rails. So I’ve written some validators, I’ve written some filters and some sanity checks to keep the AI within sensible guidelines.
So how do we tame the chaos? Well, I assume a lot of us have used JSON before; we’re used to this sort of very structured output. And just because the model might be non-deterministic doesn’t mean that the output has to be. We can be quite strict with what we want it to output. We can define a clear structure, we can have fields for the character’s name, what their motivation is, how they speak, where they’re from, what their history is.
And we can contain all of this and tell the model to only respond in this shape. And it makes our validation easier, we can do type checking on it, we can have relationships between different characters, because it’s all shaped nicely in our JSON. And it’s not perfect, the model can still absolutely hallucinate, and sometimes it outputs something that isn’t the right shape. But then the beautiful thing about the prompting is we can say: validate your own output, and if you output something that isn’t valid JSON, fix your own JSON.
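For illustration, here is a minimal sketch of that validate-and-self-repair loop in TypeScript. It is not the code from the talk; `callModel` stands in for whatever chat-completion call you are using.

```ts
// A model call is anything that takes a prompt and returns raw text.
type ModelCall = (prompt: string) => Promise<string>;

// Ask for JSON, and if the output doesn't parse, feed it back to the model
// and ask it to fix its own JSON, up to a few retries.
async function generateJson(callModel: ModelCall, prompt: string, retries = 2): Promise<unknown> {
  let output = await callModel(prompt);
  for (let attempt = 0; ; attempt++) {
    try {
      return JSON.parse(output); // validation: is it even valid JSON?
    } catch {
      if (attempt >= retries) throw new Error("Model never produced valid JSON");
      output = await callModel(
        "The following was supposed to be valid JSON but is not.\n" +
          "Return ONLY the corrected JSON, nothing else:\n" + output,
      );
    }
  }
}
```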
So let’s take a look at an example structure of a universe.
So I want to generate a bunch of universes in which I can play a game. A universe has a genre: what type of game is this, is it going to be fantasy, is it sci-fi, is it post-apocalyptic? What kind of tone do we want for this game?
Is it grimdark, is it going to be funny, like a comedy game? And what kind of scale is this universe: is it a single planet, is it a system of planets, is it an entire galaxy?
And we can give it all of these things that we want it to output for our universe. And then we instruct the model to follow those types really strictly and return JSON only.
And we can also do some type checking, and we can put some strict fencing in to make sure that what we get out at the end is always going to be JSON.
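As a sketch of what that structure might look like in TypeScript (the field names here are illustrative, not the exact schema from the talk):

```ts
// The shape we want the model to return, plus a small runtime check
// we can run on its output before trusting it.
interface Universe {
  name: string;
  genre: string;   // e.g. "fantasy", "sci-fi", "post-apocalyptic"
  tone: string;    // e.g. "grimdark", "comedic"
  scale: "single planet" | "planetary system" | "galaxy";
  cities: string[];
}

function isUniverse(value: unknown): value is Universe {
  const u = value as Universe;
  return (
    typeof u === "object" && u !== null &&
    typeof u.name === "string" &&
    typeof u.genre === "string" &&
    typeof u.tone === "string" &&
    typeof u.scale === "string" &&
    Array.isArray(u.cities)
  );
}
```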
One of the simplest ways to improve the output that we get is not to try and generate everything all at once. So I want to generate a universe, I want to generate a history of this universe, I want to generate a bunch of characters who live in this universe, and I want to generate a whole load of dialogue for those characters to say. And if I try to get the generative AI to do that all in one pass, it’s just going to go off the rails, it’s just going to start creating nonsense. So instead we do something called multi-pass generation, where we think first in rough structure: give me a universe. Okay, once you’ve got a universe, let’s put some more detail on top of that: give me the history of this universe that you’ve made. Okay, once we’ve got that, give me the cities and the factions in this universe. Once we’ve got that, give me some characters within these factions and their relationships. And each layer builds upon the last.
And it just means that the AI has less chance to sort of go rogue.
And then, yeah, this is all I’m doing here: create a universe; once we’ve got a universe, create some dialogue; once we’ve got some dialogue, create some flavour text. And it is slower, but it’s much more predictable, and the outcome that you get out at the end is going to make sense to a human.
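Sketched in TypeScript, the multi-pass idea looks roughly like this, where `ask` is whatever JSON-generating helper you are using (these prompts and names are mine, not the talk’s):

```ts
// Each pass feeds the previous pass's output back in as context,
// instead of asking for the whole world in one go.
type Ask = (prompt: string) => Promise<unknown>;

async function buildWorld(ask: Ask) {
  const universe = await ask("Create a universe outline. Return JSON only.");
  const history = await ask(
    "Given this universe, write its history. Return JSON only.\n" + JSON.stringify(universe),
  );
  const cities = await ask(
    "Given this universe and history, list its cities and factions. Return JSON only.\n" +
      JSON.stringify({ universe, history }),
  );
  const characters = await ask(
    "Given these cities and factions, create characters and their relationships. Return JSON only.\n" +
      JSON.stringify(cities),
  );
  return { universe, history, cities, characters };
}
```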
One of the things that AI struggles with a lot is context: it won’t remember what you’ve asked or what it said before.
It won’t understand what matters to you, so when you’re writing the context of your prompt, you need to be super explicit.
So what we can do when we’re writing our prompts is store exactly what we want it to output in variables, and make sure that we’re asking for exactly the output that we want. So let’s have a go at actually building a world. We’re going to generate a game world with our generative AI, but before we can make that world, we need, what’s the word that I’m looking for, some grounding. We need to give it some information that it can start from, and this grounding is normally called seed data when you’re talking about proc gen.
So this is the seed data that I gave it. I basically wrote as many universe seeds as I could think of, and I’m not sure if you can see all of those in there, but I’ve got a category of contemporary and realist worlds, and then a category of historical and mythic worlds, and basically I just wrote a few hundred of these, and these are what it’s going to start with.
And these are our universe seeds. We give it a huge range of categories so that it can really go wild with the imagination. Then I wrote a prompt which is going to pick a seed and build it out: build it out with the tone, the scale, the theme.
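Roughly what that seed data might look like, with a helper that picks one seed at random to hand to the world-builder prompt (these example entries are invented, not the ones from the talk):

```ts
// Hand-written seed data: categories of one-line universe ideas.
const universeSeeds = {
  "contemporary-and-realist": [
    "a fading seaside town kept alive by one ferry route",
    "a megacity where every rooftop is farmland",
  ],
  "historical-and-mythic": [
    "a bronze-age archipelago ruled by storm priests",
    "a frozen empire that worships a buried sun",
  ],
};

// Pick a random category, then a random seed within it.
function pickSeed() {
  const categories = Object.keys(universeSeeds) as (keyof typeof universeSeeds)[];
  const category = categories[Math.floor(Math.random() * categories.length)];
  const seeds = universeSeeds[category];
  return { category, seed: seeds[Math.floor(Math.random() * seeds.length)] };
}
```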
So this is an example of what one of my prompts would look like. I give the model very clear instructions on how I want the returned data to look, and I describe the shape of the data that I want it to output, and it performs best with really short, specific tasks and very clear instructions. So if you can’t read that, it says, “You are a world builder AI for a proc gen and gen AI assisted RPG. You accept seed JSON and return only a JSON document describing an internally consistent universe outline to ground subsequent generation runs.”
I’ve then got a list of all of the data that I expect it to give.
And we can do a whole bunch of interesting things here. Like if you see down at the bottom there, there’s something that says “Never” and there’s something that says “Always”, and these are prompts that you can give to the AI.
Something that is super interesting about world building with a lot of the LLMs that are out at the moment:
the name “Eldoria” comes up constantly. If you just open up ChatGPT now and ask it to generate you a world, it will probably generate you a world called “Eldoria” in which there is a character called “Elara”. And there’s a whole Reddit community talking about why AI is always creating “Eldoria” and who “Elara” is.
But yes, I had to specifically tell it not to give me “Eldoria”, because otherwise 90% of the worlds ended up being called “Eldoria”. This is what we talk about when we say it’s not going to replace any of our creatives any time soon. It’s still very much tripping over itself. The output that we’re getting is lovely to build upon as a human, but it is certainly not ready to be presented to people as an actual game.
And then: always make sure you return valid JSON, just another sort of safety mechanism that I can put in there to make sure that it’s checking its own output.
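Condensed and paraphrased, that kind of prompt might look something like this (the field list and wording are illustrative, not the exact prompt from the slides):

```ts
// A short, specific prompt with a strict output shape and explicit
// Never/Always constraints.
const worldBuilderPrompt = `
You are a world builder AI for a proc-gen and gen-AI assisted RPG.
You accept seed JSON and return ONLY a JSON document describing an
internally consistent universe outline, with these fields:
  name, genre, tone, scale, cities, factions, history_summary.

Never: use the names "Eldoria" or "Elara", or add fields not listed above.
Always: return valid JSON and nothing else.
`;
```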
But once we’ve generated some universes, then we can randomly select one of the universes and that’s the one that we’re going to build our world details onto. So this is how we would generate history.
Your task is to expand each city from the universe generator into a richly detailed local history, give me some points of interest, some culture, politics, aesthetics, that kind of thing.
And again, these prompts are super long, so I’m not going to show you the whole thing, but what I’m trying to say is: be as specific as possible if you want an actually decent output. That’s how you get worlds that feel authored, even when they’re not authored.
And then we can start adding in depth. So once we’ve got these history events, we can put them into some sort of narrative, we can start generating characters, we can start generating dialogue as well.
And you can add as much definition as you want to into the schema of your city. Your own imagination is kind of the limit here. The more things that you add, the more realistic this is going to feel.
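As a sketch, that per-city expansion pass might look something like this in TypeScript, where `ask` is the same kind of JSON-generating helper as before (the schema fields are illustrative):

```ts
type Ask = (prompt: string) => Promise<unknown>;

// The per-city schema we ask the model to fill in.
interface CityHistory {
  city: string;
  pointsOfInterest: string[];
  culture: string;
  politics: string;
  aesthetics: string;
}

// Loop over the cities from the universe pass and expand each one.
async function expandCities(ask: Ask, cities: string[]): Promise<CityHistory[]> {
  const histories: CityHistory[] = [];
  for (const city of cities) {
    const result = await ask(
      `Expand the city "${city}" into a richly detailed local history. ` +
        "Return ONLY JSON with: city, pointsOfInterest, culture, politics, aesthetics.",
    );
    histories.push(result as CityHistory);
  }
  return histories;
}
```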
And what we end up with is a whole bunch of cities that are nodes in a map.
Each city has relationships, each city has little factions that also have relationships. We can have things like trade routes, rivalries.
It’ll start to invent things like festivals and a particular architectural style for each city. We can go super deep if we want to. We’re trying to make something that is as evocative as possible.
And because we’ve already done the work in the previous run-throughs to sort of give the world a flavour and an aesthetic, the model actually has something to work with. So the output is more likely to be something that works with this particular world.
So let’s take a look at an actual generated universe. So here’s one I made earlier, I guess.
Let’s see if we can get this. Yeah, cool.
So, if you can all see that, this is the actual game, and we have this output folder.
And it has generated literally thousands of these outputs.
And for each one, at the moment, I’ve got three files. So we have a universe file which describes our universe, as we mentioned earlier. So this one is a single planet called Ashara.
It’s got a whole bunch of little cities in it, a whole bunch of relationships. Then we get a bit of flavour text to describe the universe, and then we get these dialogue trees from the characters who are in this universe.
But let’s talk about cost. So we heard a lot earlier about the environmental impact of using AI.
And if you’ve ever tried to build anything yourself with generative AI using the commercial APIs, you know that the meter is always running. So we’re not only talking about destroying the planet, we’re also destroying our own wallets. Every question that you ask it uses tokens, and tokens cost money.
And then if you start layering this idea that I was talking about, about multi-pass onto that, you know, our bills are going to stack up super quick.
But the cool thing is that they don’t have to. Everything that I’ve shown so far, the world generation, the character creation, the dialogue generation, it was all done offline.
I’m running a local model on a little Mac mini that lives under my desk at home.
So I don’t need an OpenAI account. I’m not rate limited. I don’t have any cloud dependency. I have a very low-power machine that is just sat under my desk running this thing. And I’m doing that with a tool called LM Studio. I don’t know if you’ve seen it before.
This is LM Studio.
And LM Studio is super cool. You can run open source models locally, and then you can swap out OpenAI calls for your local interface instead. And we can build systems that scale without having a huge amount of cost up front.
Because the cost isn’t just money, it isn’t just the environment. It’s also the architecture of our app itself. And if we remove the dependency on the external APIs, we can iterate a lot faster. We can build systems that are sustainable.
And in a world where AI is often framed as extractive, it’s actually, you know, this is super important. I think a lot of people are going to talk about this today.
So what I’ve done here, like, where’s my cursor?
So this is LM Studio. This is running locally.
Can I show you the models that I have installed? So I have just one model installed here. It’s 12 gig.
This laptop that I’ve got here has a decent sized graphics card in it. And I can prompt it in the same way that I would ChatGPT. So like, tell me a joke.
Select our model.
And this is running on this laptop. It’s not connecting to the Internet at all.
And I can prove that it’s running on this laptop. Let’s watch my CPU usage go through the roof.
Yeah, there we go. Let’s see if it’s given us a joke. Why did the scarecrow win an award?
Because he was outstanding in his field.
But yeah, so all those thousands of worlds that I generated were all using this model here on a tiny little machine that lives underneath my desk. Like, we can be sensible with these decisions that we’re making. We don’t need to spend a huge amount of money.
Yes, I’ve already said all of this. And all of these models are open source. You can absolutely download them. There are even smaller ones than the 12-gig one, there are like three-gig ones. You choose the model that you need depending on what it is that you’re actually building. And as I mentioned, it exposes the exact same shape as OpenAI’s API.
So we can use the openai NPM package and use the exact same shape of data. We don’t really need to make a huge change. All I’m doing here is pointing the client at localhost instead of at OpenAI’s API.
So this is just a sort of super basic example of what a client might look like.
Here you can see I’m sending it a prompt.
And yeah, I’m pointing it at localhost. We don’t need to pay for compute.
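A minimal sketch of what that client might look like, assuming LM Studio’s local server is running on its default port (illustrative, not the exact code from the slide):

```ts
// The openai package talks to any OpenAI-compatible endpoint,
// so we just point baseURL at LM Studio instead of api.openai.com.
import OpenAI from "npm:openai";

const client = new OpenAI({
  baseURL: "http://localhost:1234/v1", // LM Studio's local server
  apiKey: "lm-studio",                 // any placeholder string works locally
});

const completion = await client.chat.completions.create({
  model: "local-model", // whichever model you have loaded in LM Studio
  messages: [{ role: "user", content: "Tell me a joke." }],
});

console.log(completion.choices[0].message.content);
```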
But you saw that was quite slow even on the little Mac mini that is under my desk.
The generation of a few thousand worlds still takes quite a lot of time.
And like this sort of idea of real time generation is it kind of sounds magical. But the reality is there’s a lot of latency. It’s quite slow.
Because we’re doing something that’s actually really complex.
But like I mentioned, the landscape is definitely shifting.
We can use these sort of small optimized models locally.
And they can run in like 6 to 10 gig of RAM. We’re seeing inference times really drop on local models.
So it’s not instant. We’ll get a return time of a few seconds, which isn’t instant, but it is viable. And we can do some clever things in the background to sort of make it feel more real time.
We can make our NPCs feel like they’re talking to us in real time, even if they aren’t. So the trick is more architectural. We will pre-compute as much as we can. Things like the generation of the worlds can all be done, you know, before you even publish the game. You can publish the game with 2000 worlds included in it, because those 2000 worlds are literally like 20 meg of data. It’s so small when you think about the size of games that are out at the moment.
And then we do things like putting in fallbacks for when the text doesn’t return in time. We can have some sort of fallback text if we need it. We can have little loading bars. Just the kind of things that we’re used to doing when we’re building websites, when we know that data is going to take a while to come back. We’re absolutely used to putting in placeholder UI shapes to make it feel responsive.
And so, just for illustrative purposes, this is a tiny Hono app. If you’ve not used Hono before, it’s super useful for building a small but powerful server.
And what this is going to do is ask for one or two lines of narration, but with a two-second timeout on it.
So we’re just saying: if the text doesn’t come back within two seconds, then give this sort of fallback text instead. And we wrap it all in an AbortController.
So if it doesn’t happen in time, we just fall back to this thing that is generic but kind of believable. It keeps the UI nice and snappy, and it keeps everything nice and predictable.
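A rough sketch of that pattern, not the exact code from the talk, using Hono on Deno and `AbortSignal.timeout` as shorthand for the AbortController it describes (the endpoint, model name and fallback line are mine):

```ts
import { Hono } from "npm:hono";

const app = new Hono();
const FALLBACK = "The wind shifts. Something in the distance stirs.";

app.get("/narration", async (c) => {
  try {
    // Ask the local model for one or two lines of narration,
    // but give up after two seconds.
    const res = await fetch("http://localhost:1234/v1/chat/completions", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "local-model",
        messages: [{ role: "user", content: "Write one or two lines of narration." }],
      }),
      signal: AbortSignal.timeout(2000),
    });
    const data = await res.json();
    return c.json({ text: data.choices[0].message.content });
  } catch {
    // Generic but believable fallback keeps the UI snappy and predictable.
    return c.json({ text: FALLBACK });
  }
});

Deno.serve(app.fetch);
```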
So let’s talk about the actual game itself. This is the architecture of the game. We have a Deno app that is the thing that actually generates the universes; we saw some of that code earlier. Then we have the API, which is going to serve one of these universes. And then we have a web app built on top of that which is going to take that content and actually render it out to the user.
So each run of that generator that I mentioned first just outputs files to a folder. I showed you those files earlier. The API then will pick a random one of those outputs and serve it.
The API then will give us access to the universe, the flavor text, and the dialogue.
And yeah, if you’ve not used Hono before, this is sort of what creating a really simple server looks like in Hono. And we’re using Deno’s built-in APIs here just to read those files on the server.
So on load, we’re going to just get that random seed state. We’re going to then drive a chat UI, which is going to take the dialogue that we generated and make a dialogue graph.
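For illustration, a minimal sketch of that serving layer, Hono plus Deno’s built-in file APIs, picking a random pre-generated universe (the folder layout and file names are assumptions):

```ts
import { Hono } from "npm:hono";

const app = new Hono();

app.get("/universe", async (c) => {
  // List the pre-generated runs in the output folder.
  const runs: string[] = [];
  for await (const entry of Deno.readDir("./output")) {
    if (entry.isDirectory) runs.push(entry.name);
  }
  // Pick one at random and serve its universe file.
  const run = runs[Math.floor(Math.random() * runs.length)];
  const universe = JSON.parse(await Deno.readTextFile(`./output/${run}/universe.json`));
  return c.json(universe);
});

Deno.serve(app.fetch);
```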
At the moment, the game isn’t... There’s not much game in the game. We have dialogue and a dialogue tree, which is a sort of gaming concept.
And a dialogue tree is a branching way of moving the player through a dialogue, so a conversation graph. Each node contains an NPC line, the line of the character that you’re chatting to, and then a set of options for the player to respond with.
And it lets you adapt your stories to player choices. And then we can do interesting things like maybe adding a sentiment onto that. So you’re chatting to your NPC, and your NPC says, “Ah, character, I hear you’re here about the secret stash.” And you can give your character options, like, “Yes, tell me the secret of the stash,” or, “No, I slap you in the face.” And we can assign different amounts of points for how happy the NPC is with us at this point, and that might change what they then say to us next.
And here’s a sort of visual representation of what a dialogue tree might look like.
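In code, a minimal sketch of such a node might look like this (the field names are illustrative, not the game’s actual schema):

```ts
// One node per NPC line; each player option carries a sentiment score
// and points at the next node in the graph.
interface DialogueOption {
  text: string;      // what the player can say
  sentiment: number; // e.g. +1 for "tell me the secret", -2 for "I slap you"
  next: string;      // id of the node this choice leads to
}

interface DialogueNode {
  id: string;
  npcLine: string;   // what the NPC says at this node
  options: DialogueOption[];
}

type DialogueTree = Record<string, DialogueNode>;
```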
So, I’m going to show you the game, and then I’m going to talk a little bit about how we might expand the game in future.
So let’s get my cursor up. Let’s get this game running.
Nothing like real-time demos to put the fear of God in you.
Here we go. Okay, so this is our game. At the moment, what I’ve got is, you can see in the middle, this is our flavour text. We have created this little character called Ga, who is currently eyeing us warily. And we have a bunch of responses that we can say to Ga. I’ve also got it generating an SVG map of our world. This little character is also SVG, and we can see all of the history of the world in here.
So, Ga is saying, “Who goes there? I’m a traveler seeking shelter.” Okay.
He wants some help in the mines. Do we want to help him, or do we have other things to do?
We want to help him.
Anna doesn’t want to help him.
Cool.
So, we went a bit random because it’s AI. We found a key.
We take the key and leave. End of game.
(Laughter)
So, obviously, this is still extremely basic. It’s not really even a game at this point. But what I wanted to do was give you an idea for the kind of things that you can do, and an idea for what the games industry is literally already doing. This stuff is definitely out there already.
There’s no end to what we could do with this. Now we’ve got worlds generated, we can start generating quests, we can start generating storylines, and we can start to turn it into an actual game.
But what you will definitely see and what you’ve already seen is, even with those absolutely sensible guardrails that we gave it, it’s still not generating a coherent storyline. So, humans are still necessary. We still need to be there helping to mould the output that it’s giving us. But it does potentially give us a lot of things to play with. Like me as a GM, now I have 2,000 worlds that I can go through and see if they spark any interest for me if I ever want to be brave enough to actually GM a game. One day I will.
So, like I say, we still don’t have plot points, we still don’t have exciting quest markers, but that’s not to say that those things can’t come in the future.
So, yeah, that’s sort of everything that I have to say.
I hope that it’s been an interesting and different sort of look into gaming. This is something that we don’t talk about very often in the web world, and it’s something that I’m super interested in. If you do want to make anything like this, absolutely give me a shout. I would love to hear what you make.
I have a metric fuckton of swag that I need to get rid of before I leave this place. So, if you would like a Deno hoodie or a t-shirt, or I think maybe I’ve only got hoodies and t-shirts, do come up to me in the break and have a chat. I would love to hear about whether or not you’re using Deno, whether you’ve even heard of Deno. Actually, who here has heard of Deno?
Yay!
Okay, Deno is a JavaScript runtime.
Come chat with me if you would like to. There’s some stickers down the front if you would like them, and thank you so much for listening.
(Applause)