"I'm excited to think of new types of games that nobody experienced before"
Brian Tanner of Artificial.Agency on bringing human-like decision-making into any game environment at runtime.
Hello! Welcome to another edition of AI Gamechangers, where we explore the frontiers of artificial intelligence in the games industry through conversations with the innovators shaping its future.
Today, we're pleased to bring you Brian Tanner, co-founder and CEO of Artificial.Agency. Brian and his team are developing AI agents that can make meaningful decisions, solve problems creatively, and interact with game worlds in “common sense” ways that were previously impossible without centuries of training data.
Meanwhile, we’re in San Francisco for conference week, checking out local companies and sitting in on panels about the latest developments at GDC and PGC. The AI landscape continues to heat up with debate around copyright and ethics, even as new developments promise to transform how games are created. So, as always, be sure to scroll to the end for a curated selection of the latest AI and games news, including updates on new tools and funding announcements.
Brian Tanner, Artificial.Agency

Meet Brian Tanner, co-founder and CEO of Artificial.Agency. With 25 years of AI experience, Brian and his team of former DeepMind colleagues are pioneering what they call "generative behaviour" for games.
Their lightbulb moment came after ChatGPT's release in 2022, when they realised foundation models could bridge the gap between AI research and practical game implementation without requiring the prohibitive training time of traditional reinforcement learning methods.
In our conversation, Brian shares how their technology enables game characters to make human-like decisions at runtime, problem-solve creatively, and even function as invisible games masters to personalise player experiences.
Top takeaways from this conversation:
Artificial.Agency is building a "behaviour engine" rather than just a conversation system. Its AI agents can make decisions and interact with game environments in meaningful ways, paving the way for emergent actions and stories, and characters that follow you from game to game.
Traditional game AI using reinforcement learning requires massive amounts of training time. Artificial.Agency's approach leverages foundation models to create intelligent in-game behaviours without the prohibitive training requirements.
The focus is on enhancing rather than replacing human developers. Brian and team are committed to adding generative behaviour to games, rather than using AI for asset generation – the aim is to help developers create new types of experiences that weren't previously possible.
AI Gamechangers: Please give us some insight into your background. What are you working on?
Brian Tanner: The founding team have all known each other for a very long time. Andrew [Butcher] and I have known each other since graduate school 20 years ago! We met Alex [Kearney] five years ago. There’s Mike [Johanson] too. We all had different routes that led us to DeepMind together.
All of us have a passion for games, and as we were working at DeepMind on intelligent agents, there was always this interest in how we could bring this into the games industry to try and allow AI and games to be more engaging and more interesting. And to do things that it hasn’t been able to do before.
We were all working happily at DeepMind, having a great time doing our AI research and engineering! But when ChatGPT came out in 2022, we had a moment where we asked, “I wonder if these foundation models can bridge the gap that we’ve never been able to bridge on our own?” The gap is how to get runtime intelligence into a game without having to do tons and tons of specialised training.
With reinforcement learning, which is predominantly how this has been done before, you have to play through many, many, many sessions of the game. And every time the game changes, you have to retrain it from scratch.
DeepMind, of course, had all the success in Go and chess and shogi and StarCraft – but it required centuries, in some cases, of training data of game playing, which you just can’t do in a commercial game.
So we had this insight, as soon as ChatGPT dropped, that maybe you can take these foundation models and actually embody them in a game, give them a role, give them a set of perceptions, give them behaviours that they’re allowed to do. What will happen from there? What we found was that magic happened! We were blown away by how successful it was immediately.
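The pattern Brian describes – give a foundation model a role, a set of perceptions, and a menu of allowed behaviours, then let it choose – can be sketched in a few lines. This is purely illustrative; none of the names here are Artificial.Agency's actual API, and the model call is a stand-in stub.

```python
import json

# Illustrative sketch: embody a language model as a game agent by giving it
# a role, its current perceptions, and a fixed menu of allowed behaviours.
ALLOWED_BEHAVIOURS = ["say", "move_to", "pick_up", "equip"]

def build_prompt(role: str, perceptions: list[str]) -> str:
    """Assemble the context the model decides from."""
    return (
        f"You are {role}.\n"
        "You perceive:\n"
        + "\n".join(f"- {p}" for p in perceptions)
        + '\nRespond with JSON: {"behaviour": <one of '
        + ", ".join(ALLOWED_BEHAVIOURS)
        + '>, "argument": <string>}'
    )

def decide(model, role: str, perceptions: list[str]) -> dict:
    """Ask the model for an action and validate it against the allowed menu."""
    raw = model(build_prompt(role, perceptions))
    action = json.loads(raw)
    if action["behaviour"] not in ALLOWED_BEHAVIOURS:
        raise ValueError(f"disallowed behaviour: {action['behaviour']}")
    return action

# Stand-in for a real foundation-model call.
def fake_model(prompt: str) -> str:
    return '{"behaviour": "pick_up", "argument": "iron pickaxe"}'

action = decide(fake_model, "a helpful mining companion",
                ["an iron pickaxe lies nearby", "the player is waiting"])
print(action["behaviour"])  # pick_up
```

Constraining the model's output to a validated action menu is what keeps "magic" within the bounds of what the game engine can actually execute.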
We left DeepMind, started the company, and we’ve spent the last two years exploring, experimenting, gathering data, fine-tuning models to make these agents that can do interesting things.
Without giving away any secrets, please tell us more about this engine you’ve built and its behaviour. How does it manifest in gameplay? What is the user experience like?
The first thing to say is that the behaviour engine is really a way to bring human-like runtime decision-making into any system within a game.
The natural thing that people think about first is smart NPCs. “How can we make the quest giver, or the townsfolk, or whoever, a little bit more engaging, a little bit more reactive?” There’s some work out there where people are making unlimited, open-ended conversations, or voice-to-voice communication with these characters. But that’s just scratching the surface of what we want to do.
The reason we call it a behaviour engine and not a conversation engine is that, from the moment we started, we wanted to build it to be extensible and flexible, so game developers can use it to do whatever they want. You can make intelligent NPCs with it because conversation is one kind of behaviour. But you can also make embodied characters that can truly interact with the world.
“I’m excited to think of whole new types of games that nobody ever experienced before. You’ll be able to take these characters from game to game, and they’ll be in your Discord, and you can talk to them in between games, and you’ll have a more social relationship with these companions”
Brian Tanner
We have a Minecraft agent that we have not really shown publicly, but we use it a lot for demoing privately; it’s a companion that you can play with. One of my favourite examples was when our co-founder Alex [Kearney] did the demo; she said, “Can you get us what we need to go on a scary mining adventure?” You might be situated in a village, and it has an understanding of what’s in the village and what’s in the different chests, through interacting with them. It goes and gets two sets of armour, weapons, pick-axes, and provisions; it gathers them all up, equips itself and gives the other half to the player, saying, “Let’s go do this.” I don’t think people have seen that kind of experience before.
As we were developing that agent, we had some real lightbulb moments. Alex was hooking up the different behaviours we wanted this character to have inside Minecraft: being able to talk, pick something up, build a block, move to a certain location. She was hooking these up one by one through a player API, so it’s got the constraints that a player has. It has hunger, for example. And at some point, its hunger reached maximum, and it started to take damage. But she’d never hooked up the ability for it to eat the food that it was carrying around! It actually said to her, “Alex, I don’t have the ability to eat. Do you think you can hook that up for me?” This was in the early days, in 2023, and it was a wow moment.
Another experience was when I was debugging one of these Minecraft agents. I told it to give me all the wood that it had. I expected it to toss the wood on the ground in front of me. Instead, it built a chest, transferred the wood from its inventory to the chest, and then told me that the wood was in the chest. I thought it was broken, so I stopped the experiment and looked at the configuration – sure enough, I had turned off the ability for it to toss items! So it knew it wanted to give me the wood, and it came up with a creative solution to do that transfer. It understood that if it built the chest, I’d be able to take it from the chest.
We have those moments here almost every week. We’re excited because there is a lot you can do with companions that play with you, like a friend or an assistant in the game. But additionally, something the studios are telling us is the idea of having a Games Master who’s watching the player, understanding the kind of things they’re doing (the challenges they’re facing, the friction they’re finding or not finding) and then dynamically spawning encounters, or running tutorials to teach the player skills that they’re not picking up on.
These are things that have always been a holy grail for AI and games. It’s been completely inaccessible before. So we’re spending a lot of time working on these non-embodied agents that you don’t even see, you don’t talk to them, but they’re there: they perceive what’s happening in the game, and they can take action on behalf of the studio. It’s almost like having a game designer on the shoulder of the player who’s trying to personalise their experience and make it more fun.
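The invisible games-master idea – an agent that perceives gameplay events and intervenes on the studio's behalf – can be sketched as an event-watching loop. The class and heuristics below are hypothetical placeholders (a real system would hand the recent event window to a model rather than use fixed rules):

```python
from collections import deque

# Illustrative sketch of a non-embodied "games master" agent: it watches a
# stream of gameplay events and picks a director action from a small menu.
# The thresholds here stand in for what a foundation model would decide.
class GamesMaster:
    def __init__(self, window: int = 20):
        self.events = deque(maxlen=window)  # recent player events

    def observe(self, event: str) -> None:
        self.events.append(event)

    def decide(self) -> str:
        """Pick a director action based on recent friction signals."""
        deaths = sum(1 for e in self.events if e == "player_died")
        idle = sum(1 for e in self.events if e == "player_idle")
        if deaths >= 3:
            return "run_tutorial"     # player is struggling; teach the skill
        if idle >= 5:
            return "spawn_encounter"  # player is bored; add a challenge
        return "do_nothing"

gm = GamesMaster()
for e in ["player_died", "player_died", "player_died"]:
    gm.observe(e)
print(gm.decide())  # run_tutorial
```

The key design point is that the agent only ever emits actions the studio has authorised, so the "game designer on the shoulder" stays within the experience the developers intended.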
What foundation models are you using? Are you plugging into OpenAI’s API, for instance?
I like to think of us as being model agnostic. I’ve been in AI for 25 years, and it’s an old adage, of course, but the constant is that nothing stays the same! Our entire internal platform is based on being able to use whatever the best up-to-the-minute model is.
That means we focus a lot more on our curated training data set and our fine-tuning and the things that these models need to get good at games, versus the specific model. We have experimented with closed-source models, like OpenAI’s model, or with Llama. We’re playing with DeepSeek.
“We absolutely want to empower developers to do things they could never do before. We are not in the business of replacing people or automating the important work they do”
Brian Tanner
It used to be there would be a significant breakthrough every few years… and then it was every year… and then it was every six months… and then for the last two years, it feels like you have to check every day! Anybody who’s trying to build on a specific [model] is going to get washed away because the new model that comes out in a month is going to make the previous thing feel antiquated.
You obviously want the agent to behave a certain way to keep the game fun. And you also need it to respect any intellectual property you’re working with. How do you ensure guardrails are there? How do you train it to make sure that it complies with the consistent experience you want?
That’s a great question. There are so many fascinating things to talk about here.
If you take these open source models (or closed source models), they’ve been trained to be helpful assistants – but what makes for a compelling character or adversary in a game isn’t necessarily a helpful assistant. You want there to be friction. You want it to argue with you. If it’s a game that’s targeted at an older audience, you might want it to use crass language or talk about violence. A lot of these things have been very specifically ironed out of the available models. How do you actually make the model generate fun experiences? That’s something I don’t think any of the providers are focusing on because they’re not making their models for games. It’s one whole thing we have to tackle.
Another part of it is: how do you make different experiences for different types of games? Or for different types of players? We want to be able to target everything from young players to teens right up to mature audiences, and that has to be very different for those different groups.
“Can you show us how you programmed these characters so that they’d know if there was a fire that they should put out? … No! We didn’t have to program that knowledge! There’s common sense in the system”
Brian Tanner
Finally, you’re talking about the guardrails for the company’s IP. Should it be allowed to talk about the real world or not? There are a few different strategies we have there. Some of them have to do with fine-tuning different models for different audiences, or perhaps even different types of games. Some of them are programmatic guardrails that we can include. I’m sure you’ve seen some of these DeepSeek experiments: people have been asking certain questions, and it’ll start to answer them, then say, “I’m not allowed to answer that!” We can do that sort of thing too. We just need to make sure that we catch it before it starts...
The same thing is true of the OpenAI models and their voice mode. There are certain things that you can ask it where it will start to tell you something, and then the voice suddenly changes to a very serious voice that says, “I’m not allowed to talk about that!” and then it redirects you back. I’ve experienced this first-hand – it’s a common thing. But because we’re not necessarily doing real-time streaming, we have some advantages in that we can catch it before it gets back to the game.
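Because responses aren't necessarily streamed to the player in real time, a whole reply can be checked before it reaches the game. A toy version of that programmatic guardrail might look like this – the blocklist patterns and fallback line are purely illustrative, not Artificial.Agency's actual rules:

```python
import re

# Toy sketch of a programmatic guardrail: check the model's whole reply
# before it reaches the game, and substitute an in-character fallback if
# it trips a rule. Patterns here are illustrative only.
BLOCKED = [re.compile(p, re.IGNORECASE) for p in [
    r"\breal[- ]world politics\b",
    r"\byou are an ai\b",  # keep the character in-fiction
]]

def filter_reply(reply: str, fallback: str) -> str:
    """Return the model's reply, or a fallback line if it trips a rule."""
    if any(p.search(reply) for p in BLOCKED):
        return fallback
    return reply

safe = filter_reply("As you know, you are an AI assistant...",
                    "Let's get back to the quest.")
print(safe)  # Let's get back to the quest.
```

Checking the complete reply before delivery is exactly the advantage Brian describes over streaming voice modes, where the refusal only arrives mid-sentence.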
There are lots of other cutting-edge strategies on our side of things. We have both a server-side platform for using big models, and we have a local, on-device platform that currently can use smaller models. There are different tricks you can use; we’re very confident that we can do this the right way.
You mentioned the smaller, on-device models. Local inference models seem to be a way to solve some problems of scale and cost. What’s your take on that?
We started the company in April of 2023. There were no on-device models, and there were no open-source models. There was no way to do inference in any sort of cost-effective way!
We put our heads together and made a roadmap of what we expect to happen over the next five years in the broader industry. Part of that roadmap is cloud inference getting more cost-effective. And then models coming to the device. Part of the way cloud inference becomes more cost-effective is that smaller models can pack the same punch as previous larger models, in addition to all the different hardware and other optimisations. We still think there will be a place for larger models running in the cloud, but we absolutely believe there will be a place for smaller models running on device.
“It’s almost like having a game designer on the shoulder of the player who’s trying to personalise their experience and make it more fun”
Brian Tanner
I think what’s unproven, and one of the things we’re looking to find out this year, is this: is there an appetite for games to give up some of their local resources to machine learning models? If you talk to game developers, they want to use almost every kilobyte of VRAM on those graphics cards. They want to use that to generate visual fidelity in that experience. And so if you tell them, “I can run this AI on device, but I just need eight gigs of VRAM from you…”, is that a non-starter?! Until they can truly see what the tech can provide their game, will studios be open to it? I think it’s going to be hard to answer that question. So we’re pursuing both of these strategies to make sure we can cater to everybody who wants to use the tech.
What is your roadmap at the moment? When will people be able to get their hands on your product?
We have a private alpha release coming later this quarter. We have validation work out there, but this will be the first version that we consider “the real thing” that people can use – we’ll get it into studios’ hands then. We’re going to show off some things to the media, some internal demos.
We’ll be a lot less stealthy once we’re at that phase. When will it be something that anyone in the world can play? That will take a little longer, either later this year or into next year.
Let’s talk about AI more broadly. AI is facing some pushback from folk in the creative industries. SAG-AFTRA are still striking over it. With all the job losses in the games industry recently, people feel AI is a threat to them. How do you address concerns about AI?
This has been a deliberate strategy that we’ve had from the beginning: we love games, we love game developers. Almost half our team are AAA game developers who have come to join us. We absolutely want to empower them to do things they could never do before. We are not in the business of replacing people or automating the important work they do.
If you’ve ever taken the time to work with traditional gameplay AI, a lot of it (especially with characters) is very tedious! It’s challenging in ways that aren’t interesting. This work takes a lot of time, and so we’re very focused on helping those people do things they couldn’t do before.
There is obviously a big change coming with generative AI assets (images, text, voices), and I think there are a lot of things that the industry – all industries – are going to have to grapple with. This is not what we want to be doing to the industry. We’ve very firmly planted ourselves, and we use the term “generative behaviour”. We just want to make games better by empowering designers.
Matthew Ball’s report on the state of the video game industry identifies various growth engines that the industry needs to take it forward. One of the things he mentions is AI. But we need to ensure AI provides new gameplay experiences, not just do existing stuff faster, right?
One of the most exciting things about working with people (either those we bring on the team and introduce to the tech, or external partners) is that the first thing they want to do is say, “Here’s what I would have done before. How can I use it with your technology?” People have been making games for a long time, and they know the way to do certain things. We can show them how to do that with our technology, and it’s a delightful experience.
What’s been exciting is at some point, the penny drops, and they start saying, “Could we do this or could we do that? How long would it take?”
There was an example where we had a small office environment that felt like The Sims – there are a few characters that are living their life in the office, and you poke and prod them and give them goals and watch them try to figure them out. One of the things we did was spontaneously use a special command to set the photocopier on fire. And right away, the boss character starts looking for a fire extinguisher because they want to put the fire out. They’re expressing this and looking around. Every single time, the question comes, “Can you come show us how you programmed these characters so that they’d know if there was a fire that they should put out?” It’s like, “No! We didn’t have to program that knowledge!” There’s common sense in the system. If a regular person would know it, these characters would know it.
“We had this insight that maybe you can take these foundation models and embody them in a game, give them a role, give them a set of perceptions, give them behaviours. What will happen from there? What we found was that magic happened!”
Brian Tanner
And when that really sinks in, people start to think about all the things they could never do before. And we truly believe that whole new genres are going to come out of this. Social-based games, games that require more creative thinking, or more creative collaboration between characters and players. There’s a whole world here to explore. And the first step is getting the designers to break out of doing things they’ve done for so long.
Please gaze into the crystal ball and think about the next few years – do you have thoughts about where AI is going? What impacts and disruptions are still to come?
It’s going to be so many things, and which of them will be the most fundamental, I’m not sure. I do think there’s going to be a place for more custom, personalised, specialised games. Generative AI is going to make some new types of games happen, and I’m sure there will be an audience for that.
There are people who have been talking for years about user-generated content, and there are a lot of really important things happening there. But most people don’t have the creative skills and vision to create a truly meaningful and emotional experience, like you see from the very best games out there. So I don’t think we lose that. I think we will still have many, many games that are created by people who have a passion for storytelling.
The question is: how does that all fit together? I’m excited to think of whole new types of games that nobody ever experienced before, and these characters and these agents are going to live not only within the game you play, but you’ll be able to take them from game to game, and they’ll be in your Discord, and you can talk to them in between games, and you’ll have a more social relationship with these companions.
We have internal demos doing some of these things, and as soon as people start to feel what that’s like and try it out, I think there will be things we’ve never seen before. I don’t know exactly what it’s going to be, which is why it’s so important for us to get our tools into the hands of creative people as soon as we can.
Further down the rabbit hole
What’s been happening in AI and games? Here’s your essential news round-up:
Mike Verdu, the VP of generative AI at Netflix Games, has departed after just five months.
Former Disney and Pandora execs launched Operative Games at GDC. It will build immersive experiences with AI, where you can text and call characters to advance the plot. We have an exclusive interview lined up with them for an upcoming newsletter.
Roblox announced it’s launching its 3D model, Cube, to enable creators to build 3D objects using generative AI. There’s also an open source version. We’ll be checking it out at GDC this week – watch this space.
Microsoft showcased its Copilot AI (offering real-time coaching and hero selection advice in Overwatch 2) despite third-party tools like this being banned in similar competitive games. The company clarified that this is currently an "exploration", and ultimately, developers must decide whether such AI assistance is an advantage or not.
PhilosopherKing has secured $3 million in seed funding to develop an AI-driven gaming platform that generates adaptive storylines, quests, and character interactions in real time.
The latest book by Kelly Vero was published last week, celebrating the work of women in tech. Breaking Through Bytes: Women Shaping the Digital World provides portraits of 18 female pioneers across centuries of innovation in games, music, AI, science and more.
Wolf Games, co-founded by Elliot Wolf (son of "Law & Order" creator Dick Wolf), has secured $4 million in seed funding to create AI-generated daily murder mystery games.
Lovelace Studio is using generative AI to create Nyric, a toolkit for building community-driven multiverses.
Pokémon Go creator Niantic is spinning off its geospatial AI business into a new company called Niantic Spatial, following a deal to sell its games business to Scopely for $3.5 billion. Niantic Spatial is to be led by current Niantic CEO John Hanke. The platform provides tools and services designed for various industries like construction and tourism.