Roblox Corporation (RBLX) AI Discussion with Morgan Stanley Conference (Transcript)

Roblox Corporation (NYSE:RBLX) AI Discussion with Morgan Stanley Conference Call June 28, 2023 12:00 PM ET
Company Participants
Daniel Sturman - Chief Technology Officer
Michael Guthrie - Chief Financial Officer
Conference Call Participants
Matthew Cost - Morgan Stanley
Matthew Cost
All right. Thank you, everyone, in the room and on the webcast for joining us. My name is Matt Cost from the Morgan Stanley U.S. Internet team. I am thrilled today to be joined by Dan Sturman and Mike Guthrie, the CTO and CFO of Roblox. Thank you so much for being with us. As a programming note, the way that we're going to do this is Dan is going to run through a presentation. That should be visible both in the room and on the webcast. And then after that, we'll move to an open Q&A. So Dan, please take it away.
Daniel Sturman
Okay. Great. So again, yes, I'm Dan Sturman. I'm the Chief Technology Officer at Roblox. What I'm going to take you through today is Roblox's view of how and where the current trends and excitement around generative AI and large language models are impacting us. As I do this, I'll first cover a fair amount of history to give context to where we're going, because I think it makes it much more understandable where we're at, what we're doing today and where we will go in the future. We'll touch a little bit on some key elements of the Roblox business. I'm assuming most folks here know what we do, but I'm going to highlight the elements that I think are most material to where the technology can take us, just so that we're all on the same page. I'm also going to touch a little bit on the history of AI and why we're having this discussion now: what's going on that has led to now being the time for all this excitement, as opposed to years from now or a few years ago. And then I'm going to give you some examples of things we are doing and where we intend to go. I even have one demo that's a little bit lengthy, about 3 minutes, that I'll show you folks; it's very early but it helps illustrate where things are happening.
So with that, I just want to talk a little bit about our creators, because this is the back story to all of this. Roblox has always been focused on its creator community. It's the lifeblood of what we do. All our content is created by the community. We don't build it ourselves. So we have always invested in making creators' lives easier, so they can create content better and faster, with as low a skill level as is possible and reasonable. We have here an animated screenshot out of Roblox Studio. You can see fairly rich 3D manipulation techniques for building. You can see the ship, you can see the terrain, you can see the volcano going. Not shown here is the entire coding environment that's set up for both individuals and teams to be able to build and code in the system. And this is robust. We give this away for free. There's no money we make off this. And we do that because giving great tools to our creators enables them to create great experiences. We have a very large team that focuses on these tools. And it's not just Studio: we have a very rich and powerful back end that enables things that would normally be very hard to be done very easily, again in the service of the idea that the easier we make things for creators, the more great content they'll produce, the more attractive that will be to users, and that drives Roblox's success.
There's another element to this behind the scenes, which is our scale. As you're all aware, we have about 66 million daily active users across the globe in a number of languages. But backing all this is what we like to think of as a 3D interactive cloud. It's a set of computing resources that our creators are leveraging, in many ways similar to the way you might leverage a public cloud provider, except it's very specialized for delivering these interactive scenarios. So we are different from the way you might think of an AWS. Yes, our core data centers look more like what you might find in an AWS or GCP or a Microsoft Azure environment. But we have a real investment also in the edge. Why? Because we're talking about 3D interactive content at 60 frames per second; that edge presence is really important. And right now, this gets us to over 100,000 servers. So it's actually quite a large footprint. But there's another layer in this infrastructure as well that I want to talk about, which goes back to what I was saying before: we have APIs and systems that make it far easier to scale an experience. In Roblox, you are never worried about something that looks like a database. You're never worried about configuring a container, or how do I manage compute, how do I scale the service. Roblox is doing all of that for you. And again, this is all in the spirit of how do we take friction away from our creators. A creator says, I've written this cool thing, and presses the publish button. And the next thing you know, we've scaled it to 1 million users for them, right, in real time.
Roblox also isn't new to AI. AI has been at the core of a lot of what we've done. And I just want to call out a few examples; they're actually fairly recent improvements we made to the platform. The first is auto translation. This is something that launched probably a year, 1.5 years ago. Translation is very important on the platform. If you are a new creator on the platform, or even an experienced one, you are most likely not going to go hire expensive translation teams, in order to get global reach, to take your experience that was playing well in the United States and the United Kingdom and Canada and put it in Japanese, for example. We do all the translation for creators and it's all automated. And up until a few years ago, we were using publicly available best-in-the-industry translation tools and we realized this just wasn't cutting it. And it wasn't that the AI behind them wasn't good; it was that those tools are trained to do things like translate documents or translate e-mails or translate search results or web pages. And that is not what a Roblox experience looks like. So this is a case where we built our own AI models. We used our own data set, which was all the experiences that had been translated and, in many cases, touched up by their creators or the community around those creations. And we used that to build a model and we saw a dramatic improvement in what auto translation started to look like, which directly reflected in our growth numbers in a lot of these countries. We saw immediate return. If you only know Japanese and you're playing in Japan, an experience where the translations aren't all wonky is a lot more appealing. So that was one example of where, relatively recently, we've really invested in AI. And it was one of the first times we built an AI model from scratch.
A big area for us is around trust and safety, where AI has a long history and is getting increasingly sophisticated. Text filtering: a few years ago, like 4 years ago, we used to rely totally on industry-standard tools. We realized that by building our own text filtering tools, using state-of-the-art AI techniques, we're able to look at the context around text filtering as opposed to individual words, which is where most of the tools in the industry have existed. It makes text filtering both safer and more precise. So fewer false positives and also fewer false negatives. That's a win-win; normally, you're trading these two off. Typo on the slide, but I meant to say content moderation. We have millions and millions of 3D models submitted to the system, many, many each day. That can't be something where you just rely on humans to see if something is okay. We have dramatically improved our systems around detecting a bad object. And in some cases, the computer does a better job than humans. People are trying to subvert our moderation systems. They'll do things like crank the alpha in an image way low, so you don't see the offensive symbol on a shirt, for example, until it ends up in creation, and then, with a simple API call, they'll take that alpha back up; it's kind of like a lighting parameter on a 3D image. The computer doesn't care about the alpha, it just gets the bits and can learn about these things. And we now do a huge amount of automatic moderation of 3D content.
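The alpha trick Dan describes is easy to make concrete. The sketch below is a hypothetical illustration, not Roblox's actual moderation pipeline: a classifier that reads the raw channel values is indifferent to how transparent the pixels are rendered on screen.

```python
import numpy as np

def normalize_for_moderation(rgba: np.ndarray) -> np.ndarray:
    """Drop the alpha channel before classification.

    A bad actor can set alpha near 0 so an offensive pattern is
    invisible in previews, then restore it later with an API call.
    A model fed the raw RGB bits sees the pattern either way.
    """
    rgb = rgba[..., :3]                      # ignore alpha entirely
    return rgb.astype(np.float32) / 255.0

# A pattern hidden at alpha = 2 has identical RGB values to the
# fully opaque version, so the model's input is identical too.
hidden = np.zeros((4, 4, 4), dtype=np.uint8)
hidden[..., 0] = 200                         # a red pattern
hidden[..., 3] = 2                           # nearly invisible on screen
visible = hidden.copy()
visible[..., 3] = 255                        # fully opaque

assert np.array_equal(normalize_for_moderation(hidden),
                      normalize_for_moderation(visible))
```

The point of the sketch is simply that the evasion which fools a human reviewer's eyes does not change the bits a classifier learns from.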
And then one of the trickiest things we've always been working on is what we call inappropriate or suboptimal experiences: people who actually build experiences, whether about violence or about dating, in a way that's not allowed. They're intentionally trying to skirt our rules and launch these on Roblox when they should not be allowed. These can be tricky because clever, clever individuals trying to be subversive can actually write code that says: if you're on my whitelist, you see one world; if you're not, you see a completely different world. How do we detect those? We've really upped our ML around that. We launched that at the end of last year and pretty much brought the number of inappropriate experiences that made it to the platform down to 0. They've gotten a little bit smarter since then. There's a few that are popping through, but we just adjust our model and those go away quickly. I'm not going to say how we did that, because who knows who will view this presentation. But it's really a use of much smarter signals and much smarter machine learning techniques to understand holistically what's going on with an experience. So we've been doing AI for a while and we've had an increasing investment over the past 3 years, I'd say, to really aim to be top of our game in the AI space.
So now I come to generative AI and why this is the moment for Roblox. If there's only one slide you take away from this presentation, it would be this one, because this is the core of it. The rest is just detail. If you look at what it means to create on Roblox, you have to be able to do 2 things. You have to be able to code and you have to generally be able to obtain or create artwork in some way, because these are 3D graphical worlds with complex behavior. The behaviors are code, and the beauty of the world that you're in is the artwork. And the reality is, as folks approach Roblox, it can be a lot of work to get good in both of these domains. In fact, most of our creators tend to approach the platform with a strength in one place or another. I'll use myself as an example. I'm the CTO of Roblox. I'm not bad at writing code, right? I can kind of do that. My engineers might differ, because I spend a lot of time on management. But I'm not bad at writing code, specifically Roblox scripts in Lua. Now, I'm going to share something here. I hope, Mike, this isn't material information, but the lowest academic grade I've ever gotten in my life was in middle school art class. So the class that everyone else can sleep through and get an A, I nearly failed. I'm that bad an artist, right? So approaching Roblox, when I get to, hey, I want to make it look pretty, I need to find a partner. I have no way out. Either I can use our Creator Marketplace and go find models that other people have built, or I have to find someone with some artistic ability to work with me on this.
Now, I might be an extreme case, very strong on one side and not the other. But what we think about with generative AI is an opportunity to drop the barriers on both of these. Skill is starting to get out of the way. What we want this to become is: the genius idea someone has in their head is very easy to bring to life as a 3D interactive experience. That's the angle. There should be almost no skill needed at the end of the day, just the genius thought, and that's what determines what a successful Roblox experience is. And I'll show you some examples of how we're tackling the art side and how we're tackling the coding side. But in both cases, we think we can bring our creators along in a way where it's easier and easier for them to be on the platform and therefore broaden who is a creator, to the point where a big initiative at Roblox is a drive towards every user being able to be a creator. So we move away from creators and users as 2 distinct pools, to using the platform being in and of itself to some degree a creation exercise, down to having experiences with creation inside them, where you can build roller coasters, you can build haunted houses, you can build whatever you feel like building from within the experience; you're not having to drop into Studio to build these things.
So before I get to a few specific examples, I just want to give a little bit of context, and I hope everyone can read the slide, the text is a little bit small. How did we get here? Why are we having this discussion now? There's 2 main technical time lines that I'm just going to call out. One is what I'm calling the theory time line, the theory of deep learning, of neural networks, of AI, which has had a lot of advances, and I'm going to go back to the '60s; we could actually go back further on this one, to like the '40s. Then there's a systems time line, which may be more familiar to all of you, which is just the rapid advancement of hardware capability, and how these 2 have come together in some pretty interesting ways relatively recently. That brings us to today. So if you look at the theory of AI, 1965 is a good starting point. That's when the first deep learning models were talked about. And all deep learning means is that rather than having a single computational neuron, you're stacking them. You're having one feed into another; you're having multiple layers in your neural network. That's what deep means. It sounds super sci-fi, sounds super sophisticated, but all it really means is that you're stacking layers of neurons together. And that was first proposed pretty much in 1965, so a long time ago. From there, the history of neural nets is kind of fraught with a little bit of trouble.
There were 2 what are called AI winters, one from about the '70s up into the mid-'80s, where folks said, this tech is going nowhere. And then it happened again from the late '80s up until the late '90s: this tech is dead, and people explored other things. But there were some solid researchers who kept going. We got techniques like back propagation: how do you train and set the weights on these neural networks? Different models evolved. Basically, in the late '90s things start to get really exciting. People started to have some breakthroughs. We started to see some of the things like identifying cats in YouTube videos and stuff like that, which was one of the first real exciting breakthroughs of neural nets coming back into play with manipulation of images, understanding of images, categorization of images; recommendation systems, which have pervaded our lives now, from Netflix to search results to whatever; to things like diffusion models like DALL-E, where all of a sudden, really, something is being created from what seemed like nothing. You would type in a text prompt and you would get a cool image of a cat with a bazooka or something like that out of nowhere.
Meanwhile, we have the systems time line, going back to '64: multiprocessor supercomputers with Cray, who pioneered that. They also pioneered the idea of parallel data path processors in the '80s, and this kept growing. In the '90s and 2000s, we started saying, hey, you don't need big expensive machines like Crays anymore; we can start using commodity servers, up into about 2004, when it was really Google who pioneered the idea of really large scale-out hyperscaler data centers with an enormous amount of compute capacity and great networks, but none of the individual computers were particularly special, right? And that starts to bring to bear this idea that we can throw a lot of computation at problems. Google originally did it to compute better search results. But the amount of computation available for a particular sort of problem starts to get really, really big at this point. We have the introduction of GPU hardware and the observation that, hey, GPUs, graphics processing units, aren't just for pictures. What's a GPU doing? It's doing matrix math really, really fast. What is a neural net? Oh, it's just matrix math. And the faster you can do it, the better you can train. And those start being deployed into AI systems. All this comes to a head in 2016 with Google Translate, where, for the first time, deep neural networks outperformed all the prior art in natural language processing. And they did it in a way that was so much simpler. There was no understanding of what part of the sentence a word was, what sort of vocabulary this was, what the phrases were. There was a huge amount of work done in NLP before this time. And they just took some neural networks, they trained on some sample data, it was incredibly simple, and it outperformed everything we had seen in translation up to that point.
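The two remarks above, "deep just means stacking layers" and "a neural net is just matrix math", can be shown in a few lines. This is a generic toy forward pass, not any Roblox model; the shapes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A simple nonlinearity between layers.
    return np.maximum(x, 0.0)

# "Deep" means layers feed into each other: each layer is one matrix
# multiply plus a nonlinearity, exactly the operation a GPU accelerates.
layers = [rng.standard_normal((8, 16)),
          rng.standard_normal((16, 16)),
          rng.standard_normal((16, 4))]

def forward(x, weights):
    for w in weights:
        x = relu(x @ w)        # matrix math, repeated once per layer
    return x

out = forward(rng.standard_normal((1, 8)), layers)
print(out.shape)   # (1, 4)
```

Training is then just adjusting those weight matrices (back propagation, mentioned earlier), and scaling up means bigger matrices and more of them, which is why the hardware time line mattered so much.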
That continued to advance. Let me add, the reason that worked was taking AI but also taking the hyperscaler data centers and the compute capacity they had, and just throwing a lot of compute and a massive data set at the problem. And that was the difference. We can now deal with these massive data sets, and we found that bigger networks with big data sets give you really interesting results. Then they took another step, out of Google Research in 2017, with an ML model called the Transformer. And what was unique about Transformers is they weren't actually architecturally better than what came before them on a case-by-case basis, but they scale more easily. They were built to actually be maybe a little bit worse, but their scaling, the ability to throw the hyperscaler data centers at them, was easier. And they said, that's a trade-off that's worth it for us. And it soon became worth it for everyone as that sort of compute became more available. You move forward and you start to see things like GPT, Stable Diffusion. And the next thing you know, we're talking to ChatGPT, which is really just a combination of massive training sets, a lot of hardware in the training, and incredible results. And that's what gets us to where we are today. So you see these things come together, why we're at this moment, which I think is pretty special. And it's not rare for computing that all of a sudden we hit a point where computation lets us do things we never dreamed of before. But in this case, it was advances in both the theory and the computation in parallel.
So with that history lesson, where are we going with this at Roblox? I touched on this before, but we're really imagining a world where anyone can create anything that looks like something they never would have had the skill to build on their own before. This is just a mock-up, not a real demo, but I'll show you a demo that isn't too far from this, where someone would type, hey, I want a scene with a forest, a river and a large rock in the middle. I think this goes to an extreme at some point: you take the opening chapters of a piece of fiction, and we're not that far away from being able to build the world around that. The example I like to use is the first chapter of The Hobbit, and you build the Shire, right? Before, that would have taken many, many person hours of work, detailed artistic work and so on. And I expect that, when you want a professional result, your artists and coders will come in as a cleanup crew to customize and put that unique twist on it. I don't think that's going away, but it's a massive, massive accelerator. We already see this in the Roblox community, with creators using generative tools even just to give them inspiration, sometimes to bring assets along, but they almost always touch them up to put their own style on it. But this is where we're headed. And we're not that far away from it.
I mentioned we've been investing pretty heavily, from a talent and technical capability point of view, in AI. Even in generative AI, we've had a few published novel results. These are both efforts that we were part of; we didn't do them by ourselves, we were part of a consortium or group of authors working on them. I want to talk about StarCoder. StarCoder is a state-of-the-art open-source LLM, and the key here is open source. It was done as part of the BigCode initiative, which was a partnership between industry and academia. Some of the key academics on that are now working at Roblox Research. StarCoder was trained primarily on coding examples. A large part of the corpus was Roblox Lua. And it's a great example: you've all heard of GitHub Copilot; this is driving towards an open-source version of that, and I'll show you a demo of using this in just a little bit. Also ControlNet. This gets a little bit hard to explain, a bit technical. ControlNet, you can view as the idea of using a small neural network to better fine-tune a larger neural network when it's running. You have these generative models, diffusion models like DALL-E, that can create arbitrary images. Often, you want to control them in some way, whether it's the type of content, you want to guide them on the style, possibly even the appropriateness. ControlNet is some work that some at Roblox Research recently published with others on how do you build a smaller network, one that's far easier to train, that can give you that control over the larger network. And you can build these general-purpose networks and then fine-tune them without a massive retraining exercise.
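The core idea behind that kind of control, a small trainable pathway added to a frozen large model, can be sketched in toy form. This is a drastic simplification of the published ControlNet architecture (which copies encoder blocks and uses zero-initialized convolutions on a diffusion U-Net), but the key property survives: because the adapter starts at zero, the combined model initially behaves exactly like the frozen one, and only the small adapter's parameters need training.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "large" model: weights we never touch after pretraining.
W_large = rng.standard_normal((32, 32))

# Small trainable adapter: zero-initialized, so at first it adds nothing.
W_ctrl = np.zeros((32, 32))

def forward(x, w_ctrl):
    base = np.tanh(x @ W_large)    # frozen pathway
    control = x @ w_ctrl           # learned correction from the adapter
    return base + control          # adapter nudges the frozen output

x = rng.standard_normal((1, 32))

# With a zero-initialized adapter, behavior matches the frozen model
# exactly, which is what makes this style of fine-tuning cheap and
# safe to start: training can only move away from a working baseline.
assert np.allclose(forward(x, W_ctrl), np.tanh(x @ W_large))
```

Fine-tuning then updates only `W_ctrl`, a tiny fraction of the parameters, rather than retraining the large model, which matches Dan's point about avoiding "a massive retraining exercise."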
We've also been innovating in infrastructure, and this is important because there are a lot of powerful things you can do with generative AI and large language models, but they can also be computationally very expensive. So it's important to be advancing both at the same time. One advantage we have at Roblox is that we have control over our entire stack. We can use third-party cloud technologies if we want to, but most of our computation is done in our own cloud. That could be in our core data centers; as I mentioned, those look pretty traditional. But we also have a lot of edge capacity, which is very close to the user. And lastly, every time a user opens up the Roblox app, they're lending us a little bit of compute capacity at that time, which we can use for all sorts of things. It's primarily to run a game client, but we'll also have some examples of running a small AI model that's appropriate for, let's say, a mobile device, but does something local to that user, with the attendant latency and privacy benefits and so on. We also have a lot of very unique data. We, of course, have the social graph, who knows whom on Roblox, like any sort of social network might have. We've also recently launched voice in the United States for older users. We're getting a lot of good voice data, which helps us with things like moderation of voice, possibly automatic translation and so on. 3D objects: we have a huge, huge library of 3D objects created by creators that we can use as we think about how we optimize AI models.
And then, of course, code, like the example with StarCoder. We have a lot of code in the system. And what's interesting about our code repository is that all that code is in a single programming language, for building interactive 3D experiences. So it's a very large collection of very single-purpose code, which is exactly the sort of thing we need to learn from if we want to make creators successful. So here's a little demo of facial animation on Roblox, something we've launched with Voice. It's a limited launch right now because we still have some moderation tools to build around some parts of Voice. But what you see here is we're using a camera, we're capturing facial expression, and there is a small model running on the machine that translates that into what we call bone movement on Roblox. Here we have what we call a Dynamic Head; that's just the head you see here, but by the end of the year, every avatar head in Roblox will be a Dynamic Head. We have [indiscernible] the eyebrows we're actually moving. In a sense, we're moving the muscles within an eyebrow, within the smile. There are many of these in a face. It's not just that we're plastering different smiles on you; we're actually looking at the facial expression and translating it at a fairly fine-grained level of detail. And we can transmit that across Roblox, so that if you're in an experience with someone else, they can see your facial expressions, and, most relevant to where we're using this today, your avatar lip syncs with your voice stream. So when you're talking, people don't just hear voice coming from someone spatially in a 3D environment, they can see who's talking. And it really does add to the realism, so to speak, the immersiveness of it. We run this model primarily on the local device you're running, your phone, for example. Of course, we train it in our cloud. So we're using that breadth of options across the board.
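Why does "bone movement" transmit well across a network? Because a frame of facial animation reduces to a handful of per-bone weights, far smaller than video. The sketch below is purely illustrative; the bone names and one-byte quantization are assumptions for the example, not Roblox's actual wire format.

```python
# Hypothetical set of facial "bones" driven per frame by the
# on-device model that reads your camera.
BONES = ["LeftEyebrow", "RightEyebrow", "JawOpen", "SmileLeft", "SmileRight"]

def encode_frame(weights: dict) -> bytes:
    """Quantize per-bone weights (0..1) to one byte each, so a frame of
    facial animation costs a few bytes on the wire, cheap enough to
    stream alongside a voice channel."""
    payload = [min(255, max(0, round(weights.get(b, 0.0) * 255)))
               for b in BONES]
    return bytes(payload)

def decode_frame(payload: bytes) -> dict:
    # The receiving client applies these weights to the avatar's rig.
    return {b: v / 255 for b, v in zip(BONES, payload)}

frame = {"JawOpen": 0.6, "SmileLeft": 0.3, "SmileRight": 0.3}
wire = encode_frame(frame)
print(len(wire))   # 5 bytes for this 5-bone frame
restored = decode_frame(wire)
```

Streaming compact rig parameters instead of pixels is also what lets the expression capture run locally for latency and privacy, while only the tiny result crosses the network.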
As these models get more expansive, like the voice moderation model -- and by the way, this is new; no one's really ever done real-time voice moderation before, and we have our first running version of it -- it will probably run in a few places. We'll probably run it on your device to protect you against the stuff you really don't want to hear. We'll probably run it in our cloud on the voice streams. It may even run on the source device at some point to do some prep work, though we're careful about that in moderation; generally, the source device is not the safest place to be running something when you want to prevent that person from doing the activity. But by and large, we can use the full gamut of our extended cloud to run these models. And we're building infrastructure that allows our data scientists and ML engineers to build and deploy in the appropriate place, without a lot of rework, in order to make this work well.
Next slide, getting to the art side, which, as you heard, is near and dear to my heart. About a year ago, we launched these new materials on Roblox. Traditionally, Roblox materials looked like probably what you would expect them to: a bitmap of color that you could, in a sense, paint onto any object. But there have been a lot of advances in the world of graphics, in particular this idea of what's called a PBR material, a physically-based rendering material. The idea behind a PBR material is it's not just a layer of color; there are additional layers that describe how light bounces off the object. So you can go from having a gray sword, for example, to a shiny Excalibur that gleams in the sunlight based on where the sun is located in the world. This was a great improvement in the sort of quality you could have in a given world and experience, but these materials are hard to make. Building all these layers requires understanding how graphics work, how light reflects and so on. Some people could do it, but most of our creators, to take advantage of PBR materials, had to resort to googling PBR materials, finding examples that were open source on the web and downloading them into Roblox. And that was a good first step. But in the early part of this year, we deployed a generative model that allows you to type in text. The example I've done is hot, bubbly lava, and it will create a bunch of example materials that look like hot, bubbly lava. You can bring them into our material manager and put them on whatever surface you want. So we've taken something I never would have had the skill to do, building one of these materials, and now I can go build any sort of interesting material I want. I can look at a bunch of different versions. I can iterate through them. It just dramatically dropped the skill level necessary to make a very realistic-looking Roblox experience.
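The "additional layers" of a PBR material can be sketched as a data structure. The layer names below follow common physically-based rendering conventions (albedo, roughness, metalness, normal), not any specific Roblox API; this is an illustration of why authoring one by hand takes graphics knowledge.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PBRMaterial:
    """A physically-based material is several aligned texture layers,
    not just one color bitmap."""
    albedo: np.ndarray     # base color per texel, shape (H, W, 3)
    roughness: np.ndarray  # 0 = mirror-shiny, 1 = fully diffuse, (H, W)
    metalness: np.ndarray  # 0 = dielectric, 1 = metal, (H, W)
    normal: np.ndarray     # per-texel surface direction tweaks, (H, W, 3)

def make_shiny_metal(h=64, w=64):
    # An "Excalibur" style material: bright, smooth and metallic, so the
    # renderer produces sharp highlights that move with the sun.
    return PBRMaterial(
        albedo=np.full((h, w, 3), 0.8),
        roughness=np.full((h, w), 0.1),   # low roughness -> gleaming
        metalness=np.full((h, w), 1.0),
        normal=np.tile(np.array([0.0, 0.0, 1.0]), (h, w, 1)),
    )

m = make_shiny_metal()
print(m.albedo.shape)   # (64, 64, 3)
```

A text-to-material model has to produce all of these layers together and keep them physically plausible, which is exactly the skill it removes from the creator's plate.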
We're doing the same thing for code, and I apologize that this is a little bit hard to see when we get to the code example, so I'll just walk you through what's going on. You can see this character was walking into these bubbles and they were changing color and then popping. That gets coded here. You can't see it in great detail, but it's just a comment that says exactly that, in English. And then it generates the code for this automatically. That can then get loaded directly into the spheres you saw, and you'll see it here: you just walk up to one and it changes red and it disappears, right? That's a simple behavior, but this is the direction we're moving in for what it takes to code. And the benefit is you always have the code when you're done. We haven't dumbed down the coding language or anything like that, because we find, as experiences scale, people want to take it, modify, tweak and go a level beyond where the AI might take them. So it's all there. It's also great, by the way, since for a lot of our creators this is their first coding experience. It's a great way to learn, right? You type what you want, you see how the system does it, and you can learn from that as you go, which makes you a stronger creator as you go down the path. So this is something we've launched. We launched it with some open models and then we trained them on our code. We're now bringing StarCoder into this, so it's kind of a wholly owned within-Roblox sort of solution.
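The bubble behavior Dan walks through is tiny once generated. As a rough illustration of the comment-to-code pattern (the real output would be a Roblox Lua script; this Python sketch and its names are purely hypothetical):

```python
class Bubble:
    """When a character touches the bubble, it turns red, then pops.

    A code-generation assistant is given roughly the comment above,
    in plain English, and emits a handler like touched() below. The
    creator keeps the code and can tweak it afterwards.
    """

    def __init__(self):
        self.color = "blue"
        self.popped = False

    def touched(self):
        # Generated behavior: change color, then remove the object.
        self.color = "red"
        self.popped = True

b = Bubble()
b.touched()
print(b.color, b.popped)   # red True
```

The point is that the assistant produces ordinary, editable code rather than an opaque black box, which is why it doubles as a way for first-time coders to learn.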
Then here's -- we're going to roll a demo in a second, but not quite yet. Let me first stress, it's really, really early. This is something one of my engineers hacked up over a weekend, but it shows the power of these generative techniques. It's a little bit long, so bear with me, it's about 3.5 minutes, but it was so impressive that when I knew I was coming here to give this talk, I wanted you guys to be able to see it, because I think it does a good job of painting the future. I don't think the folks on the webcast will hear anything I voice over, so I'll try to minimize that; I didn't realize they wouldn't be able to hear me when we did this. For the folks in the room, there are 1 or 2 places where I can clarify what's going on, but you'll get the general gist. So let's roll the demo. Okay. So look, that's still very early. Obviously, there was no code injected into that, but as I showed you before, that's not a far-away thing to start doing. You saw how it's voice controlled. You could see how, in the not-too-distant future, you wouldn't even need something like Studio to do that. That could be something that's just in the Roblox experience, and we could be creating there. I don't know about you, but it would have taken me a long time to build that out compared to what was done right there generatively, right? So this is pretty exciting from my point of view on where the future can go, and the fact that we can put this demo together today. There's a lot of sophistication: you saw how the AI was almost coming back to say, wait, is this what you mean? Is this what you're interested in? It's helping the creator think about things they may not even be thinking about on their own, because the creator is not an expert on Roman history when they approach this sort of problem. They're just trying to make something look cool, right? So I think there's a lot of opportunity here.
So then, just to sum up. We really think there's an opportunity to accelerate creation with all these techniques. And that's really the goal: the shorter we can make the path from I have a great idea to it's realized as a 3D experience, the better. And we think there's an opportunity where we don't have millions of creators on one side and hundreds of millions or tens of millions of users on the other, but they're all one pool. It's all coming together, and people are just having awesome ideas, possibly just for themselves, maybe their friends, their coworkers, an advertising campaign they want to do, and they're able to bring them to life without saying, oh, now I've got to learn how to code, or maybe I need to go out to a firm or something like that. It's just at everyone's fingertips. And we call this the democratization of creation.
And the last thing I want to touch on because it's important for Roblox. We always talk about respecting the community. We're starting this journey with a big focus on being thoughtful and ethical. So for example, all the code samples we used for our training were already in the public domain. Everything we're doing from a creation point of view involves opt-in by our creators. We have found our creator community is really excited about this; opt-in rates are very, very high. So we're not losing a lot there. And the folks who think they have very special IP don't have to worry about it appearing in someone else's world because it got translated through a machine learning system at some point. So I think the potential here is huge for Roblox. We're incredibly excited and motivated by it. And I hope what I talked through kind of explained where we were coming from, how we got here and where we're going with this technology.
So with that, thank you.
Question-and-Answer Session
Q - Matthew Cost
Dan, thank you so much for that. There's a lot to go through. I think before I give the people in the room a chance to ask some questions, I do want to follow up on some of the stuff we just saw. There are a lot of different tools and capabilities that you addressed, some of which are farther in the future, some of which are nearer term. I guess, maybe break down for us, what are the near-term focuses for the next year or 2 that we're going to see rolling out onto Roblox soon? And then what is 3 years out or longer, like the things that we just saw?
Daniel Sturman
Yes. I think what you'll see in the next year or so, we're going to continue to push on in-Studio sort of creation. We have other tools coming, maybe a text prompt for some early 3D images. A big one will be avatar creation. We are opening up the whole ecosystem, so creators can come and bring any avatar body, any head. We can make it a dynamic head kind of automatically. If you bring a mesh of a body, we want to be able to, what's called, auto-rig it -- so turn it from a 3D image into something that has working arms and legs and that you can put items on. So that's going to be a focus for this year. It's just trying to allow that level of creation on the platform. You'll also -- well, you won't see it but it will impact you if you're on the platform -- there's a big push on some of the things around further automating trust and safety, which is not generative AI but it uses large language models. So it's the same technology, different applications. We're not generating things from a trust and safety point of view, but the power of these models allows us to understand context in a way that can make the entire experience much safer for everyone and also reduce false positives, which are frustrating for creators when they get flagged for something that's totally legitimate.
Going forward, though, I think the sky is the limit. I talked about being able to drop the Hobbit into an experience and create the world around it. I think you'll see more and more creation move to users who will have these tools directly in their hands. So they'll be creating experiences, creating experiences within experiences. Things like that on the avatar side. So let's say I have a Wild West game and you have an avatar that you've dressed up to kind of look like a punk. You don't want a punk showing up in a Wild West game, but can we keep the spirit of the avatar build and translate that into a way that works with the creator's experience? I think that's an exciting area for all of this to go. So I could go on and on, but I think that's kind of roughly the timeline. In many ways it will be incremental, but I think next year actually will be dramatically different from this year. We're going to gain experience with this. We have a lot of experience with our data sets. Those data sets are going to get richer and we'll just be able to crank out more and more interesting capabilities that do all this -- mostly making things safer and accelerating creation.
Matthew Cost
And then one follow-up. There are a lot of different functionalities and use cases that you touched on there. I would think it would take a lot of different tooling to accomplish that. How much of this do you intend, as CTO, to build from the ground up at Roblox as opposed to partnering with other models that are out there?
Daniel Sturman
Yes. So I think a few months ago, even just 6 months ago, I would have said, oh, it looks very hard to get to building these large language models on your own. In the past 6 months, the open source community has kind of gone wild and shown that while they may not be exactly where the top models are at, they'll catch up pretty fast, right? And that's not 100% surprising if you think about it, because compute is becoming more and more available -- a few folks pool together and you have a lot more of this large computation capacity. Techniques are getting better, which means you don't necessarily always need as much compute; things are getting more efficient. That's just the history of computing, right, going back to the very first machines. Things are always getting faster, Moore's Law and all that. So I think right now, what you'll see us probably doing is working closely with the open source community, probably taking those models -- and we have some really unique data sets that probably won't be fully in the open -- and using those to build some very special capabilities inside Roblox. But we know it's important that when you take from open source, it's generally good to give back something. So we'll probably be working with them in a collaborative manner as we move forward. We think we can get a lot of mileage out of that.
Matthew Cost
Happy to turn it over to the people in the room.
Unidentified Analyst
So I think in 2 different places, you talked about AI accelerating 3D content creation and monetization. And I think the creation part was really obvious. On the monetization part, can you just...
Daniel Sturman
Yes. I think that was more -- what I really meant by that is, generally, when people can create experiences better, you get better experiences, and that leads to accelerated monetization. I think there will also be some interesting things as we get better at building more dynamic marketplaces. So for example, we just released what we call UGC Limiteds on the platform. Before that, Limiteds could only be built by Roblox. We've now opened that up to everyone. Getting the right economic models behind that is looking pretty sophisticated, because it's much easier to duplicate a Gucci bag online than it is in the real world. And we all know it's pretty easy to do in the real world, right? So I think there's a lot of things around better detection of knockoffs that will make it worth the creator's time to go do things. And even the pricing models themselves. We recently brought a few economists onto the Roblox team as we think about deeper systems there, and we're seeing the application of AI to do things like, this object is like that object. In the real world, I think we all kind of know a purse when we go to look at something and say, that's a purse, right? It's a little bit harder in the 3D world to do better classification and understanding, and then understand the markets behind it. I think that will power the economy as well. But the biggest driver is just, if I can create faster, if I can create better and there's less friction, we're going to monetize better.
Unidentified Analyst
Maybe let's talk about what's table stakes versus what accelerates. It feels like we're seeing a lot of these tools proliferate in the industry right now. There's a lot of focus on them in the game industry and many other places. In order to stay on the path that Dave has talked about to get to 1 billion users, are these things that you need to do just to keep up with the Joneses? Or are these things that will really move the needle in terms of growing the user base, commercializing the ecosystem, etcetera?
Daniel Sturman
Yes. I think in general, this idea of dramatically dropping the skill level needed to be a creator is something that is relatively new. For me, it really came when I saw DALL-E for the first time, kind of like in the summer of 2020, right? So that is all, I think, new and something that is purely additive to everything we've been thinking about. It's something we've jumped on because I think it helps all these other goals and will accelerate that growth. In terms of table stakes, there's not a lot of things in this space where I think, oh, we didn't know how we were going to get it done. But for example, I mentioned voice. Our ability to build voice moderation correctly now, because of this sort of technology, is far better. In building voice moderation, we are using a large language model for what's called synthetic data in training. So rather than having to have millions and millions of hours of well-classified and scored audio -- this is good voice, this is bad voice -- we're able to, in a sense, generate enough in a realistic enough way using these large language models to train a model that can eventually beep you out when you're saying something you shouldn't say. Whether it's bad words, that's pretty easy. But bullying, explicit sexual content, anything like that -- doing that over voice is something that would have been very, very difficult to do before. I worried this would slow us down on [indiscernible] -- now it's going to accelerate that, right? So it's not so much table stakes for the future -- it's all kind of blending a little bit -- but it's just an accelerant to where we want to go.
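To make the synthetic-data idea above concrete, here is a minimal, hypothetical sketch in Python: a stub function stands in for an LLM that emits labeled utterances, and a toy keyword model is trained on them. The templates, names, and classifier are illustrative assumptions, not Roblox's actual pipeline.

```python
import random

# Stand-in for an LLM generating labeled training examples, so we don't
# need millions of hours of hand-scored audio transcripts.
SAFE_TEMPLATES = ["nice shot {name}", "let's trade {name}", "good game {name}"]
ABUSE_TEMPLATES = ["{name} you are worthless", "nobody likes you {name}",
                   "{name} quit the game loser"]

def generate_synthetic_examples(n, seed=0):
    """Pretend-LLM: emit n (utterance, label) pairs."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        label = rng.choice(["safe", "abusive"])
        tmpl = rng.choice(SAFE_TEMPLATES if label == "safe" else ABUSE_TEMPLATES)
        out.append((tmpl.format(name=rng.choice(["alex", "sam", "kai"])), label))
    return out

def train_keyword_model(examples):
    """Tiny stand-in for a distilled classifier: per-word label counts."""
    scores = {}
    for text, label in examples:
        for word in text.split():
            s, a = scores.get(word, (0, 0))
            scores[word] = (s + 1, a) if label == "safe" else (s, a + 1)
    return scores

def classify(scores, text):
    """Label an utterance by summing each word's safe vs. abusive counts."""
    safe = sum(scores.get(w, (0, 0))[0] for w in text.split())
    abusive = sum(scores.get(w, (0, 0))[1] for w in text.split())
    return "abusive" if abusive > safe else "safe"
```

The real system would of course use a neural model over audio, but the shape is the same: a generative model supplies the labeled data, a much smaller model learns from it.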
Unidentified Analyst
There are quite a few studios that have built their entire business around Roblox. You named them also like in your partner program. How are you kind of working with them or trying to be collaborative in light of maybe there's more competition out there because you're democratizing creation tools for everyone and it's not just their scale necessarily anymore?
Daniel Sturman
I think the studios that are working with us, they're bringing more to the game than just technical understanding of the platform. They get the user base. They get how the ecosystem works. And yes, they have skill, but I don't think what's distinguished them is necessarily having the most technical skill. So I think these tools are going to accelerate them. And in fact, it's where a lot of these ideas came from. A lot of the ideas here, and the team's ideas, came from being at RDC 2 years ago, where we saw artists who were building things just for the marketplace and how they were starting to use DALL-E for storyboards and idea boards and bringing that into items they were going to create, right? So I think it's going to be very collaborative. I think those folks are absolutely going to be successful. I think they're going to find they can run a lot faster as we deploy tools that enable creation. Nowhere in here are we thinking we become the creators. We still think all content creation is by third parties on the platform. Yes, there might be more competition. I think competition has always been a fact. I mean, we have millions of creators already. So there's already a lot of competition.
There's a lot of folks who tend to bubble up, and some then come and go. It's not always the same properties or studios that are doing well on the platform at a given point in time. We have a fairly rapid turnover there. But there's a core group that kind of get what this ecosystem is about and are thriving on the platform. I think that's going to stay. They have assets today -- big user bases, loyalty and an understanding of what monetizes and what doesn't monetize -- that they carry with them. This just accelerates their ability to grow. And so they'll be just as innovative and they'll want to extend their leads to the extent possible. So they want to continue to be innovative, just like any other company operating in an environment. So they'll use the leads that they have and try to extend them.
Unidentified Analyst
As the bar for creation comes down, does this change how you guys think about sort of your developer economics? Particularly the unit economics side of the business?
Daniel Sturman
Yes. I mean, I think we tend to think more about unit economics on the user side than on the creator side, right? But what I think it does mean is the effect shows up in a different place. I think our recommendation systems, for example, have to rise to the challenge that there might be more and more bespoke, narrow-reaching but incredibly compelling creations for a narrow sort of audience. That's where I think it's going to go. Like, maybe a set of investors get together in a bar one night and put together an experience to collaborate in going forward, right? And that's not going to have broad appeal to 1 million users but might have a lot of appeal to that small group. So I think it's more that the systems around how we take content on the platform and put it in the right hands is where most of that pressure will be. And this goes to the infrastructure point I was making: we're always thinking about what are the costs of running these things at scale. And we're thinking about the fact that it's a tool that could be used by everyone, so we need to build it in a way where it makes economic sense for us for it to be used by everyone. But I don't think we tend to think that much about per-creator unit economics.
Michael Guthrie
I think what it will do is accelerate great content. Whenever we accelerate great content, we see more engagement, and where we see high engagement, we see conversion. So I think the place we'll see it really is twofold. One will be, again, more and more content. So that will drive more users to become payers because the content is better and it's worthy of payment. And the second area, which is a little more on the cost side and probably mundane, will be on safety. The requirement to have as much manual moderation over time will be reduced. I also see a world where the creation is not just our developers but also brands -- like last year, about 100 brands built persistent experiences in Roblox. So it's a big number. This year, that number is growing. We made some announcements about a week ago. The faster brands can do that and build persistent experiences in their voice and their brand, the faster that will grow as well. And so I'm a firm believer that this changes the unit economics I've talked about before in a really favorable way. So those are probably the 3 things that I think jump out.
Matthew Cost
Mike, maybe I'll stay with you because Dan brought up the point about the philosophy at the top of the company potentially changing on working with open source and building as opposed to maybe licensing tools. Does that change the hiring plan or how much investment is required upfront in order to make sure you have the resources to do that internally now? And is that maybe different than it was 6 or 12 months ago?
Michael Guthrie
Yes, it's something we're looking at very much in real time. As you know, we've always leaned towards innovation and investment in innovation, and investments in things that help us grow the business at high rates. So that's generally our bias. We've had periods of sustainably good margins and periods of being able to be self-sufficient. We've had a period of very, very high margins and, recently, reinvestment in the business. So it's something that we're talking about a lot. We really want to stay on the innovation edge and continue, because ultimately, we believe that's the biggest barrier that we can build in the business. But I don't have anything to report today that says we're changing our cost structure as a result. We're clearly very interested in the technology. We've been implementing it for years, but it's something we're definitely going through right now.
Matthew Cost
Dan, can you talk about -- you guys have been doing it for a long time, but can you just talk about some of the ways in which you're managing costs through distillation, quantization, some of the things that you do with the models to sort of control the cost of them?
Daniel Sturman
Yes. So we kind of jumped on early the idea that you can't just take a large language model, do something cool, check it into production and smile. These models are expensive. So there are some well-understood and emerging techniques from the past few months around how you get an affordable outcome from these. One tends to be referred to as distillation, which is the idea that you're basically using a large model to train a much smaller model -- you use the large model's ability to generate content to essentially get a distilled version of that model that's much smaller and much more efficient. And quantization is the idea that rather than everything being a 32- or 16-bit number in these neural networks, you can find ways, because you have all this data, to fine-tune that down and use fewer bits. Fewer bits means you need smaller machines and you can basically run more of it in memory or in cache or on a GPU and so on. For anything we run at scale, we have a general rule that you're not going to just drop an LLM into production and do it that way. We will have a few exceptions. We're looking at one application, for example, where we thought it could help on our creator portal, because the QPS there is very small -- it's like less than 1 per second. There's no point going through all the optimization techniques when it's computationally negligible to just run the LLM. So we will right-size. But almost anything we're putting in front of our entire creator community at once, or particularly our user community, only deploys when there's a TCO argument that can be met. And that slows down technical progress a little bit sometimes. But I think it's also good for the teams. It means it sharpens their game. They better understand what they're deploying, they're more focused on their objectives. It all works pretty well.
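The quantization idea described here can be sketched in a few lines: map floating-point weights onto an 8-bit integer range plus a single scale factor, trading a small rounding error for a large memory saving. This is a generic symmetric-quantization toy, not Roblox's implementation.

```python
def quantize(weights):
    """Map float weights to the int8 range [-127, 127] plus one scale factor.

    Storing 8-bit integers instead of 32-bit floats cuts memory roughly 4x,
    which is the point Sturman makes about fitting models in memory or cache.
    """
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    return [max(-127, min(127, round(w / scale))) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized representation."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.004, 0.5]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# Each restored weight is within half a quantization step (scale / 2) of the
# original, which is the accuracy cost paid for the smaller footprint.
```

Production schemes (per-channel scales, asymmetric zero points, quantization-aware training) are more elaborate, but this is the core trade-off.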
A great example of that is where we are with voice moderation. We got pretty quickly, like in a matter of a few short weeks, to the point where we had an LLM that could tell you, this is an inappropriate thing you just said. We now have something that runs fairly efficiently, right? And that is starting to move towards deployment, but we had to take the extra month to get ourselves there. And that's the distilled and quantized model. And these techniques will evolve both inside Roblox and outside of Roblox. The community is very excited about this approach. The reality is you generally don't need the full power of a frontier LLM for any one problem, particularly as they get multimodal. You're probably not using all those types of media at the same time. It's great that they're there and can do interesting things -- in response to a text prompt, you get a picture or maybe a song or whatever it is. But our applications tend to be a little bit more focused. And when that's the case, we'll build models that represent that and do it efficiently.
I guess one other thing I should call out, sorry, that perhaps people do not know: a lot of folks can still use these models by doing inference -- which is the execution of the model in response to a query -- on CPUs, not just GPUs, right? So that's important for us too; it helps us scale. We run our own cloud. We have a lot of CPUs kind of sitting around, depending on where the sun is at a given moment, right? There are parts of the world that are asleep on our edge and parts that are awake. And we're actively looking at that -- we've already deployed some of our first models on a CPU basis into this cloud to start to take advantage of it. So there are cost advantages there. And when you have a team that has experience building out global compute infrastructure, and dialing in to understand hardware and TCO arguments and the trade-off between hardware and bandwidth and so on, this falls very naturally into how we tend to think about making sure we get the most bang for the buck from AI algorithms.
Unidentified Analyst
Probably a question for the NVIDIA holders -- have you got a way to do training on CPUs too? Is that right?
Daniel Sturman
I think there's work going on. I mean, the difference is training tends to be a batch process, not a real-time process. I think there are differences in the architectures. Right now, GPUs are a pretty efficient way to do that. But we're also being careful on how we use GPUs, when and so on. Do we pull them into the data center or not? Like, what's the best way to model all that.
Unidentified Analyst
Related, going back to your question for a second. You're saying that next year is going to be dramatically different. Help us understand, is that difference going to pop more on the professionally developed sort of content side or on that longer tail of, like, novice developers?
Daniel Sturman
Yes. I think it is both, but in different ways. So the most obvious place is what you might call the novice, or what it takes to not be a novice. You saw what I shared with the materials. All of a sudden, something that would have been made by a novice doesn't look like it's produced by a novice, because things look pretty awesome, right? Gameplay may get richer because coding speed goes up; you don't need as big a team to get things done. One advantage of Roblox is you don't need a huge team to do a lot of these things already. But even on the top end, I think about some of the folks in our game fund and their ability and their access to PBR materials and how quickly they can move on that. These were never huge teams. They didn't want to be huge teams. We all know any organization feels less efficient as it grows, right, in terms of people. So they're able to do more with fewer folks. Just before we came down here, I was out there with Frontlines, which is one of the newer experiences. If you haven't checked it out and you want to understand where Roblox is going, check out Frontlines. I'm terrible at first-person shooters, almost as bad as I am at art. But seeing how it plays and what the world looks like and everything is really kind of eye-opening. They're what I'll call a very professional team, and even they will benefit from all these sorts of tools. So I think it hits both, but I think the democratization is going to be the biggest, maybe seismic, change for the platform as a whole.
Unidentified Analyst
I have a question sort of on the competition. You have cited the big data set in your stack. I'm just wondering, how does that help you to utilize generative AI versus how other people can utilize it? Think about Fortnite and how they can utilize generative AI. Can they just copy you or do the same thing as you do? And how do you really differentiate yourself?
Daniel Sturman
Right. So I think there are a few pieces to all of this. So first of all, we've had creators on the platform. That's where our data sets are coming from, and that is certainly a leg up. But I think it's more than that as well. We understand where creators are coming from. We've been working with them a long time. We're not competing with them. We don't produce any content ourselves, so we're not competing with our creators. We've done a lot of thought on what an economy should look like. So I think generative AI is a piece of it. I think we have some really unique data sets. I think we're well positioned to be a leader in things like real 3D object creation. And when I say object, I don't just mean image. I mean a car with wheels that spin and a steering wheel that controls it -- being able to understand that's what a car conceptually is, as opposed to a photo of a car being translated into a 3D image of a car, right, which is an empty mesh with nothing. It's not a full-on object. So I think we're in a great position there. But when you talk about competition, I think it's about accelerating our creators on a platform which already really gets creators. And I think that's as much of it -- the fact that we already get creators and we understand how they think and how they work, and that drives what sort of tools we think they need and so on. That's the whole package of what makes Roblox successful.
Unidentified Analyst
Do you guys sit around -- when your strategy group sits around and talks about some of the things that worry you about this technology and where your own vulnerability is within the ecosystem -- where is that? What is that?
Daniel Sturman
Yes. That's a good question. I think we worry a lot about a lot of things; that's what keeps us sharp. We have a healthy dose of paranoia in the way we think about strategy. Obviously, we started this a while ago, but it was, do we have the talent we need? I think we've done a great job improving that talent. I think some of the things we've done recently, we wouldn't have been able to do even a year ago, but there was a focus on understanding that we need this sort of talent, and we started going and bringing these folks on board. That's going very well. We obviously want more, and there's going to be more of this the company will have to do. From my point of view, running the organization doing the most recruiting in the company, what I think we've really dialed in is things like, how do we differentiate good from great talent? What are we looking for? How do we do interview loops? All that sort of stuff. And I think we've nailed that pretty well. But it's something we talk about -- watch the comp ranges and so on. We've got to make sure we get good talent here.
The second big one is running this at scale. Everyone is super excited. Every team in the company wants to go do something with this tech. Wait a sec. Let's talk about things like running on CPUs, distillation, quantization -- what are the tools and the process so that our company can scale without deploying an amount of compute that's just unrealistic and doesn't deliver value on that basis? And then there's just making sure we are staying in tune with where the technology is going. So I mentioned we started Roblox Research just a few years ago. That was done with a stated purpose -- not specifically for AI, but I said, hey, there's a bunch of open problems we're going to be going after and we need to be well connected to where the state of the art in the world is on that. And I was mentioning to some folks before, it's kind of interesting: 10 years ago, industry didn't really track what academia was doing in computing. I know that, having come out of academia myself. It just wasn't really tracked the way it is, let's say, in medicine, where there's a very distinct pipeline from academic labs into a biotech start-up into a medicine that we can all get at some point. That's changed. So we've built up an organization whose job is in part to make sure we absolutely know what the best results out there are, and then, given we're solving a different problem, how do we cherry-pick from these state-of-the-art techniques and bring them to bear to solve a problem that maybe has never been solved before? And we will have problems that haven't been solved before. We are much more in 3D than others. We're much more in this content creation of 3D than others are. We're doing things with facial expression in that realm. We can't wait for someone else to go solve these problems for us. We basically have to take the lead. So those are the primary things we worry about.
There's also what I call the safety side. Obviously, society as a whole is worried about things like deepfakes, right? And that's a little bit less of a concern for us, because I'm not sure what a deepfake means in the context of an avatar. An avatar is already kind of a fake. It's a persona you put out there to be who you want to be. And I may want to look like you, and I might build an avatar that looks like you, and that is just kind of expected in the virtual world that we run. But we have to keep an eye on it. These sorts of technologies have helped our safety models a lot. Where might they hurt? Where might things get more adversarial? Where might, in a sense, the bad guys get more clever because of this? I think we're doing pretty well at staying out in front. And so far, we haven't seen any indicators of that being a big concern. But it's something we're always going to keep an eye on.
Michael Guthrie
Joe, one thing is the company is relentlessly focused on innovating and staying on the edge of innovation. And a year ago or 18 months ago, we were getting as much pressure as every company out there to cut back, cut heads, do whatever. And we really didn't, right? We chose to continue to innovate, continue to invest, to be consistent through the cycles rather than sort of stopping and starting. And we're also a company that's much smaller -- 2,500 people isn't that many folks. So in some ways, it's not that we're not worried. I think we're always worried. And the default has been to push innovation at the edge as much as we possibly can. We have a very good unit economic business model. At times, it's had a certain margin structure. There were periods of massive margins when the top line grew really fast in COVID. And throughout sort of the last 6 quarters, we've basically run the business at cash flow neutral, basically. But that's always been with an eye towards staying on the innovation treadmill, if you will. And I think that's one of our biggest assets, for better or worse. We believe the minute we step off of that -- in almost any company -- you're inviting obsolescence and new entrants to come in, the classic innovator's dilemma. The ones that stay on the innovation treadmill have the opportunity to stay on it. But that's really as much a mindset as anything. And Dave is just really relentless on recruiting and hiring the best talent to solve more and more problems. This is one of many opportunities over the 20 years of this business' evolution to keep investing and keep advancing the platform; there'll be others. And so I think that mindset is the thing that helps us guard against becoming obsolete and letting somebody else bubble up. But anytime you have this kind of new disruptive technology, it's exciting times.
Unidentified Analyst
Can you maybe give a little bit more of a case study on the cost side, specifically on trust and safety? Maybe give us a sense of the size of the team there today. Obviously, you guys have rolled out some innovation here and more is going to come, but just sort of the level of dent you guys can make on that expense base. And then, even beyond trust and safety, sort of the room that you guys could have on just general productivity gains across your entire base and how that gives you room to do some of the things you're talking about here. Just give us a little bit of...
Daniel Sturman
Yes. So when you say trust and safety, there's the engineers who build our trust and safety systems. And then there is a fairly large community of moderators and customer service agents, and then the team that supports and manages and trains them and all that sort of stuff. I don't expect to see us being smaller on the engineering side. But to Mike's point, that's where the innovation is coming from. We see a lot of opportunity to apply this technology on the moderation side, making moderators more efficient, right? I mean, voice is a really good example. If you just give someone a 2-minute voice clip, they can't be a very efficient moderator. If someone filed an abuse report -- this offended me -- and you can narrow it down automatically to the 8 seconds that was offensive, and bring in context -- and this is why we think it's something we can do, because users' abuse reports aren't necessarily high quality -- we can say, we think there was bullying going on in this conversation. Think how much more effective that moderator is going to be when you do that, right?
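The clip-narrowing step described above is essentially a sliding-window maximum: given per-second abuse scores from a model, surface the highest-scoring 8-second window so a moderator reviews seconds rather than minutes. The scores and window size below are illustrative assumptions, not Roblox's actual system.

```python
def worst_window(scores, window=8):
    """Return (start_second, total_score) of the highest-scoring window.

    `scores` is a per-second abuse score for the clip (e.g. 120 entries for a
    2-minute clip). A running sum makes this O(n) instead of recomputing each
    window from scratch.
    """
    best_start = 0
    best_sum = cur = sum(scores[:window])
    for start in range(1, len(scores) - window + 1):
        # Slide the window right: add the entering second, drop the leaving one.
        cur += scores[start + window - 1] - scores[start - 1]
        if cur > best_sum:
            best_start, best_sum = start, cur
    return best_start, best_sum

# Example: a 2-minute clip, mostly benign, with 8 hot seconds at 0:40-0:48.
clip = [0.1] * 120
clip[40:48] = [0.9] * 8
start, _ = worst_window(clip)  # moderator is shown seconds 40-47 only
```

The moderator then reviews roughly 7% of the clip instead of all of it, which is where the efficiency gain comes from.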
Obviously, on the customer service side, it's not just that — everyone is going to get benefits from this there as well. So I'm not going to comment on how small we think it will get, because there's this weird dynamic where there's more going on and they're getting more efficient, and those curves cross. But overall, I think we see a lot of line of sight on both moderation quality and efficiency down the road, at least on a cost-to-serve basis — a per-user-hour basis, let's say, right? It kind of has to. If we go to a world where every user is a creator, I can tell you, I'm not launching that feature until I have a moderation story that works, and using the same moderation techniques we're using today will not work there. It's going to be automated, right? It's going to have to be. And the moderator's role will change to handle either the most egregious cases or the cases where these neural networks aren't sure. They don't give you yes-and-no answers; they give you probabilities. So maybe in a certain probability range, a human gets involved or something like that, where we want someone to look at it. Humans aren't foolproof either, as we've learned, right? So — but there's just a huge amount of opportunity, and it comes down to, I think, understanding the context.
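The routing Sturman describes — a model emits a probability rather than a yes/no answer, the confident extremes are automated, and the uncertain middle band is escalated to a human — can be sketched roughly as below. The threshold values, function name, and labels are illustrative assumptions for this sketch, not Roblox's actual system:

```python
# Illustrative sketch of probability-band routing for moderation decisions.
# All thresholds and labels here are hypothetical, not Roblox's real values.

def route_moderation(p_violation: float,
                     auto_allow_below: float = 0.10,
                     auto_block_above: float = 0.95) -> str:
    """Route a content item based on a model's violation probability.

    Only the confident extremes are fully automated; the uncertain
    middle band goes to a human moderator for review.
    """
    if not 0.0 <= p_violation <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if p_violation < auto_allow_below:
        return "auto_allow"
    if p_violation > auto_block_above:
        return "auto_block"
    return "human_review"
```

Widening or narrowing the middle band is then the dial between moderation cost (more human reviews) and risk tolerance (more automated calls), which is the efficiency trade-off discussed above.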
There's also an interesting ability for us to change our trust and safety rules more dynamically. We ran some experiments just using a standard LLM conversational interface, conversationally building a set of moderation rules with it around text, just as an example, and seeing how well it was able to learn them. As we move to different communities, trust and safety standards may be different. How you moderate for an under-13 community in the United States is going to be different from how you might moderate a 25-plus community in, I don't know, pick a country — Israel, right, or something like that. Those are going to be different. The ability to take policy and translate it kind of automatically into implementation is extremely attractive as well. It might be something that allows us to be much more dynamic and have maybe an order of magnitude or two more distinct communities than we thought we would as we started thinking about differentiation between these groups.
Michael Guthrie
And all of our moderation costs are embedded in the infrastructure and trust and safety line item on our P&L. I think that's been running at about 17%, 18% of bookings, and the infra piece of that is the bigger of the 2. So the savings here will be on the smaller number, but it will be meaningful cost savings. Whether we reinvest that or what we do with those savings is to be determined.
Unidentified Analyst
Is that something that switches on overnight, or how long is the development and improvement process for [indiscernible]?
Daniel Sturman
It's gradual.
Unidentified Analyst
There are 2 use cases for AI that feel commercially important. One of them you talked about in the shareholder letter, I think at the beginning of '22: discovery and personalization. So can we talk about where we are on the journey on discovery — just getting the right experience in front of the right user at the right time — which feels important to get them to spend, or at the very least to spend time? And then I'll follow up on personalization.
Daniel Sturman
Yes. I think it's a somewhat different space than some of these others, because getting discovery right is a little bit more of a slog. The techniques are well known; it's a matter of applying them better and better and getting better signals. I think we've made some incredible progress just in the past 6 months — discovery is getting better, particularly when we think about aging up, and the sort of signals we're getting from folks there. Cold start has gotten a lot better. That's always a hard one: you have no data on the individual except what they tell you when they join, and you're trying to give them reasonable recommendations. But as you make friends, we can really bring in more social signals, for example — that sort of stuff. I think that's going well, but I don't think there's a quick seismic event around personalization and discovery. It's an integral part of what we're doing as we imagine more and more bespoke experiences and being able to support that.
Unidentified Analyst
So on the personalization front — and this is probably going years into the future — I'm curious. I can imagine playing Frontline, and it doesn't seem too farfetched that if Roblox knows a lot about me and the cosmetics that I bought in the past, it could, in fact, not only get the right offer in front of me but perhaps generate the cosmetic that would appeal to me while I'm playing that game, using some of these generative AI tools combined with the data that the platform has on me and my interests. I guess, how far away are we from tools like that, or anything else you would add that could drive monetization through better personalization or discovery?
Daniel Sturman
Yes. No, I think that's a very good question. I can't say an exact date when we'll have that. But I think it also comes back to how we enable our creators to get access to a set of tools — maybe first party from us, maybe third party — that help them understand that, in a way where we're being safe and careful with the data behind it. The case you described, I think, would absolutely be a tool that a creator decides to bring into their experience to help them understand what someone might want to purchase and to make a better recommendation from a purchase point of view. That will also come as we start to enable — and we've had examples of this — bringing the marketplace in-experience as well. That's something that's been done with a lot of the brands work we've done and definitely all the music work we've done. You can go to a music concert on Roblox, and that was something you couldn't have done even, I guess, right before COVID — it launched right during COVID. It was our first ability to do that. So I think there are some real opportunities to really personalize the experience for someone and to pick up signals about what they're interested in based on other behaviors they've had on the platform. I think there's also an interesting opportunity there. One of the unique challenges for us on personalization is that we get fewer data points than, say, a web search, right? Because you're in an immersive experience, you hop between experiences less. But we're starting to look at what signals we can pick up from the way you are in an experience — how you behave in an experience — that can inform us: you may be in an experience, but what are you really enjoying doing in that experience? Can we pick up on that? Can we understand it and take that back into personalization? So we get more signals per hour, so to speak, than we typically do if we just say, these are the things you've played in the past month, right?
And we see some real opportunity to start to get those kinds of micro signals.
Unidentified Analyst
Just to go back to the question on discovery. One possible conclusion is that the determinant of success for developers is less about their ability to create really great content and more about distribution, right — how they get their experience in front of the right people and effectively monetize it. So can you just share more about how you're solving that problem? For example, as every user becomes a creator, distribution maybe...
Daniel Sturman
It can, a little bit. There's only so much screen real estate, right? So I do think there are 2 areas that tie in — and I should have mentioned one of these in the prior question, but it gives me an opportunity to hit on it now. One is really thinking about recommendations as much more personalized — much, much smaller slices — so your results will look very different from mine and so on. I think that's how we get around the fact that an experience may not get as much broad-based exposure, but it will get to the right folks, and it will vary from person to person. The other one, beyond recommendations, is search, which is an integral part of this. We've made some advances — and this is using some of the technology that's starting to emerge now — toward much more semantic types of search. So for example, we have an experience called The OK Corral; when someone searches for "western," we understand that's what it is without the creator having to say, oh, this is a western, and we're able to bring that up as an experience they might be interested in. We've made some really positive advances there, and we'll continue to work on search, which is another way people find what they're excited about. And I think the third thing that matters is the social network, and we're really working on enriching the social network. What are your friends doing? What are they up to? Because generally on Roblox, people want to do these experiences together, not by themselves, and understanding that your friends may be in an experience right now, or something like that, can really drive a lot of engagement — and we can feed that into the recommendation system with much heavier weight.
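The semantic search idea here — matching a query like "western" to The OK Corral even though its title never says "western" — is typically done by mapping both queries and items to vectors and ranking by similarity. A toy sketch, where the tiny hand-made 3-d vectors stand in for real learned embeddings (the dimensions and catalog entries other than The OK Corral are invented for illustration):

```python
# Toy sketch of embedding-based semantic search. Real systems use learned
# embeddings with hundreds of dimensions; these hand-made 3-d vectors
# (western-ness, racing-ness, social-ness) are purely illustrative.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical catalog: experience name -> "embedding"
experiences = {
    "The OK Corral":  [0.9, 0.1, 0.3],
    "Speed Run Race": [0.0, 0.9, 0.2],
    "Hangout Plaza":  [0.1, 0.1, 0.9],
}

def search(query_vec, catalog):
    """Return the catalog item whose embedding is closest to the query."""
    return max(catalog, key=lambda name: cosine(query_vec, catalog[name]))

query_western = [1.0, 0.0, 0.2]  # stand-in embedding for the query "western"
best = search(query_western, experiences)
```

Because matching happens in embedding space rather than on keywords, the creator never has to tag the experience with the word the user typed — which is the point Sturman makes about The OK Corral.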
Unidentified Analyst
Okay. And you spoke earlier about sort of your North Star, your Hobbit story. Can you share that vision with this group? And then, just technologically, what are the hardest parts in doing what you want to do there, so we can understand what has to happen for that to happen?
Daniel Sturman
Yes. So this is that demo I showed at the end, on steroids. What I've challenged the team with is: I want to be able to take the first chapter of The Hobbit — which is mostly descriptive of what the Shire looks like, if you've read it — drop that into Roblox, and have an entire world pop up, not just with terrain that looks like the Shire but with Hobbits wandering around. And they probably have chickens and goats, because you can pick up that it's a rural society, and they probably have some carts, and stuff is just happening automatically. And it's all been built in code, and it's live, right? There's a lot that has to happen to get us there. I think the first step is going to be the terrain building, which I think is not quite line of sight but something that's not that far away. How do you start to understand what a Hobbit is, and create that, and animate it, and give it realistic behavior? That could feel like it's far away, but you can start to see some of these techniques where generative systems will recognize patterns and may pull from even just a set of scripts that they know are available to start to automate these things. Eventually, they'll be generating those scripts themselves. So I don't have the exact timeline. What I do know about this space is that everything has been happening faster than we thought it could. Like the pace at which that demo at the end was put together — and more so the fact that it interacted back, was asking questions back, and kind of understood the capabilities behind Roblox Studio without a lot of programming behind that. I think this stuff is going to happen a lot faster than we all expect, and we'll be able to do that sort of scenario. And I think when that opens up, it can then open up how authors engage with the platform — creators who probably haven't approached the platform before, maybe they can start to approach it in a different way.
Unidentified Analyst
As you go down that path, what are the constraints you see — the amount of images that are out there, the assets behind the images?
Daniel Sturman
I think there's a lot of core science. That's the constraint. I don't think we've developed the machine learning techniques that can do that yet, right? They're going to have to get better. And look, capital is always a constraint in the sense that these problems are easier to solve with more compute horsepower than with less, right? But even given infinite compute horsepower, we probably couldn't solve that problem today, right? So I think there is science that has to happen in order to get us there. But the scientific community, both in industry and academia, is so engaged — and we're not the only company ramping this up and being more aggressive in how we think about it — that I'm optimistic it's going to get there. Yes. At the end of the day, this all comes down to the trailer [ph].
Unidentified Analyst
So one thing is, everything we've talked about is something tangible that we can see today. But sometimes with technology, the opportunity is in imagining what hasn't been done yet. So think about 10 years out — what's the vision there?
Daniel Sturman
I'm not making 10-year bets at this point because everything is changing so quickly. Even 5 years is very, very hard; the space is moving so fast. I mean, I would not have imagined where we are now 5 years ago, right? I saw some of the things that were leading up to it. Some aspects I saw: 5 years ago, I was looking at what it takes to do a better job of summarizing documents — just a side hobby project of mine. It was hard, it wasn't very good, and the techniques weren't really there. And now they're absolutely there. Take any large document, drop it into ChatGPT, and it can summarize it for you — and it does a very good job of that summary, right? So I'm not going to try to predict 10 years out. But I will say what's exciting about this: I kind of view generative techniques as on par with giving automated machinery to farmers. Before, all a farmer had was an ox and a plough it could drag behind it; then you get a tractor — think about what that's been able to do to make food production much more productive, how you can approach it, and the economics of the whole thing. I view this technology as the first real lift for creators. I think we all know — and all of you probably know more economics than I do — the argument that the internet didn't really raise productivity. I think this one is absolutely going to raise productivity. I think it's a different sort of technology, and I'm really excited about where that can take us. I mean, we're in a creative business, so I'm really lucky. I wish I could say I saw this coming and joined Roblox so I could revolutionize it. I joined Roblox because, in general, it's a very innovative company and I was excited to join it. But in no way did I know generative AI was coming, and this focus on creators is exactly who we are. I feel like we're ground zero for the acceleration of the creator — not just on Roblox but kind of worldwide.
Unidentified Analyst
You talked about the democratization of creation. So what is the point of differentiation in your mind?
Daniel Sturman
I think that's what it's going to come down to, right? I don't know if any of you ever go to art galleries or art museums. Sometimes it's the technique of the artist, but often — more often than not with modern art — it's the thought they had, right? We've all heard people criticize modern art — "my 5-year-old could have done that" — but not really, because there's the message and the genius behind what they're doing. And I think that's what we're distilling toward: what's going to differentiate content is going to be the genius, versus what I'll call the artificial barrier of scale.
Unidentified Analyst
Aside from content creation, can you use the technology to help developers better monetize their experiences?
Daniel Sturman
I expect we'll find a way. Those ideas — I think we touched on a little bit of this before, but they're not popping into my head directly, except that I know we can use it to build smoother economic systems. One thing we try to do at Roblox on the economy is: how do we build the simplest system that leads to complex emergent behavior, right? Which is kind of how our real-world economy works — kind of what keeps all of you in business, actually. So I think there's a real opportunity there. That said, as ML gets better, the ability for creators to more easily understand who their demographic is, their behavior, what they're looking for — to be more in touch — will definitely lead to better monetization. And that's not new at all with these generative techniques, but think about the power behind these things; that technology too will get better.
Michael Guthrie
I don't have any question that we will be able to do that. Conversion rates will go up. Yes. The technology will enable creators to figure out what's the most engaging thing for the user and how they convert them. There's no doubt in my mind.
Unidentified Analyst
In the past, you've talked about being a Rule of 40 company many times. It sounds like there's a benefit on the top line side and also potential on the bottom line, on the cost side. Do these generative AI tools, at the end of the day, make you more comfortable in that vision longer term?
Daniel Sturman
Well, the fastest way to get to being a Rule of 40 company is to grow very fast. Obviously, the top line absorbs costs, and so you're just better off in general. So given that our business is entirely based on creators making great content that's appealing to users, this makes me, yes, much more optimistic about our ability to do that on a sustained basis, for sure.
Unidentified Analyst
Yes. And then one quick one, financially. In the past few years in 3Q — it seemed like last year, back-to-school inflected. Was there something in particular that drove that, given 3Q was a lot stronger sequentially than we've seen in the past?
Daniel Sturman
September inflected in what way?
Unidentified Analyst
3Q as a whole was up like 9% sequentially on bookings, versus the prior few years before that, when we didn't see that seasonality. So I'm trying to understand.
Daniel Sturman
You think September was 9% ahead of August?
Unidentified Analyst
Back-to-school embedded within all of 3Q.
Daniel Sturman
Okay. I'd have to go back and look at the numbers. I don't remember an inflection at back-to-school, but it could be a whole host of reasons. It could be the comparison to '21. Our Q3 usually has a very strong July and August for obvious reasons, and then in September things slow down again. I don't see any change in that. Whether the exact sequential percentages or year-over-year percentages are changing at all, I'm not sure — I'd have to go back and look at last year to get to your question.
Unidentified Analyst
Yes. Historically, one of the advantages of Roblox has been that Luau is one of the easier languages to create in. But with the development of these generative tools, it becomes easy to create in any environment — it's no longer necessarily easier to create on Roblox. It seems like that could be a competitive barrier going away. How do you think about that?
Daniel Sturman
Yes. So look, you're right, Luau has a few properties. One, it's easy to get started with. It also has enough features that you can be sophisticated in it. And it also has a really compact runtime, which is key for us right now across a wide range of devices. But if you look at the game demo from before, I do think we're going to be in a world where people will want to go back and fine-tune their code, no matter what's generated, and tweak it. So if you go to a world where you're not writing the code but reading it more often, having a language that is easy to understand actually accelerates that. I'm not sure if you've ever coded for a living, but it's very hard sometimes to understand code that you didn't write. It's like grading papers sometimes: this works, but I don't know how the student got here — you're just trying to understand what they did. And I think having a language that is straightforward and pretty easy to understand helps with that process. The AI generated this code, I want to go tweak this one spot — how quickly can I understand what happened here so I can go tweak it? You're right, at some point it's possible programming languages just completely drop out of the picture. I still think we'll then benefit from the fact that we have, for example, a very compact runtime that we can run on any device and easily port to new devices and all that sort of stuff. But I think we're a ways away from the code dropping out altogether. I'm viewing it, again, more like [indiscernible].
Michael Guthrie
Also, we have a developer community that is fairly large and has been built over a really long period of time. There's an audience on Roblox that they're building for, and now we're adding these tools and capabilities to their tool set. So it's not as if we're staying static and all of a sudden you can create a creator community out of thin air, right? Our creator community is large and long tail, and now we're going to have more capabilities. We're not standing still, right? We're going to continue to advance that creator community, and we'll have more and more creators who can build on our platform. Our platform has a pretty long track record of growing the user base and aging up across the world. The community has seen its earnings grow at really high rates over a long period of time. So adding this on top takes our community to a different place. We're not going to sit around and be static while other people try to use this dislocation in technology to create their own community. That's great — it's evidence of a great model — but our model is not standing still.
Unidentified Analyst
Last week, you guys made some announcements at Cannes — enhancements to the immersive ads platform. Can you elaborate on some of those changes? And then, maybe thinking further down the line, how could gen AI tie into the ad platform — have you thought about the implications there?
Daniel Sturman
I was just going to say, first of all, the thing to remember about ads on Roblox is that the focus is around things like brand experiences, and to have a brand experience, you have to create it. So everything we've been saying about creation, first and foremost, I think absolutely applies to that domain — particularly if you argue that brand experiences may change more often. They may be less persistent; I don't know this yet, we're still trying to figure out what brands want. But the classic one is: oh, I'm going to create a Super Bowl experience, right? A Super Bowl experience is not very relevant in July to the typical user; it's relevant when the Super Bowl is happening. And I think for brands and so on, being able to create experiences as fast as you might create an ad campaign seems pretty exciting and compelling. Mike, I'm not sure if you want to add to that.
Michael Guthrie
About 100 brands built persistent experiences last year. A barrier to there being hundreds more is the ability to build experiences, so anything that reduces that hurdle helps. Some of that can get done inside the brands themselves, some of that can be done within their ad agencies, and some of that can be done with developers on our platform who are working for hire with some of these brands. So all of that speaks to faster creation and the ability for brands to leverage the platform more quickly. So 100 last year is great; this year will be more than 100. We're moving in a healthy direction, in general, with brands. This is about brands and agencies and developers partnering with us and saying: we're in on this platform, we're testing the platform, we're really excited about what it can do for advertising and for creating experiences where we see enormous amounts of engagement for brands. So that's really what this was all about. And there were some real commitments from people to work on Roblox. So it's an exciting start to what we've been trying to do.
Unidentified Analyst
Can you comment a little bit on the engagement — the fill rate — of the 100 branded experiences that exist today, sort of where that sits now versus, call it, 4 or 5 months ago, and how we should think about it?
Daniel Sturman
It's just starting. So, yes — I don't have any data to share with you today.
Unidentified Analyst
Of all the things you showed today in the presentation, what changed the most versus what you would have shown in November of last year?
Daniel Sturman
Well, I mean, a lot of it — everything I showed. The only thing that we knew we had last year was the facial expression capture, and that was something we'd actually been working on for a while; it came out of an acquisition we did a few years ago, Loom.ai. Everything else — a lot of it, I don't think we'd even been trying to do yet. So, yes.
Matthew Cost
I think that's everything in the room. So maybe we'll wrap the webcast there. Dan and Mike, thank you so much for being with us.
Daniel Sturman
Thanks for having us.
Matthew Cost
Thank you.