Celebrating Women in AI: Q&A with Leading Character.AI Research Engineer Julia Reisler

At Character.AI, we believe in championing the people who power our technology. In honor of Women’s History Month, we caught up with one of our leading research engineers, Julia Reisler, whose work bridges product intuition, applied research, and cutting-edge ML techniques. From training efficient models to building playful, immersive games, Julia’s career experience illustrates how technical depth and creative ambition go hand-in-hand.

How did you first become interested in AI? 

I first learned about AI when I was a student at Caltech, where I studied computer science and built a strong theoretical foundation. After graduation, I worked as a machine learning engineer at Apple, and ultimately realized I wanted to gain a deeper understanding of the theory behind machine learning and more exposure to the startup ecosystem. So, I went back to school and earned a master's degree from Stanford. During my time at Stanford, I was exposed to the world of startups and developed a new perspective on product thinking, which made me very interested in how research could be shaped by what users want.

What does your research at Character.AI focus on? 

My research focuses on applied AI, because I care a lot about bridging the gap between technical development and user needs. I've recently been interested in what makes entertainment compelling: when to shift the pacing, the balance of showing vs. telling, and what feels original or captivating. I think this understanding can directly shape how we evaluate and train models.

One of the most exciting things about working at Character is that we're not trying to build models that just solve math problems or write perfect code. We're building models that entertain and help explore creativity. That goal makes our research unique: we optimize for responses that are safe, engaging, and coherent.

How did games become part of your work at Character.AI?

One of the things I love most about working at Character is the freedom to pursue ideas you’re passionate about, even if they fall outside your formal role. I believe AI will play a central role in the future of gaming, both as characters you play with and as mechanics that drive or enhance gameplay. With that in mind, I stepped outside my usual scope earlier this year to teach myself React and lead a project focused on bringing games to Character.AI. We started with simple text-based games, which are now available to CAI+ users, and we’re already expanding into more immersive RPG-style experiences. As we continue to roll out multimodal features that incorporate audio, visuals, and eventually video, the possibilities get even more exciting.

From a technical standpoint, one of our biggest challenges was designing a flexible game infrastructure. We eventually settled on a state machine-based architecture, which made it easier to scale across different game types. Another key decision was figuring out when to use an LLM as a judge. In Speakeasy, for example, we opted for simpler techniques to keep responses fast and accurate. But in games like Match Me If You Can and War of Words, where humor and subjectivity add to the experience, we leaned into LLM judges to provide more personality and flair.
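To make the state-machine idea concrete, here is a minimal sketch of how a text game's turn flow could be modeled as explicit states with declared transitions. All names here are illustrative assumptions, not Character.AI's actual implementation (their production system is in React/TypeScript; Python is used here just for clarity).

```python
# Minimal sketch: a text-game turn loop as a state machine.
# States and transitions are hypothetical, for illustration only.
from enum import Enum, auto

class GameState(Enum):
    INTRO = auto()        # scene-setting message
    PLAYER_TURN = auto()  # waiting for player input
    JUDGE = auto()        # scoring the player's move (rules or LLM judge)
    GAME_OVER = auto()

# Declaring legal transitions per state makes it easy to describe
# different game types with the same machinery.
TRANSITIONS = {
    GameState.INTRO: {GameState.PLAYER_TURN},
    GameState.PLAYER_TURN: {GameState.JUDGE, GameState.GAME_OVER},
    GameState.JUDGE: {GameState.PLAYER_TURN, GameState.GAME_OVER},
    GameState.GAME_OVER: set(),
}

class GameMachine:
    def __init__(self):
        self.state = GameState.INTRO

    def advance(self, next_state: GameState) -> None:
        if next_state not in TRANSITIONS[self.state]:
            raise ValueError(f"Illegal transition {self.state} -> {next_state}")
        self.state = next_state
```

A design like this scales across game types because each new game only has to supply its own state set and transition table, while the surrounding loop stays the same.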

How has your work with distillation and reinforcement learning helped produce smaller, faster models?

In my role as a researcher, I worked on distilling a smaller model that was roughly 3x more efficient than one of our larger ones. The goal was to reduce inference costs while maintaining strong performance. We explored a range of techniques: mid-training checkpoints, different data recipes, weighting strategies, and most effectively, distillation from a larger, high-performing model. While the smaller model doesn’t completely close the performance gap, it was strong enough to launch as a chat style—and people are using it!
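For readers unfamiliar with distillation: the core idea is to train the small student model to match the larger teacher's softened output distribution rather than just the hard labels. The sketch below shows that generic formulation (Hinton-style knowledge distillation with a temperature); it is a textbook illustration, not Character.AI's training code.

```python
# Generic knowledge-distillation loss: KL divergence between the
# teacher's and student's temperature-softened output distributions.
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions."""
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

A higher temperature spreads probability mass across more tokens, so the student learns from the teacher's full ranking of plausible outputs instead of only its top choice.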

Reinforcement learning also plays a key role in our pipeline. While distillation happens during the supervised fine-tuning phase, reinforcement learning comes after and helps align the model to human preferences. A lot of our work focuses on optimizing data filtering between these stages, but we're also starting to experiment with new algorithmic approaches, like GRPO vs. DPO.
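Of the two algorithms mentioned, DPO is the simpler to write down: it aligns the model to human preferences directly from chosen/rejected response pairs, with no separate reward model. The sketch below is the standard published DPO objective, shown for illustration; the inputs are summed token log-probabilities, and this is not Character.AI's pipeline code.

```python
# Schematic of the standard DPO loss on a single preference pair.
# Inputs are total log-probabilities of each response under the
# policy being trained and under a frozen reference model.
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid of the margin: the policy is pushed to
    # prefer the chosen response more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

GRPO, by contrast, is an online RL method that scores groups of sampled responses with a reward signal, so the choice between the two is partly a choice between offline preference data and online generation.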

What advice would you give to women who want to work in AI? 

My biggest piece of advice is to find good mentors and sponsors: people who will advocate for you, challenge you, and help you grow. It makes a huge difference when you know someone in the room is asking you tough questions because they genuinely want you to succeed.

It’s also important to stay curious, not just about what’s new, but about how we got here. Even though the field is evolving quickly, a lot of progress comes from understanding the foundations and reevaluating older ideas in a new light. Many breakthroughs in machine learning come from revisiting past research that has become more tractable with modern tools and computing. Gradient descent, for instance, dates back to the 19th century, but remains at the heart of how we train modern neural networks today. Staying curious sometimes means looking forward, but just as often, it means looking back with fresh eyes. 

Last, don’t be afraid to put yourself out there and cast a wide net when looking for a role. Even if a job opportunity doesn’t seem like a perfect fit on paper, reach out to hiring managers, ask for quick chats, and show genuine curiosity about what different teams are working on. A lot of doors open just by expressing interest and taking initiative.