About the Role
Joining us as a Research Engineer on the Multimodal team, you'll be at the forefront of building and advancing the video and image generation models that bring AI characters to life in entirely new ways. Your work will directly shape how millions of users experience rich, expressive, and visually compelling AI interactions every day.

The Multimodal team is responsible for training, fine-tuning, and deploying cutting-edge image, audio, and video generation models that power Character.AI's visual experiences. We work across the full model lifecycle, from data pipelines and training through deployment and product integration.

As a Multimodal Research Engineer, you will own and advance our video model training efforts, including joint audio-visual generation and image-to-video. You will collaborate across research, product, and infrastructure to push the boundaries of what AI-generated visuals can look and feel like at scale.
Responsibilities
- Lead fine-tuning and continued training of video generation models, including image-to-video and joint audio-visual generation.
- Design and experiment with novel architectures for multimodal generation, including conditioning on voice, structured text, and reference images.
- Leverage techniques such as LoRA, RLHF, and full-parameter fine-tuning to improve model quality across diverse visual scenarios.
- Design and build large-scale data pipelines and automated annotation workflows to support continuous model improvement.
- Explore model compression, inference acceleration, and serving optimizations to enable efficient real-time video processing at scale.
Requirements
- Strong passion for pushing the boundaries of visual AI, with a self-driven, hands-on approach to solving complex technical problems
- Proficient in PyTorch with end-to-end experience across data processing, model training, and deployment
- Solid understanding of video and image generation architectures, including diffusion models, DiT, ControlNet, and state-of-the-art video generation models
- Experience with multimodal model training, including working with audio, vision, and language modalities together
- Experience with distributed training tools (FSDP, DeepSpeed, etc.)
- Experience with large-scale data processing, dataset construction, and automated data cleaning
Preferred Qualifications
- Experience with joint audio-visual or speech-conditioned generation models
- Experience with AIGC, video effects, character animation, or asset generation products
- Familiarity with ML deployment and orchestration (Kubernetes, Slurm, Docker, cloud platforms)
- Publications in relevant venues (NeurIPS, ICLR, CVPR, ECCV, ICCV, or similar)
Benefits
- $225K–$400K salary range, plus equity
- Top-notch health coverage for you & your family, with the majority of the premium covered
- We invest in your future with a generous 401(k) contribution
- New parents, we've got you covered with incredible paid leave, up to 20 weeks
- 4 weeks of PTO to explore, unwind & come back recharged
- Daily in-office catering plus a monthly DoorDash stipend to help keep you fueled no matter where you are
- Monthly wellness stipend to support you in your health journey