About the Role
At Anyscale, we're on a mission to democratize distributed computing and make it accessible to software developers of all skill levels. We're commercializing Ray, a popular open-source project that's creating an ecosystem of libraries for scalable machine learning. Companies like OpenAI, Uber, Spotify, and many more use Ray in their tech stacks to accelerate bringing AI applications into the real world. With Anyscale, we're building the best place to run Ray, so that any developer or data scientist can scale an ML application from their laptop to a cluster without needing to be a distributed systems expert. We're proud to be backed by Andreessen Horowitz, NEA, and Addition, with more than $250 million raised to date.

Anyscale is actively seeking talented engineers to join our team and contribute to the development of next-generation, high-performance machine learning serving systems. We value diversity and inclusion, and we encourage individuals from underrepresented groups to apply.

Many existing ML serving tools were inherited from previous infrastructure generations, but emerging ML applications present new requirements: high compute demands, specialized hardware, and the integration of multiple models and business logic within a single request. At Anyscale, our mission is to provide a powerful yet simple set of tools that enable the seamless deployment of complex ML applications in production.

The Challenge

What if you could build the infrastructure that powers AI applications for millions of users worldwide? Ray Serve is the production-grade serving framework that makes this possible, and we need exceptional engineers to push its boundaries. You'll be working on problems that sit at the intersection of distributed systems, machine learning, and high-performance computing. This isn't about maintaining CRUD apps or tweaking configurations; it's about solving fundamental computer science problems that directly impact how the world deploys AI.
Responsibilities
- Sub-millisecond Model Routing: Design and implement intelligent request routing systems that dynamically balance load across thousands of model replicas while maintaining strict latency SLAs
- Zero-Downtime Model Updates: Build sophisticated traffic management systems that seamlessly shift traffic between model versions at scale, handling terabytes of inference traffic without dropping a single query
- Autoscaling at Scale: Create reactive systems that predict traffic patterns and scale model replicas from 1 to 10,000+ instances based on real-time demand signals
- Multi-Model Orchestration: Architect frameworks for complex ML pipelines where dozens of models need to communicate, share resources, and maintain end-to-end latency guarantees
- Observability & Debugging: Build deep introspection tools that make it trivial to debug distributed ML applications—because "works on my laptop" doesn't cut it at scale
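To give a flavor of the routing problems above (this is an illustrative sketch, not Ray Serve's actual implementation), here is a minimal "power-of-two-choices" request router in Python. The `Replica` class and the in-flight-request metric are hypothetical stand-ins for real replica state:

```python
import random

class Replica:
    """Illustrative stand-in for a model replica; tracks in-flight requests."""
    def __init__(self, name):
        self.name = name
        self.inflight = 0  # requests currently being processed

def pick_replica(replicas, rng=random):
    """Power-of-two-choices routing: sample two replicas at random and send
    the request to the one with fewer in-flight requests. This keeps queue
    lengths far more balanced than a single random choice, at O(1) cost
    per request."""
    a, b = rng.sample(replicas, 2)
    return a if a.inflight <= b.inflight else b

# Simulate routing 1000 requests across 8 replicas.
replicas = [Replica(f"replica-{i}") for i in range(8)]
for _ in range(1000):
    chosen = pick_replica(replicas)
    chosen.inflight += 1  # a real router would decrement on completion

loads = sorted(r.inflight for r in replicas)
print(loads)  # load spread stays small relative to 1000 requests
```

A production router layers much more on top of this core idea: completion tracking, latency-aware weighting, locality, and backpressure, all under strict tail-latency budgets.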
Requirements
- Strong Systems Fundamentals: You understand operating systems, networking, concurrency, and distributed systems at a deep level
- Production Experience: You've built and maintained systems that serve real users at scale
- Code Quality: You write clean, tested, well-documented code that other engineers love to work with
- Ownership Mindset: You take responsibility for your code in production—from design to deployment to incident response
Qualifications
- Experience with distributed systems frameworks (e.g., gRPC, Ray)
- Background in ML/AI systems or serving infrastructure
- Contributions to major open source projects
- Experience with performance optimization and profiling
- Knowledge of cloud-native technologies (Kubernetes, Istio, etc.)