About the Role
- Together AI is looking for an ML Engineer who will develop the systems and APIs that enable our customers to perform inference and fine-tune LLMs. Relevant experience includes implementing runtime systems that perform inference at scale using AI/ML models, from simple models up to the largest LLMs.
- Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers and engineers on our journey to build the next generation of AI infrastructure.
Responsibilities
- Design and build the production systems that power the Together Cloud inference and fine-tuning APIs, enabling reliability and performance at scale
- Partner with researchers, engineers, product managers, and designers to bring new features and research capabilities to the world
- Analyze and improve efficiency, scalability, and stability of various system resources
- Conduct design and code reviews
- Create services, tools, and developer documentation
- Create testing frameworks for robustness and fault-tolerance
- Participate in an on-call rotation to respond to critical incidents as needed
Requirements
- 5+ years of experience writing high-performance, well-tested, production-quality code
- Bachelor’s degree in computer science or equivalent industry experience
- Familiarity with the LLM inference ecosystem, including frameworks and engines (e.g., vLLM, SGLang, TRT, ...)
- Demonstrated experience building large-scale, fault-tolerant, distributed systems such as storage, search, and computation
- Expert-level programmer in one or more of Python, Go, Rust, or C/C++
- Experience implementing runtime inference services at scale, or comparable systems experience
Benefits
- Competitive compensation
- Startup equity
- Health insurance
- Other competitive benefits