About the Role
Our mission is to scale intelligence to serve humanity. We're training and deploying frontier models for developers and enterprises building AI systems that power magical experiences like content generation, semantic search, RAG, and agents. We believe our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each of us is responsible for increasing the capabilities of our models and the value they drive for our customers, and we work hard and move fast to do what's best for them.

Cohere is a team of researchers, engineers, designers, and more who are passionate about their craft. Each person is one of the best in the world at what they do, and we believe a diverse range of perspectives is a requirement for building great products. Join us on our mission and shape the future!

This role offers the opportunity to ship state-of-the-art models to production, design and implement novel research ideas, and build elegant training and deployment pipelines. Interns will join at a pivotal moment, shaping what Cohere builds and wearing multiple hats. Cohere's recruitment process involves careful review of applications and assessment of candidates for our internships.
Responsibilities
- Design, train, and improve upon cutting-edge models.
- Help us develop new techniques to train and serve models more safely, reliably, and efficiently.
- Train extremely large-scale models on massive datasets.
- Explore continual and active learning strategies for streaming data.
- Learn from experienced senior machine learning technical staff.
- Work closely with product teams to develop solutions.
Requirements
- Currently enrolled in a post-secondary program.
- Available for a full-time 3-6 month internship, co-op, or research work term.
- Proficiency in Python and related ML frameworks such as TensorFlow, TF-Serving, JAX, and XLA/MLIR.
- Experience using large-scale distributed training strategies.
- Familiarity with autoregressive sequence models, such as Transformers.
- Strong communication and problem-solving skills.
- A demonstrated passion for applied NLP models and products.
Preferred Qualifications
- Experience writing kernels for GPUs using CUDA.
- Experience training on TPUs.
- Papers at top-tier venues (such as NeurIPS, ICML, ICLR, AISTATS, MLSys, JMLR, AAAI, Nature, COLING, ACL, EMNLP).
Benefits
- An open and inclusive culture and work environment
- Work closely with a team on the cutting edge of AI research
- Weekly lunch stipend, in-office lunches & snacks
- Full health and dental benefits, including a separate budget to take care of your mental health
- 100% Parental Leave top-up for up to 6 months
- Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
- Remote-flexible, with offices in Toronto, New York, San Francisco, London, and Paris, plus a co-working stipend
- 6 weeks of vacation (30 working days!)