About the Role
- Build ML serving infrastructure at NVIDIA.
**You'll:**
- Develop Triton Inference Server
- Optimize model deployment
- Work with customers
Qualifications
- 3+ years MLOps
- Kubernetes expertise
- Model serving experience
This role is listed as a remote position.
The listed salary range for this position is $160,000 - $280,000. Final compensation may vary based on experience, qualifications, and location.
This position was posted about 1 month ago.