About the Role
This role involves building and deploying robust, secure, and performant ML systems through adversarial machine learning techniques. The engineer will design, develop, and test algorithms that defend against adversarial attacks and strengthen the resilience of ML models in high-stakes environments.
Responsibilities
- Design and implement adversarial training techniques to improve model robustness.
- Develop and deploy ML models that are resilient to adversarial attacks.
- Evaluate model vulnerabilities using red-teaming and adversarial attack generation.
- Collaborate with cross-functional teams to integrate secure ML practices into product development.
- Stay current with the latest research in adversarial ML and defense mechanisms.
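For candidates new to the area, the adversarial training workflow named in the first responsibility can be sketched with a toy example: generate a worst-case perturbation of an input and then train on the perturbed input. The sketch below uses the fast gradient sign method (FGSM) on a logistic-regression model; the model, weights, and epsilon are hypothetical illustrative choices, not part of the role description:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step: move x in the direction that increases the
    logistic loss, bounded by eps in the L-infinity norm."""
    p = sigmoid(x @ w + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # gradient of logistic loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Hypothetical toy model and input (illustrative values only).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
# Adversarial training would now take a gradient step on (x_adv, y)
# instead of (or in addition to) the clean pair (x, y).
```

In practice the same loop runs inside a PyTorch or TensorFlow training step, with the input gradient obtained via autograd rather than the closed-form expression used here.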
Requirements
- Master's or Ph.D. in Computer Science, Electrical Engineering, or a related field.
- Strong experience with adversarial machine learning, robust AI, or related security fields.
- Expertise in Python and ML frameworks such as PyTorch or TensorFlow.
- Demonstrated ability to develop and deploy production-ready ML systems.
- Experience with large-scale data processing and distributed computing.
Preferred Qualifications
- Experience with reinforcement learning or game theory.
- Publications in top-tier ML or security conferences.
- Experience in defense or aerospace industries.