About the Role
- Artificial Intelligence is considered one of humanity’s most useful inventions. At Google DeepMind, a diverse team of scientists, engineers, and machine learning experts collaborate to advance the state of the art in artificial intelligence. They leverage their technologies for widespread public benefit and scientific discovery, working with others on critical challenges, and ensuring safety and ethics are the highest priorities.
- The Agentic Red Team is a specialized, high-velocity unit within Google DeepMind Security. Its core mission is to bridge the "Agentic Launch Gap," which refers to the critical period where novel AI capabilities emerge faster than traditional security review processes can address. Unlike typical red teams that merely deliver reports, this team operates with extreme agility, embedding directly with product teams as both a consulting partner and an exploitation arm. They function as a "special forces" unit, capable of rapidly engaging in high-priority launches. This approach allows them to focus exclusively on risks at the model and agent layers, as they rely on Google Core for fundamental system-level protections.
- As a Senior Security Engineer on the Agentic Red Team, you will serve as the primary technical executor for adversarial engagements. Your role will involve working "in the room" with product builders from the initial design phase to identify architectural flaws long before formal reviews commence. A central part of your focus will be to conduct complex, multi-turn attacks on production-level AI models, specifically targeting agentic behaviors such as tool usage and reasoning chains. Beyond merely discovering vulnerabilities, you will actively contribute to closing the loop by helping to develop "Auto Red Teaming" frameworks and defensive strategies, ensuring that your findings are codified into reusable guardrails for all Google agent developers.
Requirements
- Bachelor's degree in Computer Science, Information Security, or equivalent practical experience.
- Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.
- Strong coding skills in Python, Go, or C++ with experience building security tools or automation.
- Technical understanding of LLM architectures, agentic workflows (e.g., chain-of-thought reasoning), and common AI vulnerability classes.
Qualifications
- Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).
- Experience working in a consulting capacity with product teams or in a fast-paced "startup-like" environment.
- Familiarity with AI safety benchmarks, evaluation frameworks, and fuzzing techniques.
- Ability to translate complex probabilistic risks into actionable engineering fixes for developers.
Benefits
Bonus, equity, and benefits are included. Specific salary range for targeted location will be shared during the hiring process.