About the Role
- On the Frontier Strategy and Governance Team, we help Google DeepMind and society prepare for a world with advanced AI. To do this, we conduct research, brief senior leadership on AGI governance priorities, and drive strategic initiatives in partnership with GDM’s policy, safety and responsibility teams. We are a collaborative team, with expertise spanning political science, international relations, security, economics, history, technical AI safety, and other domains. We are interested in recruiting across a wide range of areas, including forecasting, geopolitics, international relations, international security, global governance, political economy, technical governance, risk management, and the history of technology. We have a strong preference for people who can join us in London, though we also have hubs in NYC and the Bay Area.
- Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority. Researchers join Google DeepMind to work collaboratively within and across a range of research fields. They develop solutions to fundamental questions in machine learning, computational neuroscience, AI, and AI policy and governance. We are looking to hire at mid-career and senior levels, though we will also consider early-career candidates. Those coming from academia may join us directly from a PhD, post-doc, or professorship.
Responsibilities
- Prepare briefings and recommendations for Google DeepMind leadership.
- Proactively identify research and company initiatives that will have a meaningful impact on executive decision-making.
- Monitor trends and developments in the AI landscape and implications for AI safety, strategy and governance; cultivate relationships with external domain experts and partners; and share targeted updates with internal audiences.
- Produce insightful, engaging and actionable memos, research, and risk analysis that are easily digestible by internal decision makers.
- Propose, contribute to, and lead research projects related to the governance, safety, and security of advanced AI at Google DeepMind.
- Build and contribute to internal and external collaborations, through involvement in working groups, presentations, and the writing of memos.
Requirements
- PhD, or MA plus equivalent research or practical experience, in a relevant field.
- 5+ years of experience in positions related to governance, policy, or strategy research, or in technical fields such as AI safety, security, or hardware.
- Generalist with a breadth of abilities, plus expertise in a relevant field (e.g. forecasting, geopolitics, international relations and security, global governance, economics, technical governance, risk management, or the history of technology).
- Knowledge of the technical AI landscape, policy-making and history of major government decisions relevant to AI, and the global governance of AI.
- Professional communication, writing, and presentation skills, with the ability to quickly get up to speed on complex material and synthesise it into accessible documents tailored to different audiences.
Qualifications
- Technical expertise (such as ML, AGI safety, security, or forecasting).