Threat Modeler, Preparedness

San Francisco

$325k / year

full-time · ai-ml

💼 About This Role

You'll own OpenAI's holistic approach to identifying and forecasting frontier risks from AI systems, ensuring that evaluation frameworks, safeguards, and risk taxonomies are robust and forward-looking. This role shapes how catastrophic risk mitigation is prioritized across technical, governance, and policy domains.

🎯 What You'll Do

  • Develop threat models across misuse areas like bio, cyber, and attack planning.
  • Model alignment risks such as loss of control and self-improvement.
  • Forecast risks using technical foresight and adversarial simulation.
  • Translate threat models into actionable mitigation designs with technical partners.

📋 Requirements

  • Deep experience in threat modeling or adversarial thinking.
  • Understanding of frontier AI risks and AI alignment literature.
  • Knowledge of how AI evaluations connect to capability testing and safeguards.

✨ Nice to Have

  • Experience in security, national security, or safety domains.
  • Ability to communicate complex risks to non-technical audiences.
  • Systems thinking with anticipation of second-order risks.

🎁 Benefits & Perks

  • 💰 Competitive compensation with equity.
  • 🧠 Work on cutting-edge AI safety.