About the Team
OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires real-world deployment and iterative updates based on what we learn.
The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.
About the Role
As a Data Scientist on the Monitoring Operations team, you will be responsible for building systems to proactively identify abuse on OpenAI’s products. This includes ensuring we have robust monitoring in place for new products and can sustain monitoring for existing ones. You will also respond to critical escalations, especially those not caught by our existing safety systems. This requires an expert understanding of our products and data, and involves working cross-functionally with product, policy, and engineering teams.
This role can be based in either our San Francisco or London office and includes participation in an on-call rotation that involves resolving urgent escalations outside of normal work hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise disturbing material.
In this role, you will:
Scope and implement monitoring requirements for new product launches. This involves working with Product and Policy teams to understand key risks, and working with Engineering teams to ensure we have sufficient data and tooling.
Improve processes to sustain monitoring operations for existing products. This includes developing approaches to automate monitoring subtasks.
You might thrive in this role if you:
Have at least 4 years of experience doing technical analysis, especially in SQL and Python.
Have experience in trust and safety and/or have worked closely with policy, enforcement, and engineering teams.
Have experience with basic data engineering, such as building core tables or writing data pipelines (not expected to build infrastructure or write production code).
Have experience scaling and automating processes, especially with language models.