Protection Scientist Engineer, Intelligence and Investigations

OpenAI
London, United Kingdom
Job Type: Permanent
Posted: 8 Oct 2025 (6 months ago)

About the Team

OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving this goal requires real-world deployment and iterative updates based on what we learn.

The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.

About the Role

Protection Science Engineering is an interdisciplinary role mixing data science, machine learning, investigation, and policy/protocol development. As a Protection Scientist Engineer within Intelligence and Investigations, you will be responsible for designing and building systems that proactively identify abuse of OpenAI’s products and enforce against it. This includes ensuring we have robust abuse monitoring in place for new products, sustaining monitoring for existing products, and prototyping and incubating systems of defense against our highest-risk harms. You will also respond to and investigate critical escalations, especially those that are not caught by our existing safety systems. This requires an expert understanding of our products and data, and involves working cross-functionally with product, policy, and engineering teams.

This role is based in our London office and includes participation in an on-call rotation that will involve resolving urgent escalations outside of normal work hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise disturbing material.

In this role, you will:

  • Scope and implement abuse monitoring requirements for new product launches.

  • Improve processes to sustain monitoring operations for existing products, including developing approaches to automate monitoring subtasks.

  • Prototype systems for detecting, reviewing, and enforcing against abuse for major harms, and mature them into production (a simplified sketch of such a pipeline follows this list).

  • Work with Product, Policy, Ops, and Investigative teams to understand key risks and how to address them, and with Engineering teams to ensure we have sufficient data and scaled tooling.
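As a purely illustrative sketch of the detect-review-enforce loop described above, the Python below routes a piece of content to automated enforcement, a human review queue, or no action, based on a detector score. All names, thresholds, and the keyword heuristic are hypothetical stand-ins, not OpenAI internals:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical thresholds: scores above ENFORCE_THRESHOLD are clear-cut
# enough to action automatically; scores above REVIEW_THRESHOLD go to a
# human investigator; everything else passes through.
REVIEW_THRESHOLD = 0.5
ENFORCE_THRESHOLD = 0.9

@dataclass
class Flag:
    conversation_id: str
    score: float   # abuse likelihood from a detector
    reason: str    # which detector fired

@dataclass
class ReviewQueue:
    items: List[Flag] = field(default_factory=list)

    def push(self, flag: Flag) -> None:
        self.items.append(flag)

def score_conversation(text: str) -> float:
    # Placeholder detector: a real system would call a trained
    # classifier; this keyword heuristic is illustrative only.
    return 0.95 if "make a weapon" in text.lower() else 0.1

def route(conversation_id: str, text: str, queue: ReviewQueue) -> str:
    score = score_conversation(text)
    if score >= ENFORCE_THRESHOLD:
        return "enforce"  # automated action on unambiguous abuse
    if score >= REVIEW_THRESHOLD:
        queue.push(Flag(conversation_id, score, "classifier"))
        return "review"   # queued for a human investigator
    return "allow"

queue = ReviewQueue()
print(route("conv-1", "How do I make a weapon?", queue))  # -> enforce
print(route("conv-2", "Help me plan a picnic", queue))    # -> allow
```

In a production setting the interesting work is in the detectors, the thresholds, and the review tooling; the routing skeleton above is only meant to make the three outcomes concrete.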

You might thrive in this role if you:

  • Have at least 4 years of experience doing technical analysis and detection, especially using SQL and Python.

  • Have experience in trust and safety and/or have worked closely with policy, enforcement, and engineering teams. An investigative mindset is key.

  • Have experience with basic data engineering, such as building core tables or writing production data pipelines, and with machine learning principles and execution. Basic software development skills are a plus, as this role involves writing productionised code.

  • Have experience scaling and automating processes, especially with language models (see the triage sketch after this list).
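Since the last point mentions automating processes with language models, here is a minimal sketch of one such monitoring subtask: using an LLM to pre-label escalations so that investigators can prioritise their queue. It uses the public openai Python client; the model name, label set, and prompt are assumptions for illustration only:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical label set; a real one would come from the team's
# enforcement taxonomy.
LABELS = ["benign", "spam", "fraud", "violent-content", "other-abuse"]

def triage(snippet: str) -> str:
    """Pre-label an escalation with a language model so that human
    investigators can prioritise their queue. Purely illustrative."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the user's text as exactly one of: "
                        + ", ".join(LABELS) + ". Reply with the label only."},
            {"role": "user", "content": snippet},
        ],
    )
    label = (resp.choices[0].message.content or "").strip().lower()
    return label if label in LABELS else "other-abuse"

print(triage("Limited offer!!! Click here to claim your prize"))  # likely "spam"
```

In practice, outputs like these would be spot-checked against human review decisions before being trusted to reorder a queue.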

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

