Research Engineer/Research Scientist – Model Transparency

London, United Kingdom

Salary: £40,000 – £70,000 pa
Job Type: Permanent
Work Pattern: Full-time
Work Location: On-site
Seniority: Mid
Education: Degree
Posted: 24 Apr 2026

About the AI Security Institute

The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.

We’re here because governments are critical to ensuring advanced AI goes well, and UK AISI is uniquely positioned to mobilise them. Our resources, unique agility, and international influence make this the best place to shape both AI development and government action.

The deadline for applying to this role is Sunday 24th May 2026, end of day, anywhere on Earth.

Team Description

The ability to effectively evaluate and monitor AI systems will grow in importance as models become more capable, autonomous, and integrated into society. If models can detect and game evaluations, obscure their reasoning, or behave differently under observation, the safety claims that governments and developers rely on become unreliable. Understanding and addressing these risks is essential to ensuring that oversight of advanced AI systems keeps pace with their capabilities.

The Model Transparency team is a research team within AISI focused on ensuring that evaluations, assessments, and monitoring of frontier AI systems remain reliable as models become less transparent. We research how and why oversight is declining – through phenomena such as evaluation awareness, unfaithful chain-of-thought reasoning, and changes in model architectures – and develop white-box and black-box methods to detect, measure, and mitigate these issues. We share our findings with frontier AI companies (including Anthropic, OpenAI, and DeepMind), UK government officials, and allied governments – and publish publicly – to inform deployment, research, and policy decisions. We also work directly with safety teams at frontier labs, contributing to safety case reviews and helping improve their alignment evaluation methodology.

Our recent work includes auditing games for sandbagging, reproducing natural emergent misalignment from reward hacking, and identifying open-weight language models that game propensity evaluations.

Role description

We're looking for Research Scientists and Research Engineers for the Model Transparency team with expertise in technical AI safety – such as interpretability, capability or alignment evaluations, or model transparency – or with broader experience in frontier LLM research and development. An ideal candidate has a strong track record of high-quality research in technical AI safety or adjacent fields.

  • Research Scientists drive the technical substance of our work – staying abreast of the literature, proposing and designing experiments, conducting rigorous analyses, and owning the evidence stack from experiment through to written output. They write, critique, and strengthen the team's reports and publications.
  • Research Engineers build the systems and tooling that make our research possible and fast – scaling experimental workflows, automating processes, solving infrastructure challenges, and creating systems that accelerate the entire team's output.

We're interested in candidates along the spectrum between Research Engineers and Research Scientists. The application form will ask you to indicate which role you lean towards.

The team is led by Joseph Bloom, advised by Geoffrey Irving. You'll work with talented, mission-driven technical staff across AISI, including alumni from Anthropic, OpenAI, DeepMind, and top universities. You may also collaborate with external research teams including those at frontier AI labs, METR, and FAR.

We are open to hires across a range of experience levels.

Representative Projects You Might Work On

  • Developing a chain-of-thought monitorability benchmark and comparing monitorability properties across frontier AI systems, leveraging AISI’s unique access to reasoning traces from multiple labs.
  • Designing and running experiments on open-weight models to study alignment and oversight-relevant phenomena – such as reproducing emergent misalignment from reward hacking, or red-teaming techniques like inoculation prompting and character training.
  • Using white-box and interpretability methods – such as activation oracles, sparse autoencoders, or probes – to detect misalignment that isn’t visible through behavioural evaluation alone (a minimal probe sketch follows this list).
  • Building tooling and infrastructure for our research – including agent orchestration, large-scale RL pipelines, mechanistic interpretability methodologies, and auditing agents.
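
To make the probe item above concrete, here is a minimal, hypothetical sketch of a linear activation probe in Python (numpy and scikit-learn). Everything in it is a stand-in: the activations and labels are synthetic, and in real work the activations would be cached from hooked forward passes over model transcripts. It illustrates the technique only, not the team’s actual code or methodology.

    # Minimal linear-probe sketch on synthetic data. A random "behaviour
    # direction" is planted in fake activations so the probe has a signal
    # to recover. In practice, `acts` would be residual-stream activations
    # cached from a model, and `labels` would mark transcripts exhibiting
    # the behaviour of interest.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, d_model = 2000, 512  # illustrative sizes only

    acts = rng.normal(size=(n_samples, d_model))
    direction = rng.normal(size=d_model)  # hypothetical behaviour direction
    noise = rng.normal(scale=2.0, size=n_samples)
    labels = (acts @ direction + noise > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(
        acts, labels, test_size=0.2, random_state=0
    )

    # A linear probe is just regularised logistic regression on activations.
    probe = LogisticRegression(C=0.1, max_iter=1000).fit(X_tr, y_tr)
    print(f"held-out probe accuracy: {probe.score(X_te, y_te):.2f}")

The point of such a probe is the comparison: if it separates the classes on held-out data while behavioural judges cannot, that is evidence the activations carry information that black-box evaluation misses.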

The work could also involve:

  • Reviewing frontier lab risk assessments and safety cases, providing independent analysis of alignment claims before deployment decisions.
  • Conducting literature reviews and expert interviews to map the state of model transparency risks and inform AISI’s strategic priorities.
  • Translating technical findings into actionable insights for AISI evaluation teams, UK government officials, and international partners.

What we’re looking for

If you’re unsure whether you meet the criteria below, we’d encourage you to apply anyway – we’d rather you erred on the side of applying than not.

Requirements for both roles:
