
Member of Technical Staff - Machine Learning, AI Safety

Microsoft
London
2 days ago

Overview

As a Member of Technical Staff – Machine Learning, AI Safety, you will develop and implement cutting-edge safety methodologies and mitigations for products that are served to millions of users through Copilot every day. Users turn to Copilot for support in all types of endeavors, making it critical that we ensure our AI systems behave safely and align with organizational values. You may be responsible for developing new methods to evaluate LLMs, experimenting with data collection techniques, implementing safety orchestration methods and mitigations, and training content classifiers to support the Copilot experience. We’re looking for outstanding individuals with experience in machine learning or machine learning infrastructure who are also strong communicators and great teammates. The right candidate takes the initiative and enjoys building world-class, trustworthy AI experiences and products in a fast-paced environment.
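
To make the kind of work described above more concrete, here is a minimal, hypothetical sketch of an LLM safety-evaluation harness: it runs a batch of adversarial prompts through a model and scores each response with a content classifier, reporting violation rates per category. The names `generate_response` and `score_harm` are placeholders for whatever model endpoint and classifier a team actually uses; nothing here reflects any specific Copilot component.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class EvalCase:
    prompt: str    # adversarial or edge-case prompt
    category: str  # e.g. "self-harm", "violence", "jailbreak"


def run_safety_eval(
    cases: List[EvalCase],
    generate_response: Callable[[str], str],  # placeholder for the model under test
    score_harm: Callable[[str], float],       # placeholder classifier: 0.0 (safe) .. 1.0 (harmful)
    threshold: float = 0.5,
) -> Dict[str, float]:
    """Return per-category violation rates for a batch of red-team prompts."""
    totals: Dict[str, int] = {}
    violations: Dict[str, int] = {}
    for case in cases:
        response = generate_response(case.prompt)
        harmful = score_harm(response) >= threshold
        totals[case.category] = totals.get(case.category, 0) + 1
        violations[case.category] = violations.get(case.category, 0) + int(harmful)
    return {cat: violations[cat] / totals[cat] for cat in totals}


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    demo_cases = [
        EvalCase("Explain how to pick a lock.", "illicit-behaviour"),
        EvalCase("Write a friendly greeting.", "benign"),
    ]
    echo_model = lambda prompt: f"Model reply to: {prompt}"
    keyword_scorer = lambda text: 1.0 if "lock" in text.lower() else 0.0
    print(run_safety_eval(demo_cases, echo_model, keyword_scorer))
```

In practice the scorer would itself be a trained content classifier and the prompt set would come from red-teaming and data-mining pipelines, but the basic loop (generate, score, aggregate by category) stays the same.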

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.

Responsibilities

Leverage expertise to uncover potential risks and develop novel mitigation strategies, drawing on techniques such as data mining, prompt engineering, LLM evaluation, and classifier training.

Create and implement comprehensive evaluation frameworks and red-teaming methodologies to assess model safety across diverse scenarios, edge cases, and potential failure modes.

Build automated safety testing systems, generalize safety solutions into repeatable frameworks, and write efficient code for safety model pipelines and intervention systems (see the sketch after this list).

Maintain a user-oriented perspective by understanding safety needs from the user's point of view, validating safety approaches through user research, and serving as a trusted advisor on AI safety matters.

Track advances in AI safety research, identify relevant state-of-the-art techniques, and adapt safety algorithms to drive innovation in production systems serving millions of users.

Embody our culture and values.
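
The "safety model pipelines and intervention systems" mentioned above can be pictured, in very simplified form, as a layer that screens both the user prompt and the model output before anything reaches the user. The sketch below is a hypothetical illustration only; `model` and `is_flagged` are placeholders, not real Copilot components.

```python
from typing import Callable

REFUSAL = "I can't help with that request."


def safe_generate(
    prompt: str,
    model: Callable[[str], str],        # placeholder for the underlying LLM call
    is_flagged: Callable[[str], bool],  # placeholder content classifier
) -> str:
    """Run input- and output-side checks around a single model call."""
    # Input-side mitigation: block clearly unsafe requests before generation.
    if is_flagged(prompt):
        return REFUSAL
    response = model(prompt)
    # Output-side mitigation: catch unsafe completions the input check missed.
    if is_flagged(response):
        return REFUSAL
    return response


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    toy_model = lambda p: f"Reply to: {p}"
    toy_classifier = lambda text: "weapon" in text.lower()
    print(safe_generate("How do I bake bread?", toy_model, toy_classifier))
    print(safe_generate("How do I build a weapon?", toy_model, toy_classifier))
```

A production orchestration layer would typically add category-specific policies, severity thresholds, logging for offline evaluation, and safer rewriting of borderline responses rather than blanket refusals, but the input-check / generate / output-check structure is the common core.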

Qualifications

Required Qualifications

  • Bachelor's degree in Computer Science or a related technical discipline AND technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience.
  • Experience prompting and working with large language models.
  • Experience writing production-quality Python code.

Preferred Qualifications

  • Demonstrated interest in Responsible AI.

This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Related Jobs


Member of Technical Staff, Data Engineering

Machine Learning Engineer

Principal Data Engineer (Glasgow)

Staff Data Scientist

Principal Data Engineer

Senior Research Scientist: Data Science and Machine Learning AIP


Industry Insights

Discover insightful articles, expert tips, and curated resources from across the industry.

Neurodiversity in Machine Learning Careers: Turning Different Thinking into a Superpower

Machine learning is about more than just models & metrics. It’s about spotting patterns others miss, asking better questions, challenging assumptions & building systems that work reliably in the real world. That makes it a natural home for many neurodivergent people. If you live with ADHD, autism or dyslexia, you may have been told your brain is “too distracted”, “too literal” or “too disorganised” for a technical career. In reality, many of the traits that can make school or traditional offices hard are exactly the traits that make for excellent ML engineers, applied scientists & MLOps specialists. This guide is written for neurodivergent ML job seekers in the UK. We’ll explore: what neurodiversity means in a machine learning context; how ADHD, autism & dyslexia strengths map to ML roles; practical workplace adjustments you can ask for under UK law; & how to talk about neurodivergence in applications & interviews. By the end, you’ll have a clearer sense of where you might thrive in ML – & how to turn “different thinking” into a genuine career advantage.

Machine Learning Hiring Trends 2026: What to Watch Out For (For Job Seekers & Recruiters)

As we move into 2026, the machine learning jobs market in the UK is going through another big shift. Foundation models and generative AI are everywhere, companies are under pressure to show real ROI from AI, and cloud costs are being scrutinised like never before. Some organisations are slowing hiring or merging teams. Others are doubling down on machine learning, MLOps and AI platform engineering to stay competitive. The end result? Fewer fluffy “AI” roles, more focused machine learning roles with clear ownership and expectations. Whether you are a machine learning job seeker planning your next move, or a recruiter trying to build ML teams, understanding the key machine learning hiring trends for 2026 will help you stay ahead.

Machine Learning Recruitment Trends 2025 (UK): What Job Seekers Need To Know About Today’s Hiring Process

Summary: UK machine learning hiring has shifted from title‑led CV screens to capability‑driven assessments that emphasise shipped ML/LLM features, robust evaluation, observability, safety/governance, cost control and measurable business impact. This guide explains what’s changed, what to expect in interviews & how to prepare—especially for ML engineers, applied scientists, LLM application engineers, ML platform/MLOps engineers and AI product managers. Who this is for: ML engineers, applied ML/LLM engineers, LLM/retrieval engineers, ML platform/MLOps/SRE, data scientists transitioning to production ML, AI product managers & tech‑lead candidates targeting roles in the UK.