Machine Learning Infrastructure Engineer [UAE Based]

AI71


Job Title: ML Infrastructure Senior Engineer

Location: Abu Dhabi, United Arab Emirates [Full relocation package provided]



Job Overview

We are seeking a skilled ML Infrastructure Engineer to join our growing AI/ML platform team. This role is ideal for someone who is passionate about large-scale machine learning systems and has hands-on experience deploying LLMs/SLMs with advanced inference engines such as vLLM. You will play a critical role in designing, deploying, optimizing, and managing ML models and the infrastructure around them, spanning inference, fine-tuning, and continued pre-training.


Key Responsibilities

· Deploy large or small language models (LLMs/SLMs) using inference engines such as vLLM or Triton (a minimal serving sketch follows this list).

· Collaborate with research and data science teams to fine-tune models or build automated fine-tuning pipelines.

· Extend inference-level capabilities by integrating advanced features such as multi-modality, real-time inferencing, model quantization, and tool-calling.

· Evaluate and recommend optimal hardware configurations (GPU, CPU, RAM) based on model size and workload patterns.

· Build, test, and optimize LLM inference stacks for consistent model deployment.

· Implement and maintain infrastructure-as-code to manage scalable, secure, and elastic cloud-based ML environments.

· Ensure seamless orchestration of the MLOps lifecycle, including experiment tracking, model registry, deployment automation, and monitoring.

· Manage ML model lifecycle on AWS (preferred) or other cloud platforms.

· Understand LLM architecture fundamentals to design efficient scalability strategies for both inference and fine-tuning processes.
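As a rough illustration of the first responsibility above, the sketch below serves a model through vLLM's offline Python API. The model name and sampling settings are placeholder assumptions, not requirements from this posting; production deployments would more typically expose the same engine through vLLM's OpenAI-compatible HTTP server.

```python
# Minimal vLLM serving sketch (model name and settings are illustrative assumptions).
from vllm import LLM, SamplingParams

# Load the model onto available GPUs; tensor_parallel_size shards weights across devices.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2", tensor_parallel_size=1)

# Illustrative sampling defaults, not tuned values.
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

outputs = llm.generate(["Summarise what an ML infrastructure engineer does."], params)
for out in outputs:
    print(out.outputs[0].text)
```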


Required Skills


Core Skills:

· Proven experience deploying LLMs or SLMs using inference engines like vLLM, TGI, or similar.

· Experience in fine-tuning language models or creating automated pipelines for model training and evaluation.

· Deep understanding of LLM architecture fundamentals (e.g., attention mechanisms, transformer layers) and how they influence infrastructure scalability and optimization.

· Strong understanding of how to match hardware resources to ML inference and training workloads (a back-of-envelope sizing sketch follows this list).
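To make hardware-resource alignment concrete, here is a back-of-envelope sketch that estimates serving memory for a dense transformer as weight memory plus KV-cache memory. The formula and the example figures (a 7B-parameter, Llama-like model in FP16) are standard approximations assumed for illustration, not numbers from this posting, and they ignore activation and framework overheads.

```python
def estimate_serving_memory_gb(
    n_params: float,        # total parameters, e.g. 7e9
    bytes_per_param: float, # 2 for FP16/BF16, 1 for INT8, 0.5 for 4-bit
    n_layers: int,
    n_kv_heads: int,
    head_dim: int,
    max_seq_len: int,
    batch_size: int,
    kv_bytes: float = 2,    # KV cache is usually kept in FP16/BF16
) -> float:
    """Rough GPU memory estimate: weights + KV cache (activations and overheads ignored)."""
    weights = n_params * bytes_per_param
    # KV cache: two tensors (K and V) per layer, per token, per KV head.
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * max_seq_len * batch_size * kv_bytes
    return (weights + kv_cache) / 1e9

# Example with assumed Llama-like dimensions: roughly 18 GB for weights plus KV cache.
print(estimate_serving_memory_gb(7e9, 2, 32, 8, 128, 4096, 8))
```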

Technical Proficiency:

· Programming experience in Python and C/C++, especially for inference optimization.

· Solid understanding of the end-to-end MLOps lifecycle and related tools.

· Experience with containerization, image building, and deployment (e.g., Docker; Kubernetes optional).

Cloud & Infrastructure:

· Hands-on experience with AWS services for ML workloads (SageMaker, EC2, EKS, etc.) or equivalent services in Azure/GCP.

· Ability to manage cloud infrastructure to ensure high availability, scalability, and cost efficiency.


Nice-to-Have

· Experience with ML orchestration platforms like MLflow, SageMaker Pipelines, Kubeflow, or similar.

· Familiarity with model quantization, pruning, or other performance optimization techniques (a quantized-loading sketch follows this list).

· Exposure to distributed training and fine-tuning frameworks such as DeepSpeed, Accelerate, FSDP, or Unsloth.
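As one concrete example of the quantization item above, the sketch below loads a model in 4-bit precision with Hugging Face Transformers and bitsandbytes. The model name and configuration values are assumptions chosen for illustration; the posting does not prescribe this particular stack.

```python
# Minimal 4-bit quantized loading sketch (assumes transformers and bitsandbytes are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder model

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights as 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in BF16
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",                      # spread layers across available devices
)
```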

Summary: UK machine learning hiring has shifted from title‑led CV screens to capability‑driven assessments that emphasise shipped ML/LLM features, robust evaluation, observability, safety/governance, cost control and measurable business impact. This guide explains what’s changed, what to expect in interviews & how to prepare—especially for ML engineers, applied scientists, LLM application engineers, ML platform/MLOps engineers and AI product managers. Who this is for: ML engineers, applied ML/LLM engineers, LLM/retrieval engineers, ML platform/MLOps/SRE, data scientists transitioning to production ML, AI product managers & tech‑lead candidates targeting roles in the UK.