
Deep Learning Solutions Architect – Inference Optimization

NVIDIA
City of London
1 day ago

NVIDIA’s Worldwide Field Operations (WWFO) team is seeking a Solutions Architect with a deep understanding of neural network inference. As our customers adopt increasingly complex inference pipelines on state-of-the-art infrastructure, there is a growing need for experts who can guide the integration of advanced inference techniques such as speculative decoding, request-scheduler optimization, and FP4 quantization. The ideal candidate will be proficient with tools such as TensorRT-LLM, vLLM, and SGLang, and will have strong systems knowledge that enables customers to fully exploit the capabilities of the new GB300 NVL72 systems (for example, by implementing efficient KV-cache offloading, supporting inference for new architectures such as hybrid or diffusion models, or architecting pre- and post-processing pipelines).
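To make the techniques above concrete, speculative decoding pairs a cheap draft model with the expensive target model. The sketch below is a toy illustration only, not how TensorRT-LLM, vLLM, or SGLang actually implement it; `draft_next` and `target_next` are made-up deterministic stand-ins for real models.

```python
# Toy sketch of greedy speculative decoding. The two "models" below are
# hypothetical deterministic stand-ins, not real LLMs; real systems
# verify the k draft tokens in a single batched forward pass of the
# target model rather than one call per token.

def draft_next(token):
    # Cheap draft model: guesses the successor token.
    return (token * 2) % 97

def target_next(token):
    # Expensive target model: the "ground truth" successor. It agrees
    # with the draft most of the time, so drafts are usually accepted.
    return (token * 2) % 97 if token % 5 else (token + 1) % 97

def greedy_decode(seed, n_tokens):
    # Baseline: one target-model call per generated token.
    out, cur = [], seed
    for _ in range(n_tokens):
        cur = target_next(cur)
        out.append(cur)
    return out

def speculative_decode(seed, n_tokens, k=4):
    # Draft k tokens cheaply, then verify them against the target.
    out = [seed]
    while len(out) < n_tokens + 1:
        drafts, cur = [], out[-1]
        for _ in range(k):                 # 1) propose k draft tokens
            cur = draft_next(cur)
            drafts.append(cur)
        cur = out[-1]
        for d in drafts:                   # 2) verify against the target
            t = target_next(cur)
            out.append(t)                  # always keep the target's token
            cur = t
            if d != t:                     # first mismatch: discard the
                break                      # remaining draft tokens
    return out[1:n_tokens + 1]
```

Because every emitted token is the target model's own choice, the output matches plain greedy decoding exactly; the speedup in real systems comes from verifying the k drafts in one forward pass instead of k sequential ones.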


Solutions Architects work with the most exciting computing hardware and software, driving the latest breakthroughs in artificial intelligence! We need individuals who can enable customer productivity and develop lasting relationships with our technology partners, making NVIDIA an integral part of end-user solutions. We are looking for someone passionate about artificial intelligence, able to keep pace with a fast-moving field, and able to coordinate efforts between corporate marketing, industry business development, and engineering. Solutions Architects are the first line of technical expertise between NVIDIA and our customers. Your duties will range from building proof-of-concept demonstrations to cultivating relationships with key executives and managers to promote the adoption of NVIDIA-based AI technology. Engaging with developers, scientific researchers, data scientists, IT managers, and senior leaders is a significant part of the Solutions Architect role.


What you will be doing

  • Work directly with key customers to understand their technology and provide the best AI solutions.
  • Perform in-depth analysis and optimization to ensure the best performance on GPU systems, in particular Grace/Arm-based systems. This includes supporting the optimization of large-scale inference pipelines.
  • Partner with Engineering, Product, and Sales teams to develop and plan the most suitable solutions for customers, and drive the development and growth of product features through customer feedback and proof-of-concept evaluations.
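In-depth performance analysis of the kind described above usually starts with a simple latency/throughput measurement. A minimal sketch is shown below; `fake_infer` is a hypothetical stand-in for a real model endpoint, and production profiling would instead use tools such as NVIDIA Nsight Systems or genai-perf against a live server.

```python
import time

def benchmark(infer_fn, requests, warmup=2):
    """Measure request and token throughput of an inference callable."""
    for r in requests[:warmup]:
        infer_fn(r)                          # warm-up: fill caches, trigger JIT
    t0 = time.perf_counter()
    n_tokens = sum(len(infer_fn(r)) for r in requests)
    dt = max(time.perf_counter() - t0, 1e-9) # guard for trivial stubs
    return {"requests_per_s": len(requests) / dt,
            "tokens_per_s": n_tokens / dt}

def fake_infer(prompt):
    # Hypothetical stand-in for a model endpoint: "tokens" are words.
    return prompt.split()

stats = benchmark(fake_infer, ["the quick brown fox"] * 8)
```

Separating request throughput from token throughput matters in practice, since batching and KV-cache behaviour can improve one while degrading the other.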

What we need to see

  • Excellent verbal and written communication and technical presentation skills in English.
  • MS/PhD or equivalent experience in Computer Science, Data Science, Electrical/Computer Engineering, Physics, Mathematics, or another engineering field.
  • 5+ years of work or research experience in Python, C++, or other software development.
  • Work experience with and knowledge of modern NLP, including a good understanding of transformer, state-space, diffusion, and MoE model architectures. This can include expertise in either training or the optimization/compression/operation of DNNs.
  • Understanding of key libraries used for NLP/LLM training (such as Megatron-LM, NeMo, or DeepSpeed) and/or deployment (e.g. TensorRT-LLM, vLLM, Triton Inference Server).
  • Enthusiasm for collaborating with teams across Engineering, Product, Sales, and Marketing; someone who thrives in dynamic environments and stays focused amid constant change.
  • A self-starter with a growth mindset, a passion for continuous learning, and a habit of sharing findings across the team.

Ways to Stand Out from the Crowd

  • Demonstrated experience in running and debugging large‑scale distributed deep learning training or inference processes.
  • Experience working with large transformer-based architectures for NLP, CV, ASR, or other domains.
  • Experience applying NLP technology in production environments.
  • Proficiency with DevOps tools including Docker, Kubernetes, and Singularity.
  • Understanding of HPC systems: data-center design, high-speed InfiniBand interconnects, cluster storage, and scheduling, including related design and/or management experience.

Widely considered to be one of the technology world’s most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. As you plan your future, see what we can offer you and your family: www.nvidiabenefits.com


NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.



