Senior Data Engineer

Pantheon
City of London

Pantheon has been at the forefront of private markets investing for more than 40 years, earning a reputation for providing innovative solutions covering the full lifecycle of investments, from primary fund commitments to co‑investments and secondary purchases, across private equity, real assets and private credit.


We have partnered with more than 650 clients, including institutional investors of all sizes as well as a growing number of private wealth advisers and investors, with approximately $65 billion in discretionary assets under management (as of December 31, 2023).


Leveraging our specialized experience and global team of professionals across Europe, the Americas and Asia, we invest with purpose and lead with expertise to build secure financial futures.


Pantheon is undergoing a multi‑year programme to build out a new best‑in‑class Data Platform using cloud‑native technologies hosted in Azure. We require an experienced and passionate hands‑on Senior Data Engineer to design and implement new data pipelines that adapt to business and technology changes. This role will be integral to the success of this programme and to establishing Pantheon as a data‑centric organisation.


You will work with a modern Azure tech stack; proven experience ingesting and transforming data from a variety of internal and external systems is core to the role.


You will be part of a small and highly skilled team, and you will need to be passionate about providing best‑in‑class solutions to our global user base.


Key Responsibilities

  • Design, build, and maintain scalable, secure, and high‑performance data pipelines on Azure, primarily using Azure Databricks, Azure Data Factory, and Azure Functions.
  • Develop and optimise batch and streaming data processing solutions using PySpark and SQL to support analytics, reporting, and downstream data products.
  • Implement robust data transformation layers using dbt, ensuring well‑structured, tested, and documented analytical models.
  • Collaborate closely with business analysts, QA teams, and business stakeholders to translate data requirements into reliable technical solutions.
  • Ensure data quality, reliability, and observability through automated testing, monitoring, logging, and alerting.
  • Lead on performance tuning, cost optimisation, and capacity planning across Databricks and associated Azure services.
  • Implement and maintain CI/CD pipelines using Azure DevOps, promoting best practices for version control, automated testing, and deployment.
  • Enforce data governance, security, and compliance standards, including access controls, data lineage, and auditability.
  • Contribute to architectural decisions and provide technical leadership, mentoring junior engineers and setting engineering standards.
  • Produce clear technical documentation and contribute to knowledge sharing across the data engineering function.
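To make the data-quality responsibility above concrete, here is a minimal sketch of an automated quality gate of the kind a pipeline might run before publishing a batch. This is illustrative plain Python, not Pantheon's actual tooling; in practice such checks would typically run as PySpark jobs or dbt tests, and the function and field names here are hypothetical.

```python
# Illustrative only: a row-level data-quality gate run before a batch is
# published downstream. Field names (fund_id, nav, ccy) are hypothetical.

def run_quality_checks(rows, checks):
    """Apply named predicates to each row; return {check_name: [failing rows]}."""
    failures = {name: [] for name in checks}
    for row in rows:
        for name, predicate in checks.items():
            if not predicate(row):
                failures[name].append(row)
    # Keep only checks that actually failed, so an empty dict means "pass".
    return {name: bad for name, bad in failures.items() if bad}

batch = [
    {"fund_id": "F001", "nav": 120.5, "ccy": "USD"},
    {"fund_id": "F002", "nav": -3.0, "ccy": "USD"},   # negative NAV: should fail
    {"fund_id": None,   "nav": 98.1, "ccy": "GBP"},   # missing key: should fail
]

checks = {
    "fund_id_not_null": lambda r: r["fund_id"] is not None,
    "nav_non_negative": lambda r: r["nav"] >= 0,
}

failed = run_quality_checks(batch, checks)
print(sorted(failed))  # ['fund_id_not_null', 'nav_non_negative']
```

In a production setting, a non-empty result would typically feed monitoring, logging, and alerting rather than a simple print.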

Knowledge & Experience Required
Essential Technical Skills

  • Python and PySpark for large‑scale data processing.
  • SQL (advanced querying, optimisation, and data modelling).
  • Azure Data Factory (pipeline orchestration and integration).
  • Azure DevOps (Git, CI/CD pipelines, release management).
  • Azure Functions / serverless data processing patterns.
  • Data modelling (star schemas, data vault, or lakehouse‑aligned approaches).
  • Data quality, testing frameworks, and monitoring/observability.
  • Strong problem‑solving ability and a pragmatic, engineering‑led mindset.
  • Experience in an Agile software development environment.
  • Excellent communication skills, with the ability to explain complex technical concepts to both technical and non‑technical stakeholders.
  • Leadership and mentoring capability, with a focus on raising engineering standards and best practices.
  • Significant commercial experience (typically 5+ years) in data engineering roles, with demonstrable experience designing and operating production‑grade data platforms.
  • Strong hands‑on experience with Azure Databricks, including cluster configuration, job orchestration, and performance optimisation.
  • Proven experience building data pipelines with Databricks and Azure Data Factory; integrating with Azure‑native services (e.g., Data Lake Storage Gen2, Azure Functions).
  • Advanced experience with Python for data engineering, including PySpark for distributed data processing.
  • Strong SQL expertise, with experience designing and optimising complex analytical queries and data models.
  • Practical experience using dbt in a production environment, including model design, testing, documentation, and deployment.
  • Experience implementing CI/CD pipelines using Azure DevOps or equivalent tooling.
  • Solid understanding of data warehousing and lakehouse architectures, including dimensional modelling and modern analytics patterns.
  • Experience working in agile delivery environments and collaborating with cross‑functional teams.
  • Exposure to cloud security, data governance, and compliance concepts within Azure.
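To ground the modelling terms in the list above: a star schema joins a fact table of measures to descriptive dimension tables, and analytical queries aggregate facts sliced by dimensions. The sketch below shows the pattern at its smallest, using SQLite from the Python standard library; the table and column names are hypothetical, chosen to echo the private-markets domain, and a real platform would express the same shape in a lakehouse or warehouse.

```python
# Illustrative only: a two-table star schema and a typical analytical query.
# Table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_fund (fund_key INTEGER PRIMARY KEY, fund_name TEXT, strategy TEXT);
    CREATE TABLE fact_valuation (fund_key INTEGER, valuation_date TEXT, nav REAL);

    INSERT INTO dim_fund VALUES (1, 'Fund A', 'Private Equity'),
                                (2, 'Fund B', 'Private Credit');
    INSERT INTO fact_valuation VALUES (1, '2024-03-31', 100.0),
                                      (1, '2024-06-30', 110.0),
                                      (2, '2024-06-30', 50.0);
""")

# Aggregate the fact table, sliced by a dimension attribute.
rows = conn.execute("""
    SELECT d.strategy, SUM(f.nav) AS total_nav
    FROM fact_valuation AS f
    JOIN dim_fund AS d USING (fund_key)
    WHERE f.valuation_date = '2024-06-30'
    GROUP BY d.strategy
    ORDER BY d.strategy
""").fetchall()

print(rows)  # [('Private Credit', 50.0), ('Private Equity', 110.0)]
```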

Desired Experience

  • Power BI and DAX
  • Business Objects Reporting

This job description is not to be construed as an exhaustive statement of duties, responsibilities, or requirements. You may be required to perform other job‑related duties as reasonably requested by your manager.


Pantheon is an Equal Opportunities employer; we are committed to building a diverse and inclusive workforce so if you're excited about this role but your past experience doesn't perfectly align we'd still encourage you to apply.


