Senior Data Engineer

Aztec
Southampton
4 months ago
Applications closed


At the Aztec Group we credit our technology as one of the core ingredients of our award-winning outsourced solutions. As part of its Five-Year Plan, Aztec has the ambition to be a market-leading alternative fund administrator that provides compelling client experiences, products, and services.

These are exciting times across the group. Significant growth, change, and investment make it a truly world-class opportunity to help shape our organisation for the next stage of its journey.

To drive towards this ambition, we are seeking a motivated individual to join our Data Platform team and support Aztec’s new technology strategy using Azure Databricks. You will lead our Data Engineering capability and collaborate with others passionate about solving business problems.

Key responsibilities:

Data Platform Design and Architecture

  • Design, develop, and maintain a high-performing, secure, and scalable data platform, leveraging Databricks Corporate Lakehouse and Medallion Architectures.
  • Utilise our metadata-driven data platform framework combined with advanced cluster management techniques to create and optimise scalable, robust, and efficient data solutions.
  • Implement comprehensive logging, monitoring and alerting tools to manage the platform, ensuring resilience and optimal performance are maintained.

Data Integration and Transformation

  • Integrate and transform data from multiple organisational SQL databases and SaaS applications using end-to-end dependency-based data pipelines, to establish an enterprise source of truth.
  • Create ETL and ELT processes using Azure Databricks, ensuring audit-ready financial data pipelines and secure data exchange with Databricks Delta Sharing and SQL Warehouse endpoints.
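The dependency-based, metadata-driven design described above can be sketched in a few lines of Python. The step names and the shape of the metadata below are purely illustrative assumptions, not Aztec's actual framework; the point is that a pipeline's run order can be derived from declared dependencies rather than hard-coded.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline metadata: each step maps to the steps it depends on,
# following a bronze -> silver -> gold (Medallion) layering.
PIPELINE_METADATA = {
    "bronze_ingest_sql": [],
    "bronze_ingest_saas": [],
    "silver_conform": ["bronze_ingest_sql", "bronze_ingest_saas"],
    "gold_finance_mart": ["silver_conform"],
}

def execution_order(metadata):
    """Return a valid run order for the pipeline steps (raises on cycles)."""
    return list(TopologicalSorter(metadata).static_order())

order = execution_order(PIPELINE_METADATA)
```

In a real Databricks deployment the same dependency metadata would typically drive Jobs/Workflows task definitions instead of an in-process sort, but the principle is identical: the platform framework, not the individual pipeline, owns the orchestration logic.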

Governance and Compliance

  • Ensure compliance with information security standards in our highly regulated financial landscape by implementing Databricks Unity Catalog for governance, data quality monitoring, and ADLS Gen2 encryption for audit compliance.

Development and Process Improvement

  • Evaluate requirements, create technical design documentation, and work within Agile methodologies to deploy and optimise data workflows, adhering to data platform policies and standards.

Collaboration and Knowledge Sharing

  • Collaborate with stakeholders to develop data solutions, maintain professional knowledge through continual development, and advocate best practices within a Centre of Excellence.

Skills, knowledge and expertise:

  • Deep expertise in the Databricks platform, including Jobs and Workflows, Cluster Management, Catalog Design and Maintenance, Apps, Hive Metastore Management, Network Management, Delta Sharing, Dashboards, and Alerts.
  • Proven experience working with big data technologies such as Databricks and Apache Spark.
  • Proven experience working with Azure data platform services, including Storage, ADLS Gen2, Azure Functions, Kubernetes.
  • Background in cloud platforms and data architectures, such as Corporate Data Lakehouse, Medallion Architecture, metadata-driven platforms, and event-driven architecture.
  • Proven experience of ETL/ELT, including Lakehouse, Pipeline Design, Batch/Stream processing.
  • Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, Spark SQL.
  • Good working knowledge of data warehouse and data mart architectures.
  • Good experience in Data Governance, including Unity Catalog, Metadata Management, Data Lineage, Quality Checks, Master Data Management.
  • Experience using Azure DevOps to manage tasks and CI/CD deployments within an Agile framework, including utilising Azure Pipelines (YAML), Terraform, and implementing effective release and branching strategies.
  • Knowledge of security practices, covering RBAC, Azure Key Vault, Private Endpoints, Identity Management.
  • Experience working with relational and non-relational databases and unstructured data.
  • Exposure to Azure Purview, Power BI, and Profisee is an advantage.
  • Ability to compile accurate and concise technical documentation.
  • Strong analytical and problem-solving skills.
  • Good interpersonal and communication skills.

We will provide training, both in-house for relevant technical knowledge and for professional qualifications. You will need to be quick to learn new systems and great with people, as close working relationships between our colleagues and clients are at the heart of what we do.

Beyond that, we will be with you every step of the way, enabling you to get the most out of your role, grow your skills your way, and see your career develop in the way you want. Be part of our talented Technology team and unbox your passion at a multi-award-winning leader in the alternative fund management industry.

