Lead Data Engineer – Python, SQL, dbt, AWS, Dremio, Data Mesh, ETL, Data Pipelines, Data Warehousing, DWH, S3, Glue, Data Lake, Automation, London
A Lead Data Engineer is sought by a leading workplace pensions provider to run their expanding data engineering team in their London City office. The team have recently built out a next-gen data lake hosted in AWS which makes use of Dremio, a Data Mesh architecture and a wide range of AWS tools (S3, Glue, Step Functions etc.).
With the infrastructure of this new Data Lake in place, the focus is now on enhancing the data stored within it (via monitoring and cleaning), as well as designing, building and implementing robust ETL pipelines for effective integration with the wider business.
As a Lead Engineer, you will also be responsible for overseeing a small team of Data Engineers, setting and driving best practices and standards, keeping project delivery on track, and regularly engaging with stakeholders up to C-suite level for feedback and updates.
You will also work directly alongside the Head of Data Platforms to ensure ongoing engineering work is aligned with the overall architecture and roadmap, and to plan future projects.
If you demonstrate the following skillset, please do apply!
- 7–8 years’ experience in Python programming, particularly around data analysis, automation and building data pipelines (along with an understanding of data warehousing concepts)
- Strong SQL skills, especially in relation to ETL processes and data management (querying, cleaning, storing)
- Experience working with and deploying to the AWS cloud, including any/all of S3, Glue, Lambda, Step Functions etc.
- Experience using Power BI for data visualization
- Experience with both Dremio and Data Mesh architecture
- Prior experience of leading software engineering teams (including line management)
- Clear, confident communication skills, with previous experience of working directly with business users