Principal Azure Data Engineer (Databricks)

Capco
London
11 months ago
Applications closed


Joining Capco means joining an organisation that is committed to an inclusive working environment where you’re encouraged to #BeYourselfAtWork. We celebrate individuality and recognize that diversity and inclusion, in all forms, is critical to success.

Why Join Capco?

Capco is a global technology and business consultancy, focused on the financial services sector. We are passionate about helping our clients succeed in an ever-changing industry.

We are/have:

  • Experts in banking and payments, capital markets and wealth and asset management
  • Deep knowledge of financial services offerings, including Finance, Risk and Compliance, Financial Crime and Core Banking
  • Committed to growing our business and hiring the best talent to help us get there
  • Focused on maintaining our nimble, agile and entrepreneurial culture

As a Principal Data Engineer at Capco you will:

  • Work alongside clients to interpret requirements and define industry-leading solutions
  • Design and develop robust, well-tested data pipelines (see the sketch after this list)
  • Demonstrate best practices in engineering and the SDLC, and help clients adhere to them
  • Apply your knowledge of building event-driven, loosely coupled distributed applications
  • Develop both on-premise and cloud-based solutions
  • Apply key security technologies and protocols such as TLS, OAuth and encryption
  • Support internal Capco capabilities by sharing insight, experience and credentials
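
For illustration only (this is not part of the role description): a minimal sketch of the kind of small, well-tested pipeline step the responsibilities above point to, written in PySpark with a pytest-style test. The dataset, column names and values are hypothetical.

```python
# Hypothetical sketch: a small, testable PySpark transformation and its unit test.
# Assumes pyspark (and pytest for the test) are installed; all names below are invented.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def enrich_orders(orders: DataFrame) -> DataFrame:
    """Drop malformed rows and add a derived total_value column."""
    return (
        orders
        .where(F.col("quantity") > 0)
        .withColumn("total_value", F.col("quantity") * F.col("unit_price"))
    )


def test_enrich_orders():
    spark = SparkSession.builder.master("local[1]").appName("unit-test").getOrCreate()
    df = spark.createDataFrame(
        [("o1", 2, 10.0), ("o2", 0, 5.0)],  # second row is malformed (quantity 0)
        ["order_id", "quantity", "unit_price"],
    )
    rows = enrich_orders(df).collect()
    assert len(rows) == 1
    assert rows[0]["total_value"] == 20.0
```

Keeping transformations as plain functions over DataFrames, as above, is one common way to make pipelines unit-testable outside of a cluster.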

Why Join Capco as a Principal Data Engineer?

  • You will work with some of the largest banks in the world on engaging projects that will transform the financial services industry.
  • You’ll be part of a digital engineering team that develops new and enhances existing financial and data solutions, with the opportunity to work on exciting greenfield projects as well as established Tier 1 bank applications used by millions of users.
  • You’ll be involved in digital and data transformation processes through a continuous delivery model.
  • You will work on automating and optimising data engineering processes, and develop robust, fault-tolerant data solutions for both cloud and on-premise deployments.
  • You’ll be able to work across different data, cloud and messaging technology stacks.
  • You’ll have the opportunity to learn and work with specialised data and cloud technologies to widen your skill set.

Skills & Expertise:

You will have experience working with some of the following methodologies and technologies:

Required Skills

  • Hands-on working experience with the Databricks platform; you must have delivered projects that use Delta Lake, orchestration, Unity Catalog and Spark Structured Streaming on Databricks.
  • Extensive experience using Python, PySpark and the wider Python ecosystem, with good exposure to its libraries.
  • Experience with Big Data technologies and distributed systems such as Hadoop, HDFS, Hive, Spark, Databricks and Cloudera.
  • Experience developing near-real-time event streaming pipelines with tools such as Kafka, Spark Streaming and Azure Event Hubs (see the streaming sketch after this list).
  • Excellent experience across the data engineering lifecycle: you will have created data pipelines that take data through all layers, from generation and ingestion through transformation to serving.
  • Experience with modern software engineering principles and with creating well-tested, clean applications.
  • Experience with data lakehouse architecture and data warehousing principles, including data modelling, schema design and working with structured and semi-structured data.
  • Proficient in SQL, with a good understanding of the differences and trade-offs between SQL and NoSQL, and between ETL and ELT.
  • Proven DevOps experience building robust production data pipelines, with CI/CD pipelines on platforms such as Azure DevOps, Jenkins, CircleCI or GitHub Actions.
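
As an illustration of the streaming and lakehouse skills listed above, here is a minimal, hypothetical PySpark Structured Streaming sketch that reads JSON events from Kafka and appends them to a Delta Lake table. The broker, topic, checkpoint path and Unity Catalog table name are placeholders, and on Databricks the SparkSession would already be provided.

```python
# Hypothetical sketch: near-real-time ingestion from Kafka into a Delta Lake table.
# Assumes a Databricks (or Delta- and Kafka-enabled Spark) environment; names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("payments-ingest").getOrCreate()

event_schema = StructType([
    StructField("payment_id", StringType()),
    StructField("status", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "payments")                    # placeholder topic
    .load()
)

# Kafka delivers the payload as bytes; parse it against the expected schema.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
       .select("e.*")
)

# Append the parsed events to a (hypothetical) Unity Catalog Delta table.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/payments")  # placeholder path
    .outputMode("append")
    .toTable("main.payments.events")
)
```

The checkpoint location is what gives the pipeline its fault tolerance: on restart, Structured Streaming resumes from the last committed offsets rather than reprocessing or losing events.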

Desirable Skills

  • A strong commercial focus and the ability to develop client relationships, spearhead sales opportunities and shape Data Engineering propositions.
  • An appetite to contribute to the wider Capco business outside of project assignments, for example through thought leadership activities, supporting RFPs and coaching or mentoring more junior engineering team members.
  • Experience developing in other languages, e.g. Scala or Java.
  • Enthusiasm and the ability to pick up new technologies as needed to solve problems.
  • Exposure to working with PII and sensitive data, and an understanding of data regulations such as GDPR.



Machine learning has moved from experimentation to production at scale. As a result, MLOps jobs have become some of the most in-demand and best-paid roles in the UK tech market. For job seekers with experience in machine learning, data science, software engineering or cloud infrastructure, MLOps represents a powerful career pivot or progression. This guide is designed to help you understand what MLOps roles involve, which skills employers are hiring for, how to transition into MLOps, salary expectations in the UK, and how to land your next role using specialist platforms like MachineLearningJobs.co.uk.