Azure Data Engineer (Databricks)

London
1 week ago
Applications closed


Joining Capco means joining an organisation that is committed to an inclusive working environment where you're encouraged to #BeYourselfAtWork. We celebrate individuality and recognise that diversity and inclusion, in all forms, is critical to success. It's important to us that we recruit and develop as diverse a range of talent as we can, and we believe that everyone brings something different to the table – so we'd love to know what makes you different. Such differences may mean we need to make changes to our process to allow you the best possible platform to succeed, and we are happy to cater to any reasonable adjustments you may require. You will find the section to let us know of these at the bottom of your application form, or you can mention it directly to your recruiter at any stage and they will be happy to help.

Why Join Capco?

Capco is a global technology and business consultancy focused on the financial services sector. We are passionate about helping our clients succeed in an ever-changing industry. You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry.

We are/have:

- Experts in banking and payments, capital markets and wealth and asset management
- Deep knowledge of financial services offerings, including e.g. Finance, Risk and Compliance, Financial Crime, Core Banking etc.
- Committed to growing our business and hiring the best talent to help us get there
- Focused on maintaining our nimble, agile and entrepreneurial culture

As a Data Engineer at Capco you will:

- Work alongside clients to interpret requirements and define industry-leading solutions
- Design and develop robust, well-tested data pipelines
- Demonstrate and help clients adhere to best practices in engineering and the SDLC
- Have excellent knowledge of building event-driven, loosely coupled distributed applications
- Have experience in developing both on-premise and cloud-based solutions
- Possess a good understanding of key security technologies and protocols, e.g. TLS, OAuth, encryption
- Support internal Capco capabilities by sharing insight, experience and credentials

Why Join Capco as a Data Engineer?

- You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry.
- You'll be part of a digital engineering team that develops new and enhances existing financial and data solutions, with the opportunity to work on exciting greenfield projects as well as on established Tier 1 bank applications adopted by millions of users.
- You'll be involved in digital and data transformation processes through a continuous delivery model.
- You will work on automating and optimising data engineering processes, developing robust and fault-tolerant data solutions for both cloud and on-premise deployments.
- You'll be able to work across different data, cloud and messaging technology stacks.
- You'll have an opportunity to learn and work with specialised data and cloud technologies to widen your skill set.

Skills & Expertise

You will have experience working with some of the following methodologies/technologies.

Required Skills

- Hands-on working experience of the Databricks platform. Must have experience of delivering projects which use Delta Lake, orchestration, Unity Catalog and Spark Structured Streaming on Databricks.
- Extensive experience using Python, PySpark and the Python ecosystem, with good exposure to Python libraries.
- Experience with Big Data technologies and distributed systems such as Hadoop, HDFS, Hive, Spark, Databricks, Cloudera.
- Experience developing near real-time event streaming pipelines with tools such as Kafka, Spark Streaming, Azure Event Hubs.
- Excellent experience in the data engineering lifecycle, having created data pipelines which take data through all layers from generation, ingestion and transformation to serving.
- Experience of modern software engineering principles and of creating well-tested, clean applications.
- Experience with Data Lakehouse architecture and data warehousing principles, data modelling, schema design, and working with semi-structured and structured data.
- Proficient in SQL, with a good understanding of the differences and trade-offs between SQL and NoSQL, and between ETL and ELT.
- Proven experience in DevOps and building robust production data pipelines, with CI/CD pipelines on e.g. Azure DevOps, Jenkins, CircleCI, GitHub Actions etc.

Desirable Skills

- Experience developing in other languages, e.g. Scala/Java.
- Enthusiasm and ability to pick up new technologies as needed to solve problems.
- Exposure to working with PII and sensitive data, and an understanding of data regulations such as GDPR.
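To illustrate the "well-tested data pipelines" and data engineering lifecycle points above, here is a minimal, hedged sketch of a typical cleansing step between raw (bronze) and curated (silver) layers. It is written in plain Python so the logic is unit-testable in isolation; in practice this would be a PySpark transformation writing to a Delta Lake table on Databricks, and all function and field names here are illustrative assumptions, not part of the role description.

```python
from datetime import datetime, timezone

# Illustrative only: a bronze -> silver cleansing step in plain Python.
# Required fields are an assumed schema for this sketch.
REQUIRED_FIELDS = {"event_id", "event_time", "amount"}


def to_silver(raw_events):
    """Validate, deduplicate and normalise raw (bronze) events.

    Drops records missing required fields, parses ISO-8601 timestamps into
    timezone-aware datetimes, and keeps only the latest record per event_id.
    """
    latest = {}
    for event in raw_events:
        if not REQUIRED_FIELDS <= event.keys():
            continue  # drop (or quarantine) malformed records
        ts = datetime.fromisoformat(event["event_time"])
        if ts.tzinfo is None:
            ts = ts.replace(tzinfo=timezone.utc)  # assume UTC when untagged
        cleaned = {**event, "event_time": ts, "amount": float(event["amount"])}
        key = event["event_id"]
        if key not in latest or ts > latest[key]["event_time"]:
            latest[key] = cleaned  # later duplicate wins
    return list(latest.values())


raw = [
    {"event_id": "a", "event_time": "2024-01-01T10:00:00", "amount": "10.5"},
    {"event_id": "a", "event_time": "2024-01-01T11:00:00", "amount": "12.0"},
    {"event_id": "b", "event_time": "2024-01-01T09:00:00"},  # missing amount
]
silver = to_silver(raw)
print(len(silver), silver[0]["amount"])  # -> 1 12.0
```

On Databricks the same logic would typically be expressed with PySpark operations (e.g. a window over `event_id` ordered by timestamp, or a Delta `MERGE`), but keeping the core rules in a small, pure function like this is one common way to meet the "well-tested, clean applications" requirement.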

