Data Engineer, Unified Platform

P2P
City of London

DRW is a diversified trading firm with over three decades of experience bringing sophisticated technology and exceptional people together to operate in markets around the world. We value autonomy and the ability to quickly pivot to capture opportunities, so we operate using our own capital and trade at our own risk.


Headquartered in Chicago with offices throughout the U.S., Canada, Europe, and Asia, we trade a variety of asset classes including Fixed Income, ETFs, Equities, FX, Commodities and Energy across all major global markets. We have also leveraged our expertise and technology to expand into three non-traditional strategies: real estate, venture capital and cryptoassets.


We operate with respect, curiosity and open minds. The people who thrive here share our belief that it's not just what we do that matters, but how we do it. DRW is a place of high expectations, integrity, innovation and a willingness to challenge consensus.


As a Data Engineer on our Data Experience team, you will play an integral role in bringing vendor datasets into our data platform, governing our centralized data pipelines, supporting rapid data product development, and working alongside individual Traders, Quantitative Researchers, and Back-Office personnel to best utilize the firm’s data and platform tools.


Technical Requirements Summary
  • Have experience designing and building data pipelines

  • Have experience working within modern batch or streaming data ecosystems

  • Are an expert in SQL and have experience in Java or Python

  • Can apply data modeling techniques (illustrated in the sketch after this list)

  • Able to own the delivery of data products, working with analysts and stakeholders to understand requirements and implement solutions

  • Able to contribute to project management and project reporting
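
This summary is tool-agnostic, so purely as an illustration, here is a minimal sketch of the SQL-plus-Python work it describes: a raw dataset reshaped into a simple daily model, driven from Python. DuckDB is used only as a stand-in engine, and the table and column names (raw_trades, symbol, ts, quantity, price) are hypothetical.

```python
# A minimal, hypothetical sketch; DuckDB stands in for whatever engine the
# platform actually uses, and all table/column names are invented.
import duckdb

con = duckdb.connect()  # in-memory database

# Stand-in for a raw vendor dataset landed by an ingestion pipeline.
con.execute("""
    CREATE TABLE raw_trades AS
    SELECT * FROM (VALUES
        ('ACME', TIMESTAMP '2024-01-02 09:30:00', 100, 10.50),
        ('ACME', TIMESTAMP '2024-01-02 10:15:00', 200, 10.75),
        ('GLOB', TIMESTAMP '2024-01-02 09:45:00',  50, 99.10)
    ) AS t(symbol, ts, quantity, price)
""")

# A simple modeled output: one row per symbol per day, a common shape for
# downstream research and reporting consumers.
rows = con.execute("""
    SELECT symbol,
           CAST(ts AS DATE)      AS trade_date,
           SUM(quantity * price) AS notional,
           COUNT(*)              AS trade_count
    FROM raw_trades
    GROUP BY symbol, CAST(ts AS DATE)
    ORDER BY symbol
""").fetchall()

for row in rows:
    print(row)  # e.g. ('ACME', datetime.date(2024, 1, 2), 3200.0, 2)
```

Keeping the business logic in SQL keeps it portable across engines; Python only drives execution and orchestration around it.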

What you will do in this role:
  • Help model, build, and manage data products built atop DRW’s Unified Data Platform.

  • Work closely with Data Strategists to determine appropriate data sources and implement processes to onboard and manage new data sources for trading, research, and back-office purposes.

  • Contribute to data governance processes that enable discovery, cost-sharing, usage tracking, access controls, and quality control of datasets to address the needs of DRW trading teams and strategies.

  • Continually monitor ingestion pipelines to ensure the stability, reliability, and quality of the data, and contribute to the monitoring and quality control software and processes (a sketch of one such check follows this list).

  • Own the technical aspects of vendor ingestion pipelines: coordinating with vendor relationship managers on upcoming changes, performing routine data operations without disrupting internal users, and contributing to the team's on-call rotation to respond to unanticipated changes.

  • Rapidly respond to user requests, identifying platform gaps and self-service opportunities that make the user experience more efficient.
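
As one concrete (and deliberately simplified) illustration of the monitoring and quality control work above, the sketch below validates a single landed Parquet file before it is exposed to internal users. The file path, required columns, and row-count threshold are assumptions for the example; a production version would feed alerting and dashboards rather than print.

```python
# A hypothetical quality gate for an ingestion pipeline: validate one landed
# Parquet file before exposing it to internal users. All names are invented.
import pyarrow.parquet as pq

def check_ingested_file(path: str, required_columns: set[str],
                        min_rows: int = 1) -> list[str]:
    """Return a list of human-readable quality issues; empty means pass."""
    issues: list[str] = []
    table = pq.read_table(path)

    missing = required_columns - set(table.column_names)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
    if table.num_rows < min_rows:
        issues.append(f"row count {table.num_rows} below minimum {min_rows}")
    for name in sorted(required_columns & set(table.column_names)):
        nulls = table.column(name).null_count
        if nulls:
            issues.append(f"column {name!r} has {nulls} null values")
    return issues

# In a real pipeline this would feed monitoring and alerting, not print.
for issue in check_ingested_file("trades.parquet", {"symbol", "ts", "price"}):
    print("QUALITY:", issue)
```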

What you will need in this role:
  • 3+ years of experience working with modern data technologies and/or building data-first products.

  • Excellent written and verbal communication skills.

  • Proven ability to work in a collaborative, agile, fast-paced setting, prioritizing multiple tasks and projects and efficiently handling the demands of a trading environment.

  • Proven ability to deliver rapid results within processes that span multiple stakeholders.

  • Strong technical problem-solving skills.

  • Extensive familiarity with SQL and Java or Python, with a proven ability to develop and deliver maintainable data transformations for production data pipelines.

  • Experience leveraging data modeling techniques and the ability to articulate the trade-offs of different approaches.

  • Experience with one or more data processing technologies (e.g., Flink, Spark, Polars, Dask).

  • Experience with multiple data storage technologies (e.g., S3, RDBMS, NoSQL, Delta/Iceberg, Cassandra, ClickHouse, Kafka) and knowledge of their associated trade-offs.

  • Experience with multiple data formats and serialization systems (e.g., Arrow, Parquet, Protobuf/gRPC, Avro, Thrift, JSON).

  • Experience managing data pipeline orchestration systems (e.g., Kubernetes, Argo Workflows, Airflow, Prefect, Dagster).

  • Proven experience in managing the operational aspects of large data pipelines, such as backfilling datasets, rerunning batch jobs, and handling dead-letter queues (see the sketch after this list).

  • Prior experience triaging issues surfaced by data quality control processes and correcting data gaps and inaccuracies.
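
To make the operational items above concrete, here is a hedged sketch of one such pattern, dead-letter handling, using the confluent-kafka client. The topic names (vendor.raw, vendor.raw.dlq) and the process() stub are invented for illustration: the point is that a record which fails processing is parked on a side topic with its error attached, so the pipeline keeps flowing and the record can be replayed after a fix.

```python
# Hypothetical dead-letter pattern: failed records are routed to a side topic
# with the failure reason attached, instead of blocking the main pipeline.
# Topic names and process() are stand-ins, not any specific system at DRW.
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "vendor-ingest",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["vendor.raw"])

def process(record: dict) -> None:
    """Stand-in for the real transformation; raises on bad input."""
    if "symbol" not in record:
        raise ValueError("record missing 'symbol'")

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        try:
            process(json.loads(msg.value()))
        except Exception as exc:
            # Park the raw bytes plus the failure reason for later replay.
            producer.produce("vendor.raw.dlq", value=msg.value(),
                             headers=[("error", str(exc).encode())])
            producer.poll(0)  # serve delivery callbacks
finally:
    producer.flush()
    consumer.close()
```

Replaying the dead-letter topic after a fix then becomes an ordinary, bounded batch job rather than an emergency.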

For more information about DRW's processing activities and our use of job applicants' data, please view our Privacy Notice at https://drw.com/privacy-notice.


California residents, please review the California Privacy Notice for information about certain legal rights at https://drw.com/california-privacy-notice.

