Data Engineer

Above & Beyond - Climate Tech Recruitment
London
2 days ago

Data Engineer

Remote or Hybrid

Based in London or Nairobi (must have right to work)

London - £80,000 - £100,000

Nairobi - KES 10-15M


Above and Beyond Recruitment is proud to be partnering with ONE Data to recruit a Data Engineer to join their mission to build the world's first public finance and development data tool.


Who are we?

ONE Data is an initiative of The ONE Campaign focused on transforming how public finance and development data is accessed and used.


Our vision is a world where information asymmetries are collapsed and high-quality, evidence-based decisions lead to greater economic opportunity and healthier lives.


Our mission is to organise the world’s public finance and development data and make it universally accessible and useful - collapsing the time from raw data to actionable insight. By building open, interoperable data infrastructure and intuitive analytical tools, ONE Data strengthens transparency, accountability, and more effective investment in development.


In a system where data is fragmented, delayed, and difficult to interpret, ONE Data integrates disparate sources into trusted, policy-relevant insights that empower decision-makers, advocates, journalists, researchers, and partners globally.


The opportunity:

We are looking for a Data Engineer to help build the data infrastructure that powers ONE Data's products: its Knowledge Graph, APIs, and analytical platforms. This is a role with real ownership. You will shape foundational systems, help make architectural decisions, and see your work directly enable better policy decisions, research, and analysis.


ONE Data works with complex, fragmented public finance and development datasets, from aid flows and budget data to debt statistics and policy indicators. The Data Engineer designs the pipelines, models, and quality frameworks that transform these disparate sources into trusted, interoperable data that researchers, policymakers, and advocates can rely on.


The successful candidate will help shape a working foundation into a mature, well-documented, well-tested data platform. They will contribute to architectural decisions alongside the Senior Director for Data & Product, help establish engineering standards, and coordinate with external service providers for specialised data modelling and engineering work when the scope requires it.


You will focus on:

In the coming months, that means:

  • Building the Development Finance Observatory, designing and shipping the ETL pipelines and tools that integrate development finance datasets (e.g. OECD, IATI, World Bank, IMF, WHO) into a unified knowledge graph.
  • Scaling the Knowledge Graph, including schema design, data integration, and optimisations.
  • Developing the data quality framework, implementing provenance tracking, quality indicators, coverage metrics, and automated testing so that every data point in our systems is trustworthy and well documented.
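As an illustration of the kind of integration work involved (not ONE Data's actual code), a pipeline step might map source-specific columns from different publishers onto one shared schema and tag each record with its provenance. The column names and sample rows below are hypothetical:

```python
import pandas as pd

def normalise_indicator(df: pd.DataFrame, source: str,
                        column_map: dict) -> pd.DataFrame:
    """Map source-specific columns onto a shared schema and tag provenance."""
    out = df.rename(columns=column_map)[["country", "year", "value"]].copy()
    out["source"] = source                  # provenance tag for lineage tracking
    out["year"] = out["year"].astype(int)   # harmonise year types across sources
    return out.dropna(subset=["value"])     # drop observations with no value

# Hypothetical sample rows in two source-specific shapes
wb = pd.DataFrame({"countryiso3code": ["KEN"], "date": ["2022"], "value": [113.4]})
oecd = pd.DataFrame({"REF_AREA": ["KEN"], "TIME_PERIOD": [2022], "OBS_VALUE": [None]})

unified = pd.concat([
    normalise_indicator(wb, "world_bank",
                        {"countryiso3code": "country", "date": "year"}),
    normalise_indicator(oecd, "oecd",
                        {"REF_AREA": "country", "TIME_PERIOD": "year",
                         "OBS_VALUE": "value"}),
], ignore_index=True)
```

Keeping the per-source logic confined to a column map like this is one common way to make each new dataset a small, testable addition rather than a bespoke pipeline.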


You will also contribute to:

  • Shipping open-source data infrastructure, building pipelines and tools that the broader development data community can use and extend.
  • Designing APIs for data access, including RESTful APIs and an MCP server to provide programmatic access to our data.
  • Coordinating with specialist partners and external data engineering service providers for deep domain work like concept modelling or high-volume data integration.


Tech stack:

  • Languages: Python (pandas, httpx, sdmx, pydantic, FastAPI, FastMCP, ADK), SQL (GQL, the ISO Graph Query Language, would be a plus)
  • Cloud: Google Cloud Platform (Cloud Run, Cloud Build, BigQuery, Spanner Graph, Cloud SQL, Cloud Storage)
  • Other: DuckDB, Terraform, Git


The infrastructure runs primarily on Google Cloud Platform, with the Knowledge Graph built on Spanner through the Data Commons infrastructure, alongside BigQuery for internal analytical workloads and MySQL for supporting services.



Key responsibilities:

Data infrastructure and pipelines

  • Design, build, and maintain open-source ETL/ELT pipelines that ingest, clean, transform, and deliver development finance data from multiple sources.
  • Contribute to data modelling and schema design across ONE Data's infrastructure.
  • Help design, build and maintain APIs for structured data access, serving both internal products and external users.
  • Implement and maintain Infrastructure-as-Code for deployment, scaling, and monitoring.
  • Establish and maintain data lineage documentation across all systems.
  • Design and implement data quality frameworks, automated testing, and monitoring systems.
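The quality-framework responsibilities above might, for example, translate into small automated checks run against each pipeline's output. This is a minimal sketch; the field names and year range are assumptions, not ONE Data's actual framework:

```python
from dataclasses import dataclass

@dataclass
class QualityReport:
    total: int
    missing_value: int
    out_of_range_year: int

    @property
    def coverage(self) -> float:
        """Share of records carrying a usable value."""
        return (self.total - self.missing_value) / self.total if self.total else 0.0

def check_records(records: list, year_range=(1960, 2030)) -> QualityReport:
    """Compute simple quality indicators over a batch of indicator records."""
    missing = sum(1 for r in records if r.get("value") is None)
    bad_year = sum(1 for r in records
                   if not year_range[0] <= r.get("year", -1) <= year_range[1])
    return QualityReport(total=len(records), missing_value=missing,
                         out_of_range_year=bad_year)

report = check_records([
    {"country": "KEN", "year": 2022, "value": 113.4},
    {"country": "KEN", "year": 2023, "value": None},
])
```

Checks like these can feed coverage metrics into monitoring and fail a pipeline run in automated tests before low-quality data reaches downstream users.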


Knowledge graph and data architecture

  • Contribute to the development and evolution of ONE Data's deployment of the Data Commons Knowledge Graph on Spanner Graph, including schema design, data integration, and query optimisation.
  • Work within and extend the Data Commons infrastructure to support ONE Data's analytical and product needs.
  • Ensure interoperability and consistency across ONE Data’s systems, tools and products.


Collaboration and delivery

  • Support policy researchers, partners, and clients with data access and integration needs.
  • Help coordinate external data engineering service providers for specialised or high-volume data modelling work.
  • Participate in sprint planning, technical design reviews, and agile delivery cycles.
  • Contribute to open-source tooling and documentation.



Qualifications:

Education & Experience

  • Bachelor's degree (or higher) in computer science, data engineering, software engineering, or a related field.
  • 5+ years of experience in data engineering, back-end development, or a related technical role.
  • Experience working with open data, public finance, or international development datasets, including navigating the challenges of fragmented sources, inconsistent standards, and incomplete coverage that characterise this domain.
  • Experience contributing to data infrastructure decisions, with a desire to grow into architectural ownership.


Technical Expertise

  • Strong Python and SQL expertise for data engineering.
  • Experience designing and building scalable ETL/ELT pipelines and data architectures.
  • Experience with Google Cloud Platform services (BigQuery, Cloud Storage, Spanner, Cloud Run, etc).
  • Experience with API design and development for data access.
  • Familiarity with Infrastructure-as-Code (Terraform or similar), or willingness to learn.
  • Familiarity with graph databases or knowledge graph technologies is strongly preferred; willingness to learn and develop expertise in this area is essential.
  • Familiarity with data quality frameworks, automated testing, and monitoring.
  • Strong understanding of data modelling, schema design, and data governance principles.


Other attributes and culture fit:

  • Commitment to ONE Data's mission of making public finance and development data universally accessible and useful.
  • Belief that well-engineered data infrastructure is a public good.
  • Ability to operate effectively within a global matrix organisation.
  • Highly organised, analytical and self-motivated.
  • Collaborative mindset with strong interpersonal skills.
  • Comfortable navigating ambiguity and fast-moving priorities.
  • Remains positive under pressure and in high-stakes environments.
  • Independent problem solver with sound judgement.
  • Action-oriented and results focused.
  • Flexible and resourceful approach to delivery.
  • Commitment to transparency, accountability and equity in development.


Languages:

Fluency in English required. Proficiency in additional languages relevant to ONE’s work (such as French or German) is a plus.


Travel:

This role may include occasional domestic and international travel (up to 10%) to attend partner meetings, conferences, or team convenings.


Work environment:

Hybrid or remote work environment depending on location. Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.



ONE is an equal opportunity employer and does not discriminate in its selection and employment practices. All qualified applicants will receive consideration without regard to race, color, religion, sex, national origin, political affiliation, sexual orientation, gender identity, marital status, disability, protected veteran status, genetic information, age, or other legally protected characteristics.
