Global Data Engineer

epay, a Euronet Worldwide Company
Billericay

Overview of the job:

At epay, data is at the core of everything we do. We’ve built a global team of Developers, Engineers, and Analysts who transform complex datasets into actionable insights for both internal stakeholders and external partners. As we continue to scale our commercial data services, we are looking to hire a mid-level Data Engineer (2+ years’ experience) who has worked with Azure and Databricks and can contribute immediately across pipeline development, categorisation, and optimisation for AI/ML use cases.

You’ll work with diverse datasets spanning prepaid, financial services, gambling, and payments, supporting business-critical decisions with high-quality, well-structured data. While engineering will be your focus, you’ll also collaborate across analytics and product functions and should be comfortable switching between roles to meet team goals.

This role includes occasional global travel and requires flexibility across time zones when collaborating with international teams.

 

This role is remote-based; however, regular attendance at one of three locations (Billericay, Essex; Bracknell; or Baker Street, London) is required.

The ideal candidate will also need to be able to travel globally when required.

 

Three best things about the job:

  • Be part of a high-performing team building modern, scalable data solutions used globally.
  • Work hands-on with cutting-edge Azure technologies, with a strong focus on Databricks and Python development.
  • Play a key role in evolving epay’s data architecture and ML-enablement strategies.

In the first few months, you would have:

  • Taken ownership of a data pipeline or transformation flow within Databricks and contributed to its optimisation and reliability.
  • Worked across raw and curated datasets to deliver categorised and enriched data ready for analytics and machine learning use cases.
  • Provided support to analysts and financial stakeholders to validate and improve data accuracy.
  • Collaborated with the wider team to scope, test, and deploy improvements to data quality and model inputs.
  • Brought forward best practices from your prior experience to help shape how we clean, structure, and process data.
  • Demonstrated awareness of cost, latency, and scale when deploying cloud-based data services.

The ideal candidate should understand that they are part of a team and be willing to take on different roles so the team can balance its work more effectively.

Responsibilities of the role:

  • Data Pipeline Development: Build and maintain batch and streaming pipelines using Azure Data Factory and Azure Databricks.
  • Data Categorisation & Enrichment: Structure unprocessed datasets through tagging, standardisation, and feature engineering.
  • Automation & Scripting: Use Python to automate ingestion, transformation, and validation processes (illustrated in the sketch after this list).
  • ML Readiness: Work closely with data scientists to shape training datasets, applying sound feature selection techniques.
  • Data Validation & Quality Assurance: Ensure accuracy and consistency across data pipelines with structured QA checks.
  • Collaboration: Partner with analysts, product teams, and engineering stakeholders to deliver usable and trusted data products.
  • Documentation & Stewardship: Document processes clearly and contribute to internal knowledge sharing and data governance.
  • Platform Scaling: Monitor and tune infrastructure for cost-efficiency, performance, and reliability as data volumes grow.
  • On-Call Support: Participate in an on-call rota to support the production environment, ensuring timely resolution of incidents and maintaining system stability outside standard working hours.
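
To give a flavour of the engineering work described above, here is a minimal, hypothetical sketch of a Databricks-style batch transformation with a simple categorisation step and a QA check. The table names, MCC codes, and category rules are invented for illustration and do not reflect epay’s actual data or pipelines.

```python
# Illustrative sketch only: table names, MCC codes, and category rules are hypothetical
# and do not reflect epay's actual schema. Assumes a Databricks/PySpark environment.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Ingest a hypothetical raw transactions table from the lakehouse.
raw = spark.table("raw.transactions")

# Categorise and enrich: standardise merchant names and tag each row with a product category.
curated = (
    raw.withColumn("merchant_clean", F.lower(F.trim(F.col("merchant_name"))))
    .withColumn(
        "product_category",
        F.when(F.col("mcc") == "7995", "gambling")
        .when(F.col("mcc") == "6051", "prepaid")
        .otherwise("other"),
    )
)

# Simple QA gate: refuse to publish if key identifiers are missing.
missing_ids = curated.filter(F.col("transaction_id").isNull()).count()
if missing_ids > 0:
    raise ValueError(f"QA check failed: {missing_ids} rows have no transaction_id")

# Publish the curated, analytics- and ML-ready table.
curated.write.mode("overwrite").saveAsTable("curated.transactions_categorised")
```

In practice, logic like this would typically be scheduled as a Databricks job or orchestrated from Azure Data Factory, with monitoring and cost controls around it.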

 

Requirements

What you will need:

The ideal candidate will be proactive, willing to develop and implement innovative solutions, and capable of the following:

 

Recommended:

  • 2+ years of professional experience in a data engineering or similar role.
  • Proficiency in Python, including use of libraries for data processing (e.g., pandas, PySpark); a short illustrative example follows this list.
  • Experience working with Azure-based data services, particularly Azure Databricks, Data Factory, and Blob Storage.
  • Demonstrable knowledge of data pipeline orchestration and optimisation.
  • Understanding of SQL for data extraction and transformation.
  • Familiarity with source control, deployment workflows, and working in Agile teams.
  • Strong communication and documentation skills, including translating technical work to non-technical stakeholders.
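
As a rough illustration of the Python and pandas proficiency listed above, a small cleaning-and-validation helper might look like the sketch below. The column names and rules are made up for the example and are not taken from the role.

```python
# Hypothetical example of pandas-based cleaning and validation;
# the columns and rules are illustrative only.
import pandas as pd


def clean_transactions(df: pd.DataFrame) -> pd.DataFrame:
    """Standardise merchant names, coerce amounts to numeric, drop invalid rows."""
    out = df.copy()
    out["merchant"] = out["merchant"].str.strip().str.lower()
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce")
    return out.dropna(subset=["amount"])


if __name__ == "__main__":
    sample = pd.DataFrame(
        {"merchant": ["  Acme Ltd ", "BetCo"], "amount": ["10.50", "not a number"]}
    )
    print(clean_transactions(sample))
```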

Preferred:

  • Exposure to machine learning workflows or model preparation tasks.
  • Experience working in a financial, payments, or regulated data environment.
  • Understanding of monitoring tools and logging best practices (e.g., Azure Monitor, Log Analytics).
  • Awareness of cost optimisation and scalable design patterns in the cloud.
