Senior Data Engineer

Just Eat Takeaway.com
Bristol
4 days ago
Applications closed

Overview

Location: Open to both Bristol & London

Ready for a challenge?

The Experimentation Platform team is dedicated to supporting the business by operating JET's internal feature management and experimentation platform, JetFM. This involves processing vast amounts of experiment data and thoroughly analysing and interpreting the results. Your primary goal in this Senior Python Data Engineer role is to help scale the use and scope of this state-of-the-art experimentation platform, expanding experimentation throughout the organisation. Automation and making experimentation fully self-serve are key objectives: reducing the current complexity and learning curve for users will drive a greater volume of experiments and the associated data processing. You will collaborate closely with other engineers, data scientists, and analysts as part of a broader engineering community.

The day-to-day work includes developing complex data pipelines in Python that operate on massive amounts of data in BigQuery. The role requires a versatile engineering skillset beyond traditional data engineering: evolving backend APIs, productionising statistical methodologies at scale, integrating with other platforms, or building data tools as required.
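
For a flavour of what this looks like in practice, below is a minimal, hypothetical sketch of a pipeline step that aggregates experiment assignments in BigQuery using the google-cloud-bigquery client. The project, table, and column names are illustrative assumptions rather than details of JetFM or its data model.

```python
# Minimal illustrative sketch only: the project, table, and column names below
# are hypothetical assumptions, not details of the actual platform.
from google.cloud import bigquery


def summarise_assignments(project_id: str, table: str) -> list[dict]:
    """Aggregate assignment counts per experiment variant from a BigQuery table."""
    client = bigquery.Client(project=project_id)
    sql = f"""
        SELECT experiment_id, variant, COUNT(*) AS assignments
        FROM `{table}`
        GROUP BY experiment_id, variant
    """
    rows = client.query(sql).result()  # blocks until the query completes
    return [dict(row) for row in rows]


if __name__ == "__main__":
    # Hypothetical identifiers, included only to keep the sketch self-contained.
    for summary in summarise_assignments(
        project_id="my-gcp-project",
        table="my-gcp-project.experiments.assignments",
    ):
        print(summary)
```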

Responsibilities
  • Design, develop, and maintain reliable and scalable data engineering solutions within Google Cloud Platform (GCP).
  • Work collaboratively, prioritising teamwork and stakeholder value to achieve collective goals.
  • Advocate for building future-proof solutions for long-term impact.
  • Spread engineering skills and best practices within the team and wider engineering community.
  • Work cross-functionally with other Platform Engineering teams to resolve issues and standardise practices.
  • Continuously improve & maintain robust infrastructure, CI/CD processes, and monitoring solutions for the experimentation platform.
  • Integrate the experimentation platform with new data sources and develop data flows for processing and transforming data.
  • Extend the experimentation platform with efficient reporting solutions and cloud APIs that deliver experiment results to stakeholders.
  • Engineer a metrics library solution in the data warehouse to enable stakeholders to self-serve experimentation metrics.
  • Collaborate with data scientists to implement new methodological improvements to the statistical experimentation engine in a scalable and future-proof manner.
What will you bring to the team?
  • Dedication to data engineering, with hands-on experience of tools such as Google Vertex AI Pipelines, Airflow, and dbt.
  • Proficiency in Python for engineering applications.
  • Experience with setting up, deploying, and managing cloud infrastructure using Infrastructure as Code (Terraform).
  • Strong application of engineering best practices across the product development lifecycle, including automated testing, CI/CD, and code reviews.
  • Comfortable working with various technologies across the software and data engineering stack, including Airflow, Vertex AI, Kubernetes, Docker, GitHub Actions, Jenkins, Google Cloudbuild, Prometheus, and Grafana.
  • Solid experience in cloud data storage, with particular expertise in Google BigQuery (GBQ) and object storage such as GCS/S3.
  • Demonstrable ability to produce high-quality engineering solutions free of technical debt, with a passion for maintaining high standards.
  • An excellent team player, capable of working collaboratively, communicating clearly, and providing/receiving feedback.
  • Ability to confidently write elegant, consistent, and maintainable source code with minimal supervision.
  • A working understanding of experimentation methodologies, such as the statistical evaluation of A/B tests (a minimal illustrative sketch follows this list).
  • A caring attitude towards the personal and professional development of the wider team, nurturing a collaborative and dynamic culture.
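
As an illustration of the statistical evaluation mentioned above, here is a minimal, standard-library-only sketch of a two-proportion z-test for a conversion-rate A/B test. Production experimentation engines involve far more (sequential testing, variance reduction, multiple metrics), and the counts used here are invented, so treat this purely as a sketch of the underlying statistics.

```python
# Illustrative sketch only: a from-scratch two-sided two-proportion z-test for a
# conversion-rate A/B test, using just the Python standard library.
from statistics import NormalDist


def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for conversion counts of two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null hypothesis
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p-value
    return z, p_value


if __name__ == "__main__":
    # Hypothetical data: 480/10,000 conversions in control vs 540/10,000 in treatment.
    z, p = two_proportion_z_test(480, 10_000, 540, 10_000)
    print(f"z = {z:.3f}, p = {p:.4f}")
```
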
At JET, this is on the menu

Our teams forge connections internally and work with some of the best-known brands on the planet, giving us truly international impact in a dynamic environment.

Fun, fast-paced and supportive, the JET culture is about movement, growth, and celebrating every aspect of our JETers. Thanks to them, we stay one step ahead of the competition.

Inclusion, Diversity & Belonging

No matter who you are, what you look like, who you love, or where you are from, you can find your place at Just Eat Takeaway.com. We’re committed to creating an inclusive culture, encouraging diversity of people and thinking, in which all employees feel they truly belong and can bring their most colourful selves to work every day.

What else is cooking?

Want to know more about our JETers, culture or company? Have a look at our careers site where you can find people stories, blogs, podcasts and more JET morsels.

Are you ready to join the team? Apply now!
