Data Engineer (SEO)

Infected Blood Compensation Authority
Glasgow
4 days ago

The Infected Blood Compensation Authority (IBCA) is a new arm’s-length body set up, at unprecedented pace, to administer compensation to people whose lives have been impacted by the infected blood scandal.


IBCA will ensure payment is made in recognition of the wrongs experienced by those who have been infected by HIV, Hepatitis B or C, as well as those who love and care for them. This community has been frustrated and distressed by the delays in achieving proper recognition, and we must help put this right.


We are committed to putting the infected and affected blood community at the centre of every decision we make and every step we take to build our organisation to deliver compensation payments.


IBCA employees will be public servants. If successful in this role, you will be appointed directly into IBCA as a public servant, on IBCA terms and conditions.


Successful applicants will join the Civil Service Pension Scheme.


Please note that the mission of IBCA means that it is likely to be operational for a period of approximately 5 to 7 years. When IBCA’s work begins to wind down, IBCA employees will receive support and practical guidance to find a new role, whether in the Civil Service, another arm’s-length body (ALB), or an external employer.


The Infected Blood Compensation Authority (IBCA) is responsible for delivering the compensation scheme, long awaited by the infected blood community, that provides financial compensation to victims of infected blood on a UK-wide basis. This role will lead our data engineering capability within the engineering team of the Data Operations arm of the IBCA Data Directorate.


The Data Operations team is responsible for developing and running safe and secure data solutions that provide a single source of truth for those going through their compensation journey. We are building a new data platform on Amazon Web Services (AWS), with data management and intelligence products built on Databricks, Quantexa and Tableau. We are taking a product-centric approach, treating data as a product and building squads around our products, with a focus on paying compensation seamlessly to those impacted by the infected blood scandal.


You will focus on building robust data pipelines and data management processes (including master data management), ensuring data quality, and driving a culture of data-driven decision-making. Working at IBCA gives you a huge opportunity to make an impact for those who deserve compensation, and this role suits a candidate who can lead a team of data engineers to deliver solutions from the ground up, taking them from ideation to reality so that data is an enabler of everything IBCA does.


As a Data Engineer, you will join a multidisciplinary team to design, build, and deliver high-impact, scalable AWS-based data solutions using Databricks and Quantexa technologies, while also driving data quality, mentoring junior colleagues, and fostering a strong data engineering community.
You will:



  • Provide technical guidance for the development of robust, automated data pipelines and master data management processes for our data platform and products, encompassing DevSecOps best practices
  • Ensure tools and techniques are scalable, secure and efficient
  • Be responsible for ensuring that the right data engineering practices are embedded consistently, and to industry best-practice standards, within the data platform delivery teams
  • Further develop your own data engineering and leadership skills
  • Work with business stakeholders and across digital service teams, understanding their needs and translating them into data solutions.

Responsibilities

  • Data Solution Delivery: Designing, building, and delivering high-performance, scalable, and secure data solutions in a complex data environment using AWS, Databricks and Quantexa. This includes building robust ETL/ELT pipelines (a brief sketch follows this list) and ensuring seamless integration between platforms.
  • Collaboration & Partnership: Working extensively with multidisciplinary teams across product delivery, architecture, engineering, security and analytics to understand and meet data needs.
  • Quality & Operational Excellence: Driving high build and data quality, ensuring stability, robustness, and resilience of products, and embedding Agile, CI/CD, and DevOps practices.
  • Leadership & Community Building: Mentoring junior team members, fostering professional development, building a data engineering community, reviewing solutions, troubleshooting complex issues and proactively driving innovation and challenging existing methods.
  • Stakeholder Engagement & Problem Solving: Communicating complex data solutions and championing innovative approaches to diverse stakeholders, while also anticipating and resolving intricate data engineering challenges and building consensus to align data designs with organisational goals.
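The advert names AWS, Databricks and Quantexa but does not publish IBCA's actual pipeline code or schemas, so the following is a minimal, hypothetical sketch of the kind of parameterised PySpark ETL step the Data Solution Delivery bullet describes. Every path, table and column name is an invented illustration, not IBCA's real platform.

```python
# Hypothetical sketch only: table, column and path names are invented
# illustrations and do not reflect IBCA's actual platform or schemas.
from pyspark.sql import SparkSession, functions as F


def run_claims_pipeline(spark: SparkSession, source_path: str, target_table: str) -> None:
    """Load raw records, apply basic quality rules, and write a curated table."""
    raw = spark.read.format("parquet").load(source_path)

    # Simple data-quality gate: drop rows missing the primary identifier,
    # then collapse duplicates for downstream master data management.
    curated = (
        raw.dropna(subset=["record_id"])
           .dropDuplicates(["record_id"])
           .withColumn("ingested_at", F.current_timestamp())
    )

    curated.write.mode("overwrite").saveAsTable(target_table)


if __name__ == "__main__":
    spark = SparkSession.builder.appName("claims-etl").getOrCreate()
    # Both arguments are parameterised so the same step can run against
    # different environments (the bucket and table below are placeholders).
    run_claims_pipeline(spark, "s3://example-bucket/raw/claims/", "curated.claims")
```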

Person specification

  • Strong experience of cloud-native data engineering in AWS, and of building and maintaining complex data pipelines (both ETL and ELT) in a rapid-delivery setting, covering data quality and transformation processes, data matching and master data management.
  • Experience working with structured and unstructured data, and data lakes to service operational and analytical business needs.
  • Proficiency in writing clear, parameterised code in two or more of the following – Python, Databricks, Apache Spark (PySpark, Spark SQL), NoSQL, Scala.
  • Experience of delivering through Agile/DevOps working practices in multi-disciplinary teams – CI/CD, Scrum, Automation.
  • Strong problem-solving skills, including assessing and mitigating risks while identifying opportunities for innovation.
  • Experience in the full end-to-end data lifecycle for design, build and test, with knowledge of the interactions and dependencies with data architecture, data modelling and test engineering (a brief test sketch follows this list).
  • Demonstrable experience of setting up data engineering processes from scratch or of implementing large changes to existing processes within an organisation that operate in a cloud DevSecOps environment.
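To make the CI/CD, automation and test expectations above concrete, here is a minimal, hypothetical pytest-style unit test for the quality-gate logic in the earlier sketch. The function and data are illustrative assumptions rather than anything taken from IBCA.

```python
# Hypothetical sketch only: the transformation under test mirrors the
# invented quality gate from the earlier pipeline example.
from pyspark.sql import SparkSession


def apply_quality_gate(df):
    """Drop rows without a record_id and de-duplicate on it."""
    return df.dropna(subset=["record_id"]).dropDuplicates(["record_id"])


def test_quality_gate_removes_nulls_and_duplicates():
    spark = SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    rows = [("a", 1), ("a", 2), (None, 3)]
    df = spark.createDataFrame(rows, ["record_id", "value"])

    result = apply_quality_gate(df)

    # The null row is dropped and the duplicate collapsed: one row remains.
    assert result.count() == 1
    assert result.first()["record_id"] == "a"
```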

Additional information

A minimum of 60% of your working time should be spent at your principal workplace, although time spent at other locations for official business will also count towards this attendance requirement.


Behaviours

We'll assess you against these behaviours during the selection process:



  • Changing and Improving
  • Communicating and Influencing

Technical skills

We'll assess you against these technical skills during the selection process:



  • Data analysis and synthesis

Apply before 11:55 pm on Monday 19th January 2026

