Data Engineer

Infected Blood Compensation Authority
Glasgow



Location


Glasgow or Newcastle-upon-Tyne


About the Company


Are you a highly skilled Data Engineer (SEO grade) ready to combine your technical leadership with a meaningful mission? The Infected Blood Compensation Authority (IBCA) is a new arm's-length body set up, at unprecedented pace, to administer compensation to people whose lives have been impacted by the infected blood scandal. IBCA will ensure payment is made in recognition of the wrongs experienced by those who have been infected by HIV, Hepatitis B or C, as well as those who love and care for them. Members of this community have been frustrated and distressed by the delays in achieving proper recognition, and we must help put this right. We are committed to putting the infected and affected blood community at the centre of every decision we make and every step we take to build our organisation to deliver compensation payments.


About the Role


This role will lead our data engineering capability within the engineering team of the Data Operations arm of the IBCA Data Directorate. The Data Operations team is responsible for developing and running safe, secure data solutions that provide a single source of truth for those going through their compensation journey. We are building a new data platform using Amazon Web Services (AWS), along with data management and intelligence products using Databricks, Quantexa and Tableau. We are taking a product-centric approach, treating data as a product and building squads around our products, with a focus on paying compensation seamlessly to those impacted by the infected blood scandal.


Responsibilities



  • Provide technical guidance for the development of robust, automated data pipelines and master data management processes for the data platform and products, encompassing DevSecOps best practices.
  • Ensure tools and techniques are scalable, secure, and efficient.
  • Be responsible for ensuring the right data engineering practices are embedded consistently, in line with industry best practice, within the data platform delivery teams.
  • Further develop your own data engineering and leadership skills.
  • Work with business stakeholders and across digital service teams, understanding their needs and translating them into data development.

Qualifications


Strong experience of cloud-native data engineering in AWS. Experience building and maintaining complex data pipelines (both ETL and ELT) in a rapid delivery setting. Expertise covering data quality and transformation processes, data matching, and master data management.


Required Skills



  • Experience working with structured and unstructured data and data lakes.
  • Ability to service both operational and analytical business needs.
  • Proficiency in writing clear, parameterised code in two or more of the following: Python, Databricks, Apache Spark (PySpark, Spark SQL), NoSQL, Scala.
  • Experience of delivering through Agile/DevOps working practices in multi-disciplinary teams.
  • Strong problem-solving skills, including assessing and mitigating risks while identifying opportunities for innovation.
  • Experience in the full end-to-end data lifecycle for design, build, and test.
  • Demonstrable experience of setting up data engineering processes from scratch or implementing large changes to existing processes within an organisation.

Preferred Skills



  • Familiarity with practices such as CI/CD, Scrum, and Automation.
  • Knowledge of the interactions and dependencies with data architecture, data modelling, and test engineering.
  • Experience operating within a cloud DevSecOps environment.

Additional Information


Working at IBCA gives you a huge opportunity to make an impact on those who deserve compensation. This role suits a candidate who can lead a team of data engineers to deliver solutions from the ground up, taking them from ideation to reality so that data is an enabler for everything IBCA does.


A minimum of 60% of your working time should be spent at your principal workplace; requirements to attend other locations for official business will also count towards this level of attendance.


Apply before 11:55 pm on Monday 19th January 2026


Seniority level

Associate


Employment type

Full-time


Job function

Information Technology


Industries

Government Administration; IT Services and IT Consulting

