Data Engineer Intermediate

Sword Group
Glasgow
2 days ago
Overview

Sword is a leading provider of business technology solutions within the Energy, Public and Finance Sectors, driving transformational change for our clients. We use proven technology, specialist teams and domain expertise to build solid technical foundations across platforms, data, and business applications. We have a passion for using technology to solve business problems, working in partnership with our clients to help them achieve their goals.

About the role:

This Data Engineer role offers an excellent opportunity for someone who wants to deepen their technical expertise across the modern Microsoft data stack while taking on increased ownership and responsibility within delivery teams.

The role sits at the heart of client delivery, designing and building robust data pipelines, integrating complex data sources, and developing solutions using Azure Data Services, Microsoft Fabric, Power BI, and Purview.

As an Intermediate Engineer, you will handle more advanced engineering tasks, drive improvements to existing pipelines, and contribute to the design of scalable, high‑quality data architectures. You’ll collaborate directly with clients and cross-functional teams, translating real business needs into effective data solutions while beginning to support and mentor junior engineers.

This role is ideal for someone who wants to grow toward senior-level responsibilities, gain exposure to architectural thinking, and work with cutting-edge technologies, including Microsoft Foundry as part of emerging AI-driven capabilities. Candidates should apply if they’re looking for a role that blends hands-on engineering, problem-solving, continuous learning, and meaningful contribution to client outcomes within a high-performing Data & AI business unit.

Areas of Accountability, Responsibility and Competence Level
  • Design, build, and maintain scalable, secure, and high‑performing data pipelines using Azure Data Factory, Synapse, Fabric Pipelines, or Databricks
  • Integrate data from multiple systems into Azure Data Lake Storage or OneLake, ensuring consistency, reliability, and performance
  • Develop and maintain ELT/ETL processes using Azure-native tools and Fabric capabilities, optimising for cost and performance
  • Build and maintain Microsoft Fabric artefacts including lakehouses, warehouses, notebooks, dataflows, and semantic models
  • Perform advanced data transformations using SQL, Python, Spark, or Fabric Data Engineering experiences to prepare data for analytics and reporting
  • Manage and optimise data models used by Power BI, ensuring good performance, relationships, quality, and reusability
  • Support enterprise BI development by preparing datasets, semantic models, and reusable components within Power BI and Fabric
  • Apply data governance and security best practices by implementing Purview classification, lineage, access controls, and policies
  • Conduct data quality checks, data validation routines, and issue resolution to ensure trustworthy and reliable datasets
  • Work closely with cross-functional teams (data architects, analysts, data scientists, product teams) to understand requirements and translate them into robust data solutions
  • Engage directly with clients and stakeholders to clarify data needs, communicate progress, and explain technical concepts in a clear, accessible way
  • Take ownership of more complex engineering tasks and assist senior engineers with solution components requiring deeper technical insight
  • Mentor junior engineers by providing guidance, performing code reviews, and sharing best practices in engineering standards and Azure environments
  • Contribute to CI/CD processes, version control practices, and automated deployment pipelines for data solutions using Azure DevOps or Git-based workflows
  • Support the creation of documentation for pipelines, datasets, data models, and solution designs
  • Troubleshoot pipeline failures, performance issues, and data inconsistencies, ensuring the stability and reliability of production workloads
  • Stay current with emerging tools and capabilities across Azure, Microsoft Fabric, Power BI, Purview, and Microsoft Foundry
  • Identify opportunities to improve efficiency, automation, scalability, or maintainability across existing pipelines and solutions
  • Contribute to internal knowledge sharing, best-practice development, and engineering accelerators or templates
Benefits

At Sword, our core values and culture are based on caring about our people, investing in training and career development, and building inclusive teams where we are all encouraged to contribute to achieve success. We offer comprehensive benefits designed to support your professional development and enhance your overall quality of life. In addition to a competitive salary, here's what you can expect as part of our benefits package:

  • Personalised Career Development: We create a development plan customised to your goals and aspirations, with a range of learning and development opportunities within a culture that encourages growth.
  • Flexible working: Flexible work arrangements to support your work-life balance. We can't promise to meet every request, but we're keen to discuss your individual preferences and make it work where we can.
  • A Fantastic Benefits Package: This includes generous annual leave allowance, enhanced family friendly benefits, pension scheme, access to private health, well-being, and insurance schemes.

At Sword we are dedicated to fostering a diverse and inclusive workplace and are proud to be an equal opportunities employer, ensuring that all applicants receive fair and equal consideration for employment, regardless of whether they meet every requirement. If you don’t tick all the boxes but feel you have some of the relevant skills and experience we’re looking for, please do consider applying and highlight your transferable skills and experience. We embrace diversity in all its forms, valuing individuals regardless of age, disability, gender identity or reassignment, marital or civil partner status, pregnancy or maternity status, race, colour, nationality, ethnic or national origin, religion or belief, sex, or sexual orientation. Your perspective and potential are important to us.


