Senior Data Engineer

RES
Glasgow
4 days ago
Applications closed


Description

Do you want to work to make Power for Good?


We're the world's largest independent renewable energy company. We're driven by a simple yet powerful vision: to create a future where everyone has access to affordable, zero carbon energy.


We know that achieving our ambitions would be impossible without our people. Because we're tackling some of the world's toughest problems, we need the very best people to help us. They're our most important asset, which is why we continually invest in them.


RES is a family with a diverse workforce, and we are dedicated to the personal and professional growth of our people, no matter what stage of their career they're at. We can promise you rewarding work which makes a real impact, the chance to learn from inspiring colleagues across a growing, global network, and opportunities to grow personally and professionally.


Our competitive package offers rewards and benefits including pension schemes, flexible working, and a top-down emphasis on better work-life balance. We also offer private healthcare, discounted green travel, 25 days' holiday with options to buy/sell days, enhanced family leave and four volunteering days per year so you can make a difference somewhere else.


The Position

We are looking for a Senior Data Engineer with advanced expertise in Databricks to lead the development of scalable data solutions across our asset performance management software, within our Digital Solutions business.


This role involves architecting complex data pipelines, mentoring junior engineers, and driving best practices in data engineering and cloud analytics. You will play a key role in shaping our data strategy, which is the backbone of our software, and in enabling high-impact analytics and machine learning initiatives.


Accountabilities

  • Design and implement scalable, high-performance data pipelines (a minimal example sketch follows this list).
  • Work with the lead cloud architect on the design of data lakehouse solutions leveraging Delta Lake and Unity Catalog.
  • Collaborate with cross-functional teams to define data requirements, governance standards, and integration strategies.
  • Champion data quality, lineage, and observability through automated testing, monitoring, and documentation.
  • Mentor and guide junior data engineers, using your passion for data engineering to foster a culture of technical excellence and continuous learning.
  • Drive the adoption of CI/CD and DevOps practices for data engineering workflows.
  • Stay ahead of emerging technologies and Databricks platform updates, evaluating their relevance and impact.
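
By way of illustration, a minimal sketch of the kind of pipeline referenced in the first accountability could look like the PySpark batch job below, which cleans raw readings and appends them to a partitioned Delta table. The catalog, table, and column names are hypothetical, not taken from our platform.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    # Read raw sensor readings (table and column names are illustrative).
    raw = spark.read.table("main.raw.turbine_readings")

    cleaned = (
        raw
        .filter(F.col("reading_ts").isNotNull())       # drop rows with no timestamp
        .withColumn("reading_date", F.to_date("reading_ts"))
        .dropDuplicates(["turbine_id", "reading_ts"])  # de-duplicate replayed events
    )

    # Append to a partitioned Delta table for downstream analytics.
    (cleaned.write
        .format("delta")
        .mode("append")
        .partitionBy("reading_date")
        .saveAsTable("main.silver.turbine_readings"))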

Knowledge

  • Deep understanding of distributed data processing, data lakehouse architecture, and cloud-native data platforms.
  • Optimization of data workflows for performance, reliability, and cost-efficiency on cloud platforms (particularly Azure; experience with AWS and/or GCP would be beneficial).
  • Strong knowledge of data modelling, warehousing, and governance principles.
  • Knowledge of data privacy and compliance standards (e.g., GDPR, HIPAA).
  • Understanding of OLTP and OLAP workloads, and the scenarios in which to deploy each.
  • Understanding of incremental processing patterns (see the sketch after this list).
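
As a concrete example of the incremental processing patterns mentioned above, the sketch below upserts a batch of changed rows into a Delta table with a MERGE, so only new or modified data is processed rather than the whole table being rebuilt. Table and column names are hypothetical.

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Incoming batch of new and changed rows (staging table name is hypothetical).
    updates = spark.read.table("main.staging.turbine_readings_updates")

    target = DeltaTable.forName(spark, "main.silver.turbine_readings")

    # Upsert: update rows that already exist in the target, insert the rest.
    (target.alias("t")
        .merge(updates.alias("s"),
               "t.turbine_id = s.turbine_id AND t.reading_ts = s.reading_ts")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())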

Skills

  • Strong proficiency in Python and SQL. Experience working with Scala would be beneficial.
  • Proven ability to design and optimize large-scale ETL/ELT pipelines.
  • Building and managing pipeline orchestrations (a sketch follows this list).
  • Excellent oral and written communication, both within the team and with our stakeholders.
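
For the orchestration point above, a minimal sketch using the databricks-sdk Python package might define a two-task job as below; the notebook paths and cluster id are placeholders, and a real deployment would more likely be defined in a Databricks Asset Bundle under version control.

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import jobs

    w = WorkspaceClient()  # reads workspace credentials from the environment

    # Two dependent tasks: ingest, then transform (paths are placeholders).
    job = w.jobs.create(
        name="nightly-asset-data-refresh",
        tasks=[
            jobs.Task(
                task_key="ingest",
                notebook_task=jobs.NotebookTask(notebook_path="/Pipelines/ingest"),
                existing_cluster_id="<cluster-id>",  # placeholder
            ),
            jobs.Task(
                task_key="transform",
                depends_on=[jobs.TaskDependency(task_key="ingest")],
                notebook_task=jobs.NotebookTask(notebook_path="/Pipelines/transform"),
                existing_cluster_id="<cluster-id>",  # placeholder
            ),
        ],
    )
    print(f"Created job {job.job_id}")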

Experience

  • 5+ years of experience in data engineering, with at least 2 years working extensively with Databricks and orchestrated pipelines, such as dbt, Delta Live Tables (DLT), or Databricks Workflows jobs.
  • Experience with Delta Lake and Unity Catalog in production environments.
  • Experience with CI/CD tools and version control systems (e.g., Git, GitHub Actions, Azure DevOps, Databricks Asset Bundles).
  • Experience with both batch and real-time streaming data processing (a streaming sketch follows this list).
  • Experience working on machine learning workflows and integration with data pipelines.
  • Experience leading data engineering projects with distributed teams, ideally in a cross-functional environment.
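
To illustrate the streaming point above, a minimal Structured Streaming job on Databricks could look like the sketch below. Run with trigger(availableNow=True), the same code doubles as an incremental batch job; the table names and checkpoint path are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    # Incrementally read newly arrived rows from a Delta table as a stream.
    events = spark.readStream.table("main.raw.events")

    parsed = events.withColumn("event_date", F.to_date("event_ts"))

    # availableNow processes everything outstanding and then stops, so the
    # job can be scheduled like a batch while keeping streaming semantics.
    (parsed.writeStream
        .option("checkpointLocation", "/Volumes/main/ops/checkpoints/events")
        .trigger(availableNow=True)
        .toTable("main.silver.events"))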

Qualifications

  • Databricks Certified Data Engineer Professional or equivalent certification.

At RES we celebrate difference as we know it makes our company a great place to work. Encouraging applicants with different backgrounds, ideas and points of view, we create teams who work together to solve complex problems and design practical solutions for our clients. Our multiple perspectives come from many sources including the diverse ethnicity, culture, gender, nationality, age, sex, sexual orientation, gender identity and expression, disability, marital status, parental status, education, social background and life experience of our people.


