We’re looking for an experienced contract Data Engineer to join our innovative, fast-growing data team and help build the next generation of cloud data architecture on Microsoft Azure. You'll be at the forefront of designing scalable, secure, and modern data platforms using cutting-edge Azure services and the Databricks unified analytics platform.
This is a hands-on engineering role, perfect for passionate data professionals who thrive on building high-performance data infrastructure and solving complex big data challenges in cloud-native environments. The role is remote-based but may require occasional site visits, and all work must be conducted from the UK.
What You'll Be Doing
Data Architecture & Engineering
Design and implement enterprise-scale, robust data solutions using Azure and Databricks
Architect scalable ELT/ETL pipelines using Azure Data Factory, Azure Synapse Analytics, and Databricks
Build automated data ingestion processes from various sources including streaming and batch data
Develop and maintain data transformation workflows using PySpark, Scala, and SQL on Databricks (an illustrative sketch follows this list)
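For context, here is a minimal sketch of the kind of batch transformation workflow this involves, written in PySpark. The storage paths and column names are assumptions made for illustration only, not a description of our estate.

```python
# Illustrative PySpark batch transformation; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-transform").getOrCreate()

# Read raw JSON landed in Azure Data Lake Storage
raw = spark.read.json("abfss://landing@examplelake.dfs.core.windows.net/orders/")

# Clean: de-duplicate, cast the timestamp, drop invalid rows
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

# Write curated Delta output for downstream consumers
(cleaned.write
    .format("delta")
    .mode("overwrite")
    .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```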
Data Platform & Optimization
Design and optimize Databricks clusters for performance, cost management, and resource utilization
Implement the medallion architecture (Bronze, Silver, Gold) for data processing and transformation (sketched after this list)
Optimize Spark jobs and queries for maximum performance and cost efficiency
Monitor and troubleshoot data pipeline performance issues
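As an illustration of the medallion pattern mentioned above, a hedged PySpark sketch of a Bronze → Silver → Gold flow on Databricks follows; the table and column names are assumptions for the example only.

```python
# Hedged sketch of a medallion (Bronze -> Silver -> Gold) flow; names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw ingested events, stored as-is
bronze = spark.read.table("bronze.sales_events")

# Silver: de-duplicated, typed, validated records
silver = (
    bronze.dropDuplicates(["event_id"])
          .withColumn("event_date", F.to_date("event_ts"))
          .filter(F.col("quantity") > 0)
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.sales_events")

# Gold: business-level aggregate ready for reporting
gold = (
    spark.read.table("silver.sales_events")
         .groupBy("event_date", "product_id")
         .agg(F.sum("quantity").alias("units_sold"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_product_sales")
```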
Collaboration & Best Practices
Work closely with data scientists, analysts, and business stakeholders to understand requirements
Implement data governance frameworks and ensure data quality standards
Participate in code reviews and maintain high coding standards
Document technical solutions and create knowledge-sharing materials
Essential Requirements
Technical Expertise
4 years of hands-on data engineering experience with a strong focus on cloud data platforms
Azure Mastery: Deep expertise in Azure data services including:
Azure Data Factory
Azure Synapse Analytics
Azure Data Lake Storage
Azure SQL Database
Databricks Proficiency: Advanced skills in Databricks including cluster management and optimization
Programming: Strong proficiency in Python, Scala, and SQL for data processing, transformation, and performance optimization
Big Data Technologies: Proven experience with distributed computing and big data frameworks
Cloud & DevOps
Deep understanding of Microsoft Azure ecosystem and cloud-native data services
Experience with CI/CD practices, Git-based workflows, and Infrastructure as Code
Knowledge of data orchestration tools (Azure Data Factory, Airflow, or similar)
Understanding of Azure security best practices, RBAC, and data encryption
Streaming & Real-Time Processing
Knowledge of real-time data processing using Azure Event Hubs, Kafka, and Spark Streaming
Experience with event-driven architectures and streaming data pipelines (illustrated below)
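By way of illustration, a minimal Structured Streaming sketch that reads from Azure Event Hubs over its Kafka-compatible endpoint and lands events in a Bronze Delta table; the namespace, topic, checkpoint path, and table name are placeholders, not project specifics.

```python
# Hedged sketch: streaming ingest from Event Hubs (Kafka endpoint) into Delta.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream
         .format("kafka")
         # Event Hubs exposes a Kafka-compatible endpoint on port 9093
         .option("kafka.bootstrap.servers", "<namespace>.servicebus.windows.net:9093")
         .option("subscribe", "sales-events")
         .option("kafka.security.protocol", "SASL_SSL")
         .option("kafka.sasl.mechanism", "PLAIN")
         .option(
             "kafka.sasl.jaas.config",
             'org.apache.kafka.common.security.plain.PlainLoginModule required '
             'username="$ConnectionString" password="<event-hubs-connection-string>";',
         )
         .load()
)

# Persist the raw payloads to a Bronze Delta table with checkpointing
(events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
       .writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/checkpoints/sales-events")
       .outputMode("append")
       .toTable("bronze.sales_events"))
```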
Data Governance
Understanding of data governance frameworks, Unity Catalog, and data quality practices
Experience with data lineage, cataloguing, and metadata management (a minimal Unity Catalog example follows)
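For illustration only, a small example of the sort of Unity Catalog permissioning this covers, issued as Databricks SQL from a notebook; the catalog, table, and group names are assumptions.

```python
# Hypothetical Unity Catalog grant issued via Databricks SQL; names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("GRANT SELECT ON TABLE analytics.gold.daily_product_sales TO `data_analysts`")
```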
Desirable Skills
Experience with other cloud platforms (AWS, GCP)
Knowledge of containerization technologies (Docker, Kubernetes)
Familiarity with machine learning workflows and MLOps practices
Experience with data visualization tools (Power BI, Tableau)
Certification in Azure data services