About Us
We are an ambitious, newly established consulting firm focused on delivering cutting-edge solutions in data and AI. Our mission is to empower organisations to unlock the full potential of their data by leveraging platforms like Databricks alongside other emerging technologies. As a Data Engineer, you will play a crucial role in building and optimising data solutions that are scalable, performant, and reliable, helping our clients solve their most complex data challenges.
The Role
As a Data Engineer (Databricks), you will be responsible for designing, implementing, and optimising large-scale data processing systems. You will work closely with clients, data scientists, and solution architects to ensure efficient data pipelines, reliable infrastructure, and scalable analytics capabilities. This role requires strong technical expertise, problem-solving skills, and the ability to work in a dynamic, client-facing environment.
Your Impact:
- Develop, implement, and optimise data pipelines and ETL processes on Databricks.
- Work closely with clients to understand business requirements and translate them into technical solutions.
- Design and implement scalable, high-performance data architectures.
- Ensure data integrity, quality, and security through robust engineering practices.
- Monitor, troubleshoot, and optimise data workflows for efficiency and cost-effectiveness.
- Collaborate with data scientists and analysts to enable machine learning and analytics solutions.
- Contribute to best practices, coding standards, and documentation to improve data engineering processes.
- Mentor junior engineers and support knowledge-sharing across teams.
Key Responsibilities:
- Design, build, and maintain scalable data pipelines using Databricks, Spark, and Delta Lake.
- Develop efficient ETL/ELT workflows to process large volumes of structured and unstructured data.
- Implement data governance, security, and compliance standards.
- Work with cloud platforms such as AWS, Azure, or GCP to manage data storage and processing.
- Collaborate with cross-functional teams to enhance data accessibility and usability.
- Optimise data warehouse and lakehouse architectures for performance and cost efficiency.
- Maintain and improve CI/CD processes for data pipeline deployment and monitoring.
What We Are Looking For:
- 5+ years of experience in data engineering or related roles.
- Strong expertise in Databricks, Spark, Delta Lake, and cloud data platforms (AWS, Azure, or GCP).
- Proficiency in Python and SQL for data manipulation and transformation.
- Experience with ETL/ELT development and orchestration tools (e.g., Apache Airflow, dbt, Prefect).
- Knowledge of data modelling, data warehousing, and lakehouse architectures.
- Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code.
- Strong problem-solving skills and the ability to work in fast-paced environments.
- Excellent communication and stakeholder management skills.
Preferred Qualifications:
- Experience with machine learning data pipelines and MLOps practices.
- Knowledge of data streaming technologies such as Kafka or Kinesis.
- Familiarity with Terraform or similar infrastructure automation tools.
- Previous experience working in consulting or client-facing roles.
What We Offer:
- Competitive compensation, including performance-based incentives.
- Opportunities for professional growth and development in a fast-growing firm.
- A collaborative and supportive environment that values innovation, excellence, and client success.
If you’re passionate about data engineering and ready to make an impact in AI-driven consulting, we’d love to hear from you!
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: Business Consulting and Services, IT Services and IT Consulting, and Data Infrastructure and Analytics