GCP Engineer

Gravitai Ltd
London
2 months ago
Applications closed

Related Jobs

GCP Data Engineer (Java, Spark, ETL)

Staff Engineer (ML-Native / Software Engineering)

Data Engineer (Databricks Champion)

Data Engineer

We’re a small team with really big ambitions, both for what we want to achieve and for the culture we’re building. We want to create a company that remains people-focused, harnessing the power of empowered and engaged teams. As we scale, we want people to really own what they do and to have the autonomy and freedom to make mistakes, learn, and create something meaningful.

If you would like to know a bit more about this opportunity, or are considering applying, please read the following job information.

We all work incredibly hard because we really care about what Gravitai represents and stands for – we’re looking for exceptional people who are proactive, hungry to learn, and take pride in our collective achievements. We believe that having diversity in age, background, gender identity, race, sexual orientation, physical or mental ability, ethnicity, and perspective will make us an infinitely better company. Health, safety, and well-being are at the heart of everything we do.

Purpose

The Google Cloud Platform (GCP) Data Engineer will be responsible for designing, developing, and maintaining scalable data solutions in the cloud. The ideal candidate will have strong experience with GCP services, data pipelines, ETL processes, and big data technologies. You will work closely with data scientists, analysts, and software engineers to optimise data workflows and ensure the integrity and security of data within the GCP ecosystem. You’ll also collaborate with developers, end users, and stakeholders to deliver projects smoothly and improve the system over time. We’re looking for someone who’s not just technical but also enjoys working with people and solving real-world business challenges.

Main Duties and Responsibilities

Design and Develop Data Pipelines: Build and implement scalable data pipelines using GCP services, including Cloud Dataflow, Cloud Dataproc, Apache Beam, and Cloud Composer (Apache Airflow); a minimal illustrative sketch follows this list.
ETL/ELT Workflow Management: Develop, optimise, and maintain ETL/ELT workflows for structured and unstructured data.
Big Data Solutions: Manage and optimise big data environments leveraging BigQuery, Cloud Storage, Pub/Sub, and Data Fusion.
Data Integrity and Security: Ensure data quality, security, and governance by following industry best practices.
Database Expertise: Work with both SQL and NoSQL databases, such as BigQuery, Cloud SQL, Firestore, and Spanner.
Automation and Infrastructure as Code: Automate data workflows using Terraform, CI/CD pipelines, and Infrastructure as Code (IaC) methodologies.
Performance Monitoring and Troubleshooting: Identify and resolve performance bottlenecks, failures, and latency issues.
Cross-Functional Collaboration: Work closely with analytics, AI/ML, and business intelligence teams to integrate data solutions.
Real-Time and Batch Processing: Implement efficient data management strategies for both real-time and batch processing.
Technical Documentation: Maintain comprehensive documentation of technical specifications, workflows, and best practices.
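For illustration only, the sketch below shows the kind of streaming pipeline these duties point at: a minimal Apache Beam job in Python that reads JSON events from a Pub/Sub subscription and appends them to a BigQuery table. The project, subscription, table, and schema names are hypothetical placeholders, not details of Gravitai's actual stack.

```python
# Illustrative sketch only - project, subscription, table, and schema are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # Streaming pipeline: Pub/Sub -> parse JSON -> append rows to BigQuery.
    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/example-project/subscriptions/events-sub")
            | "ParseJson" >> beam.Map(lambda message: json.loads(message.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="example-project:analytics.events",
                schema="event_id:STRING,event_type:STRING,event_ts:TIMESTAMP",
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```

A batch variant of the same shape would swap ReadFromPubSub for a bounded source (for example beam.io.ReadFromText) and drop the streaming flag; on GCP such a pipeline would typically run on the Dataflow runner and be scheduled from Cloud Composer.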

Experience & Expertise

Education: Bachelor’s degree in Information Systems or a related field (preferred).

Experience:
3+ years of hands-on experience in data engineering with GCP.
Strong proficiency in SQL, Python, and/or Java/Scala for data processing (see the query sketch after this list).
Practical experience with BigQuery, Cloud Dataflow, Cloud Dataproc, and Apache Beam.
Experience with event-driven streaming platforms such as Apache Kafka or Pub/Sub.
Familiarity with Terraform, Kubernetes (GKE), and Cloud Functions.
Strong understanding of data modelling, data lakes, and data warehouse design.
Knowledge of Airflow, Data Catalog, and IAM security policies.
Exposure to DevOps practices, CI/CD pipelines, and containerisation (Docker, Kubernetes) is a plus.

Skills:
Strong analytical and problem-solving abilities.
Ability to thrive in an agile, fast-paced environment.
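As a rough, hypothetical example of the SQL-plus-Python proficiency listed above, the snippet below runs a parameterised query with the google-cloud-bigquery client; the project, dataset, table, and column names are invented for illustration.

```python
# Illustrative sketch only - the project, dataset, table, and columns are invented.
from google.cloud import bigquery


def event_counts_since(client: bigquery.Client, start_date: str):
    """Run a parameterised query and return the result rows as dictionaries."""
    sql = """
        SELECT event_type, COUNT(*) AS events
        FROM `example-project.analytics.events`
        WHERE DATE(event_ts) >= @start_date
        GROUP BY event_type
        ORDER BY events DESC
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("start_date", "DATE", start_date),
        ]
    )
    rows = client.query(sql, job_config=job_config).result()
    return [dict(row) for row in rows]


if __name__ == "__main__":
    client = bigquery.Client()  # uses application-default credentials
    for row in event_counts_since(client, "2024-01-01"):
        print(row)
```

Parameterised queries like this keep user-supplied values out of the SQL string itself, which is generally the safer pattern for production workloads.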

Preferred Qualifications

Certification: GCP Professional Data Engineer certification (required).
Machine Learning Integration: Experience with ML pipelines using Vertex AI or TensorFlow on GCP.
Cloud Architecture: Familiarity with multi-cloud and hybrid cloud environments.

Benefits

28 days of holiday plus Bank Holidays.
Regular socials and team events, including Christmas events across all offices and staff (incl. remote).
Remote-first position, preferably for UK-based candidates, with the option of a contract-based role for non-UK staff.

Industry Insights

Discover insightful articles, expert tips, and curated resources.

Portfolio Projects That Get You Hired for Machine Learning Jobs (With Real GitHub Examples)

In today’s data-driven landscape, the field of machine learning (ML) is one of the most sought-after career paths. From startups to multinational enterprises, organisations are on the lookout for professionals who can develop and deploy ML models that drive impactful decisions. Whether you’re an aspiring data scientist, a seasoned researcher, or a machine learning engineer, one element can truly make your CV shine: a compelling portfolio. While your CV and cover letter detail your educational background and professional experiences, a portfolio reveals your practical know-how. The code you share, the projects you build, and your problem-solving process all help prospective employers ascertain if you’re the right fit for their team. But what kinds of portfolio projects stand out, and how can you showcase them effectively? This article provides the answers. We’ll look at:

Why a machine learning portfolio is critical for impressing recruiters.
How to select appropriate ML projects for your target roles.
Inspirational GitHub examples that exemplify strong project structure and presentation.
Tangible project ideas you can start immediately, from predictive modelling to computer vision.
Best practices for showcasing your work on GitHub, personal websites, and beyond.

Finally, we’ll share how you can leverage these projects to unlock opportunities—plus a handy link to upload your CV on Machine Learning Jobs when you’re ready to apply. Get ready to build a portfolio that underscores your skill set and positions you for the ML role you’ve been dreaming of!

Machine Learning Job Interview Warm‑Up: 30 Real Coding & System‑Design Questions

Machine learning is fuelling innovation across every industry, from healthcare to retail to financial services. As organisations look to harness large datasets and predictive algorithms to gain competitive advantages, the demand for skilled ML professionals continues to soar. Whether you’re aiming for a machine learning engineer role or a research scientist position, strong interview performance can open doors to dynamic projects and fulfilling careers. However, machine learning interviews differ from standard software engineering ones. Beyond coding proficiency, you’ll be tested on algorithms, mathematics, data manipulation, and applied problem-solving skills. Employers also expect you to discuss how to deploy models in production and maintain them effectively—touching on MLOps or advanced system design for scaling model inferences. In this guide, we’ve compiled 30 real coding & system‑design questions you might face in a machine learning job interview. From linear regression to distributed training strategies, these questions aim to test your depth of knowledge and practical know‑how. And if you’re ready to find your next ML opportunity in the UK, head to www.machinelearningjobs.co.uk—a prime location for the latest machine learning vacancies. Let’s dive in and gear up for success in your forthcoming interviews.

Negotiating Your Machine Learning Job Offer: Equity, Bonuses & Perks Explained

How to Secure a Compensation Package That Matches Your Technical Mastery and Strategic Influence in the UK’s ML Landscape

Machine learning (ML) has rapidly shifted from an emerging discipline to a mission-critical function in modern enterprises. From optimising e-commerce recommendations to powering autonomous vehicles and driving innovation in healthcare, ML experts hold the keys to transformative outcomes. As a mid‑senior professional in this field, you’re not only crafting sophisticated algorithms; you’re often guiding strategic decisions about data pipelines, model deployment, and product direction. With such a powerful impact on business results, companies across the UK are going beyond standard salary structures to attract top ML talent. Negotiating a compensation package that truly reflects your value means looking beyond the numbers on your monthly payslip. In addition to a competitive base salary, you could be securing equity, performance-based bonuses, and perks that support your ongoing research, development, and growth. However, many mid‑senior ML professionals leave these additional benefits on the table—either because they’re unsure how to negotiate them or they simply underestimate their long-term worth. This guide explores every critical aspect of negotiating a machine learning job offer. Whether you’re joining an AI-focused start-up or a major tech player expanding its ML capabilities, understanding equity structures, bonus schemes, and strategic perks will help you lock in a package that matches your technical expertise and strategic influence. Let’s dive in.