Senior Data Engineer

Ergomed
Guildford
Overview

Ergomed Group is a rapidly expanding, full-service, mid-sized CRO specialising in Oncology and Rare Disease.

Since its foundation in 1997, the company has grown organically and steadily by making strategic investments and landmark acquisitions, with operations in Europe, North America and Asia.

Our company allows for employee visibility (you have a voice!), creative contribution and realistic career development. We have nurtured a true international culture here at Ergomed. We value employee experience, well-being and mental health, and we acknowledge that a healthy work-life balance is a critical factor for employee satisfaction, which in turn nurtures an environment from which a high-quality client service can be achieved. Come and join us on this exciting journey to make a positive impact in patients' lives.

Responsibilities
  • Design and implement data integration procedures and pipelines that extract data from various sources, transform it into the desired format, and load it into the appropriate modern analytics data storage and management systems. Integrate data to and from different internal and external sources (batch, incremental, streaming).

  • Adopt and drive active metadata usage in data integration processes, with a high level of automation and simplicity. You will be responsible for using innovative and modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks.

  • Collaborate with analytics owners (business analysts, project finance analysts, domain owners and SMEs) to optimize data products within your domain of data and business intelligence responsibility.

  • Improve data quality and governance together with business data owners.

  • Educate and train counterparts in these data pipelining and preparation techniques, which make it easier for them to integrate and consume the data they need for their own use cases.

  • Ensure data consistency and integrity during the integration process; identify the root cause of quality issues, address them, and work with technical system owners to identify and implement the optimal solution.

  • Optimize data pipelines and data processing workflows for performance, scalability and efficiency.

  • Monitor and tune data analytics systems, identify and resolve performance bottlenecks, and implement caching and indexing strategies to enhance query performance.

  • Implement data quality checks and validations (business rules) within data pipelines to ensure the accuracy, consistency and completeness of data.

  • Take authority, responsibility and accountability for exploiting the value of enterprise information assets and of the analytics used to render insights for decision-making, automated decisions and the augmentation of human performance.

  • Establish the governance of data and algorithms used for analysis, analytical applications and automated decision-making.
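As a loose illustration of the quality responsibilities above (business-rule validation inside a pipeline, with rejects surfaced for root-cause analysis), here is a minimal Python sketch. The rule names, record fields and thresholds are invented for the example and are not taken from this posting.

```python
# Hypothetical sketch: apply business-rule checks to a batch of records
# before loading, splitting out rejects with the rules they failed.

def validate_records(records, rules):
    """Split records into valid rows and rejects annotated with failed rules."""
    valid, rejects = [], []
    for rec in records:
        failures = [name for name, check in rules.items() if not check(rec)]
        if failures:
            rejects.append({"record": rec, "failed_rules": failures})
        else:
            valid.append(rec)
    return valid, rejects

# Illustrative completeness, accuracy and consistency rules.
RULES = {
    "subject_id_present": lambda r: bool(r.get("subject_id")),
    "age_in_range": lambda r: isinstance(r.get("age"), int) and 0 <= r["age"] <= 120,
    "site_code_format": lambda r: str(r.get("site", "")).startswith("S-"),
}

batch = [
    {"subject_id": "P001", "age": 54, "site": "S-101"},
    {"subject_id": "", "age": 200, "site": "X-9"},
]
valid, rejects = validate_records(batch, RULES)
```

Keeping rejects alongside the rules they failed, rather than silently dropping rows, is what makes the root-cause work described above tractable.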

Qualifications

Skills

  • Strong experience with data management architectures such as data warehouse, data lake and lakehouse, with Data Fabric vs. Data Mesh concepts, and with supporting processes such as data integration, MPP engines, governance and metadata management.

  • Intermediate experience with Apache technologies such as Spark, Kafka and Airflow to build scalable and efficient data pipelines.

  • Strong experience designing, building and deploying data solutions that capture, explore, transform and utilize data to create data products and support data-informed initiatives. Proficiency in ETL/ELT, data replication/CDC, message-oriented data movement, API design and access, and emerging data ingestion and integration technologies such as stream data integration and data virtualization.

  • Basic knowledge of and ability with data science languages and tools such as R, Python, TensorFlow, Databricks, Dataiku, KNIME, SAS or others.

  • Proficiency in the design and implementation of modern data architectures and concepts, including cloud services (e.g. AWS, OCI, Azure, GCP) and modern data warehouse tools (Snowflake, Databricks, etc.).

  • Strong experience with SQL and with relational and NoSQL database technologies such as PostgreSQL, Oracle, Hadoop, Teradata, etc.

  • Intermediate experience with popular data discovery, analytics and BI tools such as Power BI, Tableau, Qlik Sense, Looker, ThoughtSpot, MicroStrategy or others for semantic-layer-based data discovery is advantageous.

  • Expert problem-solving skills, including debugging skills, allowing the determination of sources of issues in unfamiliar code or systems, and the ability to recognize and solve repetitive problems.
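To illustrate the ETL/ELT proficiency listed above, the following is a minimal extract-transform-load sketch in plain Python, with an in-memory SQLite database standing in for the target warehouse. The source rows, table name and columns are assumptions made up for the example.

```python
# Minimal ETL sketch (illustrative only): extract from an in-memory
# "source", transform to the desired shape, load into SQLite as the target.
import sqlite3

def extract():
    # Stand-in for a batch pull from a source system (API, file, database).
    return [("2024-01-05", "oncology", "150.00"),
            ("2024-01-06", "rare-disease", "99.50")]

def transform(rows):
    # Normalise types and formats before loading.
    return [(day, domain.upper(), float(amount)) for day, domain, amount in rows]

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_spend (day TEXT, domain TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO fact_spend VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM fact_spend").fetchone()[0]
```

In an ELT variant, the raw rows would be loaded first and the transformation pushed down into the warehouse as SQL; the staged structure is otherwise the same.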

Soft skills and characteristics

  • Strong experience supporting and working with cross-functional teams in a dynamic business environment.

  • The ideal candidate will collaborate with both business and IT teams to define the business problem, refine the requirements, and design and develop data deliverables accordingly. The successful candidate will also hold regular discussions with data consumers on optimally refining the data pipelines developed in non-production environments and deploying them to production.

  • The ideal candidate is a confident, energetic self-starter with strong interpersonal skills.

  • Has good judgment and a sense of urgency, and has demonstrated commitment to high standards of ethics, regulatory compliance, customer service and business integrity.

  • Good business acumen and interpersonal skills; able to work across business lines at a senior level to influence and effect change to achieve common goals.

  • Ability to describe business use cases/outcomes, data sources and management concepts, and analytical approaches/options.

  • Willingness to learn and grow.

  • Advanced in English (both spoken and written).

Additional Information

We prioritize diversity, equity, and inclusion by creating an equal opportunities workplace and a human-centric environment where people of all cultural backgrounds, genders and ages can contribute and grow.

To succeed, we must work together with a human-first approach. Why? Because our people are our greatest strength, driving our continued success in improving the lives of those around us.

Benefits
  • Training and career development opportunities internally

  • Strong emphasis on personal and professional growth

  • Friendly, supportive working environment

  • Opportunity to work with colleagues based all over the world, with English as the company language

Our core values are key to how we operate, and if you feel they resonate with you then PrimeVigilance could be a great company to join!

  • Quality

  • Integrity & Trust

  • Drive & Passion

  • Agility & Responsiveness

  • Belonging

  • Collaborative Partnerships

We look forward to welcoming your application.



