At Coherent Solutions, we offer engineering solutions that protect your time, your bottom line, and, most importantly, your business.

##### Our Approach

We work collaboratively and transparently, providing regular updates and seeking feedback to ensure that your needs are met and the project is delivered successfully.

# Data Engineer (Food Delivery)

## Company Background

Our client is a leading online and mobile food ordering company operating in over 1,600 U.S. cities and London. With a portfolio of well-known food delivery brands, they serve millions of diners and process hundreds of thousands of orders daily. The company supports a large and growing restaurant network and prioritizes excellent customer experience through 24/7 support, innovative technology, and scalable infrastructure.

## Project Description

This role is part of the data engineering team focused on marketing analytics and reporting infrastructure. You'll enhance and extend a big data platform that supports marketing activities, senior-level metrics dashboards, and customer engagement systems.
The role offers a unique opportunity to design and build end-to-end solutions for data storage, transformation, and visualization, ensuring that business decisions are data-driven and scalable.

## Technologies

* Python
* PySpark
* SQL
* Spark
* AWS (S3, EMR)
* Git
* Jenkins
* CI/CD
* PyCharm
* Distributed Systems

## What You'll Do

* Design, build, and maintain large-scale data pipelines using Spark and PySpark.
* Develop robust ETL processes to support automated analytics, performance monitoring, and campaign reporting.
* Collaborate with marketing and analytics stakeholders to translate business needs into technical requirements.
* Write unit tests and leverage CI/CD tools to ensure high code quality and performance.
* Support and improve existing data infrastructure while planning for long-term scalability.
* Ensure efficient data processing using distributed systems and cloud-native services.
* Contribute to architectural decisions and recommend best practices for data handling, modeling, and storage.

## Job Requirements

* 2+ years of experience in Spark and SQL development.
* Solid Python programming experience with an emphasis on data engineering.
* Experience with PySpark in distributed environments.
* Familiarity with cloud services (especially AWS S3 and EMR).
* Knowledge of ETL, data modeling, and performance optimization.
* Understanding of software engineering best practices (e.g., unit testing, CI/CD, Git).
* Strong communication skills and the ability to explain technical concepts clearly.
* Attention to detail and a strong sense of ownership.
* English: B2 or higher.

## What We Offer

The global benefits package includes:

* Technical and non-technical training for professional and personal growth;
* Internal conferences and meetups to learn from industry experts;
* Support and mentorship from an experienced employee to help you grow and develop professionally;
* Internal startup incubator;
* Health insurance;
* English courses;
* Sports activities to promote a healthy lifestyle;
* Flexible work options, including remote and hybrid opportunities;
* Referral program for bringing in new talent;
* Work anniversary program and additional vacation days.

We're always starting new projects, and we'd love to work with you. Please send your CV and we'll get in touch.