Azure Data Engineer (Databricks)

Joining Capco means joining an organisation that is committed to an inclusive working environment where you're encouraged to #BeYourselfAtWork. We celebrate individuality and recognize that diversity and inclusion, in all forms, are critical to success. It's important to us that we recruit and develop as diverse a range of talent as we can, and we believe that everyone brings something different to the table, so we'd love to know what makes you different. Such differences may mean we need to make changes to our process to allow you the best possible platform to succeed, and we are happy to cater to any reasonable adjustments you may require. You will find the section to let us know of these at the bottom of your application form, or you can mention it directly to your recruiter at any stage and they will be happy to help.

Why Join Capco?

Capco is a global technology and business consultancy focused on the financial services sector. We are passionate about helping our clients succeed in an ever-changing industry. You will work on engaging projects with some of the largest banks in the world, projects that will transform the financial services industry.

We are/have:
- Experts in banking and payments, capital markets, and wealth and asset management
- Deep knowledge of financial services offerings, including Finance, Risk and Compliance, Financial Crime and Core Banking
- Committed to growing our business and hiring the best talent to help us get there
- Focused on maintaining our nimble, agile and entrepreneurial culture

As a Data Engineer at Capco you will:
- Work alongside clients to interpret requirements and define industry-leading solutions
- Design and develop robust, well-tested data pipelines
- Demonstrate and help clients adhere to best practices in engineering and the SDLC
- Apply excellent knowledge of building event-driven, loosely coupled distributed applications
- Draw on experience in developing both on-premise and cloud-based solutions
- Apply a good understanding of key security technologies and protocols, e.g. TLS, OAuth and encryption
- Support internal Capco capabilities by sharing insight, experience and credentials

Why Join Capco as a Data Engineer?
- You will work on engaging projects with some of the largest banks in the world, projects that will transform the financial services industry.
- You'll be part of a digital engineering team that develops new and enhances existing financial and data solutions, with the opportunity to work on exciting greenfield projects as well as established Tier 1 bank applications used by millions of users.
- You'll be involved in digital and data transformation processes through a continuous delivery model.
- You will work on automating and optimising data engineering processes, developing robust and fault-tolerant data solutions for both cloud and on-premise deployments.
- You'll be able to work across different data, cloud and messaging technology stacks.
- You'll have an opportunity to learn and work with specialised data and cloud technologies to widen your skill set.

Skills & Expertise:

You will have experience working with some of the following methodologies/technologies:

Required Skills
- Hands-on working experience of the Databricks platform. Must have experience of delivering projects that use Delta Lake, orchestration, Unity Catalog and Spark Structured Streaming on Databricks.
- Extensive experience using Python, PySpark and the wider Python ecosystem, with good exposure to Python libraries.
- Experience with Big Data technologies and distributed systems such as Hadoop, HDFS, Hive, Spark, Databricks and Cloudera.
- Experience developing near real-time event streaming pipelines with tools such as Kafka, Spark Streaming and Azure Event Hubs (see the sketch after this list).
- Excellent experience across the data engineering lifecycle, having created data pipelines that take data through all stages: generation, ingestion, transformation and serving.
- Experience of modern software engineering principles and of creating well-tested, clean applications.
- Experience with Data Lakehouse architecture and data warehousing principles, including data modelling, schema design, and working with semi-structured and structured data.
- Proficient in SQL, with a good understanding of the differences and trade-offs between SQL and NoSQL, and between ETL and ELT.
- Proven experience in DevOps and building robust production data pipelines, with CI/CD pipelines on tools such as Azure DevOps, Jenkins, CircleCI or GitHub Actions.

Desirable Skills
- Experience developing in other languages, e.g. Scala/Java.
- Enthusiasm and ability to pick up new technologies as needed to solve problems.
- Exposure to working with PII and sensitive data, and understanding of data regulations such as GDPR.
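To give a flavour of the streaming work described in the Required Skills above, here is a minimal, illustrative sketch of the kind of near real-time pipeline this role involves: a Spark Structured Streaming job on Databricks that reads JSON events from Kafka and appends them to a Delta Lake table. The broker address, topic name, event schema, checkpoint path and target table are all hypothetical placeholders, not details from this posting.

```python
# Illustrative sketch only: Kafka -> Spark Structured Streaming -> Delta Lake.
# All names (broker, topic, schema, paths, table) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("events-to-delta").getOrCreate()

# Hypothetical schema for the incoming JSON event payloads.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("occurred_at", TimestampType()),
])

# Read a stream of raw events from Kafka.
# Requires the spark-sql-kafka connector to be available on the cluster.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
       .option("subscribe", "events")                     # placeholder topic
       .load())

# Kafka delivers the payload as bytes; cast to string and parse the JSON.
parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*"))

# Append to a Delta table, with a checkpoint so the query can recover
# from failures without reprocessing or dropping events.
query = (parsed.writeStream
         .format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder
         .outputMode("append")
         .toTable("bronze.events"))  # placeholder catalog table
```

The checkpoint location is what makes the pipeline fault-tolerant: Spark records source offsets and sink commits there, so restarting the job resumes exactly where it left off rather than re-reading the topic from scratch.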