Senior Data Engineer - PySpark, Azure DevOps, Databricks
Location: Nanterre (Greater Paris)
Hybrid: 2 days onsite per week
Languages: French & English
Duration: 3-year project, 3-month renewable contract
The Company:
We have partnered with a French multinational insurance company that provides investment management and other financial services across Europe, the Middle East, North America, and the Indo-Pacific region. They are looking for a Senior Data Engineer to play a pivotal role in their data engineering and analytics initiatives.
Job Description:
* Design, develop, and maintain data pipelines using PySpark, Azure DevOps, Databricks, and Azure Data Factory.
* Collaborate with cross-functional teams to understand data requirements and ensure data is readily available for analytics and reporting purposes.
* Implement data quality and data governance best practices.
* Troubleshoot and optimize data pipelines for performance and scalability.
* Ensure data security and compliance with industry standards and regulations.
Must-have experience:
* PySpark, Azure DevOps, Databricks, and Azure Data Factory.
* Solid knowledge of data engineering best practices, data integration, and ETL processes.
* Proficiency in French (Mandatory) and English, as effective communication is essential for cross-team collaboration.
* Strong problem-solving and analytical skills.
* Experience working with large datasets and distributed computing.
This is a great opportunity to join a global financial company that supports professional growth and development.
Please apply directly or email me: amy.collier (@) glocomms.com
