Azure Data Engineer | EU Institution | Remote from EU
Confidential
Posted: March 6, 2026
Quick Summary
Design, develop, and deploy data pipelines and analytics solutions for large-scale data processing and analysis using Python, SQL, and machine learning techniques.
Job Description
Who are we?
Trasys International is a dynamic global organization that takes pride in being the trusted partner of EU Institutions. With a strong commitment to excellence and a 30-year track record of delivering high-quality solutions, we are dedicated to supporting the growth and success of our clients. Our mission is to help our clients keep up with the challenges of digital transformation by providing the right talent at the right time for the right job. To this end, we are constantly looking for talented professionals who are interested in working on challenging international projects and able to deliver high-quality results within multicultural environments. Our services include (but are not limited to) modernization of solutions, digital workspaces, cloud technologies and IT security. Our headquarters are in Brussels, and we have active accounts and offices across Europe (e.g. Luxembourg, Amsterdam, Athens, Stockholm, Geneva).
For one of our esteemed clients in Brussels, Belgium, we are currently looking for an Azure Data Engineer.
Please note that although the role can be performed remotely from any EU location, you will be expected to attend onboarding at the client’s premises in Brussels.
You will be mainly responsible for…
• Developing, deploying, and maintaining scalable, incremental data pipelines from REST APIs and databases, using Python, PySpark, Azure Synapse, KNIME, SQL, and ETL tools to ingest, transform, and prepare data (a minimal sketch of this kind of pipeline follows this list).
• Processing and transforming complex JSON and GIS data into structured datasets optimized for analysis and reporting, including parsing and validating JSON data to ensure quality and consistency.
• Loading, organizing, and managing data in Azure Data Lake Storage and Microsoft Fabric OneLake, ensuring accessibility, performance, and efficient storage using lakehouse and Delta Lake patterns.
• Documenting ETL processes, metadata definitions, data lineage, and technical specifications to ensure transparency and reusability.
• Collaborating with data analysts, BI developers, and business stakeholders to understand data requirements and deliver reliable, well-documented datasets aligned with organizational needs.
• Implementing data quality checks, logging, monitoring, and automated incremental load mechanisms within data pipelines to support maintainability, observability, and troubleshooting.
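For illustration only, the sketch below shows the kind of incremental REST-to-Delta ingestion these responsibilities describe, in Python with PySpark. It is a minimal example under stated assumptions, not the client’s actual pipeline: the API URL, storage path, and JSON field names (id, updated_at, properties.name, geometry.coordinates) are hypothetical, and it assumes a Spark session with Delta Lake support, as provided out of the box by Azure Synapse and Microsoft Fabric.

```python
# Illustrative sketch only: endpoint, path, and field names are hypothetical.
import json

import requests
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental-ingest").getOrCreate()

TABLE_PATH = "abfss://lake@storageaccount.dfs.core.windows.net/bronze/events"  # hypothetical
API_URL = "https://api.example.org/v1/events"                                  # hypothetical

# 1. Determine the high-water mark from the existing Delta table (if any),
#    so only records newer than the last load are fetched (incremental load).
try:
    watermark = (
        spark.read.format("delta").load(TABLE_PATH)
        .agg(F.max("updated_at"))
        .first()[0]
    )
except Exception:  # first run: the table does not exist yet
    watermark = None

# 2. Fetch JSON from the REST API, passing the watermark as a filter parameter.
params = {"updated_after": watermark.isoformat()} if watermark else {}
response = requests.get(API_URL, params=params, timeout=60)
response.raise_for_status()
records = response.json()  # assumed to be a list of JSON objects

if records:
    # 3. Parse the nested JSON into a DataFrame with an inferred schema.
    df = spark.read.json(spark.sparkContext.parallelize([json.dumps(r) for r in records]))

    # 4. Flatten nested structures into an analysis-ready shape; the GIS fields
    #    here assume GeoJSON-style geometry with [lon, lat] coordinates.
    flat = df.select(
        F.col("id"),
        F.to_timestamp("updated_at").alias("updated_at"),
        F.col("properties.name").alias("name"),
        F.col("geometry.coordinates")[0].alias("lon"),
        F.col("geometry.coordinates")[1].alias("lat"),
    )

    # 5. Basic data quality gate: abort the load if key fields are missing.
    bad = flat.filter(F.col("id").isNull() | F.col("updated_at").isNull()).count()
    if bad > 0:
        raise ValueError(f"{bad} records failed quality checks; aborting load")

    # 6. Append only the new records to the Delta table.
    flat.write.format("delta").mode("append").save(TABLE_PATH)
```

In a production pipeline the same pattern would typically add structured logging, retries around the API call, and a merge (upsert) instead of a plain append when the source can resend updated records.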