Senior Data Engineer (Azure/Apache Kafka)
Confidential
Posted: February 25, 2026
Quick Summary
The Senior Data Engineer is responsible for designing, implementing, and maintaining complex data pipelines using Apache Kafka and Azure services to drive business growth and efficiency.
Job Description
About Billigence:
Billigence is a boutique data consultancy with global reach and clientele, transforming the way organizations work with data. We leverage proven, cutting-edge technologies to design, tailor, and implement advanced Business Intelligence solutions with high added value, across applications ranging from process digitization to Cloud Data Warehousing, Visualisation, Data Science and Engineering, and Data Governance. Headquartered in Sydney, Australia, with offices around the world, we help clients navigate difficult business conditions, remove inefficiencies, and enable scalable adoption of an analytics culture.
About the role:
We are seeking an experienced Senior Data Engineer with strong hands-on expertise in Microsoft Azure–based data platforms. The role focuses on development of scalable, high-performance data pipelines and streaming solutions. The ideal candidate will have deep experience with big data technologies, cloud-native services, and modern data engineering practices.
What you'll do:
Design, develop, and maintain scalable data pipelines on Microsoft Azure
Build and manage real-time and batch data processing solutions using Kafka, Spark, and Python
Develop and optimize ETL/ELT workflows using Azure Data Factory (ADF)
Work extensively with Databricks for large-scale data processing and analytics
Design, develop, and optimize Snowflake data models and queries
Write efficient and optimized SQL for data transformation and analysis
Process and manage structured and semi-structured data, including XML and JSON formats
Collaborate with cross-functional teams to support data-driven solutions
Ensure data quality, reliability, performance, and scalability of data systems
Follow best practices for cloud infrastructure, security, and performance optimization
What you'll need:
Apache Kafka (event streaming / queue-based messaging) – Strong hands-on experience
Python – Advanced development skills
Apache Spark – Strong experience in distributed data processing
Databricks – Hands-on experience in development and data engineering workflows
Azure Data Factory (ADF) – ETL/ELT pipeline development
Snowflake – Data warehousing and SQL optimization
SQL – Advanced querying and performance tuning
XML & JSON – Strong understanding of data structures and transformations
Scala – Basic working knowledge
Benefits:
Hybrid/remote working environment, offering a flexible work-life balance so you can thrive both in the office and from the comfort of your home.
Competitive compensation package + performance bonus.
Referral bonus scheme.
Coaching, mentoring, and buddy scheme (for faster integration during the probationary period).
Certification opportunities throughout your time with us.
Career growth support, internal moves, and career advancement opportunities.
Team building and networking events.