ARCHIVED
This job listing has been archived and is no longer accepting applications.

Senior Data Engineer - Lakehouse & Data Engineering Frameworks - REF5085J

Deutsche Telekom IT Solutions

Budapest, Hungary (Hybrid, permanent)

Posted: February 13, 2026


Quick Summary

As a Senior Data Engineer, you will be responsible for designing and implementing data engineering frameworks for large-scale data processing and storage systems, with a focus on scalability, reliability, and performance. You will work closely with cross-functional teams to identify and resolve complex data-related issues, and ensure that data is properly integrated with other systems. You will also be responsible for developing and maintaining data pipelines, data models, and data governance policies.

Job Description

As Hungary’s most attractive employer in 2025 (according to Randstad’s representative survey), Deutsche Telekom IT Solutions is a subsidiary of the Deutsche Telekom Group. The company provides a wide portfolio of IT and telecommunications services with more than 5,300 employees. We serve hundreds of large corporate customers in Germany and other European countries. DT-ITS received the Best in Educational Cooperation award from HIPA in 2019 and was acknowledged as the Most Ethical Multinational Company in 2019. The company continuously develops its four sites in Budapest, Debrecen, Pécs and Szeged and is looking for skilled IT professionals to join its team.

We are looking for a Senior Data Engineer to build and operate scalable data ingestion and CDC capabilities on our Azure-based Lakehouse platform. Beyond developing pipelines in Azure Data Factory and Databricks, you will help us mature our engineering approach: we increasingly deliver ingestion and CDC preparation through Python projects and reusable frameworks, and we expect this role to apply professional software engineering practices (clean architecture, testing, code reviews, packaging, CI/CD, and operational excellence). 

Our platform is batch-first: streaming sources are landed raw and processed in batch, with selective evolution toward streaming where needed.

You will work within the Common Data Intelligence Hub, collaborating with data architects, analytics engineers, and solution designers to enable robust data products and governed data flows across the enterprise. 

• Your team owns ingestion & CDC engineering end-to-end (design, build, operate, observability, reliability, reusable components). 
• You contribute to platform standards (contracts, layer semantics, readiness criteria) and reference implementations. 
• You do not primarily own cloud infrastructure provisioning (e.g., enterprise networking, core IaC foundations), but you collaborate with the platform team by defining requirements, reviewing changes, and maintaining deployable code for pipelines and jobs. 

Platform data engineering & delivery 

• Design and develop ingestion pipelines using Azure and Databricks services (ADF pipelines, Databricks notebooks/jobs/workflows). 
• Implement and operate CDC patterns (inserts, updates, deletes), including late arriving data and reprocessing strategies. 
• Structure and maintain bronze and silver Delta Lake datasets (schema enforcement, de-duplication, performance tuning). 
• Build “transformation-ready” datasets and interfaces (stable schemas, contracts, metadata expectations) for analytics engineers and downstream modeling. 
• Ingest data in a batch-first approach (raw landing, replayability, idempotent batch processing), and help evolve patterns toward true streaming where future use cases require it. 
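To make the CDC responsibilities above concrete, here is a minimal sketch of an idempotent batch CDC applier. In production this would be a Delta Lake MERGE in PySpark; the pure-Python stand-in below only illustrates the logic (insert/update/delete handling, late-arriving events, safe replay). All field names (`id`, `op`, `seq`) are hypothetical.

```python
from typing import Dict, Iterable

Row = Dict[str, object]


def apply_cdc_batch(target: Dict[str, Row], changes: Iterable[Row]) -> Dict[str, Row]:
    """Apply a batch of CDC events (insert/update/delete) idempotently.

    Each change row carries a primary key (`id`), an operation flag
    (`op`: "I", "U", or "D"), and a monotonically increasing sequence
    number (`seq`). Events older than what the target already holds are
    skipped, so replaying the same batch yields the same final state.
    """
    for change in changes:
        key = change["id"]
        current = target.get(key)
        # Skip late-arriving events older than the row we already hold.
        if current is not None and change["seq"] <= current["seq"]:
            continue
        if change["op"] == "D":
            target.pop(key, None)
            continue
        # "I" (insert) and "U" (update) both reduce to an upsert.
        target[key] = {k: v for k, v in change.items() if k != "op"}
    return target
```

The sequence-number guard is what makes reprocessing safe: rerunning a failed batch, or receiving a duplicate event, cannot regress a row to an older state.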

Software engineering for data frameworks 

• Develop and maintain Python-based ingestion/CDC components as production-grade software (modules/packages, versioning, releases). 
• Apply engineering best practices: code reviews, unit/integration tests, static analysis, formatting/linting, type hints, and clear documentation. 
• Establish and improve CI/CD pipelines for data engineering code and pipeline assets (build, test, security checks, deploy, rollback patterns). 
• Drive reuse via shared libraries, templates, and reference implementations; reduce “one-off notebook” solutions. 
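As a small illustration of the "shared library over one-off notebook" principle above, a typed, side-effect-free helper like the following can live in a versioned package and be covered by plain unit tests in CI. The function and its parameters are hypothetical examples, not part of any existing codebase:

```python
from typing import Callable, Dict, Hashable, Iterable, List, TypeVar

R = TypeVar("R")


def dedupe_latest(
    rows: Iterable[R],
    key: Callable[[R], Hashable],
    order_by: Callable[[R], int],
) -> List[R]:
    """Return one row per key, keeping the row with the highest order value.

    A typical silver-layer de-duplication step: deterministic and free of
    I/O, so it is trivial to unit test without a cluster.
    """
    latest: Dict[Hashable, R] = {}
    for row in rows:
        k = key(row)
        if k not in latest or order_by(row) > order_by(latest[k]):
            latest[k] = row
    return list(latest.values())
```

Keeping logic like this out of notebooks is exactly what lets the CI pipeline run tests, linting, and static analysis on every merge request.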

Operations, reliability & observability 

• Implement logging, metrics, tracing, and data pipeline observability (run-time KPIs, SLAs, alerting, incident readiness). 
• Troubleshoot distributed processing and production issues end-to-end. 
• Work with solution designers on event-based triggers and orchestration workflows; contribute to operational standards. 
• Implement operational and security hygiene: secure secret handling, least-privilege access patterns, and support for auditability (e.g., logs/metadata/lineage expectations). 
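Structured logging as described above might look roughly like this with the Python standard library; a sink such as Log Analytics would sit behind the handler. The context fields (`pipeline`, `run_id`) are illustrative, not a prescribed schema:

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line, so platform
    tooling can index fields instead of parsing free text."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # Pipeline-specific context attached by callers via `extra=`.
            "pipeline": getattr(record, "pipeline", None),
            "run_id": getattr(record, "run_id", None),
        }
        return json.dumps(payload)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("ingestion")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("batch complete", extra={"pipeline": "orders_cdc", "run_id": "2026-02-13T01"})
```

Emitting one JSON object per record is what makes run-time KPIs and alerting queries cheap to build downstream.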

Collaboration & leadership 

• Mentor other engineers and promote consistent engineering practices across teams. 
• Contribute to the Data Engineering Community of Practice and help define standards, patterns, and guardrails. 
• Contribute to architectural discussions (layer semantics, readiness criteria, contracts, and governance). 
• Work with architects and governance stakeholders to ensure datasets meet governance requirements (cataloging, ownership, documentation, access patterns, compliance constraints) before promotion to higher layers. 

Requirements

• 3–5 years of hands-on experience building data pipelines with Databricks and Azure in production.
• Strong knowledge of Delta Lake patterns (CDC, schema evolution, deduplication, partitioning, performance optimization). 
• Advanced Python engineering skills: building maintainable projects (packaging, dependency management, testing, tooling). 
• Solid SQL skills (complex transformations, debugging, performance tuning). 
• Proven experience with CI/CD and Git-based workflows (merge requests, branching strategies, automated testing, environment promotion). 
• Ability to diagnose and resolve issues in distributed systems (Spark execution, cluster/runtime behavior, data correctness). 
• Good understanding of data modeling principles and how they influence ingestion and performance. 
• Practical experience applying data governance and security controls in a Lakehouse environment (permissions/access patterns, secure secret handling, audit needs; Unity Catalog is a plus). 
• Proactive, reliable, and able to work independently within agile teams. 
• Strong communication skills in English (spoken and written). 

Technical core skills

• Databricks (Jobs/Workflows, Notebooks, Spark, Autoloader, Delta Lake) 
• Azure Functions & Durable Functions (orchestration, long-running workflows) 
• SQL (analysis + performance tuning) 
• PySpark and Python (production-grade) 
• Azure Data Factory (pipelines, triggers, linked services, monitoring) 
• ADLS Gen2 (lake storage design, folder/partition strategy, access controls, lifecycle/retention) 

Software engineering toolchain 

• Git + code review workflows 
• CI/CD pipelines (e.g., GitLab CI, Azure DevOps) 
• Testing: unit/integration tests, test data strategies 
• Code quality: linting/formatting, static analysis, type hints 
• Packaging & dependency management (e.g., Poetry/pip-tools/conda — whichever you standardize on) 

Governance, security & orchestration 

• Unity Catalog (cataloging, permissions/access patterns, basic governance controls) 
• Secure secret handling and service authentication patterns (Key Vault or equivalent) 
• Event Grid / Azure Functions / event-driven orchestration 
• Observability (structured logging, metrics, alerting; Log Analytics or equivalent) 

 

* Please note that remote working is only available within Hungary due to European taxation regulations.
