Data Engineer/Architect (Python & SQL) - WFH #34640
Manila Recruitment
Posted: October 10, 2025
Quick Summary
Design and evolve scalable data pipelines for Azure SQL, Synapse, Delta Lake, and SCD patterns, with a focus on performance optimization and cost governance.
Job Description
As a Data Architect, you will own our SQL data estate, design scalable pipelines, and lead data enrichment across our Azure-first platform. You’ll set the standards for modeling, quality, security, and cost while writing production-grade Python and SQL daily.
• Design and evolve schemas for OLTP/OLAP (Azure SQL, Synapse, Delta Lake), with partitioning, indexing, and RLS for multi-tenant isolation.
• Establish data contracts and versioning, govern schema evolution, and implement CDC + SCD patterns.
• Performance engineering: query tuning, resource classes, caching strategies, and cost guardrails.
• Architect ELT/ETL across batch & streaming using Azure Data Factory/Synapse/Databricks, Event Hubs/Service Bus, Functions, and Container Apps/AKS.
• Build reliable, observable pipelines (idempotent, retryable, lineage-aware) with SLAs/SLOs and runbooks.
• Implement CI/CD for data (dbt/SQL projects, PySpark jobs, tests) using GitHub Actions and IaC (Terraform/Bicep).
• Define and operate enrichment layers: UPC/GS1, OCR/EXIF metadata, taxonomies, embeddings, and third-party data joins.
• Curate gold/semantic models for analytics & product APIs; manage feature/metric definitions and documentation.
• Partner with DS/ML to operationalize feature stores, model outputs, drift signals, and evaluation tables.
• Own reference architecture across ADLS Gen2, Synapse/Databricks, Azure SQL/SQL Server, Cosmos DB (incl. vector), Azure AI Search, Key Vault, Purview.
• Security & compliance by default: encryption, secret management, RBAC/ABAC, data retention and GDPR/SOC 2 controls.
• Observability: OpenTelemetry + Azure Monitor/App Insights, data quality tests, freshness SLAs, and lineage in Purview.
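To make the CDC + SCD responsibility above concrete, here is a minimal sketch of a Type 2 slowly changing dimension merge in pandas (one of the role's named tools). All names — `scd2_merge`, the `sku`/`price` columns, the `valid_from`/`valid_to` convention — are illustrative assumptions, not this company's actual schema; in production this logic would typically run as a Delta Lake or Synapse MERGE rather than in pandas.

```python
import pandas as pd

# Sentinel "end of time" for rows that are still current (illustrative convention).
HIGH_DATE = pd.Timestamp("9999-12-31")

def scd2_merge(dim, updates, key, attrs, as_of):
    """Type 2 SCD merge sketch: expire current rows whose tracked
    attributes changed, then append new versions for changed and
    brand-new keys."""
    current = dim[dim["valid_to"] == HIGH_DATE]
    joined = current.merge(updates, on=key, suffixes=("", "_new"))
    # Rows whose tracked attributes differ from the incoming values.
    diff = (joined[[a + "_new" for a in attrs]].to_numpy()
            != joined[attrs].to_numpy()).any(axis=1)
    changed_keys = joined.loc[diff, key]

    out = dim.copy()
    expire = out[key].isin(changed_keys) & (out["valid_to"] == HIGH_DATE)
    out.loc[expire, "valid_to"] = as_of  # close the outgoing versions

    new_rows = updates[~updates[key].isin(current[key])]     # first-seen keys
    changed_rows = updates[updates[key].isin(changed_keys)]  # new versions
    additions = pd.concat([changed_rows, new_rows], ignore_index=True)
    additions["valid_from"] = as_of
    additions["valid_to"] = HIGH_DATE
    return pd.concat([out, additions], ignore_index=True)

# Tiny demo: one existing product, one price change, one new product.
dim = pd.DataFrame({"sku": ["A1"], "price": [10.0],
                    "valid_from": [pd.Timestamp("2024-01-01")],
                    "valid_to": [HIGH_DATE]})
updates = pd.DataFrame({"sku": ["A1", "B2"], "price": [12.0, 5.0]})
result = scd2_merge(dim, updates, "sku", ["price"], pd.Timestamp("2025-01-01"))
```

Because the merge only closes and appends rows (never rewrites history), a rerun with the same `updates` and `as_of` leaves the dimension unchanged — the idempotency property the pipeline bullets above call for.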
Requirements:
• Extremely strong Python & SQL: you can diagnose complex query plans and write PySpark and pandas with equal ease.
• 7+ years in data engineering/architecture with production ownership of SQL databases and pipelines.
• Deep Azure experience: ADLS Gen2, Data Factory/Synapse/Databricks, Azure SQL/SQL Server, Functions, Event Hubs/Service Bus, Key Vault.
• Proven design skills in data modeling (star/snowflake, Data Vault/Lakehouse), CDC/SCD, and semantics (dbt or equivalent).
• Track record implementing data quality frameworks, lineage, and cost/performance guardrails at scale.
• Strong understanding of multi-tenant SaaS, security, and privacy (GDPR basics).
• Cosmos DB (incl. vector) and Azure AI Search; embedding pipelines for images/text.
• Feature stores, MLflow/registries, real-time inference plumbing.
• SQL Server internals, PolyBase/Serverless SQL; Postgres familiarity.
• Purview rollouts, governance programs, and data product operating models.
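The "diagnose complex query plans" requirement can be illustrated in miniature. The real work here is against Azure SQL/Synapse (SSMS, DMVs, actual execution plans), but SQLite ships with Python and exposes the same index-seek-versus-table-scan reasoning; the `orders` table and `ix_orders_tenant` index below are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, tenant_id INT, total REAL)")
conn.execute("CREATE INDEX ix_orders_tenant ON orders (tenant_id)")

# EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); the detail
# string shows whether the optimizer SEARCHes an index or SCANs the table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE tenant_id = ?", (7,)
).fetchall()
details = [row[3] for row in plan]
```

With the index in place, the plan reports a SEARCH using `ix_orders_tenant`; drop the index and the same query degrades to a full table SCAN — the kind of before/after check this role would automate as a cost guardrail.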