Lead Data Architect - AWS (Databricks / Lakehouse)
Unison Group
Posted: March 26, 2026
Quick Summary
We are seeking a Lead Data Architect to drive the design and evolution of our next-generation data platform on AWS. This is a high-impact leadership role responsible for building scalable, cloud-native data ecosystems and enabling enterprise-wide data-driven decision-making.
Job Description
Role Overview
You will play a critical role in modernizing legacy data platforms, defining Lakehouse architecture, and leading cross-functional teams to deliver robust, high-performance data solutions.
Key Responsibilities
🔹 Architecture & Strategy
• Define and implement enterprise-scale data architecture on AWS
• Lead modernization initiatives (Data Lake → Lakehouse using Databricks & Delta Lake)
• Design scalable, secure, and cost-efficient data platforms
• Establish data governance, quality, and security frameworks
🔹 Engineering Excellence
• Architect and oversee development of high-performance data pipelines using PySpark & Databricks
• Optimize Spark workloads, partitioning strategies, and query performance
• Design real-time and batch processing frameworks
🔹 Leadership & Delivery
• Lead and mentor data engineers, BI developers, and analysts
• Drive delivery across multiple data programs and initiatives
• Collaborate with business stakeholders to translate data needs into scalable solutions
• Own end-to-end data platform reliability and performance
🔹 Innovation & CoE
• Build and scale a Data & Analytics Center of Excellence (CoE)
• Drive adoption of best practices, reusable frameworks, and standards
• Evaluate and implement emerging technologies in the AWS data ecosystem
Must-Have Skills & Experience
• 10+ years in Data Engineering / Big Data / Data Platforms
• 3–5+ years of deep hands-on experience with Databricks on AWS
• Strong expertise in:
  • Apache Spark (internals, optimization, distributed computing)
  • Delta Lake & Lakehouse architecture
  • AWS services (S3, EMR, Glue, Redshift, Lambda, IAM, etc.)
• Proven experience with large-scale data platforms (Hadoop, Hive, HDFS)
• Strong proficiency in Python, PySpark, and SQL
• Experience designing data lakes, data warehouses, and hybrid architectures
• Expertise in data orchestration tools (Airflow, etc.)
• Strong understanding of CI/CD, Git, and DevOps for data pipelines
• Experience with real-time streaming (Kafka, Kinesis)
• Exposure to multi-cloud environments (Azure / GCP)
• Knowledge of Data Mesh or Data Fabric concepts
• Experience in domain-driven data architecture
👥 Leadership Expectations
• Proven ability to lead cross-functional teams
• Experience working with enterprise stakeholders and clients
• Strong communication and decision-making skills
• Ability to drive data strategy and organizational adoption
Benefits:
• Employment visa sponsorship
• Dependent visa support
• Flight ticket coverage
• Two weeks of free accommodation
• Group insurance with inpatient and outpatient allowances