MisuJob - AI Job Search Platform

Senior Data Platform Engineer

Aspora

India · Permanent

Posted: March 25, 2026


Quick Summary

The Senior Data Platform Engineer is a key member of our team, responsible for designing and implementing scalable, secure data systems. The ideal candidate has a strong background in data engineering, a passion for building at scale, and excellent communication skills, and can work remotely while contributing to the development of our borderless financial operating system.

Job Description

About Aspora

People on the move deserve a bank that moves with them. Since 2022, Aspora has been building a borderless financial operating system that makes money as mobile and transparent as its users.

Backed by influential venture capital firms including Sequoia Capital, Greylock Partners, Hummingbird Ventures, Y Combinator, and Global Founders Capital, we're a team of 75+ across India, the UK, the UAE, the EU, and the US, working with extreme ownership, radical candour, and an obsession with customer impact.

We celebrate builders who question assumptions, ship fast, and turn regulatory complexity into elegant solutions. If you’re driven to redefine what global banking can be, we’d love to build the future with you.

About the Role

We're building the data infrastructure that powers decisions across every part of our business — from real-time analytics to large-scale batch computation. As a Senior Data Platform Engineer, you'll own the systems that process billions of events, move data reliably, and make insights fast to produce.

You'll work closely with analytics, ML, and product engineering teams — setting the bar for reliability, performance, and data quality across the platform.

What You'll Do

Big Data Platform & Infrastructure

• Design, build, and operate large-scale data processing infrastructure using Spark on Databricks — ensuring reliability, performance, and cost efficiency at scale.

• Architect and maintain lakehouse solutions (Delta Lake, Iceberg) including partitioning strategies, Z-ordering, and compaction jobs.

• Own cluster management, autoscaling policies, and resource governance across Databricks workspaces.

• Drive platform-level improvements: query optimisation, caching strategies, compute–storage separation, and shuffle tuning.
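The compaction work mentioned above ultimately comes down to grouping many small files into fewer target-size files. A toy pure-Python sketch of that bin-packing step follows; the file names and the 128 MB target are illustrative, not from this posting, and real compaction (e.g. Delta Lake's OPTIMIZE) additionally handles transactions, Z-ordering, and data skipping:

```python
# Toy sketch: greedy first-fit bin-packing of small files into compaction groups.
TARGET_BYTES = 128 * 1024 * 1024  # illustrative 128 MB target output size

def plan_compaction(files, target=TARGET_BYTES):
    """Group (name, size) pairs so each group stays within `target` bytes.

    Sort by size descending, then place each file into the first group
    with room, opening a new group when none fits.
    """
    groups = []  # each entry: [total_size, [file names]]
    for name, size in sorted(files, key=lambda f: -f[1]):
        for group in groups:
            if group[0] + size <= target:
                group[0] += size
                group[1].append(name)
                break
        else:
            groups.append([size, [name]])
    return [names for _, names in groups]

# Ten 20 MB files: six fit per 128 MB group, so we expect two groups.
small_files = [(f"part-{i}.parquet", 20 * 1024 * 1024) for i in range(10)]
plan = plan_compaction(small_files)
```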

ETL / ELT Pipeline Engineering

• Design and build robust, idempotent, and testable data pipelines handling batch and near-real-time workloads.

• Manage and extend our Airflow-based orchestration layer — DAG authoring standards, dependency management, alerting, and SLA enforcement.

• Implement and maintain CDC pipelines (Debezium, Kafka Connect, or native DB replication) ensuring low-latency, high-fidelity data propagation.

• Define data pipeline contracts (schemas, SLAs, quality assertions) and enforce them via automated data quality frameworks.
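The "pipeline contracts" idea above can be made concrete: a contract is an expected schema plus row-level quality assertions, enforced before a batch is published. The field names and rules below are illustrative only (in practice a framework such as Great Expectations typically plays this role):

```python
# Minimal sketch of a data contract: expected schema + quality assertions.
CONTRACT = {
    "schema": {"txn_id": str, "amount": float, "currency": str},
    "assertions": [
        ("amount is positive", lambda row: row["amount"] > 0),
        ("currency looks ISO-4217", lambda row: len(row["currency"]) == 3),
    ],
}

def validate(rows, contract):
    """Return a list of violation messages; an empty list means the batch passes."""
    violations = []
    for i, row in enumerate(rows):
        for field, typ in contract["schema"].items():
            if not isinstance(row.get(field), typ):
                violations.append(f"row {i}: {field} missing or not {typ.__name__}")
        for name, check in contract["assertions"]:
            try:
                ok = check(row)
            except Exception:
                ok = False
            if not ok:
                violations.append(f"row {i}: failed '{name}'")
    return violations

good = [{"txn_id": "t1", "amount": 10.5, "currency": "USD"}]
bad = [{"txn_id": "t2", "amount": -1.0, "currency": "US"}]
```

A pipeline would call `validate` as a gate: publish only when the list comes back empty, otherwise alert and quarantine the batch.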

Analytical Storage & Computation

• Model and manage analytical data stores — dimensional models, OBT patterns, and aggregation layers optimised for BI and self-serve analytics.

• Own the evolution of our analytical warehouse/lakehouse stack — performance benchmarking, cost modelling, and technology selection.

• Build and maintain efficient data serving layers for dashboards, ML feature stores, and reverse ETL use cases.

• Implement data retention, archival, and lifecycle management policies across hot/warm/cold storage tiers.
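The hot/warm/cold lifecycle policy above is, at its core, a mapping from data age to storage tier. A minimal sketch, with purely illustrative thresholds (real policies vary per dataset and regulatory regime):

```python
from datetime import date

# Illustrative tier thresholds -- not Aspora's actual retention policy.
HOT_DAYS, WARM_DAYS = 30, 365

def storage_tier(partition_date, today):
    """Map a partition's date to a storage tier by age in days."""
    age = (today - partition_date).days
    if age <= HOT_DAYS:
        return "hot"    # e.g. SSD-backed, frequently queried
    if age <= WARM_DAYS:
        return "warm"   # e.g. standard object storage
    return "cold"       # e.g. archival storage class

today = date(2026, 3, 25)
```

A nightly job could apply this function per partition and emit the move/archive operations for anything whose tier has changed.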

Platform Engineering & Developer Experience

• Define and enforce data platform engineering best practices — code standards, CI/CD for pipelines, automated testing, and observability.

• Build internal tooling and libraries that make data engineers faster: reusable Spark utilities, pipeline templates, local dev environments.

• Champion data reliability engineering: lineage tracking, incident response playbooks, pipeline SLO monitoring, and root cause analysis.
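Pipeline SLO monitoring, mentioned above, usually reduces to tracking a success rate against a target and watching the remaining error budget. A toy sketch with illustrative numbers (the 99% target and run window are not from the posting):

```python
# Toy sketch: success rate and error-budget remaining for a pipeline SLO.
def slo_status(outcomes, target=0.99):
    """outcomes: list of booleans (True = successful run) over some window.

    Returns (success_rate, fraction_of_error_budget_remaining).
    """
    if not outcomes:
        return 1.0, 1.0
    successes = sum(outcomes)
    rate = successes / len(outcomes)
    failures = len(outcomes) - successes
    allowed_failures = (1 - target) * len(outcomes)
    if allowed_failures == 0:
        budget = 1.0 if failures == 0 else 0.0
    else:
        budget = max(0.0, 1 - failures / allowed_failures)
    return rate, budget

# 1 failure in 200 runs against a 99% SLO: half the error budget is spent.
rate, budget = slo_status([True] * 199 + [False])
```

Alerting on a shrinking error budget (rather than on each failure) keeps on-call noise proportional to actual SLO risk.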

Tech Stack

| Area | Tools |
| --- | --- |
| Compute | Apache Spark, Databricks, PySpark, Scala |
| Orchestration | Apache Airflow, dbt |
| Ingestion & CDC | Debezium, Kafka, Kafka Connect |
| Storage | Delta Lake, Iceberg, S3/GCS, Snowflake |
| Languages | Python, SQL, Scala |
| Observability | Great Expectations, OpenLineage, Monte Carlo |

What We're Looking For

• 5+ years of data engineering experience with 2+ years on large-scale big data platforms.

• Hands-on expertise with Apache Spark — performance tuning, partitioning, broadcast joins, execution plans.

• Deep Databricks experience — workspace configuration, Unity Catalog, Delta Live Tables, or equivalent.

• Solid Apache Airflow experience: DAG authoring, custom operators, XCom, Pools, and sensor patterns.

• Production experience implementing CDC pipelines (Debezium, Kafka Connect, or DMS).

• Strong proficiency in Python and SQL.

• Experience designing analytical data models for large datasets (star schema, wide tables, aggregation layers).

• Track record of building reliable, observable, and testable pipelines in production.

What Great Looks Like

• Hands-on experience with modern data lake technologies like Delta Lake or Apache Iceberg, including compaction, time travel, and schema evolution

• Experience building and operating streaming data pipelines using Apache Spark Structured Streaming, Apache Flink, or Kafka Streams

• Proficiency with dbt for data transformations and lineage management

• Experience working with cloud data infrastructure on Amazon Web Services, Google Cloud Platform, or Microsoft Azure

• Familiarity with infrastructure-as-code tools such as Terraform or AWS CloudFormation

• Experience owning data platform reliability end-to-end, including monitoring, alerting, and building self-healing systems

• A strong data-as-a-product mindset, with emphasis on clear contracts, versioned schemas, SLOs, and well-documented datasets

• A bias toward automation—proactively reducing operational toil by building scalable frameworks and tooling

• Solid engineering fundamentals, including writing testable code, participating in rigorous code reviews, and maintaining high standards for operational excellence

Why Join Aspora?

• Work on a high-impact product that is redefining banking for immigrants worldwide.

• Own backend design and execution, solving complex engineering problems at scale.

• Work alongside a top-tier global team of engineers in a fast-paced environment.

• Competitive ESOPs—align your growth with Aspora’s long-term vision.

• Health insurance, strong leave policies, and career growth opportunities in a high-impact startup.

Why Apply Through MisuJob?

AI-Powered Job Matching: MisuJob uses advanced artificial intelligence to analyze your skills, experience, and career goals. Our matching algorithm compares your profile against thousands of job requirements to find positions where you have the highest chance of success. This saves you hours of manual job searching and ensures you only see relevant opportunities.

One-Click Applications: Once you create your profile, applying to jobs is effortless. Your resume and cover letter are automatically tailored to highlight the most relevant experience for each position. You can apply to multiple jobs in minutes, not hours.

Career Intelligence: Beyond job matching, MisuJob provides valuable career insights. See how your skills compare to market demands, identify skill gaps to address, and understand salary benchmarks for your experience level. Make data-driven decisions about your career path.

Frequently Asked Questions

How do I apply for this position?

Click the "Register to Apply" button above to create a free MisuJob account. Once registered, you can apply with one click and track your application status in your dashboard.

Is MisuJob free for job seekers?

Yes, MisuJob is completely free for job seekers. Create your profile, get matched with jobs, and apply without any cost. We help you find your dream job without any hidden fees.

How does AI matching work?

Our AI analyzes your resume, skills, and experience to understand your professional profile. It then compares this against job requirements using natural language processing to calculate a match percentage. Higher matches mean better fit for the role.

Can I apply to jobs in other countries?

Absolutely. MisuJob features jobs from companies worldwide, including remote positions. Filter by location or look for remote opportunities to find jobs that match your preferences.

Ready to Apply?

Join thousands of job seekers using MisuJob's AI to find and apply to their dream jobs automatically.

Register to Apply