MisuJob - AI Job Search Platform

Senior Data Engineer

Payoneer

Gurugram, India (India-Gurgaon) · Remote · Permanent

Posted: April 9, 2026


Quick Summary

We're seeking a Senior Data Engineer to join our Gurugram team, where you'll build the data platforms behind global payments and compliance, multi-currency and workforce management, and business intelligence. The ideal candidate has 3+ years of experience in data engineering, a strong background in data modelling and analysis, and excellent communication skills, and can work both independently and as part of a team to deliver high-quality results.

Job Description

About Payoneer

Founded in 2005, Payoneer is the global financial platform that removes friction from doing business across borders, with a mission to connect the world's underserved businesses to a rising global economy. We're a community of over 2,500 colleagues all over the world, working to serve customers and partners in over 190 countries and territories.

By taking the complexity out of financial workflows, from global payments and compliance to multi-currency and workforce management to working capital and business intelligence, we give businesses the tools they need to work efficiently worldwide and grow with confidence.

Role summary

We’re looking for a Senior Data Engineer with a drive for excellence and an ownership mindset who can lead the design and delivery of scalable, secure, and highly reliable data platforms in a complex payments and fintech environment. You set the technical bar for your team: you architect systems, make sound trade-off decisions, unblock cross-team delivery, and mentor engineers.

You’re deliberate about how AI-assisted development is adopted on your team, setting guardrails that prevent shortcuts from becoming long-term cognitive debt, while actively using AI to solve real engineering and business problems.

AI-first mindset: We value engineers who can incorporate AI and agentic development practices into how we build data systems. You'll set patterns for responsible AI-assisted engineering across design reviews, code quality, testing, and documentation, while delivering data-engineering-led AI use cases such as intelligent data quality and observability, anomaly detection, automated alert triage, and governance.

What You’ll Do

• Own the technical architecture for large-scale batch and streaming data pipelines that power product, risk, and reporting use cases, using frameworks such as Apache Beam, Spark, or Flink with managed runners like Google Cloud Dataflow.

• Lead data warehouse and lakehouse design for analytical and operational use cases, setting modelling standards and driving performance and cost optimisation.

• Design event-driven and streaming architectures with strong correctness guarantees: schema evolution, replay and backfill strategies, late-data handling, idempotency, and operational safety.

• Build and operate storage patterns for operational and analytical workloads using wide-column stores (Bigtable, Cassandra, HBase, or equivalents), including capacity planning and SLO definition.

• Establish orchestration and operational excellence using tools like Airflow, Composer, Dagster, or Prefect, including CI/CD strategy, automated testing, pipeline observability, and incident response practices.

• Drive data quality, governance, and auditability through automated controls, lineage/metadata practices, and secure-by-default access patterns.

• Lead through technical influence: mentor engineers, run design reviews, maintain decision records, unblock cross-team delivery, and shape roadmaps through clear technical reasoning.

• Set the standard for how your team uses AI-assisted development, ensuring AI-generated code meets the same review, testing, and documentation bar as any other code. Identify and deliver AI-driven solutions for data engineering problems such as intelligent data profiling, anomaly detection, and automated root-cause analysis, with a focus on reproducibility and governance.
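As a rough illustration of the replay and backfill safety described above, a partition-overwrite load keeps reruns idempotent: every run for a given logical date deterministically rebuilds that date's partition, so retries and backfills never double-count. This is a minimal sketch, with a plain dict standing in for a date-partitioned warehouse table; all names are illustrative, not part of any specific stack.

```python
from datetime import date

# Stand-in for a date-partitioned warehouse table: {partition_key: rows}.
warehouse: dict[str, list[dict]] = {}

def load_partition(run_date: date, rows: list[dict]) -> None:
    """Idempotent load: overwrite the entire partition for run_date.

    Rerunning (or backfilling) the same logical date replaces the
    partition instead of appending, so retries never double-count.
    """
    partition_key = run_date.isoformat()
    warehouse[partition_key] = list(rows)  # delete-and-replace semantics

# First run for the 2026-04-09 partition.
load_partition(date(2026, 4, 9), [{"txn_id": 1, "amount": 100}])
# A retry or backfill of the same date overwrites rather than appends.
load_partition(date(2026, 4, 9), [{"txn_id": 1, "amount": 100}])

assert len(warehouse["2026-04-09"]) == 1  # still one row, not two
```

The same delete-and-replace idea is what partition overwrites in a real warehouse (or `INSERT OVERWRITE`-style semantics) buy you: the pipeline's output is a pure function of the logical date, not of how many times the task ran.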

Who You Are

• You are a seasoned data engineer with a strong sense of ownership and accountability, comfortable operating in complex domains and driving end-to-end delivery from design through production operations.

• You balance long-term platform thinking with pragmatic execution, and you know how to raise reliability and quality without slowing teams down.

• You thrive in cross-team environments and influence through technical leadership, clarity, and strong execution.

• You are impact-driven and measure success by how effectively you enable teams, improve data trust, and accelerate product outcomes.

• You think critically about AI adoption in engineering workflows, you see the leverage it provides, but you also understand the risks of unchecked reliance: reduced understanding, hidden errors, and maintenance burden. You set patterns that capture the upside while protecting quality.

Key skills and competencies

• Strong track record delivering production-grade data pipelines and datasets end-to-end: design, implementation, deployment, and operations.

• Deep experience with distributed data processing frameworks such as Apache Beam, Spark, or Flink.

• Expertise with cloud data warehouses (BigQuery, Snowflake, Redshift, or Databricks) including strong dimensional modelling, query optimisation, and cost management skills.

• Hands-on experience designing systems on event streaming platforms, including schema management, delivery-semantics trade-offs, and operational patterns like replay and backfill.

• Experience with operational and wide-column stores (Bigtable, Cassandra, HBase, or equivalents) with a strong understanding of access-pattern-driven design and capacity planning.

• Strong orchestration and platform engineering experience with Airflow, Dagster, or Prefect, including CI/CD, automated testing, observability, and incident response.

• Demonstrated technical leadership: mentoring, design reviews, architectural decision records, and the ability to influence stakeholders and align teams around technical direction.

• A considered approach to AI-assisted engineering: you use AI tools to improve throughput and quality, but you also set guardrails, review standards, testing expectations, and documentation requirements.
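The late-data handling mentioned above can be sketched without any streaming framework: a watermark tracks event-time progress, and events arriving further behind it than an allowed lateness are routed to a side output instead of silently mutating closed windows. This is a simplified, single-threaded sketch; the watermark policy and all names here are illustrative assumptions, not Payoneer's stack.

```python
ALLOWED_LATENESS = 60  # seconds of event-time lateness we tolerate

watermark = 0            # highest event time seen so far (naive watermark)
on_time: list[dict] = []
late: list[dict] = []    # side output for events beyond allowed lateness

def process(event: dict) -> None:
    """Route an event by comparing its event time to the watermark."""
    global watermark
    if event["event_time"] >= watermark - ALLOWED_LATENESS:
        on_time.append(event)
        watermark = max(watermark, event["event_time"])
    else:
        late.append(event)  # too late: handle out-of-band (audit, backfill)

for e in [{"id": "a", "event_time": 100},
          {"id": "b", "event_time": 170},
          {"id": "c", "event_time": 120},   # 50s behind watermark: accepted
          {"id": "d", "event_time": 10}]:   # 160s behind: side output
    process(e)
```

Production systems derive the watermark from source metadata rather than observed maxima, but the trade-off is the same: a larger allowed lateness accepts more stragglers at the cost of keeping windows open (and results mutable) for longer.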

Preferred

• Prior experience in fintech, payments, lending, or broader financial services (e.g., reconciliation, settlement, risk and fraud data, regulatory reporting).

• Experience operating data systems with defined SLOs/SLAs and governance in cloud environments.

• Familiarity with data governance and compliance standards: PII handling, access controls, auditing, and policy-as-code patterns.

• Experience partnering with ML/DS teams to productionise features, training datasets, and monitoring infrastructure.

The Payoneer Ways of Working

Act as our customer’s partner on the inside
Learning what they need and creating what will help them go further.

Do it. Own it.
Being fearlessly accountable in everything we do.

Continuously improve
Always striving for a higher standard than our last.

Build each other up
Helping each other grow, as professionals and people.

If this sounds like a business, a community, and a mission you want to be part of, apply today.

We are committed to providing a diverse and inclusive workplace. Payoneer is an equal opportunity employer, and all qualified applicants will receive consideration for employment regardless of race, color, ancestry, religion, sex, sexual orientation, gender identity, national origin, age, disability status, protected veteran status, or any other characteristic protected by law. If you require reasonable accommodation at any stage of the hiring process, please speak to the recruiter managing the role. Decisions about requests for reasonable accommodation are made on a case-by-case basis.

Why Apply Through MisuJob?

AI-Powered Job Matching: MisuJob uses advanced artificial intelligence to analyze your skills, experience, and career goals. Our matching algorithm compares your profile against thousands of job requirements to find positions where you have the highest chance of success. This saves you hours of manual job searching and ensures you only see relevant opportunities.

One-Click Applications: Once you create your profile, applying to jobs is effortless. Your resume and cover letter are automatically tailored to highlight the most relevant experience for each position. You can apply to multiple jobs in minutes, not hours.

Career Intelligence: Beyond job matching, MisuJob provides valuable career insights. See how your skills compare to market demands, identify skill gaps to address, and understand salary benchmarks for your experience level. Make data-driven decisions about your career path.

Frequently Asked Questions

How do I apply for this position?

Click the "Register to Apply" button above to create a free MisuJob account. Once registered, you can apply with one click and track your application status in your dashboard.

Is MisuJob free for job seekers?

Yes, MisuJob is completely free for job seekers. Create your profile, get matched with jobs, and apply without any cost. We help you find your dream job without any hidden fees.

How does AI matching work?

Our AI analyzes your resume, skills, and experience to understand your professional profile. It then compares this against job requirements using natural language processing to calculate a match percentage. Higher matches mean better fit for the role.

Can I apply to jobs in other countries?

Absolutely. MisuJob features jobs from companies worldwide, including remote positions. Filter by location or look for remote opportunities to find jobs that match your preferences.

Ready to Apply?

Join thousands of job seekers using MisuJob's AI to find and apply to their dream jobs automatically.

Register to Apply