ARCHIVED
This job listing has been archived and is no longer accepting applications.

Senior Data Engineer

Cygnify

Petaling Jaya, Selangor, Malaysia (Permanent)

Posted: February 25, 2026


Quick Summary

Deliver and operate reliable, high-quality data pipelines and curated datasets on the DXP data platform, with a focus on end-to-end engineering delivery, Day-2 accountability, and data quality remediation.

Job Description

Role Mission:

Deliver and operate reliable, high-quality data pipelines and curated datasets on the DXP data platform. This role owns end-to-end engineering delivery for assigned pipelines and data products and takes Day-2 accountability for DataOps stability, observability, cost-efficiency, and data quality remediation in production.

Accountabilities:

1. End-to-end pipeline delivery (Build): Independently design, develop, test, and deploy ingestion and transformation pipelines from source to curated layers.

2. Production reliability (Run): Own operational health for assigned pipelines, covering monitoring, incident response, recovery, and continuous improvement to meet SLA and freshness expectations.

3. Data quality management (Govern): Implement and run data quality controls (validation, reconciliation, anomaly detection), drive root-cause analysis, and coordinate remediation with data stewards and source owners.

4. Engineering standards & observability: Apply engineering standards for CI/CD, version control, pipeline instrumentation, documentation, RBAC alignment, and cost/performance guardrails. Contribute to continuous improvements to engineering standards, including optimizing workflows using AI.

5. Stakeholder collaboration: Work directly with architects, platform engineers, data stewards, application domain teams and analytics users to clarify requirements, manage trade-offs, and deliver trusted datasets for self-serve analytics.

Responsibilities:

1. Data Engineering Delivery

a. Build/extend ingestion pipelines using Datapipe (Airbyte/Airflow), Snowflake (Snowpark, Snowpipe, Openflow), and AWS integration patterns; implement robust retry, idempotency, and backfill strategies.

b. Implement data models and develop transformation logic in Snowflake (SQL/Python where relevant) across Bronze/Silver/Gold (or equivalent) layers; optimize for maintainability and cost.

c. Deliver well-tested changes via CI/CD across DEV/SIT/PROD with clear release notes and rollback plans.
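The retry, idempotency, and backfill strategies in item 1a can be sketched in plain Python. This is a minimal illustration only: `load_partition` and the `loaded` manifest are hypothetical stand-ins for whatever loader and load-history the actual pipelines use (in practice Airflow task retries and Snowflake load history would play these roles):

```python
import time
from datetime import date, timedelta

def backfill(start: date, end: date, loaded: set[str], load_partition,
             max_retries: int = 3) -> list[str]:
    """Backfill daily partitions, skipping ones already loaded (idempotency)
    and retrying transient failures with exponential backoff."""
    done = []
    day = start
    while day <= end:
        key = day.isoformat()
        if key not in loaded:                 # idempotent: re-runs skip completed work
            for attempt in range(max_retries):
                try:
                    load_partition(key)       # hypothetical loader call
                    loaded.add(key)           # record success in the load manifest
                    done.append(key)
                    break
                except RuntimeError:
                    if attempt == max_retries - 1:
                        raise                 # exhausted retries: surface the failure
                    time.sleep(2 ** attempt)  # backoff: 1s, 2s, 4s, ...
        day += timedelta(days=1)
    return done
```

Because completed partitions are skipped, re-running the same date range after a mid-backfill failure picks up exactly where it left off, which is the property that makes recovery and backfills safe.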

2. DataOps / Production Support

a. Monitor pipelines and data SLAs; triage failures, recover production runs, and perform RCA with preventive actions (not just quick fixes).

b. Create and maintain runbooks/playbooks, on-call handover notes, and operational dashboards for owned pipelines.

c. Collaborate with Platform Engineering on observability, alert tuning, operational readiness, and automation improvements.

3. Data Quality & Stewardship – Controls & Remediation

a. Implement automated DQ checks (completeness, uniqueness, referential integrity, schema drift, reconciliation) and publish outcomes to stakeholders.

b. Partner with Data Quality Stewards to track, prioritize, and remediate DQ issues; clearly separate source-system defects vs pipeline defects and drive the right owner actions.

c. Enable stewardship tooling: help domain stewards operationalize governance artifacts (e.g., turning glossary/CDE definitions into checks/scorecards; integrating with ticketing/knowledge base), without turning the engineer into the “governance admin.”
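The automated checks in item 3a (completeness, uniqueness, and similar) reduce to small assertions over each batch. A stand-alone sketch over rows-as-dicts, with a hypothetical `dq_report` helper; in practice these checks would run as SQL or a DQ framework against Snowflake and publish to stewards:

```python
def dq_report(rows: list[dict], key: str, required: list[str]) -> dict:
    """Run basic data-quality checks on one batch of records."""
    keys = [r.get(key) for r in rows]
    nulls = {c: sum(1 for r in rows if r.get(c) in (None, ""))
             for c in required}
    return {
        "row_count": len(rows),
        "duplicate_keys": len(keys) - len(set(keys)),  # uniqueness check
        "null_counts": nulls,                          # completeness check
        "passed": len(keys) == len(set(keys)) and not any(nulls.values()),
    }
```

Publishing the full report rather than a bare pass/fail is what lets stewards separate source-system defects from pipeline defects when a batch is rejected.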

4. Collaboration & Enablement

a. Participate in requirement refinement with architects and analysts to shape business needs into implementable data contracts and acceptance criteria.

b. Produce engineering documentation (data lineage notes, assumptions, operational procedures) and contribute to team knowledge base and onboarding materials.

c. Co-own domain data contracts (with steward sign-off): translate business definitions (KPIs, allowable values, timeliness, “gold” dataset expectations) into implementable data contracts, acceptance criteria, and change control notes.

Team Scope / Stakeholders:

1. Scope: Assigned pipelines, datasets, and operational ownership within the DXP Data Platform (Datapipe/Airflow/Airbyte, Snowflake, AWS).

2. Key stakeholders: Data Engineering Lead(s), Data Architects, Platform Engineering, Data Quality Stewards, BI/Analytics users, and source system owners.

3. Decision rights (within owned area): pipeline design approach, implementation choices (aligned to paved road/patterns), testing strategy, and operational guardrails (aligned to standards); incident triage actions and recovery steps; recommendations on prioritization for fixes vs enhancements.


Requirements:
1. 3 to 8+ years in data engineering with proven hands-on delivery and production operations ownership (not project-only experience).

2. Strong practical skills in Snowflake (or equivalent modern data platforms): data loading/transforms, performance tuning basics, role-aware designs, and cost awareness.

3. Orchestration experience: Airflow (or equivalent) DAG design, scheduling, dependency control, retries, and observability.

4. Python + SQL proficiency for transformation, validation, and operational tooling/scripts.

5. AWS fundamentals: S3 data structures/lifecycle, IAM-aware integrations, and monitoring basics (e.g., CloudWatch patterns).

6. Applied AI/agentic approaches to data DevOps: hands-on exposure building or integrating AI-assisted operational workflows (e.g., incident triage summarization, log/query analysis helpers, automated runbook suggestions, anomaly detection for freshness/volume/schema drift, or LLM-based knowledge retrieval for pipeline support), with clear guardrails (RBAC, auditability, and human-in-the-loop approval for production actions).

7. Demonstrated ability to handle production incidents with structured RCA and preventative improvements.

8. Learning agility and problem-solving: picks up modern stack components fast and applies them pragmatically.
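As one concrete reading of requirement 6, anomaly detection for volume drift can begin as a plain statistical guardrail before any LLM is involved. A sketch that flags a day whose row count deviates sharply from recent history (the z-score threshold is illustrative); consistent with the human-in-the-loop guardrail, it only raises a flag and never acts on production itself:

```python
import statistics

def volume_anomaly(history: list[int], today: int,
                   z_threshold: float = 3.0) -> bool:
    """Flag today's row count if it sits more than z_threshold sample
    standard deviations from the recent daily mean. Remediation stays
    a manual, audited action taken by the on-call engineer."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:                 # perfectly flat history: flag any change
        return today != mean
    return abs(today - mean) / stdev > z_threshold
```

The same shape generalizes to freshness lag or distinct-key counts; schema drift is better caught by comparing column lists directly than by statistics.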

Core Traits (Non-negotiables)

1. "Build it, run it" ownership: doesn’t outsource ops thinking to others.

2. Production-first mindset: prioritizes reliability, data correctness, and recoverability.

3. Structured problem solving: hypothesis-driven debugging, evidence-based RCA, and tight feedback loops.

4. Collaboration maturity: works with stakeholders without over-promising; escalates early with options and trade-offs.
