Data Engineer LATAM (Python/PySpark/AWS Glue/Amazon Athena/SQL/Apache Airflow)
Confidential
Posted: January 30, 2026
Quick Summary
A Data Engineer LATAM role involves building, optimizing, and scaling data pipelines and infrastructure, with a focus on Python, PySpark, AWS Glue, Amazon Athena, SQL, and Apache Airflow. The ideal candidate is a technical leader who ships complex features quickly and has a strong track record of writing clean code. This is a remote position; no salary range is listed.
Job Description
Let’s be direct: we’re looking for a technical powerhouse. If you’re the developer who:
• Is the clear technical leader on your team
• Consistently solves problems others can’t crack
• Ships complex features in half the time it takes others
• Writes code so clean it could be published as a tutorial
• Takes pride in elevating the entire codebase
Then we want to talk to you.
This isn’t a role for everyone, and that’s by design.
We’re seeking developers who know they’re exceptional and have the track record to prove it.
What you’ll do
• Build, optimize, and scale data pipelines and infrastructure using Python, TypeScript, Apache Airflow, PySpark, AWS Glue, and Snowflake.
• Design, operationalize, and monitor ingest and transformation workflows: DAGs, alerting, retries, SLAs, lineage, and cost controls.
• Collaborate with platform and AI/ML teams to automate ingestion, validation, and real-time compute workflows; work toward a feature store.
• Integrate pipeline health and metrics into engineering dashboards for full visibility and observability.
• Model data and implement efficient, scalable transformations in Snowflake and PostgreSQL.
• Build reusable frameworks and connectors to standardize internal data publishing and consumption.
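The retry and alerting behavior described above is what an orchestrator such as Apache Airflow applies around each task in a DAG. As a rough illustration of the pattern, here is a minimal pure-Python sketch; all function and variable names are hypothetical examples, not taken from the posting or any specific Airflow API.

```python
# Minimal sketch of the retry/alerting pattern an orchestrator applies to a
# pipeline task. Names (run_with_retries, flaky_extract) are illustrative.
import time


def run_with_retries(task, max_retries=3, retry_delay=0.0, on_failure=None):
    """Run `task`, retrying on exceptions; call `on_failure` if all attempts fail."""
    for attempt in range(1, max_retries + 1):
        try:
            return task()
        except Exception as exc:
            if attempt == max_retries:
                if on_failure:
                    on_failure(exc)   # alerting hook (e.g. email or Slack)
                raise
            time.sleep(retry_delay)   # back off before the next attempt


# Example: a flaky extract step that succeeds on the third attempt.
calls = {"n": 0}

def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient source error")
    return "rows_loaded"

result = run_with_retries(flaky_extract, max_retries=3)
print(result)  # prints "rows_loaded"
```

In a real deployment, Airflow handles this via per-task `retries`, `retry_delay`, and SLA settings rather than hand-rolled loops; the sketch only shows the underlying idea.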