MisuJob - AI Job Search Platform

Senior Data Engineer (AI Enablement)

DevSavant

LATAM · Permanent

Posted: March 31, 2026


Quick Summary

A Senior Data Engineer is responsible for designing and implementing the data pipelines and infrastructure that power AI-enabled analytics for startups and growth-stage companies in LATAM.

Job Description

About DevSavant

DevSavant is an operating partner for startups and growth-stage companies, helping them turn ambition into execution.

We support founders and leadership teams with product engineering and global staffing, from early prototypes and MVPs to scaling high-performing teams. Our vetted talent across LATAM and Asia embeds directly into client teams, operating as true extensions rather than external vendors.

With over 8 years working in venture-backed ecosystems, DevSavant is trusted to accelerate delivery, scale teams efficiently, and support companies as they reach their next milestone.

About the Role

We are seeking a Senior Data Engineer to join our growing team of data experts. This is an individual contributor role embedded within cross-functional teams, focused on building and maintaining the data infrastructure that powers our analytics and business intelligence platforms.

The role is heavily data-oriented, with a strong emphasis on designing and developing scalable, reliable data pipelines and systems using Python and SQL. You will be responsible for ensuring that business-critical data is accurate, accessible, and optimized for downstream use by software developers, analysts, data scientists, and other stakeholders.

The ideal candidate is a hands-on data engineer who thrives in fast-paced environments, takes ownership, and is comfortable working with evolving requirements. You enjoy building robust data systems from the ground up, integrating new datasets, and continuously optimizing data infrastructure for performance and scalability.

Key Responsibilities

AI & Automation

• Contribute to AI-enabled data workflows, including integration with agents and MCP servers

• Leverage AI tools (e.g., Copilot, Openspec) to automate aspects of the software development lifecycle

• Instrument data systems and pipelines with automation, monitoring, and intelligent workflows

Data Engineering & Pipeline Development

• Build and maintain scalable data pipelines using Python, SQL, and modern ETL frameworks

• Design and implement robust data architectures that support business and analytical needs

• Assemble large, complex datasets that meet functional and non-functional requirements

• Optimize data systems for performance, reliability, and scalability

• Write clean, maintainable, and well-tested code following best practices

• Continuously improve data engineering standards and processes
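To give a flavor of the pipeline work described above, here is a minimal, self-contained extract-transform-load sketch. It is purely illustrative: the sample data, table, and column names are invented, and the standard-library sqlite3 module stands in for the PostgreSQL databases actually used in the role.

```python
import csv
import io
import sqlite3

# Extract: parse a small CSV feed (in-memory sample here; in practice
# this would be pulled from an external source or landing bucket).
RAW_CSV = """customer_id,plan,monthly_usage_gb
c-001,pro,42.5
c-002,free,1.2
c-003,pro,17.0
"""

def extract(raw: str) -> list[dict]:
    return list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and keep only paying customers.
def transform(rows: list[dict]) -> list[tuple]:
    return [
        (r["customer_id"], r["plan"], float(r["monthly_usage_gb"]))
        for r in rows
        if r["plan"] != "free"
    ]

# Load: idempotent upsert into a relational table
# (sqlite3 standing in for PostgreSQL).
def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS usage ("
        "customer_id TEXT PRIMARY KEY, plan TEXT, monthly_usage_gb REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO usage VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT SUM(monthly_usage_gb) FROM usage").fetchone()[0]
```

In production, an orchestrator such as Airflow or Cloud Composer would schedule each of these steps as a task and handle retries and backfills.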

Data Infrastructure & Integration

• Develop infrastructure for efficient extraction, transformation, and loading (ETL) of data from diverse sources

• Integrate structured and unstructured data formats (e.g., CSV, Excel, Shapefiles) into centralized systems

• Maintain and optimize databases containing customer usage, financial, and operational data

• Integrate and optimize data access across platforms, including analytical tools such as QGIS

• Maintain and improve search indices using both COTS and custom-built solutions
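The search-index work spans configuring off-the-shelf (COTS) engines and maintaining custom code. As a hedged sketch of the custom-built end only (class and document names are invented for illustration), the core data structure is an inverted index:

```python
from collections import defaultdict

# Minimal inverted index: maps each token to the set of document ids
# containing it. Real search indices add tokenization rules, stemming,
# ranking, and persistence on top of this idea.
class InvertedIndex:
    def __init__(self) -> None:
        self._index = defaultdict(set)

    def add(self, doc_id: str, text: str) -> None:
        for token in text.lower().split():
            self._index[token].add(doc_id)

    def search(self, query: str) -> set:
        # AND semantics: return documents matching every query token.
        tokens = query.lower().split()
        if not tokens:
            return set()
        results = self._index[tokens[0]].copy()
        for t in tokens[1:]:
            results &= self._index[t]
        return results

idx = InvertedIndex()
idx.add("doc1", "scalable data pipelines in Python")
idx.add("doc2", "SQL query optimization")
idx.add("doc3", "Python data transformation with SQL")
```

With this index, `idx.search("python data")` intersects the posting sets for both tokens, so it returns only the documents containing both terms.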

Analytics Enablement & Stakeholder Support

• Collaborate with analysts, data scientists, and business stakeholders to support data needs

• Build and maintain data tools that empower analysts to explore and optimize datasets

• Assist stakeholders with data-related technical challenges and infrastructure needs

• Support analytical workflows, including SQL query development and dataset preparation

• Partner with data and analytics teams to enhance overall system capabilities

Monitoring, Reliability & Operations

• Monitor data systems and pipelines to ensure high availability and reliability

• Perform root cause analysis on data and system issues and implement corrective actions

• Improve observability and alerting for data infrastructure

• Maintain operational excellence across data platforms

Collaboration & Execution

• Work closely with cross-functional teams in a distributed, remote-first environment

• Translate evolving business requirements into scalable data solutions

• Take ownership of data systems from design through production

• Operate effectively in a fast-paced environment with changing priorities

Core Technical Stack

Data & Backend

• Python for data processing and pipeline development

• SQL for querying and data transformation

• PostgreSQL (preferred) and other relational databases

• ETL orchestration tools such as Airflow or Cloud Composer

• Data transformation tools such as dbt

Data Platforms & Tools

• Geospatial tools and analytical platforms (e.g., QGIS)

• Handling of structured and unstructured data formats (CSV, Excel, Shapefiles)

• Search and indexing technologies (COTS and custom solutions)

Cloud & Infrastructure

• Cloud platforms (GCP preferred; BigQuery experience is a strong plus)

• Familiarity with data warehouses, data lakes, and distributed data systems

• Message queuing and stream processing systems

DevOps & Tooling

• Linux-based environments and shell scripting

• Version control, CI/CD practices, and automated workflows

• AI-powered development tools (e.g., GitHub Copilot, Openspec)

Required Qualifications

• 3–5 years of experience in data engineering, data pipelines, or related fields

• Strong proficiency in SQL and experience working with relational databases (PostgreSQL preferred)

• Advanced experience using Python for data processing (including spatial and non-spatial data)

• Experience building and optimizing data pipelines and architectures

• Hands-on experience with ETL orchestration tools (Airflow or Cloud Composer preferred)

• Experience with data transformation tools (dbt preferred)

• Experience working with unstructured and legacy data formats

• Strong analytical skills and experience working with large, complex datasets

• Experience performing root cause analysis and improving data processes

• Familiarity with distributed systems, message queues, or stream processing

• Experience working in Linux environments and using command-line tools

• Strong communication skills and ability to collaborate across teams

• Proactive mindset with a focus on ownership and continuous improvement

Nice to Have

• Experience with GCP and BigQuery administration

• Experience with geospatial data and tools

• Familiarity with AI-enabled data systems and agent-based architectures

• Experience automating SDLC processes using AI tools

• Experience integrating data platforms with analytics and BI tools

• Experience in high-growth or fast-paced environments

Qualities We're Looking For

• Results-driven mindset (GTD, i.e., Getting Things Done): Ability to identify next actions, communicate clearly, and execute efficiently

• Ownership mentality: Strong sense of accountability and decision-making ability

• Builder mindset: Passion for creating scalable, impactful data solutions

• Curiosity and continuous improvement: Always seeking better ways to solve problems

• Team collaboration: Comfortable working across teams and supporting diverse stakeholders

• Bonus: You enjoy coffee, love software and products, and bring a good sense of humor

Why Apply Through MisuJob?

AI-Powered Job Matching: MisuJob uses advanced artificial intelligence to analyze your skills, experience, and career goals. Our matching algorithm compares your profile against thousands of job requirements to find positions where you have the highest chance of success. This saves you hours of manual job searching and ensures you only see relevant opportunities.

One-Click Applications: Once you create your profile, applying to jobs is effortless. Your resume and cover letter are automatically tailored to highlight the most relevant experience for each position. You can apply to multiple jobs in minutes, not hours.

Career Intelligence: Beyond job matching, MisuJob provides valuable career insights. See how your skills compare to market demands, identify skill gaps to address, and understand salary benchmarks for your experience level. Make data-driven decisions about your career path.

Frequently Asked Questions

How do I apply for this position?

Click the "Register to Apply" button above to create a free MisuJob account. Once registered, you can apply with one click and track your application status in your dashboard.

Is MisuJob free for job seekers?

Yes, MisuJob is completely free for job seekers. Create your profile, get matched with jobs, and apply without any cost. We help you find your dream job without any hidden fees.

How does AI matching work?

Our AI analyzes your resume, skills, and experience to understand your professional profile. It then compares this against job requirements using natural language processing to calculate a match percentage. Higher matches mean better fit for the role.

Can I apply to jobs in other countries?

Absolutely. MisuJob features jobs from companies worldwide, including remote positions. Filter by location or look for remote opportunities to find jobs that match your preferences.

Ready to Apply?

Join thousands of job seekers using MisuJob's AI to find and apply to their dream jobs automatically.

Register to Apply