MisuJob - AI Job Search Platform

Data Engineering Lead

MediaRadar

United States | Remote | Permanent

Posted: April 7, 2026

Interested in this position?

Create a free account to apply with AI-powered matching

Quick Summary

A Data Engineering Lead is responsible for designing, developing, and deploying scalable data engineering solutions to support the growth of MediaRadar's marketing and sales teams. Key responsibilities include leading the development of data engineering projects, collaborating with cross-functional teams to implement data-driven solutions, and ensuring data quality and integrity. The ideal candidate will have expertise in data engineering, cloud computing, and languages such as Python and SQL.

Job Description

Role: Data Engineering Lead

Location: Remote (USA)

About MediaRadar

MediaRadar, an industry leader in marketing intelligence now including the data and capabilities of Vivvix, powers the mission-critical marketing and sales decisions that drive competitive advantage. Our next-generation marketing intelligence platform enables clients to achieve peak performance with always-on insights that span the media, creative, and business strategies of 5 million brands across 30+ media channels and $275 billion in media spend.

Role Summary

The Data Engineering Lead is a high-velocity, hands-on "player-coach" responsible for technical stewardship, designing scalable systems, and integrating complex machine learning models into robust ETL pipelines. You will lead a lean team through a cultural shift toward cross-trained agility while spending 70-80% of your time in the code. Success is defined by achieving complete record processing, maintaining strict cloud cost-efficiency, and shrinking data delivery windows.

• Coding & Technical Stewardship (70-80% Hands-on): Architect and implement complex, end-to-end data pipelines using Azure Databricks and PySpark. Design, build, and maintain a scalable data architecture using the Medallion Architecture (Bronze/Silver/Gold layers).
• Performance & Cost Optimization: Optimize Apache Spark jobs, tune Databricks Units (DBUs), and define cluster policies to minimize compute costs. Proactively audit and refactor pipelines every 3-6 months to maintain effectiveness and reduce cloud costs. Implement caching strategies (e.g., broadcast joins) and manage their performance impact.
• System Integrity & SLAs: Develop a proactive monitoring and alerts framework to ensure 99.9% reliability and mitigate system issues before they impact end-users. Build an end-to-end Data Validation Framework (e.g., Great Expectations) to enforce data accuracy and consistency. Minimize job failure rates and ensure data is available in the Gold layer within the required 24-hour turnaround time.
• Database Architecture: Architect and design high-performance schemas in PostgreSQL, managing indexing, partitioning, and optimizing complex analytical queries.
• Team Leadership & Agility: Lead a lean team toward cross-trained agility, moving away from "siloed specialists". Manage sprint cycles, conduct code reviews, and guide the team on best engineering practices (including CI/CD).
• Strategy & Scalability: Anticipate future data needs and design a high-velocity architecture that is highly scalable and manageable enough to handle sudden volume increases (e.g., double the data from new sources like paid social/CTV). A critical function is translating business-level requirements into clear, technical user stories for developers.
• ML Integration: Collaborate with ML teams to integrate automated model orchestration into robust ETL pipelines.
• Offshore Collaboration: Collaborate with the offshore team lead to facilitate seamless knowledge transfer and operational continuity across time zones. Establish clear communication protocols, standardized documentation, and robust feedback loops to ensure alignment on project goals. Act as the primary bridge between teams to mitigate bottlenecks and maintain high-quality delivery standards.
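For candidates unfamiliar with the pattern, the Medallion Architecture named above (Bronze raw landing, Silver cleaning and validation, Gold analytics-ready aggregates) can be sketched in plain Python. This is only an illustrative sketch: in this role the layers would live in Delta Lake tables processed with PySpark on Azure Databricks, and every name below (the record fields, the functions) is hypothetical, not MediaRadar's actual schema.

```python
# Hypothetical sketch of a Medallion-style pipeline (Bronze -> Silver -> Gold).
# Plain Python lists/dicts stand in for Spark DataFrames and Delta tables.

def to_bronze(raw_records):
    """Bronze: land raw records as-is, tagged with their layer."""
    return [dict(r, _layer="bronze") for r in raw_records]

def to_silver(bronze):
    """Silver: validate and clean. Records failing basic expectations are
    dropped -- a stand-in for a Great Expectations-style validation step."""
    silver = []
    for r in bronze:
        if r.get("brand") and isinstance(r.get("spend"), (int, float)) and r["spend"] >= 0:
            silver.append({"brand": r["brand"].strip().lower(),
                           "spend": float(r["spend"])})
    return silver

def to_gold(silver):
    """Gold: aggregate into an analytics-ready view (total spend per brand)."""
    totals = {}
    for r in silver:
        totals[r["brand"]] = totals.get(r["brand"], 0.0) + r["spend"]
    return totals

raw = [
    {"brand": "Acme ", "spend": 100},
    {"brand": "Acme", "spend": 50},
    {"brand": "", "spend": 10},        # rejected in Silver: empty brand
    {"brand": "Globex", "spend": -5},  # rejected in Silver: negative spend
]
gold = to_gold(to_silver(to_bronze(raw)))
print(gold)  # {'acme': 150.0}
```

The same shape scales to Spark: Bronze stays append-only and untyped, Silver enforces schema and data-quality expectations, and Gold serves the SLA-bound delivery window the role mentions.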


Requirements

Required Technical Stack (Mandatory)

• Core: Python, PostgreSQL + pgvector
• Big Data: Azure Databricks, PySpark, Delta Lake
• DevOps: Docker, Git, Azure DevOps, CI/CD

Qualifications

• 10+ years of experience in Data or Software Engineering with deep codebase involvement.
• 3+ years as a Technical Lead managing agile teams.
• Proven ability to lead lean, high-impact teams while maintaining high individual output.
• Experience advocating for cross-training and scaling data processing through automation.

Desired Qualifications

• Workflow Orchestration: Experience with Apache Airflow.
• Containerization: Familiarity with Azure Kubernetes Service (AKS).

Why Apply Through MisuJob?

AI-Powered Job Matching: MisuJob uses advanced artificial intelligence to analyze your skills, experience, and career goals. Our matching algorithm compares your profile against thousands of job requirements to find positions where you have the highest chance of success. This saves you hours of manual job searching and ensures you only see relevant opportunities.

One-Click Applications: Once you create your profile, applying to jobs is effortless. Your resume and cover letter are automatically tailored to highlight the most relevant experience for each position. You can apply to multiple jobs in minutes, not hours.

Career Intelligence: Beyond job matching, MisuJob provides valuable career insights. See how your skills compare to market demands, identify skill gaps to address, and understand salary benchmarks for your experience level. Make data-driven decisions about your career path.

Frequently Asked Questions

How do I apply for this position?

Click the "Register to Apply" button above to create a free MisuJob account. Once registered, you can apply with one click and track your application status in your dashboard.

Is MisuJob free for job seekers?

Yes, MisuJob is completely free for job seekers. Create your profile, get matched with jobs, and apply without any cost. We help you find your dream job without any hidden fees.

How does AI matching work?

Our AI analyzes your resume, skills, and experience to understand your professional profile. It then compares this against job requirements using natural language processing to calculate a match percentage. Higher matches mean better fit for the role.

Can I apply to jobs in other countries?

Absolutely. MisuJob features jobs from companies worldwide, including remote positions. Filter by location or look for remote opportunities to find jobs that match your preferences.

Ready to Apply?

Join thousands of job seekers using MisuJob's AI to find and apply to their dream jobs automatically.

Register to Apply