ARCHIVED
This job listing has been archived and is no longer accepting applications.

Staff Data Engineer

Novaprime

Denver, Colorado, United States (Permanent)

Posted: November 4, 2025

Quick Summary

The Staff Data Engineer is responsible for designing and operating a Databricks-centric lakehouse (Delta Lake, Delta Live Tables) in support of the company's AI and data-driven innovation.

Job Description

***This role is fully remote within the U.S. (occasional travel to meet teams or partners). If you'd like to differentiate yourself, shoot me a connection request on LinkedIn and tell me your favorite book of all time***

About Us:

Novaprime is a mortgage technology company dedicated to reducing the cost of originating loans by leveraging emerging technologies, with a strong focus on AI and Distributed Ledger Technology (DLT). We pursue this through data-driven innovation and partnerships with some of the world's largest financial institutions. Novaprime is backed by key investors from the mortgage industry, venture capital, and financial services.

Job Description:

Novaprime is hiring a Staff Data Engineer to architect, build, and operate our Databricks-centric lakehouse on AWS. You will own the data lifecycle—streaming and batch ingestion, modeling, governance, quality, observability, and cost/performance—using Delta Lake, Delta Live Tables, and Databricks Workflows. This is a hands-on leadership role: you will set technical direction, deliver mission-critical pipelines, mentor engineers, and directly drive analytics by defining trusted metrics, instrumentation, and monitoring alongside product and ML. To succeed, you must enjoy thinking in systems and always be learning.
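To give a flavor of the pipelines this role owns, here is a minimal Delta Live Tables ingestion sketch in Databricks SQL: a streaming bronze table fed incrementally by Auto Loader (`read_files`). The bucket path, schema, and table name are illustrative, not Novaprime's.

```sql
-- Hypothetical DLT pipeline step: incremental bronze ingestion with Auto Loader.
-- Path and table name are illustrative only.
CREATE OR REFRESH STREAMING TABLE bronze_loan_events
COMMENT "Raw loan-origination events, ingested incrementally from S3"
AS SELECT
  *,
  current_timestamp() AS _ingested_at,   -- ingestion audit column
  _metadata.file_path AS _source_file    -- provenance for traceability
FROM STREAM read_files(
  's3://example-bucket/loan-events/',
  format => 'json'
);
```

Downstream silver and gold tables would read from this table with the same streaming semantics, giving exactly-once incremental processing without hand-written checkpoint management.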

Responsibilities:

• Implement new technologies that yield competitive advantages and are aligned with our business goals.

• Drive development from concept to market by combining various technologies and collaborating with a cross-functional team.

• Define the lakehouse architecture and standards on Databricks (Unity Catalog governance, Workflows, DLT, Delta Lake).

• Build and operate high-reliability streaming and batch pipelines with Structured Streaming, Auto Loader, CDC patterns, and backfills.

• Design medallion data models and canonical domains; implement SCDs, schema evolution, and versioned/time-travel datasets.

• Establish data quality, SLAs/SLOs, lineage/traceability, and audit-ready documentation aligned to SOC 2.

• Drive analytics: define and govern KPI/metric definitions, build metrics pipelines, enable semantic consistency, and implement monitoring/alerting for data and dashboards.

• Optimize cost and performance on Databricks (cluster policies, sizing, Photon, AQE, partitioning, file sizing, skew mitigation, Z-ORDER/OPTIMIZE).

• Enforce security and privacy (Unity Catalog permissions, row/column-level controls, PII masking/tokenization, secrets management).

• Enable self-serve with standardized, well-documented datasets; collaborate with ML on feature pipelines and Feature Store.

• Champion software excellence: Git-based workflows, code reviews, automated testing, CI/CD for data, and IaC.

• Collaborate with product managers, designers, and other stakeholders to develop strategies and implement new products and features.

• Stay current with the latest technologies to maintain competitiveness and technological leadership in the market.

• Take on other engineering tasks that advance the organization's mission.
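For context on the SCD and MERGE/CDC work in the list above, a common Delta Lake SCD Type 2 pattern expires the current dimension row and inserts a new version in a single MERGE. This is a hedged sketch, not Novaprime's schema; table and column names are hypothetical.

```sql
-- Hypothetical SCD Type 2 upsert into a borrower dimension.
-- staged_updates holds the latest attributes per borrower_id.
MERGE INTO dim_borrower AS tgt
USING (
  -- Candidate rows keyed for matching against the current version
  SELECT borrower_id AS merge_key, borrower_id, address FROM staged_updates
  UNION ALL
  -- NULL merge_key forces NOT MATCHED for changed borrowers, so a fresh
  -- current row is inserted alongside the expired one
  SELECT NULL AS merge_key, s.borrower_id, s.address
  FROM staged_updates s
  JOIN dim_borrower t
    ON s.borrower_id = t.borrower_id
  WHERE t.is_current AND s.address <> t.address
) AS src
ON tgt.borrower_id = src.merge_key AND tgt.is_current
WHEN MATCHED AND tgt.address <> src.address THEN
  UPDATE SET tgt.is_current = false, tgt.valid_to = current_timestamp()
WHEN NOT MATCHED THEN
  INSERT (borrower_id, address, is_current, valid_from, valid_to)
  VALUES (src.borrower_id, src.address, true, current_timestamp(), NULL);
```

The `valid_from`/`valid_to` columns preserve full history, and Delta's time travel covers point-in-time audits that predate the dimension's retention window.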

Requirements:

• B.S. in Computer Science or equivalent experience.

• 8+ years building and operating production data platforms; 4+ years deep, hands-on Databricks/Spark (PySpark + SQL).

• Proven ownership of a production lakehouse (S3 + Delta Lake) with strict SLAs and compliance requirements.

• Expertise with Delta Lake (MERGE/CDC, schema evolution, time travel, OPTIMIZE/Z-ORDER, VACUUM) and DLT, Workflows, Auto Loader; Feature Store experience in production.

• Strong data modeling (dimensional, canonical), SCD Types 1/2, and handling slowly changing entities and schema drift.

• Track record delivering trustworthy datasets with monitoring, alerting, lineage, and clear documentation; able to define and maintain metric layers consumed by product and business.

• Advanced Python and SQL; testing culture (pytest), CI/CD (GitHub Actions), and Terraform for Databricks; solid Git practices.

• AWS foundations: S3, IAM, networking basics; event ingestion.

• Excellent communication and leadership; able to drive design reviews, write clear technical docs, and mentor engineers in a remote, async environment.
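The Delta Lake maintenance operations named in the requirements above (OPTIMIZE/Z-ORDER, VACUUM, time travel) look roughly like the following in Databricks SQL; the table name and version number are illustrative.

```sql
-- Compact small files and co-locate rows that are frequently filtered together
OPTIMIZE silver.loan_payments ZORDER BY (loan_id, event_date);

-- Remove files no longer referenced, keeping 7 days of history for time travel
VACUUM silver.loan_payments RETAIN 168 HOURS;

-- Time travel: audit the table as it existed at an earlier version
SELECT count(*) FROM silver.loan_payments VERSION AS OF 42;

-- Inspect the commit history for audit-ready documentation
DESCRIBE HISTORY silver.loan_payments;
```

Note the retention coupling: VACUUM's retention window bounds how far back `VERSION AS OF` can reach, so audit requirements should drive the retention setting.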

Desired Experience:

• Databricks SQL/Serverless, Unity Catalog lineage/system tables, and semantic layer experience.

• Product analytics and observability: Mixpanel and New Relic.

• Prior leadership of SOC 2 audits/readiness and data platform on-call rotations.

• Previous startup experience.
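On the Unity Catalog lineage/system tables item above: lineage is queryable directly from Databricks' `system.access.table_lineage` system table. A hedged example (the filtered table name is hypothetical):

```sql
-- Which downstream tables read from a given source table in the last week?
SELECT DISTINCT target_table_full_name
FROM system.access.table_lineage
WHERE source_table_full_name = 'silver.loan_payments'
  AND event_time >= current_date() - INTERVAL 7 DAYS
  AND target_table_full_name IS NOT NULL;
```

Queries like this underpin impact analysis ("what breaks if this table changes?") and the audit-ready lineage documentation mentioned in the responsibilities.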

Benefits:

• Competitive salary, equity, and benefits.

• Mostly remote work.

• Opportunity to make a difference for millions of prospective homeowners.

We are an equal-opportunity employer and value diversity at our company. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
