Staff Data Engineer
Keystone
Posted: March 13, 2026
Quick Summary
Keystone is seeking a skilled Staff Data Engineer to join our team in Boston, New York, or San Francisco. The ideal candidate will have a background in data engineering and data science, including hands-on experience with data pipelines and cloud platforms such as AWS. The Staff Data Engineer will design, implement, and maintain large-scale data processing systems while ensuring data quality and security.
Job Description
Keystone is a premier economics, technology, and strategy consulting firm built to help companies lead through transformation. As breakthrough innovations reshape industries, redefine competition, and change our society, complex and highly competitive ecosystems emerge. Keystone advises technology leaders, Fortune 100 companies, their legal counsel, and governments on business, economic, litigation, and regulatory strategy in relation to these innovations and competitive ecosystems. We operate globally from offices in New York, Boston, San Francisco, Seattle, London, Dubai, and Washington, D.C.
K.ATS Foundry is Keystone’s engineering center of excellence, embedding data, platform, and forensic expertise into the firm’s most complex and high-impact projects. Foundry builds secure, reusable infrastructure and scalable technical solutions that accelerate project delivery and ensure defensible, data-driven outcomes. Our engineers work across disciplines—from automating data pipelines and managing cloud platforms to conducting forensic code investigations—helping every engagement start faster, run smoother, and deliver greater impact.
About the Staff Data Engineer Role
You will lead the design and implementation of reproducible, scalable data workflows that power Keystone’s most data-intensive projects. You will architect infrastructure, guide technical implementation across engagements, and develop reusable frameworks that accelerate work across the firm.
This role sits at the intersection of data engineering, platform design, and applied analytics, supporting cross-disciplinary teams in building data systems that are robust, defensible, and automation-first. You’ll serve as a senior individual contributor and mentor, own project-level architecture, shape technical standards, and contribute to Foundry’s shared engineering culture.
Key Responsibilities
Technical Leadership
• Architect and implement end-to-end data pipelines, transformations, and infrastructure across cloud environments (AWS, GCP, Azure, Snowflake).
• Develop reproducible workflows using Infrastructure as Code and data orchestration tools (Airflow, dbt, Spark, or equivalent).
• Design data models and storage systems optimized for performance, scalability, and defensibility in regulatory and litigation contexts.
• Build and maintain reusable templates, libraries, and automation frameworks that improve engineering efficiency across projects.
• Conduct code reviews, mentor peers, and ensure technical rigor and reproducibility in all project deliverables.
Collaboration and Delivery
• Partner with client-facing teams, economists, and data scientists to translate analytical goals into scalable, automated data solutions.
• Collaborate with Platform and Forensic Engineering peers to develop integrated solutions across Foundry focus areas.
• Support the design and delivery of data visualization and analytic tooling (e.g., Power BI, Tableau, Streamlit) for client and internal projects.
• Contribute to documentation and internal guides that capture lessons learned, reusable modules, and technical standards.
Practice Development
• Mentor junior engineers and foster adoption of best practices in data architecture, version control, testing, and CI/CD.
• Participate in Scaling Days to share learnings, refine templates, and advance the firm’s reusable engineering assets.
• Identify opportunities to generalize project solutions into reusable Foundry tooling and QuickStarts.
• Contribute to research and evaluation of emerging technologies (e.g., SQLMesh, OpenLineage, LLM data evaluation pipelines).
• Travel 10–25%, depending on project assignments.
What You'll Bring
• 6+ years of experience in data engineering, data infrastructure, or software engineering roles.
• Deep proficiency in Python, SQL, and modern data stack tools (Airflow, dbt, Spark, Snowflake, BigQuery, or equivalent).
• Strong understanding of cloud architecture and Infrastructure as Code (Terraform, CloudFormation, or similar).
• Experience with version control, CI/CD workflows, and containerization (Docker, Kubernetes).
• Proven ability to architect and optimize large-scale data systems for performance, reliability, and reproducibility.
• Excellent communication skills with the ability to convey technical concepts to non-technical audiences.
• Demonstrated collaboration across disciplines and commitment to mentoring and technical excellence.
• Bachelor’s or advanced degree in Computer Science, Engineering, or related field.
Preferred Experience
• Prior experience at a consulting firm or matrix-based organization.
• Experience developing internal libraries, frameworks, or automation solutions.
• Familiarity with data governance, lineage tracking, or privacy-preserving techniques.
• Exposure to AI/ML data pipelines, LLM evaluation, or text analytics systems.
In addition to annual salary, we provide annual 401(k) contributions and a competitive benefits package. Actual compensation within the range will depend on the level the individual is hired into, based on their skills, experience, and qualifications.
Annual Salary Range
$186,000–$221,905 USD
At Keystone we believe diversity matters. At every level of our firm, we seek to advance and promote diversity, foster an inclusive culture, and ensure our colleagues have a deep sense of respect and belonging. If you are interested in growing your career with colleagues from varied backgrounds and cultures, consider Keystone.