Data Engineer (GCP, Snowflake)
Addepto
Posted: March 27, 2026
Quick Summary
Data Engineer role at Addepto, focused on designing scalable, ROI-focused AI solutions for global enterprises and startups, with expertise in Big Data and Artificial Intelligence.
Job Description
Addepto is a leading AI consulting (https://addepto.com/ai-consulting/) and data engineering (https://addepto.com/data-engineering-services/) company that builds scalable, ROI-focused AI solutions for some of the world's largest enterprises and pioneering startups, including Rolls Royce, Continental, Porsche, ABB, and WGU. With an exclusive focus on Artificial Intelligence and Big Data, Addepto helps organizations unlock the full potential of their data through systems designed for measurable business impact and long-term growth.
The company's work extends beyond client engagements. Drawing from real-world challenges and insights, Addepto has developed its own product, ContextClue, and actively contributes open-source solutions to the AI community. This commitment to transforming practical experience into scalable innovation has earned Addepto recognition by Forbes as one of the top 10 AI consulting companies worldwide.
As part of KMS Technology, a US-based global technology group, Addepto combines deep AI specialization with enterprise-scale delivery capabilities, enabling the partnership to move clients from AI experimentation to production impact, securely and at scale.
As a Data Engineer, you will work on data engineering projects in GCP environments, including migrating, analyzing, and managing data structures across cloud platforms such as BigQuery and Snowflake. You will design, develop, and maintain scalable data solutions, working closely with clients and cross-functional teams, and you will play a key role in building data pipelines, integrating data from multiple sources, and ensuring data quality, security, and performance across the data platform.
📍 Location: This role requires on-site work in the United States
🚀 Your main responsibilities:
• Design, develop, test, and maintain data pipelines and ETL/ELT processes using GCP and Snowflake (a minimal sketch follows this list).
• Implement data ingestion, transformation, and storage solutions for structured, semi-structured, and unstructured data.
• Build and optimize batch, micro-batch, and real-time data pipelines.
• Support data migration from legacy systems to cloud platforms (GCP, Snowflake).
• Collaborate with business and technical stakeholders to translate requirements into scalable data solutions.
• Work with GCP services such as BigQuery, Cloud SQL, Cloud Spanner, and Cloud Bigtable.
• Integrate data from various sources and support data platform development.
• Ensure data quality by implementing validation rules, testing frameworks, and monitoring solutions.
• Work closely with security teams to ensure data protection, access control, and compliance.
• Support development of data models and schemas for analytics and reporting.
• Contribute to CI/CD processes, version control, and infrastructure automation (e.g., Git, Terraform).
• Collaborate with data scientists, analysts, and engineers to support data-driven use cases.
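
For a sense of the day-to-day work, here is a minimal sketch of the kind of batch ingestion-plus-validation pipeline the responsibilities above describe, using the google-cloud-bigquery Python client. The project, bucket, dataset, and table names are hypothetical placeholders, not details from this posting, and a production pipeline would add orchestration, retries, and monitoring.

```python
# Minimal batch load + validation sketch using the google-cloud-bigquery client.
# All resource names (project, bucket, dataset, table) are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project ID

# Load a CSV file from Cloud Storage into a BigQuery table (ELT-style ingestion).
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # skip the header row
    autodetect=True,      # infer the schema from the file
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
load_job = client.load_table_from_uri(
    "gs://example-bucket/raw/orders.csv",  # hypothetical source file
    "example-project.analytics.orders",    # hypothetical destination table
    job_config=job_config,
)
load_job.result()  # block until the load job finishes (raises on failure)

# Simple data-quality gate: fail fast if any row is missing its primary key.
check = client.query(
    "SELECT COUNT(*) AS bad_rows "
    "FROM `example-project.analytics.orders` "
    "WHERE order_id IS NULL"
)
bad_rows = next(iter(check.result())).bad_rows
if bad_rows:
    raise ValueError(f"Validation failed: {bad_rows} rows have a NULL order_id")
print(f"Loaded {load_job.output_rows} rows; validation passed")
```

In practice, logic like this would typically run inside an orchestrated job (e.g., Cloud Composer/Airflow), with an equivalent Snowflake path using COPY INTO, but the shape (load, then validate before exposing data downstream) is the same.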