DWH Engineer
Gypsy Collective
Posted: February 11, 2026
Quick Summary
Design, build, and maintain scalable data warehouse solutions that power analytics and business decision-making, working across the full data lifecycle.
Job Description
We are looking for a Mid-level/Senior DWH Engineer to design, build, and maintain scalable data warehouse solutions that power analytics and business decision-making. You will work across the full data lifecycle — from ingestion and modeling to optimization, reliability, and automation — collaborating closely with analysts, developers, and business stakeholders.
Requirements:
• Strong SQL skills: complex queries, CTEs, window functions, analytical queries (5+ years experience);
• Knowledge of Python or other scripting languages for data transformations (3+ years experience);
• Deep understanding of DWH concepts: ETL/ELT, Data Vault, Kimball, Star/Snowflake schemas (4+ years experience);
• Experience with Airflow or other data pipeline orchestrators (3+ years experience);
• Hands-on experience with modern DWH and query engines: BigQuery, Snowflake, Redshift, ClickHouse, Vertica, AWS Athena, Trino (2+ years experience);
• Confident use of Git; experience with team workflows (pull requests, rebasing, merge conflict resolution) (5+ years experience);
• Understanding of server and cloud infrastructure: basic skills in configuration, maintenance, monitoring, and load control (2+ years experience).
Nice to Have
• Experience with CDC tools and streaming data sources;
• Knowledge of Docker, Kubernetes, and Infrastructure as Code (Terraform);
• Experience with cloud platforms: AWS, GCP, or Azure;
• Familiarity with data governance, data cataloging, and lineage tools.
Responsibilities:
• Design, build, and maintain scalable Data Warehouse architectures aligned with business needs;
• Develop and optimize ETL/ELT pipelines using Python/Airflow and custom solutions;
• Work with DWH/Data Lake technologies: PostgreSQL, Trino, BigQuery;
• Implement incremental loads, CDC, backfills, and reprocessing strategies;
• Optimize query performance, data models, and pipeline execution;
• Ensure data quality through validation, automated testing, monitoring, and alerting;
• Integrate new data sources (APIs, third-party systems, raw data) without disrupting existing pipelines;
• Collaborate with analysts, engineers, BI teams, and business stakeholders to translate requirements into data solutions;
• Mentor engineers, review code, and contribute to data standards and best practices.
Benefits:
💸 Flexible payment options: choose the method that works best for you;
🧾 Tax assistance included: we handle part of your taxes and provide guidance on the local setup;
🎁 Financial perks: Bonuses for holidays, birthdays, work milestones, and more, just to show we care;
📈 Learn & grow: We cover courses and certifications — and offer real opportunities to grow your career with us;
🥐 Benefit Cafeteria: Choose what suits you — sports, language courses, therapy sessions, and more;
🎉 Stay connected: From team-building events to industry conferences — we bring people together online, offline, and on stage;
💻 Modern Equipment: We provide new laptops along with essential peripherals like monitors and headphones for a comfortable workflow;
🕘 Your schedule, your rules: Start your day at 9, 10, or even 11 — we care about results, not clock-ins.