GCP Data Engineer
Sutherland
Posted: February 23, 2026
Quick Summary
Design and implement real-time data ingestion pipelines using Pub/Sub and Kafka Streams for healthcare data formats (HL7, FHIR), and build a robust Bronze layer in Cloud Storage as the single source of truth for raw, untransformed data.
Job Description
Sutherland is seeking a reliable and technically skilled person to join us as a GCP Data Engineer who will play a key role in driving our continued product growth and innovation. If you are looking to build a fulfilling career and are confident you have the skills and experience to help us succeed, we want to work with you!
Key Responsibilities:
• Design and implement real-time data ingestion pipelines using Pub/Sub and Kafka Streams for healthcare data formats (HL7, FHIR)
• Build a robust Bronze layer in Cloud Storage as the single source of truth for raw, untransformed data
• Develop streaming ingestion patterns using Dataflow for real-time data capture with minimal transformation
• Implement batch loading processes using Dataproc for large-volume data from diverse sources (logs, databases, APIs)
• Apply schema inference and basic data type adjustments while preserving raw data lineage
• Design partitioning strategies in Cloud Storage for efficient historical data archival and retrieval
• Establish data landing zone controls including audit logging, versioning, and immutability patterns
• Create automated workflows using Cloud Composer for orchestrating ingestion pipelines
• Implement data catalog and metadata management for raw data assets
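As a rough illustration of the partitioning responsibility above, the sketch below (all names and the path layout are hypothetical, not part of the role) shows one common way to lay out date-partitioned Cloud Storage paths for raw Bronze-layer data:

```python
from datetime import datetime, timezone

def bronze_object_path(source: str, event_time: datetime, filename: str) -> str:
    """Build a Hive-style, date-partitioned object path for the Bronze layer.

    Raw files are grouped by source system and ingestion date, so that
    lifecycle rules and historical reads can target narrow prefixes.
    """
    ts = event_time.astimezone(timezone.utc)
    return (
        f"bronze/{source}/"
        f"year={ts:%Y}/month={ts:%m}/day={ts:%d}/"
        f"{filename}"
    )

# Example: an HL7 message captured on 2026-02-23 lands under a daily prefix.
path = bronze_object_path(
    "hl7_adt",
    datetime(2026, 2, 23, 14, 30, tzinfo=timezone.utc),
    "msg-000123.hl7",
)
```

A daily `year=/month=/day=` prefix like this keeps raw data immutable and append-only while still letting lifecycle policies archive old partitions cheaply.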
 
Required Skills:
• 5+ years of experience with GCP services (Cloud Storage, Pub/Sub, Dataflow, Dataproc, Cloud Composer)
• Strong expertise in Apache Kafka, Kafka Streams, and event-driven architectures
• Proficiency in Python and/or Java for data pipeline development using Apache Beam SDK
• Experience with healthcare data standards (HL7, FHIR) and handling semi-structured data
• Hands-on experience with streaming frameworks (Apache Beam, Dataflow) for near-real-time ingestion
• Knowledge of file formats and compression (JSON, Avro, Parquet) for raw data storage
• Understanding of CDC patterns, incremental loading, and data versioning strategies
• Experience with Cloud Storage lifecycle management and cost optimization
Preferred Qualifications:
• GCP Professional Data Engineer certification
• Experience with Confluent Platform or Google Cloud Managed Service for Apache Kafka
• Familiarity with healthcare compliance requirements (HIPAA) and data residency
• Background in log aggregation platforms (Fluentd, Logstash) and observability
• Knowledge of data lake security patterns and IAM controls
All your information will be kept confidential according to EEO guidelines.