Platform Data Engineer
Neon Redwood
Posted: November 21, 2025
Quick Summary
We are seeking a Data Engineer II to develop and expand our data infrastructure and analytics capabilities, working with large-scale data sets and AI-driven solutions.
Job Description
About Neon Redwood
Neon Redwood is a data services consulting company building cutting-edge AI and data-driven solutions. We are a team of passionate engineers and data experts, and we are looking for a Data Engineer to join us and help develop and expand our data infrastructure and analytics capabilities.
The Role
We are seeking an experienced Data Engineer II with a strong background in data engineering and a passion for working with large-scale data sets to help us develop and expand our data infrastructure and analytics capabilities.
The ideal candidate will have at least 2 years of professional experience and a solid understanding of Python, BigQuery, and Google Cloud Platform (GCP), or similar technologies. This full-time role involves working closely with our CTO and other team members to design, develop, and maintain data pipelines, ETL processes, and data warehousing solutions.
Responsibilities
• Collaborate with the CTO and other team members to design, develop, and maintain data pipelines and ETL processes.
• Write clean, efficient, and maintainable code in Python and other relevant technologies.
• Implement and optimize data storage and processing solutions using BigQuery and Google Cloud Platform (GCP).
• Ensure data quality and integrity through proper data validation and monitoring techniques.
• Stay up to date with the latest industry trends and technologies to keep our data infrastructure competitive.
• Assist in the development and launch of new data-driven tools and products.
• Mentor and guide junior engineers, fostering a culture of continuous learning and improvement.
Requirements
• Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
• 2+ years of professional data engineering experience.
• Proficiency in Python, BigQuery, and Google Cloud Platform (GCP) or equivalent technologies.
• Experience with data pipeline and ETL process design and development.
• Excellent problem-solving skills and the ability to work independently or as part of a team.
• Strong communication and collaboration skills, and a desire to develop leadership abilities.
• Passion for working with large-scale data sets and staying current with industry trends.
• Experience with Asana or similar project management tools.
Additional Skills (Nice to Have)
• Experience with other data processing technologies and platforms (e.g., Apache Beam, Dataflow, Hadoop, Spark).
• Experience with data visualization tools and libraries (e.g., Looker, Sigma, D3.js).
• Knowledge of machine learning and AI concepts.
• Experience with real-time data processing and streaming technologies (e.g., Kafka, Pub/Sub).