AWS Data Engineer
Weekday AI
Posted: March 5, 2026
Quick Summary
This role involves designing, building, and deploying scalable data pipelines on AWS using services such as Glue, Lambda, EventBridge, Kinesis, S3, Redshift, and PySpark, and requires expertise in data modeling, automation, testing, and deployment.
Job Description
This role is for one of Weekday's clients.
Min Experience: 6 years
Location: Hyderabad
Job Type: Full-time
Requirements:
Extensive experience with AWS data services such as Glue, Lambda, EventBridge, Kinesis, S3/EMR, Redshift, RDS, Step Functions, Airflow, and PySpark.
Demonstrated knowledge of IAM, CloudTrail, cluster optimization, Python, and SQL.
Expertise in data design, source-to-target mapping (STTM), data models, component design, automated testing, code coverage, UAT support, and deployment processes.
Familiarity with version control systems, including SVN and Git.
Responsibilities:
Create and manage AWS Glue crawlers and jobs to automate data cataloging and ingestion from diverse structured and unstructured data sources.
Build ETL pipelines with AWS Glue, oversee crawlers, and maintain the Glue Data Catalog.
Design and manage AWS Redshift clusters, write complex SQL queries, and tune query performance.
Facilitate data consumption for reporting and analytics business applications using AWS services (e.g., QuickSight, SageMaker, JDBC/ODBC connectivity).
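As a flavor of the Glue automation work described above, here is a minimal Python sketch of assembling the parameters for a Glue crawler that catalogs an S3 path. All names (role ARN, bucket, database, crawler name) are hypothetical placeholders, not taken from the posting; the actual AWS call is shown only in a comment since it requires credentials.

```python
# Hypothetical sketch: configuring an AWS Glue crawler for S3 ingestion.
# Role ARN, bucket, and database names below are illustrative placeholders.
import json

def build_crawler_config(name, role_arn, database, s3_path):
    """Assemble the keyword arguments that boto3's glue.create_crawler accepts."""
    return {
        "Name": name,
        "Role": role_arn,
        "DatabaseName": database,
        "Targets": {"S3Targets": [{"Path": s3_path}]},
        # Run daily at 02:00 UTC (AWS cron syntax).
        "Schedule": "cron(0 2 * * ? *)",
    }

config = build_crawler_config(
    "sales-raw-crawler",
    "arn:aws:iam::123456789012:role/GlueServiceRole",
    "sales_raw",
    "s3://example-bucket/raw/sales/",
)
print(json.dumps(config, indent=2))

# With AWS credentials configured, the crawler could then be created via:
#   import boto3
#   boto3.client("glue").create_crawler(**config)
```

In practice the same pattern extends to Glue jobs (`create_job`) and triggers, which is how cataloging and ingestion are automated end to end.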
Skills
AWS
Data Engineer
Python
PySpark
SQL