Data Architect
Infosys Singapore & Australia
Posted: October 3, 2023
Quick Summary
Design and implement high-performance, scalable, and secure data architectures for ELT and PySpark/Hadoop workloads, with a focus on business growth and stakeholder management.
Job Description
We are looking for a highly experienced and skilled Data Architect to join our team. The ideal candidate will have 12-15 years of experience architecting data engineering solutions, with a focus on ELT and PySpark/Hadoop workloads. In addition to strong solution and delivery skills, the ideal candidate will bring a perspective on business growth and stakeholder management.
Responsibilities:
• Design and implement high-performance, scalable, and secure data architectures
• Work with business stakeholders to understand their data needs and translate them into technical requirements
• Design and develop data pipelines and workflows using ELT principles and PySpark/Hadoop
• Optimize data pipelines and workflows for performance and efficiency
• Work with data scientists and engineers to ensure that data is accessible and usable for analytics and machine learning
• Implement data governance and security best practices
• Manage and mentor data engineers
• Contribute to the overall data engineering strategy and roadmap
Qualifications:
• 12-15 years of experience in data engineering, with a focus on ELT and PySpark/Hadoop workloads
• Strong experience in designing and implementing high-performance, scalable, and secure data architectures
• Experience with data governance and security best practices
• Experience in managing and mentoring data engineers
• Excellent communication and interpersonal skills
• Ability to work independently and as part of a team
• Strong problem-solving and analytical skills
Desired Skills:
• Experience with cloud computing platforms such as AWS, Azure, or GCP
• Experience with big data technologies such as Spark, Hadoop, Hive, and Kafka
• Experience with data warehousing and data lakes
• Experience with DevOps and MLOps practices
• Experience with data science, machine learning, and streaming data processing
• Experience with real-time analytics, data visualization, and reporting tools