Data Reliability Engineer II
Zeta
Posted: November 27, 2025
Quick Summary
We are looking for a Data Reliability Engineer II to join our team and help us build the future of banking.
Job Description
About us
Build the future of banking.
Zeta is a next-generation banking technology company providing cloud-native, fully stackable processing and core banking platforms for issuers. With a focus on scalability, compliance, and innovation, Zeta empowers financial institutions to modernize their technology infrastructure and deliver secure, seamless digital banking experiences.
Our impact runs at real-world scale. Today, over 25 million cards are live on Zeta-powered platforms across 7 countries, supported by a passionate team of 1,700+ Zetanauts across India, the US, EMEA, and Asia. Backed by SoftBank Vision Fund, Mastercard, and other reputed strategic investors, we reached a valuation of $2 billion in 2025.
We focus on establishing product lines that deliver key outcomes by addressing real customer pain points, modernizing legacy systems, and strengthening core fundamentals. As a result, our systems and platforms support a wide range of banking and payments capabilities, including:
1. Tachyon, our cloud-native banking stack built for population-scale systems
2. Cipher, our unified authentication platform for secure, high-volume banking environments
3. Digital Credit as a Service, enabling banks to launch credit lines on UPI
4. Elena, our intelligent and conversational AI platform for banking
5. Pixel, India’s first digital-native credit card, launched in partnership with HDFC Bank, for whom we also revamped the PayZapp mobile app, winner of the Celent Model Bank Award for Payments Innovation 2024
6. Sparrow, the leading card experience for non-prime cardholders in the US
…and more across cards, payments, lending, and core banking.
We are an engineering-first organization that values ownership, bias for action, and long-term thinking. Together, we solve some of the hardest problems in banking tech. Our culture is built around trust, collaboration, and creating the conditions for you to drive impact proportionate to your potential. Reinforcing our commitment to creating an inclusive and supportive workplace, we have been consistently recognized as a Great Place to Work.
If you want to build cutting-edge banking tech that enables banks to serve millions reliably, securely, and at a population scale, Zeta is your playground.
If you would like to learn more about how we have grown and evolved over the years, watch our journey here. You can also explore our website and follow us on LinkedIn, Instagram, YouTube, and X.
Zeta is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We encourage applicants from all backgrounds, cultures, and communities to apply and believe that a diverse workforce is key to our success.
Responsibilities:
• Proactively monitor PostgreSQL RDS instances for performance, availability, and resource utilization (CPU, memory, storage, connections) using established monitoring tools (e.g., CloudWatch, Prometheus).
• Assist in identifying performance bottlenecks in PostgreSQL RDS. Apply basic performance tuning techniques like reviewing query execution plans, adding missing indexes, and recommending parameter adjustments.
• Monitor the health and performance of Debezium and Kafka Connect connectors, identifying and troubleshooting basic issues related to data capture and delivery.
• Monitor Apache NiFi data flows for errors, backpressure, and performance issues. Assist in troubleshooting and resolving common NiFi flow failures.
• Provide support for data-related issues and participate in root cause analysis.
• Monitor the execution of Apache Airflow DAGs, identify failed tasks, and troubleshoot and re-run them as needed.
• Develop and maintain automation scripts and infrastructure-as-code (IaC) templates (e.g., using Crossplane, Terraform) to automate routine database tasks, deployments, and updates.
• Participate in on-call rotations to respond to database-related incidents and perform troubleshooting and root cause analysis.
• Assist in implementing and maintaining security best practices for cloud databases, including access controls, encryption, and compliance with regulatory requirements.
• Regularly audit and assess database security configurations.
• Configure and manage database backup and recovery strategies to ensure data integrity and availability in case of failures or data loss.
• Analyze database query performance and collaborate with developers to optimize SQL queries and schemas.
• Participate in continuous improvement initiatives to enhance the reliability, scalability, and performance of cloud databases.
• Assist in the design and optimization of database schemas for cloud environments.
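To give a flavor of the query-performance triage mentioned above, here is a minimal, illustrative sketch of the kind of helper a Data Reliability Engineer might script. It ranks query statistics (of the shape exported from PostgreSQL's pg_stat_statements view) by mean execution time to decide which queries to tune first; the sample rows, column names, and numbers below are made-up examples, not part of this role's actual tooling.

```python
# Illustrative sketch: rank queries by mean execution time to prioritize
# tuning work. Input rows mimic pg_stat_statements output; all sample
# data below is invented for demonstration.

def rank_queries(stats, top_n=3):
    """Return the top_n queries by mean execution time (ms per call)."""
    ranked = sorted(
        stats,
        key=lambda row: row["total_exec_time"] / max(row["calls"], 1),
        reverse=True,
    )
    return [
        {
            "query": row["query"],
            "mean_ms": round(row["total_exec_time"] / max(row["calls"], 1), 2),
            "calls": row["calls"],
        }
        for row in ranked[:top_n]
    ]

# Hypothetical sample export (times in milliseconds).
sample_stats = [
    {"query": "SELECT * FROM orders WHERE customer_id = $1",
     "calls": 50_000, "total_exec_time": 275_000.0},
    {"query": "UPDATE accounts SET balance = $1 WHERE id = $2",
     "calls": 120_000, "total_exec_time": 96_000.0},
    {"query": "SELECT count(*) FROM audit_log",
     "calls": 10, "total_exec_time": 42_000.0},
]

for row in rank_queries(sample_stats):
    print(f'{row["mean_ms"]:>10.2f} ms  x{row["calls"]:<8} {row["query"]}')
```

A rarely-run query with a huge per-call cost (like the full count over audit_log here) can matter more for tuning than a frequent cheap one, which is why the sketch ranks by mean rather than total time.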
Skills:
• Familiarity with data pipeline concepts and technologies like Debezium, Kafka Connect, and Apache NiFi.
• Basic understanding of Amazon Redshift and S3.
• Exposure to Apache Spark for data processing.
• Basic understanding of Apache Airflow for workflow orchestration.
• Strong SQL scripting skills for querying and basic data manipulation.
• Familiarity with scripting languages (e.g., Python, Bash) is a plus.
• Knowledge of database security best practices, including access controls, encryption, and compliance with regulatory requirements (e.g., GDPR, HIPAA).
• Holding the ‘AWS Certified Database - Specialty’ certification is a plus.
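As an example of the scripting skills listed above, the following is a minimal, hypothetical sketch of a threshold check an automation script might run over metric samples (e.g., CPU utilization pulled from CloudWatch). The instance names, sample values, and 80% threshold are all invented for illustration.

```python
# Illustrative sketch: flag database instances whose average CPU
# utilization exceeds a threshold. Instance names, samples, and the
# threshold are made-up examples, not real monitoring config.

CPU_THRESHOLD_PCT = 80.0

def find_breaches(metrics, threshold=CPU_THRESHOLD_PCT):
    """Return (instance, avg_cpu) pairs above the threshold, hottest first."""
    breaches = []
    for instance, samples in metrics.items():
        avg = sum(samples) / len(samples)
        if avg > threshold:
            breaches.append((instance, round(avg, 1)))
    return sorted(breaches, key=lambda pair: pair[1], reverse=True)

# Hypothetical per-instance CPU samples (percent).
sample_metrics = {
    "pg-prod-1": [72.0, 91.5, 88.0],   # spiky, averages above threshold
    "pg-prod-2": [40.0, 35.5, 42.0],   # healthy
    "pg-replica": [85.0, 83.0, 90.0],  # consistently hot
}

for instance, cpu in find_breaches(sample_metrics):
    print(f"ALERT {instance}: avg CPU {cpu}% > {CPU_THRESHOLD_PCT}%")
```

In practice a script like this would feed an alerting channel rather than print, but the core pattern (aggregate samples, compare against a threshold, sort by severity) is the same.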
Experience and Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 3-5 years of experience in database administration, with a focus on PostgreSQL.
• 1-2 years of hands-on experience with PostgreSQL RDS.