Senior Data Platform Engineer
Posted: March 10, 2026
Quick Summary
We are seeking a highly skilled Senior Data Platform Engineer to join our team in Barcelona, Spain.
Job Description
About seQura
seQura provides innovative, flexible, and easy-to-use payment technologies that help merchants acquire, convert, and retain more customers.
We make a difference in sales performance by tailoring our solutions to different verticals (Retail, Education, Optics, Travel…).
We also empower smart shopping for consumers who seek more value, convenience, and flexibility, with new payment experiences that let them save, access interest-free credit, or pay in small, comfortable installments over up to 24 months.
Born in Barcelona, seQura is a privately owned fintech in the scaleup phase. Present in southern Europe and Latin America, we are growing at above 50% CAGR, with more than 100 million in annual recurring revenue. Over 5,000 businesses, more than 2 million shoppers, and 400+ employees continue to rate us as one of the most loved and trusted fintechs out there, with an NPS of 87, a Trustpilot rating of 4.5/5, and a Glassdoor rating of 4.3/5.
About the role 🤓
We’re looking for a Senior Data Platform Engineer to join our Data Platform team and help us scale seQura’s data ecosystem through automation, platform capabilities, and self-service tooling.
Your mission will be to help evolve our Data Platform as a Product, enabling software engineers, analysts, data scientists, and business teams to leverage data across its lifecycle — from ingestion to discovery — while ensuring security, governance, and reliability are built into the platform by default. The goal is to empower teams across the company to access trusted data and make better, data-driven decisions.
This role goes beyond building pipelines. You’ll design the guardrails, abstractions, and platform capabilities that make it easy for other teams to build and scale their own data products safely and efficiently.
You’ll work in a cloud-native environment on AWS, where Infrastructure as Code, observability, and automation are fundamental engineering principles.
What challenges you'll be solving 🚀
• Design and evolve the self-service Data Platform, enabling engineering and business teams to build and manage their own data products autonomously
• Identify friction points for internal data consumers and build platform capabilities that improve developer experience
• Design and maintain a “Golden Path” for data pipelines, enabling teams to deploy new pipelines in minutes instead of weeks
• Embed data governance, observability, lineage, and access control directly into platform capabilities
• Improve the reliability, performance, and cost efficiency of our data infrastructure
• Lead architectural discussions and mentor engineers on distributed systems and platform design
• Evangelize the platform internally to drive adoption across teams
About the Data team 🧩
Team mission
To build and evolve a self-service data platform that empowers engineering teams, analysts, and data scientists to build, discover, and scale their own data initiatives with built-in governance, observability, and minimal friction.
What we own
• The full lifecycle of data across the company: extraction, ingestion, landing, transformation, governance, and observability
• The data infrastructure and platform capabilities that enable teams to create and scale their own data products
• Core platform components that ensure data quality, compliance, lineage, and discoverability
• The evolution of our data platform into a self-service ecosystem where teams can autonomously manage their data lifecycle
Team Structure
The Data Platform team is a key part of the Data organization, which also includes the Head of Data, Data Governance Lead, Data Science & AI, and Data Analytics teams.
The Data Platform team currently consists of a Data Platform Lead, four Data Engineers, and one MLOps Engineer. The team collaborates closely with the Infrastructure Platform team and the rest of the Data organization to build and evolve the company's data platform.
How we work
• Platform mindset: we build capabilities and tooling, not one-off solutions
• Engineers treat internal teams as platform customers
• Strong collaboration with Infrastructure Platform, Data Science & AI, and Data Analytics teams
• Infrastructure-as-Code and automation are core engineering principles
• We are transitioning from a reactive ticket-based model to a product-oriented platform team
What to expect in the next 90 days 🏁
Month 1: You’ll immerse yourself in our data stack and our Data Platform philosophy. You’ll meet engineers, analysts, and product teams to understand their current challenges and friction points when working with data. You’ll contribute early improvements to our Infrastructure-as-Code, pipelines, or observability stack, with the goal of identifying and fixing a developer experience pain point.
Month 2: You’ll start contributing to the long-term platform roadmap. You’ll help define standardized ingestion patterns that move us away from bespoke pipelines and toward reusable, scalable platform capabilities.
Your goal will be to reduce “Time-to-Data” for one of our key business domains.
Month 3: You’ll lead a strategic platform initiative from design to deployment, such as evolving our data discovery layer or improving automated change detection.
During seQura Week (when the whole company gathers at our Barcelona HQ), you’ll present your work to the engineering organization, showing how the evolution of the data platform directly enables faster and better business decision-making.
By this point, you’ll have delivered a platform component that allows a non-data team to independently deploy a data producer or consumer.
Tech stack & environment 🛠️
Our data platform runs on AWS with Kubernetes (EKS), with everything managed as code:
• Infrastructure as Code: Terraform and Helm
• CI/CD: GitHub Actions and Jenkins
• Ingestion: Airbyte, pulling data from multiple sources
• Storage: S3 and Redshift
• Transformation: dbt
• Orchestration: Airflow
• Governance & discovery: OpenMetadata
• Analytics: Metabase
• Observability: Prometheus, Grafana, Thanos, Elastic, and Tempo
• Automation & scripting: mainly Python, Terraform, and Helm