Site Reliability Engineer (SRE) — AI Training & Inference Infrastructure
Posted: January 30, 2026
Quick Summary
This Site Reliability Engineer (SRE) builds and operates the compute, storage, networking, and platform layers that power model training, evaluation, and production inference. The role centers on GPU-enabled clusters (cloud and/or on-prem), Kubernetes, observability, and automation, and suits engineers with 5+ years of production infrastructure experience who want to keep large-scale AI systems fast, secure, and resilient.
Job Description
About STACK
STACK builds software that helps teams plan, build, and operate with clarity and speed. We’re investing in an in-house AI team to train and run models that meaningfully improve our products—and we need reliable, scalable infrastructure to make that possible.
About the Team
The AI Infrastructure role builds and operates the compute, storage, networking, and platform layers that power model training, evaluation, and production inference. Our focus is simple: make it easy for researchers and product engineers to ship, while keeping systems fast, secure, and resilient at scale.
About the Role
We’re looking for an SRE to own reliability for STACK’s model training and inference platforms. You’ll operate and evolve GPU-enabled clusters (cloud and/or on-prem), improve developer experience for AI workloads, and build automation that makes deployments repeatable and recoveries boring.
This role blends distributed systems engineering with hands-on infrastructure work—especially around Kubernetes, GPU systems, observability, and large-scale operations. If you like turning complex systems into “it just works,” this is for you.
What You’ll Do
Build and operate AI compute platforms
Design, provision, and scale GPU-backed clusters for training and inference (Kubernetes-based and/or HPC-style schedulers); a brief job-submission sketch follows this list.
Own cluster lifecycle management: provisioning, bootstrapping, upgrades, autoscaling/capacity scaling, and decommissioning.
Build reliable abstractions so training jobs can run across multiple clusters/environments with minimal friction.
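To make the day-to-day concrete, here is a minimal sketch of submitting a single-node GPU training job with the official Kubernetes Python client. The image, namespace, command, and job name are hypothetical placeholders, not STACK's actual configuration.

```python
# A minimal sketch of submitting a GPU training job via the Kubernetes
# Python client. Image, namespace, command, and names are hypothetical.
from kubernetes import client, config

def submit_training_job(name: str, image: str, gpus: int = 8) -> None:
    config.load_kube_config()  # use config.load_incluster_config() inside a pod

    container = client.V1Container(
        name="trainer",
        image=image,
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": str(gpus)},  # GPUs are requested via limits
        ),
    )
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1JobSpec(
            backoff_limit=2,  # retry transient node failures, then surface the error
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
            ),
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="training", body=job)

submit_training_job("llm-finetune-001", "registry.example.com/trainer:latest")
```

In practice a paved-path wrapper like this is what lets jobs run across multiple clusters with minimal friction: the abstraction owns defaults, retries, and placement so researchers don't have to.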
Reliability, incident response, and operational excellence
Define and track SLIs/SLOs for training and inference systems (job success rate, queue latency, throughput, tail latency, GPU utilization, etc.); an error-budget sketch follows this list.
Lead incident response and root-cause analysis; drive permanent fixes and “never again” automation.
Improve recovery and maintenance workflows (e.g., reducing restart/upgrade times; safer rollouts).
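For a flavor of the SLO work, here is a minimal sketch of an error-budget check for a job-success-rate SLI in plain Python. The 99.5% target and the job counts are hypothetical illustration values; in production these counters would come from a metrics system, not be passed in by hand.

```python
# A minimal error-budget check for a job-success-rate SLO.
# The 99.5% target and the counts below are hypothetical examples.
def error_budget_remaining(succeeded: int, failed: int, slo_target: float = 0.995) -> float:
    """Fraction of the error budget still unspent; negative means the SLO is breached.

    Assumes slo_target < 1.0.
    """
    total = succeeded + failed
    if total == 0:
        return 1.0  # nothing ran, so nothing was spent
    allowed_failures = total * (1.0 - slo_target)
    return 1.0 - failed / allowed_failures

# e.g. 9,960 successful jobs and 40 failures against a 99.5% SLO leave 20%
# of the budget: 40 of the 50 allowed failures are already spent.
print(f"{error_budget_remaining(9_960, 40):.0%} of the error budget remains")
```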
Observability and performance
Implement end-to-end monitoring across compute, networking, storage, and accelerators.
Build dashboards, alerting, and anomaly detection that catch issues early—before they derail long runs (see the sketch after this list).
Tune performance and cost: GPU utilization, scheduling efficiency, I/O bottlenecks, and network hotspots.
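One simple form the anomaly detection above can take: a rolling z-score over GPU utilization samples that flags a sudden stall during a long run. The window size and thresholds here are hypothetical starting points, not tuned values.

```python
# A minimal rolling z-score anomaly detector for GPU utilization.
# Window size and thresholds are hypothetical starting points.
from collections import deque
from statistics import mean, stdev

class UtilizationMonitor:
    def __init__(self, window: int = 60, z_threshold: float = 3.0):
        self.samples: deque[float] = deque(maxlen=window)  # rolling baseline
        self.z_threshold = z_threshold

    def observe(self, gpu_util_pct: float) -> bool:
        """Record a sample; return True if it deviates sharply from recent history."""
        anomalous = False
        if len(self.samples) >= 10:  # wait for enough history to be meaningful
            mu, sigma = mean(self.samples), stdev(self.samples)
            deviation = abs(gpu_util_pct - mu)
            # z-score test; the absolute floor avoids alerting on tiny jitter
            # when the baseline has near-zero variance
            anomalous = deviation > self.z_threshold * sigma and deviation > 1.0
        self.samples.append(gpu_util_pct)
        return anomalous

monitor = UtilizationMonitor()
for sample in [92.0] * 30 + [11.0]:  # a healthy run, then a sudden stall
    if monitor.observe(sample):
        print(f"alert: GPU utilization {sample}% deviates from the recent baseline")
```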
Hardware + systems integration (where applicable)
Partner with vendors and internal stakeholders on firmware/driver alignment and node health (see the sketch below).
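A sketch of what a node-health probe can look like, built on nvidia-smi's CSV query output. The 85C threshold is a hypothetical example; real limits depend on the GPU model and vendor guidance.

```python
# A minimal node-health probe using nvidia-smi's CSV query output.
# The temperature threshold is a hypothetical example value.
import subprocess

def check_gpu_health(max_temp_c: float = 85.0) -> list[str]:
    """Return human-readable problems found on this node's GPUs."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,temperature.gpu,utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    problems = []
    for line in out.strip().splitlines():
        index, temp, _util, _mem_used = (field.strip() for field in line.split(","))
        if float(temp) > max_temp_c:
            problems.append(f"GPU {index}: {temp}C exceeds the {max_temp_c}C limit")
    return problems

for problem in check_gpu_health():
    print(problem)
```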
Developer experience for AI teams
Provide paved paths for training: reproducible environments, job templates, secure secrets, artifact storage, and dataset access patterns (see the template sketch after this list).
Collaborate closely with ML researchers/engineers to understand workload needs and remove infrastructure bottlenecks.
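To make "paved paths" concrete, here is a minimal sketch of a validated job template: researchers fill in a few fields and the platform supplies vetted defaults and guardrails. All field names, limits, and defaults are hypothetical illustrations.

```python
# A minimal "paved path" job template with platform-enforced guardrails.
# All field names, limits, and defaults are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TrainingJobTemplate:
    name: str
    image: str
    gpus: int = 8
    dataset_uri: str = "s3://datasets/default"   # hypothetical object-store path
    env: dict[str, str] = field(default_factory=dict)

    def __post_init__(self) -> None:
        # Guardrails enforced by the platform, not left to each team
        if not 1 <= self.gpus <= 64:
            raise ValueError("gpus must be between 1 and 64 on this platform")
        if not self.dataset_uri.startswith("s3://"):
            raise ValueError("datasets must come from approved object storage")

    def to_manifest(self) -> dict:
        """Render the job spec; secrets are injected by reference, never inlined."""
        return {
            "name": self.name,
            "image": self.image,
            "resources": {"nvidia.com/gpu": self.gpus},
            "envFrom": [{"secretRef": {"name": f"{self.name}-secrets"}}],
            "env": self.env,
            "dataset": self.dataset_uri,
        }

job = TrainingJobTemplate("ablation-42", "registry.example.com/trainer:stable",
                          gpus=16, dataset_uri="s3://datasets/corpus-v3")
print(job.to_manifest())
```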
Qualifications (Required)
5+ years building/operating production infrastructure as an SRE, infrastructure engineer, or systems engineer.
Strong Kubernetes experience (cluster operations, upgrades, networking, storage, and troubleshooting).
Proficiency in at least one programming/scripting language (Python, Go, etc.) for automation and tooling.
Experience with Infrastructure-as-Code (Terraform preferred) and CI/CD for infra or platform components.
Solid Linux/Unix fundamentals (performance, debugging, kernel/userland tooling).
Strong operational mindset: you care about reliability, safe change management, and measurable outcomes.