MisuJob - AI Job Search Platform

Machine Learning Systems & Infrastructure Engineer

SpAItial

London, United Kingdom · Permanent

Posted: May 6, 2026


Quick Summary

We're looking for a Machine Learning Systems & Infrastructure Engineer to build and operate the systems behind generative AI models for physical-world simulation. The ideal candidate is an innovative problem-solver with strong technical skills in machine learning infrastructure, data pipelines, and distributed systems, and a passion for pushing the boundaries of AI. This role involves working with complex systems, collaborating with interdisciplinary teams, and contributing to the development of new AI models and technologies.

Job Description

SpAItial is pioneering the next generation of World Models, pushing the boundaries of generative AI, computer vision, and simulation. We are moving beyond 2D pixels to build models that natively understand the physics and geometry of our world. Our mission is to redefine how industries, from robotics and AR/VR to gaming and cinema, generate and interact with physically grounded 3D environments.

We’re looking for bold, innovative individuals driven by a passion for tackling hard problems in generative 3D AI. You should thrive in an environment where creativity meets technical challenge, take pride in craft, and collaborate closely with a small team building frontier systems.

We are seeking a Machine Learning Systems & Infrastructure Engineer to build and own the systems that turn raw real-world data into trained world models and reliable production endpoints. You will design, implement, and operate scalable training stacks, data ingestion pipelines, experiment orchestration, and model serving for large diffusion-based generative models. The role is hands-on and code-heavy — you will work inside the same monorepo as the research team, mostly in Python, and should be as comfortable refactoring a trainer class or a dataset loader as you are writing Terraform.
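To give a flavour of the dataset-loader code this role touches, here is a minimal, hypothetical sketch of a resumable loader over sharded JSONL files. The class name and record layout are illustrative, not SpAItial's actual code; a production loader would add prefetching, decoding, and distributed sharding.

```python
import json
from pathlib import Path

class JsonlShardLoader:
    """Iterate records from sharded JSONL files in a stable order.

    Each yielded item carries its (shard, index) position, which is the
    kind of bookkeeping a trainer needs to resume mid-epoch after a
    preemption. A minimal sketch only; names are hypothetical.
    """

    def __init__(self, shard_dir):
        # Sorted so iteration order is deterministic across runs.
        self.shards = sorted(Path(shard_dir).glob("*.jsonl"))

    def __iter__(self):
        for shard_idx, shard in enumerate(self.shards):
            with shard.open() as f:
                for line_idx, line in enumerate(f):
                    yield {"shard": shard_idx,
                           "index": line_idx,
                           "record": json.loads(line)}
```

In practice a loader like this would sit behind a framework's data API rather than be iterated directly, but the resumability bookkeeping is the same idea.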

Responsibilities

• Own and evolve the ML systems that enable training, evaluation, and serving of large foundation models — trainer, dataset loaders, checkpointing, and experiment orchestration code.

• Distributed training enablement: Improve high-throughput training stacks (e.g., PyTorch DDP/FSDP, NCCL) for performance, stability, and reproducibility, including preemption-safe and sharded checkpointing.

• Data systems and pipelines: Build end-to-end Python pipelines that turn third-party capture sources into clean, versioned training datasets — including scraping (e.g., Playwright) and preprocessing — and optimize the underlying storage at petabyte scale (object storage, fuse mounts, caching layers, shared filesystems, and relational / analytical / embedded metadata stores).

• ML workflow orchestration and serving: Operate the systems researchers use to launch experiments, data jobs, and production endpoints — workflow engines (e.g., Kubeflow Pipelines, Airflow), GPU schedulers (e.g., Volcano, Slurm), experiment trackers (e.g., MLflow, Weights & Biases), and managed-inference platforms (e.g., Modal, Triton) — and maintain a launcher SDK for one-command runs.

• Containerization and packaging: Ship workloads with Docker and Kubernetes; maintain IaC (Terraform) for the surfaces you own and CI/CD pipelines, including self-hosted GPU runners.

• Observability and reliability: Build monitoring, logging, and alerting for job performance, data-pipeline health, and cost (e.g., Prometheus/Grafana, OpenTelemetry); define SLOs and incident response for the systems you own.

• Security and access: Manage secrets, IAM, and network boundaries (e.g., Tailscale, cloud VPC) for the systems you own.

• Collaboration: Partner with ML researchers, engineers, and the platform team to unblock training and data work and improve developer experience.
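The "preemption-safe checkpointing" mentioned above hinges on never leaving a half-written checkpoint on disk. A minimal sketch of the core trick, using an atomic rename, is shown below; real checkpoints would use a framework serializer and sharded formats rather than JSON, and the function name is hypothetical.

```python
import json
import os
import tempfile

def save_checkpoint_atomic(state, path):
    """Write `state` so a preemption mid-write never corrupts `path`.

    Writes to a temp file in the same directory, fsyncs it, then
    atomically renames over the target: readers always observe either
    the old checkpoint or the new one, never a partial file.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before rename
        os.replace(tmp, path)     # atomic on POSIX filesystems
    except BaseException:
        os.unlink(tmp)            # clean up the partial temp file
        raise
```

The same write-then-rename pattern applies whether the payload is a JSON blob or a multi-gigabyte tensor shard; for sharded checkpoints each rank applies it to its own shard.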

Key Qualifications

• 3+ years writing production-quality Python in a large, multi-author codebase, with strong SWE fundamentals (ML systems experience strongly preferred).

• Hands-on with modern ML training stacks (PyTorch; DDP/FSDP or comparable), with first-hand experience debugging distributed jobs across many GPUs and nodes.

• Have shipped non-trivial end-to-end data pipelines at scale — ingestion, transformation, validation, versioning, and republishing — ideally including real-world sources with rate limits, auth, or undocumented APIs.

• Hands-on GPU compute and performance debugging (CUDA/NCCL, GPU utilization, networking bottlenecks, profiling).

• Working knowledge of cloud environments (AWS, GCP, or Azure), including object storage, IAM, and cost awareness.

• Proficient with containers (Docker, Kubernetes) and comfortable reading and writing IaC (Terraform) for the surfaces you ship.

• Strong working knowledge of how to store and query large datasets at scale: SQL fundamentals; relational (e.g., Postgres), analytical (e.g., BigQuery, Snowflake), and embedded (e.g., SQLite) stores; and object storage with caching layers. Familiarity with ML workflow orchestration and experiment tracking (e.g., Kubeflow Pipelines, MLflow).

• Experience with monitoring and observability tooling (e.g., Prometheus/Grafana, OpenTelemetry) and CI/CD for infra and ML workflows (e.g., GitHub Actions).
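The dataset-versioning side of the qualifications above can be illustrated with a tiny embedded metadata store. This is a hypothetical sketch using SQLite (one of the embedded stores named in the listing); table and function names are invented for illustration, and a real system would track lineage, schemas, and storage URIs as well.

```python
import sqlite3

def init_metadata_db(path=":memory:"):
    """Create a minimal dataset-version registry in SQLite."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS dataset_versions (
        name        TEXT    NOT NULL,
        version     INTEGER NOT NULL,
        num_records INTEGER NOT NULL,
        PRIMARY KEY (name, version))""")
    return conn

def register_version(conn, name, num_records):
    """Record a new immutable version of `name`, returning its number."""
    row = conn.execute(
        "SELECT COALESCE(MAX(version), 0) FROM dataset_versions "
        "WHERE name = ?", (name,)).fetchone()
    version = row[0] + 1
    conn.execute("INSERT INTO dataset_versions VALUES (?, ?, ?)",
                 (name, version, num_records))
    conn.commit()
    return version
```

Treating each republished dataset as a new monotonically numbered, immutable version is what lets a training run pin exactly the data it saw, which is the reproducibility concern the responsibilities above describe.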

At SpAItial, we are committed to creating a diverse and inclusive workplace. We welcome applications from people of all backgrounds, experiences, and perspectives. We are an equal opportunity employer and ensure all candidates are treated fairly throughout the recruitment process.

Why Apply Through MisuJob?

AI-Powered Job Matching: MisuJob uses advanced artificial intelligence to analyze your skills, experience, and career goals. Our matching algorithm compares your profile against thousands of job requirements to find positions where you have the highest chance of success. This saves you hours of manual job searching and ensures you only see relevant opportunities.

One-Click Applications: Once you create your profile, applying to jobs is effortless. Your resume and cover letter are automatically tailored to highlight the most relevant experience for each position. You can apply to multiple jobs in minutes, not hours.

Career Intelligence: Beyond job matching, MisuJob provides valuable career insights. See how your skills compare to market demands, identify skill gaps to address, and understand salary benchmarks for your experience level. Make data-driven decisions about your career path.

Frequently Asked Questions

How do I apply for this position?

Click the "Register to Apply" button above to create a free MisuJob account. Once registered, you can apply with one click and track your application status in your dashboard.

Is MisuJob free for job seekers?

Yes, MisuJob is completely free for job seekers. Create your profile, get matched with jobs, and apply without any cost. We help you find your dream job without any hidden fees.

How does AI matching work?

Our AI analyzes your resume, skills, and experience to understand your professional profile. It then compares this against job requirements using natural language processing to calculate a match percentage. Higher matches mean better fit for the role.

Can I apply to jobs in other countries?

Absolutely. MisuJob features jobs from companies worldwide, including remote positions. Filter by location or look for remote opportunities to find jobs that match your preferences.

Ready to Apply?

Join thousands of job seekers using MisuJob's AI to find and apply to their dream jobs automatically.

Register to Apply