LLM Pre-training & Distributed Engineer (AI Infrastructure)
Hyphenconnect
Posted: April 24, 2026
Quick Summary
Orchestrate distributed training runs across 1,000+ GPUs with PyTorch, DeepSpeed, or Megatron-LM, and optimize networking and memory management to keep month-long runs efficient and free of out-of-memory failures.
Job Description
We are seeking a highly skilled LLM Pre-training & Distributed Systems Engineer to orchestrate large-scale machine learning training runs and optimize distributed infrastructure. The ideal candidate has a deep understanding of GPU clusters and extensive systems engineering experience to keep training processes efficient and reliable.
Responsibilities:
• Orchestrate distributed training runs across 1,000+ GPUs using PyTorch, DeepSpeed, or Megatron-LM.
• Optimize networking (InfiniBand/RDMA) and memory management to prevent out-of-memory errors (see the activation-recomputation sketch after this list).
• Automate checkpointing and failure recovery during month-long training runs (see the save/resume sketch after this list).
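
For a concrete sense of the memory work: one standard lever against out-of-memory failures is activation recomputation. The sketch below is illustrative only, using plain PyTorch (torch.utils.checkpoint.checkpoint_sequential) on a hypothetical 24-block stack; frameworks such as DeepSpeed and Megatron-LM expose similar mechanisms at the framework level.

    import torch
    from torch import nn
    from torch.utils.checkpoint import checkpoint_sequential

    # Hypothetical stand-in for a transformer stack: 24 identical blocks.
    blocks = nn.Sequential(
        *[nn.Sequential(nn.Linear(4096, 4096), nn.GELU()) for _ in range(24)]
    )
    x = torch.randn(8, 4096, requires_grad=True)

    # Store activations only at 4 segment boundaries and recompute the
    # rest during backward: extra FLOPs in exchange for much less memory.
    y = checkpoint_sequential(blocks, 4, x, use_reentrant=False)
    y.sum().backward()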
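
And on the checkpointing responsibility, a minimal sketch of crash-safe save/resume logic in plain PyTorch. Everything here (the CKPT_DIR path, the single-file full-state format, rank-0 writes) is an illustrative assumption; at 1,000+ GPU scale you would typically shard checkpoints via DeepSpeed or torch.distributed.checkpoint instead.

    import os
    import torch
    import torch.distributed as dist

    CKPT_DIR = "/checkpoints/run-001"  # hypothetical path

    def save_checkpoint(step, model, optimizer):
        # Assumes dist.init_process_group() has already run.
        if dist.get_rank() == 0:
            os.makedirs(CKPT_DIR, exist_ok=True)
            tmp = os.path.join(CKPT_DIR, f"step-{step:09d}.pt.tmp")
            torch.save(
                {"step": step,
                 "model": model.state_dict(),
                 "optimizer": optimizer.state_dict()},
                tmp,
            )
            # Atomic rename: a crash mid-write can never leave a truncated
            # file that the resume path mistakes for a valid checkpoint.
            os.replace(tmp, tmp[:-4])
        dist.barrier()  # no rank proceeds until the checkpoint is durable

    def load_latest(model, optimizer):
        # Zero-padded step numbers make lexicographic sort == numeric sort.
        ckpts = sorted(
            f for f in os.listdir(CKPT_DIR) if f.endswith(".pt")
        ) if os.path.isdir(CKPT_DIR) else []
        if not ckpts:
            return 0  # fresh run
        state = torch.load(os.path.join(CKPT_DIR, ckpts[-1]),
                           map_location="cpu")
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        return state["step"] + 1  # first step to execute after resume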
Required Skills:
• Deep expertise in 3D parallelism (data, tensor, and pipeline), as sketched after this list.
• Experience managing SLURM or Kubernetes-based GPU clusters.
• Strong systems engineering background (C++, CUDA, Python).
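
On the 3D-parallelism point: the heart of it is how a global rank maps onto (data, pipeline, tensor) coordinates. Below is a toy sketch with assumed degrees (TP=8, PP=4, DP=32, i.e. 1,024 GPUs); Megatron-LM and DeepSpeed build the actual NCCL process groups from this kind of layout.

    # Illustrative sizes only: WORLD_SIZE must equal TP * PP * DP.
    TP, PP, DP = 8, 4, 32        # tensor, pipeline, data parallel degrees
    WORLD_SIZE = TP * PP * DP    # 1,024 GPUs

    def coords(rank):
        """Map a global rank to (dp, pp, tp) coordinates.

        Tensor parallel varies fastest so TP peers land on the same node,
        where the high-bandwidth links (NVLink) are, with pipeline next
        and data parallel slowest, mirroring the common Megatron-style
        rank ordering.
        """
        tp = rank % TP
        pp = (rank // TP) % PP
        dp = rank // (TP * PP)
        return dp, pp, tp

    if __name__ == "__main__":
        for rank in (0, 1, 7, 8, 32, 1023):
            dp, pp, tp = coords(rank)
            print(f"rank {rank:4d} -> dp={dp}, pp={pp}, tp={tp}")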