Research Engineer - AI Performance & Kernel Optimization
Zyphra
Posted: March 16, 2026
Quick Summary
Design and implement highly optimized kernels for AI performance on various accelerator platforms, working closely with the pretraining and inference teams to improve throughput, latency, and hardware utilization.
Job Description
Zyphra is an artificial intelligence company based in San Francisco, California.
The Role:
As a Research Engineer - AI Performance & Kernel Optimization, you will improve and optimize the performance of our large-scale language model training and inference stacks. You will work closely with our pretraining and inference teams to identify bottlenecks, design and implement highly optimized kernels, and push the limits of throughput, latency, and hardware utilization across a range of accelerator platforms. This role is suited for someone who enjoys deep systems work, cares about performance at every level of the stack, and is excited to translate low-level optimizations into meaningful gains for frontier-scale AI systems.
You’ll Work Across:
• Kernel development and optimization for large-scale ML workloads, using any level of the stack from PTX/assembly to CUDA, HIP, Triton, or other GPU DSLs
• Performance tuning for training and inference stacks across GPUs and other accelerators
• Profiling and eliminating bottlenecks in memory movement, communication, scheduling, and compute utilization
• Optimizing distributed training and inference systems for large MoE models, including large-scale model parallelism
• Portability and optimization across non-NVIDIA hardware, with special interest in AMD hardware such as the MI300X and MI355X
• Collaboration with research and infrastructure teams to turn systems improvements into real-world model training and inference gains
What We're Looking For / Requirements:
• Strong engineering aptitude for building reliable, high-performance systems
• Excellent low-level performance intuition and the ability to reason about hardware-software interactions
• Excitement about rapidly learning new systems, tools, and hardware environments
• Excellent communication and collaboration skills, with the ability to work effectively across research and engineering teams
• A drive to dig deep into the weeds and hunt down the last 10–20% of performance
Qualifications / Additional Skills:
• Experience writing highly performant GPU kernels at any level of abstraction: PTX, CUDA, HIP, Triton, or other kernel DSLs
• Experience optimizing ML workloads for large-scale training, ideally in language model pretraining or inference environments
• Experience with non-NVIDIA accelerator hardware, such as AMD, AWS Trainium, Google TPU, Qualcomm, ARM, Intel, and custom ASICs
• Strong understanding of distributed training systems and parallelism schemes, including data parallelism, tensor/model parallelism, pipeline parallelism, sharding, and communication/computation overlap
• Experience with performance engineering in other demanding parallel computing environments such as HPC, quantitative finance, scientific computing, graphics, compilers, or numerical simulation
• Strong systems intuition around memory hierarchy, bandwidth constraints, kernel fusion, launch overhead, communication overhead, and hardware utilization
• Experience using profiling and debugging tools to drive performance improvements
• Familiarity with infrastructure underlying large-scale training and inference, including collective communication libraries and runtime performance analysis
• Background in a highly technical field such as physics, mathematics, theoretical computer science, computer science, or electrical engineering
• Any HPC experience is a strong plus
Why Work at Zyphra:
• Our research methodology is grounded in methodical, step-by-step approaches to ambitious goals; we value deep research and engineering excellence equally
• We strongly value bold, unconventional ideas and are willing to bet big on them
• We move as quickly as we can and aim to keep the bar to impact as low as possible
• We all enjoy what we do and love discussing AI
Benefits and Perks:
• Comprehensive medical, dental, vision, and FSA plans
• Competitive compensation and 401(k) plan
• Relocation and immigration support on a case-by-case basis
• In-office snacks and meals provided
• Unlimited PTO and company holidays
• In-person team in San Francisco with a collaborative, high-energy environment