Senior Performance Engineer - LLM Inference Frameworks
NVIDIA
Posted: April 20, 2026
Quick Summary
We are looking for a Senior Performance Engineer to build and optimize the core inference infrastructure for large language models. The ideal candidate has expertise in optimizing model runtime performance, memory efficiency, and scalability, and will join a high-performing team that shapes the frameworks for state-of-the-art LLM inference.
Job Description
NVIDIA is hiring exceptional software engineers to build and optimize the core inference infrastructure for large language models. Join the TensorRT‑LLM team - the group defining how generative AI performs at global scale on NVIDIA GPUs. We’re looking for engineers who love squeezing every drop of throughput, memory efficiency, and scalability out of modern model runtimes. Your work will directly shape the frameworks behind state‑of‑the‑art LLM inference used across NVIDIA and the AI community, redefining what “fast” means for the next generation of generative AI at scale.
What you'll be doing:
• Design, implement, and optimize high‑performance inference pipelines for large language models running on GPUs
• Profile and tune model execution across the stack - from scheduler design to kernel fusion and everything in between
• Design and experiment with memory management strategies that improve memory bandwidth utilization and cache efficiency
• Innovate and implement cutting-edge techniques such as speculative decoding, context caching, and FP8/INT4 quantization to push the boundaries of tokens-per-second-per-watt
• Develop and maintain benchmarking and testing systems that quantify latency, utilization, and efficiency
What we need to see:
• Bachelor's, Master's, or higher degree in Computer Engineering, Computer Science, Applied Mathematics, or related computing-focused degree (or equivalent experience)
• 5+ years of relevant software development experience
• Excellent Python programming, software design, and software engineering skills
• Experience working with deep learning frameworks and libraries such as PyTorch and Hugging Face
• Experience profiling and debugging performance at all levels - Python runtime, PyTorch internals, and GPU utilization metrics
• Awareness of the latest developments in LLM architectures and LLM inference techniques
• Proactive and able to work without supervision
• Excellent written and oral communication skills in English
Ways to stand out from the crowd:
• Contributions to inference frameworks such as TensorRT‑LLM, vLLM, SGLang, or similar systems
• Demonstrated expertise in performance modeling, memory optimization, distributed model execution or GPU execution workflows
• Hands‑on experience with profiling tools such as NVIDIA Nsight Systems, PyTorch Profiler, or custom benchmarking harnesses
• Strong grasp of the trade‑offs shaping inference efficiency: compute vs. memory, scheduling vs. batching, latency vs. throughput
Widely considered to be one of the technology world’s most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. As you plan your future, see what we can offer you and your family at www.nvidiabenefits.com.
#LI-Hybrid