Solutions Architect - CPU and LPU
NVIDIA
Posted: April 3, 2026
Quick Summary
The Solutions Architect will drive adoption of next-generation AI infrastructure across NVIDIA CPU platforms and LPU-based inference systems. The role focuses on NVIDIA CPUs, including Grace, Vera, and future CPU generations, and on LPU platforms and LPX-class systems used to accelerate large language model inference and other latency-sensitive generative AI workloads.
Job Description
NVIDIA’s Solutions Architect team is looking for a software-focused Solutions Architect to drive adoption of next-generation AI infrastructure across NVIDIA CPU platforms and LPU-based inference systems. This role will focus on NVIDIA CPUs, including Grace, Vera, and future CPU generations, and on LPU platforms and LPX-class systems used to accelerate large language model inference and other latency-sensitive generative AI workloads. We are looking for someone who understands that AI efficiency is a full-stack challenge spanning model architecture, runtime, compiler, serving framework, host software, memory movement, and workload partitioning across CPU, GPU, and LPU.
As a Solutions Architect, you will be the first line of technical expertise between NVIDIA and our customers for CPU- and LPU-centric AI system design. You will help customers understand how NVIDIA CPUs and LPU-based systems can improve the efficiency, latency, throughput, and total cost of their AI workloads, especially when deployed alongside NVIDIA GPUs in heterogeneous production environments. Your work will range from proof-of-concept development and software stack optimization to technical leadership with customer architects, engineering teams, and senior decision makers. You will engage directly with developers, ML engineers, researchers, platform architects, and IT leaders to identify bottlenecks, design optimization strategies, and build deployable reference architectures. You will also work closely with NVIDIA engineering, product, and field teams to translate customer needs into platform feedback, solution patterns, and roadmap inputs.
What you’ll be doing:
• Evangelize NVIDIA CPU platforms, including Grace, Vera, and future generations, as well as LPU-based systems and LPX-class platforms, with a strong focus on AI software stacks and workload efficiency.
• Help customers design and optimize AI workloads across CPU, GPU, and LPU, improving latency, throughput, utilization, and overall cost efficiency.
• Analyze and tune LLM and generative AI pipelines across serving, runtime, memory, I/O, batching, scheduling, and orchestration layers.
• Build proof-of-concepts, reference architectures, and technical guidance in partnership with Engineering, Product, and Sales teams.
• Establish trusted technical relationships with customer architects, infrastructure teams, and senior leaders, becoming a strategic advisor for heterogeneous AI system design.
What we need to see:
• MS or PhD in Computer Science, Engineering, Mathematics, Physics, or a related field, or equivalent experience, plus 5+ years in AI systems, infrastructure, performance engineering, or solution architecture.
• Strong understanding of modern CPU architecture, Linux systems, and software performance tuning, along with hands-on experience in AI inference for LLM, generative AI, or agentic AI workloads.
• Experience optimizing heterogeneous systems involving CPU and accelerators, with familiarity in frameworks such as PyTorch, Triton, TensorRT-LLM, vLLM, or ONNX Runtime.
• Strong programming, problem-solving, and communication skills, with the ability to work effectively with both technical teams and senior customer stakeholders.
Ways to stand out from the crowd:
• Experience with NVIDIA CPU platforms such as Grace, Grace Hopper, or Arm64 server environments, and familiarity with LPU-based systems or other low-latency inference accelerators.
• Deep expertise in LLM inference optimization, serving architecture, and workload placement across CPU, GPU, and LPU.
• Experience building customer-facing proof-of-concepts and measuring AI efficiency through latency, throughput, cost per token, power, or utilization.
• Familiarity with NVIDIA AI software and platform technologies.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-looking and talented people in the world working with us. If you are creative, autonomous, and excited about helping customers build highly efficient AI platforms across CPU, GPU, and LPU technologies, we want to hear from you.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. We highly value diversity in our current and future employees and do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.