Senior HPC and AI Network Software Architect
NVIDIA
Posted: April 6, 2026
Quick Summary
The Senior HPC and AI Network Software Architect will design and develop complex software solutions built on NVIDIA GPUs, leveraging the company's expertise in AI and HPC to drive innovation.
Job Description
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.
We are looking for a Senior HPC and AI Network Software Architect to help build the next generation of scalable AI infrastructure. The role emphasizes distributed training, real-time inference, and communication efficiency across large systems. You will develop new software and hardware approaches, shape platform evolution through hands-on innovation, and contribute to designing systems powering the fastest AI workloads globally. Collaborate with our distinguished team of researchers and engineers building software and hardware for AI at an outstanding scale.
What you will be doing:
• Build and evolve the architecture of scalable software systems for distributed AI training and inference, focusing on throughput, latency, resiliency, and memory efficiency across cluster-scale deployments.
• Develop and evaluate next-generation communication and runtime capabilities in libraries such as NCCL, UCX, and UCC, tailored to the evolving demands of frontier AI workloads.
• Partner with AI framework teams (e.g., TensorFlow, PyTorch, JAX) and internal platform teams to build integrations, explore new approaches, and improve end-to-end performance and reliability.
• Collaborate on hardware and system-level features across GPUs, DPUs, and interconnects to speed up data movement and enable new capabilities for training, inference, and model serving at scale.
• Drive innovation across runtime systems, communication libraries, and AI-specific protocol layers, helping turn new ideas into practical capabilities and robust implementations.
What we need to see:
• Ph.D., or equivalent industry experience, in computer science, computer engineering, or a closely related field.
• 5+ years of experience in systems programming, parallel or distributed computing, high-performance networking, or large-scale data movement, including experience designing and building complex systems.
• Strong programming background in C++, Python, and ideally CUDA or other GPU programming models, with a track record of building production-quality performance-critical software.
• Extensive hands-on experience with AI frameworks (e.g., PyTorch, TensorFlow, JAX) and a solid grasp of how communication libraries and runtime systems facilitate large-scale training and inference.
• Demonstrated success in developing and refining high-throughput, low-latency systems, including the ability to reason across software stacks, hardware capabilities, and system bottlenecks.
• Strong collaboration skills in a multi-national, interdisciplinary setting, with the ability to contribute ideas, build momentum, and work effectively with senior engineers, researchers, and partner teams.
Ways to stand out from the crowd:
• Deep expertise with NCCL, UCX, UCC, or similar communication libraries used in large-scale AI and HPC workloads.
• Strong background in networking and communication protocols, including RDMA, collective communications, congestion-aware transport, or accelerator-aware networking.
• Comprehensive knowledge of large model training and inference serving at scale, including communication bottlenecks, scheduling challenges, and system-level tradeoffs across compute, memory, and fabric.
• Experience crafting hardware-software co-design for distributed AI systems, including contributions that advanced GPU, DPU, interconnect, or runtime capabilities.
• Familiarity with infrastructure for deployment of LLMs or transformer-based models, including sharding, pipelining, expert parallelism, or hybrid parallelism.
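For candidates less familiar with the collective-communication patterns mentioned above, the following is a toy, single-process Python sketch of a ring all-reduce, the classic bandwidth-optimal collective that libraries like NCCL implement over GPUs and high-speed fabrics. It simulates message passing between ranks in plain lists; real implementations overlap these transfers with compute and run them over NVLink, InfiniBand, or Ethernet.

```python
def ring_allreduce(buffers):
    """Toy ring all-reduce over in-process 'ranks'.

    buffers: list of equal-length lists, one per rank (length divisible
    by the rank count). Modified in place; afterwards every rank holds
    the elementwise sum across all ranks.
    """
    n = len(buffers)                 # number of ranks in the ring
    assert len(buffers[0]) % n == 0
    c = len(buffers[0]) // n         # chunk size

    def chunk(i):                    # slice bounds of chunk i
        return slice(i * c, (i + 1) * c)

    # Phase 1: reduce-scatter. After n-1 steps, rank r owns the fully
    # reduced chunk (r + 1) % n. Snapshot all sends first so the
    # "simultaneous" exchanges of one step do not interfere.
    for step in range(n - 1):
        msgs = []
        for r in range(n):
            i = (r - step) % n                      # chunk rank r forwards
            msgs.append((r, i, buffers[r][chunk(i)][:]))
        for r, i, data in msgs:
            dst = (r + 1) % n                       # ring neighbor
            s = chunk(i)
            buffers[dst][s] = [a + b for a, b in zip(buffers[dst][s], data)]

    # Phase 2: all-gather. Each rank circulates its fully reduced chunk
    # around the ring until every rank has every chunk.
    for step in range(n - 1):
        msgs = []
        for r in range(n):
            i = (r + 1 - step) % n                  # reduced chunk to forward
            msgs.append((r, i, buffers[r][chunk(i)][:]))
        for r, i, data in msgs:
            buffers[(r + 1) % n][chunk(i)] = data
    return buffers


ranks = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
ring_allreduce(ranks)
# Every rank now holds the elementwise sum [28, 32, 36, 40].
```

Each of the 2(n-1) steps moves only 1/n of the data per rank, which is why the ring variant is bandwidth-optimal for large messages; NCCL selects among this and other algorithms (e.g., tree) based on message size and topology.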
At NVIDIA, you’ll work alongside people dedicated to continuous learning and creative problem-solving, pushing the boundaries of what’s possible in AI and high-performance computing. If you're passionate about architecting distributed systems, advancing AI infrastructure, and solving problems at scale, we want to hear from you!
Widely considered to be one of the technology world’s most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. As you plan your future, see what we can offer you and your family at www.nvidiabenefits.com/
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. For Poland: The base salary range is 221,250 PLN - 383,500 PLN for Level 3, and 292,500 PLN - 507,000 PLN for Level 4.