Solutions Architect, Cloud Inference Services
NVIDIA
Posted: May 11, 2026
Quick Summary
A Solutions Architect in our team will help one or a few leading NVIDIA Cloud Partners integrate the NVIDIA AI stack and their agentic pipelines.
Job Description
NVIDIA’s Worldwide Field Operations (WWFO) team is looking for an AI-focused Solutions Architect with expertise in neural network inference and in the development and operation of agentic pipelines. The ideal candidate understands large-scale DNN inference as well as end-to-end design of agentic utilities using tools such as the NVIDIA NeMo Agent Toolkit, LangChain, LlamaIndex, Haystack, etc.
As a Solutions Architect on our team, you will have a customer-facing technical role, helping one or potentially a few leading NVIDIA Cloud Partners (NCPs) integrate the NVIDIA AI stack and other open-source GPU-accelerated stacks, and helping them develop, deploy, and support an end-to-end solution for AI services spanning training, post-training, and inference workloads.
You will participate in projects that involve technologies like LLMs, VLMs, Physical-AI, Agentic Pipelines and others.
We are looking for someone who thinks constantly about artificial intelligence, thrives in a fast-paced, rapidly developing field, and can coordinate efforts between customers, corporate marketing, industry business development, and engineering. Working across different projects and tasks, and multi-tasking efficiently while maintaining a customer-facing approach, will be critical in this capacity.
In this role, you will be the first line of technical expertise between NVIDIA and our partners and customers. Your duties include working on proof-of-concept demonstrations and leading discussions with developers, product teams, and key executives. You will encourage adoption of NVIDIA’s AI technology platform and simplify its deployment to production. Dynamically engaging with different roles within NVIDIA, the NCPs, and other partners is a significant part of the Solutions Architect role and will give you experience with a range of technologies.
What You’ll Be Doing:
• Work directly with our NCPs and their key customers to understand their technology and provide the best solutions.
• Develop and demonstrate solutions based on NVIDIA’s and open-source NLP and LLM technology and integrate them into agentic pipelines.
• Perform in-depth analysis and optimization to ensure the best performance on GPU-based systems. This includes inference optimization as well as optimization of end-to-end agentic pipelines.
• Partner with Engineering, Product, and Sales teams to develop and plan the most suitable solutions for customers. Enable development and growth of product features through customer feedback and proof-of-concept evaluations.
• Build industry expertise and become a contributor in integrating NVIDIA technology into AI Cloud solutions and Enterprise Computing architectures.
What We Need to See:
• Excellent verbal and written communication and technical presentation skills in English
• Master's or Ph.D. in Computer Science, Artificial Intelligence, or equivalent experience
• 5+ years of industry and/or academic experience in fields related to machine learning, deep learning, and/or data science, with a preference for DNN inference.
• Work experience with and knowledge of modern LLM, VLM, and diffusion architectures, with an emphasis on MoE.
• Understanding of key libraries used for DNN inference (e.g., TRT-LLM, Dynamo, Red Hat Inference Server) as well as agentic pipeline development.
• Excited to work with multiple levels and teams across organizations (Engineering, Product, Sales and Marketing teams).
• Driven with strong analytical and problem-solving skills. You are a self-starter with a drive for growth, passion for continuous learning and sharing findings across the team.
• Strong time-management and organization skills for coordinating multiple initiatives, priorities and implementations of new technology and products into very sophisticated projects.
Ways to Stand Out from The Crowd:
• Experience with inference of very large MoE architectures for NLP, CV, ASR, or other domains.
• Experience using DevOps technologies such as Docker, Kubernetes, Singularity, etc.
• Understanding of HPC systems: data center design, high-speed InfiniBand interconnects, cluster storage, and scheduling-related design and/or management experience.