MisuJob - AI Job Search Platform

Distinguished Engineer, Storage – AI Cloud

NVIDIA

Santa Clara, CA, US (Permanent)

Posted: May 14, 2026


Quick Summary

As Distinguished Engineer for Storage in AI Cloud, you will lead NVIDIA's storage strategy across the Neocloud Provider (NCP) and Cloud Service Provider (CSP) ecosystem, directing the architecture of high-performance parallel file systems, object stores, and block storage at exabyte scale for the world's largest AI workloads.

Job Description

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

AI Cloud Data Storage

The NVIDIA DGXC Storage org handles some of the fastest training and inference workloads in the world. Every GPU cycle depends on a storage platform built to keep tens of thousands of accelerators continuously busy. The org maintains exabytes of data securely and powers the largest AI workloads worldwide across cloud, neocloud, and on-prem deployments. As accelerated computing grows, storage is essential: it can make the difference between effective GPU use and wasted potential, and between launching a frontier model on time and missing the deadline by months.

We seek a Distinguished Engineer to lead NVIDIA's storage strategy for AI Cloud across the Neocloud Provider (NCP) and Cloud Service Provider (CSP) ecosystem. You will direct the architecture of high-performance parallel file systems, object stores, and block storage at exabyte scale. You will stay hands-on, collaborating with engineers, SREs, partners, and storage vendors, and you will apply NVIDIA's AI tools to increase your productivity and that of those you work with. This is a unique opportunity to establish the storage foundation of the AI era at the company that introduced accelerated computing.

What you'll be doing:

• Lead the multi-year technical plan for AI Cloud Storage expansion across NCPs — determine the reference architecture, capabilities, performance and durability SLOs, qualification methodology, and roadmap for the high-performance file, object, and block storage that each NCP must offer to qualify for NVIDIA GPU allocation.

• Serve as the chief storage architect with deep hands-on involvement. Lead key reviews of storage builds and investigate root causes of complex production problems. Develop prototype reference implementations to minimize risks in new initiatives. Make final technical decisions on NCP storage deliveries using measurable SLOs. Apply AI tools heavily to amplify your technical influence throughout the program.

• Define the standard for "production-ready" in NCP storage, including durability and availability SLOs measured in 9s. Ensure sustained efficiency per TiB, observability, blast-radius containment, and reduced operational toil. Influence GPU delivery gating by requiring AI Cloud to accept GPU capacity only after verifying storage-focused ancillary services.

• Develop and guide the architectural direction by working closely with collaborators in training, inference, and accelerated-computing product lines. Coordinate with site-reliability, operations, networking, and security colleagues. Work together with external cloud providers, neocloud operators, and storage vendors to align on a common architecture.

• Develop the open-source path forward for AI storage. Establish and guide an open-source strategy that broadens the AI storage ecosystem. Advocate for a GitHub-first, security-first stance. Engage deeply with upstream open-source communities. Formalize the APIs, SDKs, and protocols allowing partners and the industry to build, integrate, and create with NVIDIA at the AI storage level.

• Lead an engineering culture centered on AI tools. Regularly use modern AI coding and agentic tools in your daily tasks. Show what 10× engineering means at NVIDIA. Distribute patterns, prompts, and evaluation harnesses across the storage organization.

• Partner with peer Distinguished and Principal storage architects across the organization to tackle the most difficult, long-term technical challenges. Make automation the only acceptable solution for infrastructure management tasks like live software upgrades, node and drive replacements, capacity rebalancing, cross-DC data movement, and dataset lifecycle. Establish root-cause analysis and corrective action rigor on every major incident. Design the storage layer for workloads spanning the next several GPU generations, including disaggregated inference with storage-backed KV caching, large-scale write-once-read-many inference patterns, exabyte regional object stores, and cross-DC dataset versioning and copy management.

• Mentor and develop senior, principal, and distinguished engineers across the storage organization and nearby business units. Raise the technical bar broadly. Represent NVIDIA externally in standards bodies, open-source communities, customer briefings, and industry forums (FAST, SC, OCP, SNIA, Linux Storage Summit).

What we need to see:

• BS, MS, or PhD in Computer Science, Electrical Engineering, or a related field — or equivalent experience.

• 18+ years of hands-on engineering experience in storage technology. This includes deep experience with a high-performance parallel file system such as Lustre, GPFS / Spectrum Scale, WEKA, VAST, BeeGFS, DAOS, or an equivalent, operating at multi-petabyte scale, plus broad expertise in object storage (S3 / Swift-class) and block storage (NVMe-oF, NVMesh-class, iSCSI).

• A track record of crafting and managing storage platforms at exabyte scale for performance-critical workloads — AI training, HPC, video, or hyperscale data lakes — including direct responsibility for durability, availability, and performance SLOs measured in 9s.

• Demonstrated ability to set technical strategy across business units and partner organizations. You have driven multi-year storage architectures adopted by multiple teams, vendors, or customers, and you can point to measurable outcomes such as GPU utilization lift, $/PB reduction, incidents eliminated, and time-to-bring-up compressed.

• You are 100% hands-on in engineering. You write and review production code yourself. When a bug requires it, you read Lustre, NFS, kernel, NVMe-oF, or SPDK source code. You also run scale tests or recovery drills personally instead of delegating.

• Strong proficiency in at least one systems language (C, C++, Rust, or Go) and proficiency in Python; comfortable in the Linux kernel storage and networking stacks (block layer, RDMA / RoCE / InfiniBand, NVMe, page cache, VFS, multipath).

• Frequent daily use of modern AI coding and agentic tools, with concrete examples of how you accelerated building, coding, debugging, validation, and operations, and a perspective on where these tools are headed.

• Excellent written and verbal communication. You can write a one-pager that aligns a VP. You can also write a six-pager that aligns an entire org. You can explain a deep technical trade-off to an SRE, a vendor CTO, and an internal customer in the same week.

• Comfort operating in a 24/7 production environment where storage incidents directly impact GPU revenue, with a security-first approach baked into every build.

Ways to stand out from the crowd:

• Proven background in designing or managing storage solutions for AI training or inference at 10k+ GPU scale, demonstrating clear improvements in GPU utilization or reducing I/O bottlenecks.

• Open-source contributions or maintainership in Lustre, NFS, SPDK, NVMe / NVMe-oF, CSI, Ceph, MinIO, RocksDB, or related projects.

• Built or led a disaggregated-inference or Inference-Time-Compute storage architecture — KV caching to fast in-cluster or GPU-adjacent storage, WORM at scale, storage-aware scheduling, or database-integrated inference.

• Public technical contributions — patents, peer-reviewed papers (FAST, SOSP, NSDI, OSDI, ATC), keynote talks, or RFCs — that demonstrate expertise and leadership in storage for AI infrastructure.

NVIDIA led the way in accelerated computing. Today, our AI infrastructure powers global intelligence and is transforming industries worldwide. The AI Cloud Storage group forms the foundation that keeps the world's largest GPU fleet productive. Every model trained, every inference served, and every checkpoint saved passes through systems we design, build, and operate.

Widely considered to be one of the technology world's most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. As you plan your future, see what we can offer you and your family: www.nvidiabenefits.com/

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 320,000 USD - 488,750 USD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until May 17, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
