AI Validation, Workload Enabling and Tools Engineer
Intel Corporation
Posted: May 5, 2026
Quick Summary
Develop and maintain AI software solutions that leverage Intel's AI capabilities to drive innovation and growth.
Job Description:
We are seeking an AI Software Solution Engineer for a validation and workload-enabling role, working with internal engineering teams and ecosystem partners to deliver high-performance AI solutions optimized for Intel platforms. The engineer explores emerging AI trends, prototypes advanced solutions, and drives adoption of Intel's AI capabilities across cloud, edge, and client markets; enhances AI model efficiency, accuracy, and performance through a deep understanding of frameworks, algorithms, and the underlying hardware; enables AI models on Intel GPUs for accuracy and optimizes them for performance; and acts as a trusted technical leader supporting product enablement, performance tuning, validation, and benchmarking to help shape future Intel AI architectures and platforms.
Key Responsibilities:
• Collaborate with cross-functional hardware and software engineering teams to validate AI workloads on Intel architectures.
• Evaluate and debug deep-learning models, kernels, and operators to maximize performance and efficiency while maintaining accuracy.
• Conduct benchmarking, regression analysis, and algorithmic validation across a variety of use cases and frameworks.
• Develop prototype workloads, tools, and automation pipelines to accelerate performance-tuning and validation workflows.
• Conduct performance and accuracy evaluations of AI models on competitive hardware to identify and close gaps.
• Engage with customers, ISVs, and internal development groups to drive enablement, performance improvements, and ecosystem readiness.
• Translate AI workload needs into actionable architecture and product insights, and support next-generation platform bring-up, pre-silicon modeling, and product-maturity efforts.
Qualifications:
• Bachelor’s/Master’s degree in Computer Science, Electronics Engineering, Mathematics, or a related field with 8–15 years of experience.
• Strong ML/DL knowledge is a must, including LLM architectures (Transformers, attention mechanisms), low-precision data types (FP8/FP4), quantization techniques, open-source upstreaming, and inference serving.
• Experience enabling AI workloads, including accuracy debugging and performance optimization, is a must.
• Hands-on experience with ML/DL models and distributed training/inference using PyTorch, TensorFlow, vLLM/SGLang, or similar frameworks.
• Strong skills in performance debugging, numerical analysis, and regression tracking in validation environments.
• Working knowledge of Agentic AI deployment is a plus.
• Strong skills in validation-framework design and test-case development for high-level frameworks such as SGLang/vLLM.
• Proficiency in Python (NumPy, SciPy, Pandas, PyTest).
• Solid Linux development/debugging experience (git, cmake, gdb, strace, perf), and familiarity with Git/GitHub/Gerrit workflows and CI/CD automation.
• Understanding of distributed systems, HPC/GPU scaling, MPI/torchrun/Fully Sharded Data Parallel/Tensor Parallel, and high-performance networking (Ethernet/InfiniBand).
• Skilled in Docker/Kubernetes, virtualization, performance benchmarking, and automation.
• Strong analytical, problem-solving, and communication skills with ability to work across architecture, development, and validation teams.
Job Type:
Experienced Hire
Shift:
Shift 1 (India)
Primary Location:
India, Bangalore
Additional Locations:
Business group:
At the Data Center Group (DCG), we're committed to delivering exceptional products and delighting our customers. We offer both broad-market Xeon-based solutions and custom x86-based products, ensuring tailored innovation for diverse needs across general-purpose compute, web services, HPC, and AI-accelerated systems. Our charter encompasses defining business strategy and roadmaps, product management, developing ecosystems and business opportunities, delivering strong financial performance, and reinvigorating x86 leadership. Join us as we transform the data center segment through workload-driven leadership products and close collaboration with our partners.
Posting Statement:
All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
Position of Trust
N/A
Work Model for this Role
This role will be eligible for our hybrid work model, which allows employees to split their time between working on-site at their assigned Intel site and off-site.
* Job posting details (such as work model, location, or time type) are subject to change.
ADDITIONAL INFORMATION: Intel is committed to Responsible Business Alliance (RBA) compliance and ethical hiring practices. We do not charge any fees during our hiring process. Candidates should never be required to pay recruitment fees, medical examination fees, or any other charges as a condition of employment. If you are asked to pay any fees during our hiring process, please report this immediately to your recruiter.