Member of Technical Staff – Model Training
Inflection AI
Posted: November 17, 2025
Job Description
At Inflection AI, our public benefit mission is to harness the power of AI to improve human well-being and productivity.
The next era of AI will be defined by agents we trust to act on our behalf.
We’re pioneering this future with human-centered AI models that unite emotional intelligence (EQ) and raw intelligence (IQ)—transforming interactions from transactional to relational, to create enduring value for individuals and enterprises alike.
Our work comes to life in two ways today:
Pi, your personal AI, designed to be a kind and supportive companion that elevates everyday life with practical assistance and perspectives.
Platform — large-language models (LLMs) and APIs that enable builders, agents, and enterprises to bring Pi-class emotional intelligence into experiences where empathy and human understanding matter most.
We are building toward a future of AI agents that earn trust, deepen understanding, and create aligned, long-term value for all.
About the Role
As a Model Training engineer, you will design, build, and scale the post-training pipelines that turn a general LLM into a brand-fluent, production-ready assistant. Your innovations in fine-tuning and preference optimization (RLHF, DPO, GRPO, RLAIF) will directly improve reliability and alignment while reducing cost.
This is a good role for you if you:
• Have hands-on experience training and fine-tuning large transformer models on multi-GPU / multi-node clusters.
• Are fluent in PyTorch and its ecosystem tools (Torchtune, FSDP, DeepSpeed) and enjoy digging into distributed-training internals, mixed precision, and memory-efficiency tricks.
• Have shipped or published work in RLHF, DPO, GRPO, or RLAIF and understand their practical trade-offs.
• Care deeply about training tools, pipelines, and reproducibility—you automate the boring parts so you can iterate on the fun parts.
• Balance research curiosity with product pragmatism—you know when to run an ablation and when to ship.
• Communicate crisply with both technical and non-technical teammates.
• Have a bachelor’s degree, or equivalent experience, in a field related to the position.
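For candidates less familiar with the preference-optimization methods named above, a minimal sketch of the DPO objective in PyTorch gives a flavor of the day-to-day work (this is an illustrative toy, not Inflection AI's internal implementation; the function name and arguments are hypothetical, and sequence-level log-probabilities are assumed to be precomputed):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss on a batch of preference pairs.

    Each argument is a tensor of summed per-sequence log-probabilities:
    from the trainable policy and from a frozen reference model, for the
    human-preferred (chosen) and dispreferred (rejected) responses.
    """
    # Log-ratio of policy to reference for each response.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # DPO maximizes the margin between the two ratios, scaled by beta.
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits).mean()
```

When the policy already prefers the chosen response (positive margin), the loss drops below log 2; when it is indifferent, the loss sits exactly at log 2, so the gradient pushes probability mass toward the preferred completions.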
Responsibilities include:
• Contribute to end-to-end post-training workflows—dataset curation, hyper-parameter search, evaluation, and rollout—using PyTorch, Torchtune, FSDP/DeepSpeed, and our internal orchestration stack.
• Prototype and compare alignment techniques (e.g., curriculum RL, multi-objective reward modeling, tool-use fine-tuning) and push the best ideas into production.
• Automate training at scale: build robust pipeline components, tools, scripts, and dashboards so experiments are reproducible and easy to trace.
• Define the metrics that matter; run A/B tests and iterate quickly to meet aggressive quality targets.
• Collaborate with inference, safety, and product teams to land improvements in customer-facing systems.
Compensation & Benefits
• Salary Range: $175,000 – $350,000 USD per year (based on experience and location)
• Equity: Competitive stock options
• Benefits:
• Diverse medical, dental and vision options
• 401k matching program
• Unlimited paid time off
• Parental leave and flexibility for all parents and caregivers
• Support for country-specific visa needs for international employees living in the Bay Area