Applied Research Scientist, LLM Evaluation & Post-Training
Posted: February 23, 2026
Quick Summary
Applied Research Scientist role leading research on LLM evaluation and post-training, with a focus on evaluation-driven model improvement.
Job Description
Who we are:
Innodata (NASDAQ: INOD) is a leading data engineering company. With more than 2,000 customers and operations in 13 cities around the world, we are the AI technology solutions provider of choice for 4 out of 5 of the world’s biggest technology companies, as well as leading companies across financial services, insurance, technology, law, and medicine.
By combining advanced machine learning and artificial intelligence (ML/AI) technologies, a global workforce of subject matter experts, and a high-security infrastructure, we’re helping bring the promise of clean and optimized digital data to all industries. Innodata offers a powerful combination of digital data solutions and easy-to-use, high-quality platforms.
Our global workforce includes over 3,000 employees in the United States, Canada, the United Kingdom, the Philippines, India, Sri Lanka, Israel, and Germany. We’re poised for a period of explosive growth over the next few years.
Position Summary:
Innodata is expanding its GenAI research capability to advance state-of-the-art evaluation and post-training methods for LLM and multimodal systems. As an Applied Research Scientist, LLM Evaluation & Post-Training, you will lead research and experimentation on how evaluation design, measurement strategies, and feedback signals influence model improvement.
This role is ideal for a technically rigorous researcher who is deeply fluent in modern LLM evaluation and post-training, and who can turn research insight into practical methods for customer solutions and internal platform innovation. You will work across human-in-the-loop and AI-augmented workflows, partnering with Language Data Scientists and AI/ML Research Engineers to design and validate evaluation frameworks that drive measurable model gains.
The ideal candidate combines strong experimental and statistical judgment with hands-on technical ability and can engage as a peer with research and engineering stakeholders at leading AI companies.
Who We’re Looking For:
You have 5+ years of relevant experience (including graduate research) in applied ML research, research science, or advanced ML experimentation, with significant depth in LLM evaluation, benchmarking, alignment, or post-training. You have a track record of designing high-quality experiments, interpreting results rigorously, and translating findings into practical improvements.
You are comfortable working across research and product/customer contexts. You can identify important methodological questions, build a research agenda, and collaborate with engineers and data experts to execute. You understand that evaluation is not only about metrics, but about measurement validity, robustness, stress testing, and alignment to real-world usage.
You are excited by frontier challenges including long-context, cross-modal, and dynamic multi-turn evaluations, and by the opportunity to build new benchmark datasets and evaluation frameworks that become strategic assets for Innodata and its customers.
You bring an implementation-minded approach to experimentation and are comfortable collaborating closely with engineers to productionize methods and research outputs when appropriate.
Tell Me More:
As an Applied Research Scientist, LLM Evaluation & Post-Training, you will help define the next generation of evaluation-driven model improvement workflows. You will study how different evaluation approaches (human, automated, hybrid) shape model selection and post-training outcomes, and you will design experiments that produce credible, actionable conclusions.
Your work may include designing benchmark datasets, developing evaluation taxonomies and protocols, defining metrics and scoring methodologies, analyzing failure modes, and testing how changes in evaluation setup affect downstream fine-tuning results. You will also support customer engagements by bringing scientific rigor to evaluation strategy, methodology review, and technical recommendations.
This is a highly collaborative role that sits at the intersection of research, engineering, and language/data operations.
Responsibilities:
• Define and execute a research agenda focused on LLM evaluation and post-training, especially evaluation-driven model improvement
• Design rigorous experiments to study how evaluation methodologies impact fine-tuning and post-training outcomes
• Develop and validate evaluation frameworks for LLM and multimodal systems, including:
  – benchmark/task design
  – scoring methods
  – judge/model-assisted evaluation
  – human evaluation protocols
  – robustness/stress testing
• Lead research on advanced evaluation domains, including long-context, cross-modal, and dynamic multi-turn evaluations
• Study the effectiveness and limitations of existing evaluation techniques, and propose improved methodologies with clear validity and scalability tradeoffs
• Analyze model behavior and failure patterns; generate actionable recommendations for model improvement and evaluation redesign
• Collaborate with AI/ML Research Engineers to translate research methods into scalable evaluation and post-training pipelines
• Collaborate with Language Data Scientists to integrate human-in-the-loop and synthetic data/evaluation strategies into research programs
• Engage with customer technical stakeholders to understand evaluation goals, review methodologies, and provide expert recommendations
• Contribute to internal benchmark datasets, evaluation frameworks, and reusable research assets
• Produce high-quality technical documentation, internal research reports, and client-facing materials explaining methods, results, assumptions, and limitations
• Contribute to thought leadership and best practices in LLM evaluation, post-training, and GenAI quality measurement