MisuJob - AI Job Search Platform

Research Engineer – Benchmarking, Evals & Failure Analysis

Mercor

San Francisco, California, United States (Permanent)

Posted: March 17, 2026


Quick Summary

We are seeking a Research Engineer focused on benchmarking, evaluations, and failure analysis for frontier AI models. The role involves building benchmarks and evaluation systems, running systematic failure analysis, and turning the findings into training improvements, in partnership with leading AI labs and enterprises.

Job Description

About Mercor

Mercor is defining the future of work. We partner with leading AI labs and enterprises to provide the human intelligence essential to AI development.

Our vast talent network trains frontier AI models in the same way teachers teach students: by sharing knowledge, experience, and context that can't be captured in code alone. Today, more than 30,000 experts in our network collectively earn over $2 million a day.

Mercor is creating a new category of work where expertise powers AI advancement. Achieving this requires an ambitious, fast-paced and deeply committed team. You’ll work alongside researchers, operators, and AI companies at the forefront of shaping the systems that are redefining society.

Mercor is a profitable Series C company valued at $10 billion. We work in-person five days a week in our new San Francisco headquarters.

About the Role

As a Research Engineer at Mercor, you’ll work at the intersection of engineering and applied AI research. You’ll own benchmarking pipelines, evaluation systems, and failure analysis workflows that directly inform how we train and improve frontier language models.
Your work will define how we measure tool use, agentic behavior, and real-world reasoning. You’ll design and run evals, build rubrics and scorers, and turn failure analysis into actionable improvements for post-training, RLVR (reinforcement learning with verifiable rewards), and data pipelines.
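For candidates unfamiliar with this kind of work, the core loop of "design and run evals, build rubrics and scorers" can be sketched in a few lines. This is a minimal illustration, not Mercor's actual tooling; the case format, scorer, and stub model are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class EvalCase:
    """One evaluation case: a prompt and the reference answer."""
    prompt: str
    reference: str


def run_eval(
    cases: list[EvalCase],
    model: Callable[[str], str],
    scorer: Callable[[str, str], float],
) -> float:
    """Run the model over every case, score each output, return the mean score."""
    scores = [scorer(model(c.prompt), c.reference) for c in cases]
    return sum(scores) / len(scores)


def exact_match(output: str, reference: str) -> float:
    # Simplest possible scorer; real rubrics are far richer.
    return 1.0 if output.strip() == reference.strip() else 0.0


def stub_model(prompt: str) -> str:
    # Stand-in for a real model call, used only for illustration.
    return "4" if "2+2" in prompt else "unsure"


cases = [EvalCase("What is 2+2?", "4"), EvalCase("Capital of France?", "Paris")]
print(run_eval(cases, stub_model, exact_match))  # 0.5
```

In practice the scorer is where most of the design effort goes: swapping `exact_match` for a rubric-based or model-as-judge scorer changes what the benchmark actually measures without touching the harness.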

What You’ll Do

• Benchmarking: Design, implement, and maintain benchmarks and metrics for tool use, agentic behavior, and real-world reasoning; ensure benchmarks scale with training and stay aligned with product and research goals.

• Evaluation systems: Build and operate LLM evaluation systems end to end (runs, scoring, dashboards, and reporting) so researchers and applied AI teams can track model performance and compare runs at scale.

• Failure analysis: Run systematic failure analysis on model outputs (e.g., wrong tool use, reasoning errors, safety/alignment issues); categorize failure modes, quantify prevalence, and feed findings into reward design, data curation, and benchmark design.

• Rubrics and evaluators: Create and refine rubrics, automated evaluators, and scoring frameworks that drive training and evaluation decisions; balance rigor with scalability (human grading vs. model-as-judge, calibration, inter-rater agreement).

• Data quality and usability: Quantify data usability, quality, and impact on key benchmarks; use evals and failure analysis to guide data generation, augmentation, and curation.

• Cross-team collaboration: Work with AI researchers, applied AI teams, and data producers to align evals with training objectives and to prioritize benchmarks and failure analyses that matter most.

• Ownership in a fast-paced environment: Operate in a high-iteration research setting with strong ownership of benchmarks, evals, and failure-analysis workflows.
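The failure-analysis bullet above ("categorize failure modes, quantify prevalence") can be made concrete with a small sketch. The taxonomy and record fields here are hypothetical; a real taxonomy comes from reading transcripts, not from two `if` statements.

```python
from collections import Counter


def classify_failure(record: dict) -> str:
    """Assign one (hypothetical) failure mode to a single eval record."""
    if record.get("tool_called") and record["tool_called"] != record["expected_tool"]:
        return "wrong_tool"
    if not record.get("answer_correct"):
        return "reasoning_error"
    return "ok"


def failure_prevalence(records: list[dict]) -> dict[str, float]:
    """Count each failure mode and return its share of all records."""
    counts = Counter(classify_failure(r) for r in records)
    total = len(records)
    return {mode: n / total for mode, n in counts.items()}


records = [
    {"tool_called": "search", "expected_tool": "calculator", "answer_correct": False},
    {"tool_called": "calculator", "expected_tool": "calculator", "answer_correct": True},
    {"tool_called": None, "expected_tool": None, "answer_correct": False},
    {"tool_called": "calculator", "expected_tool": "calculator", "answer_correct": True},
]
print(failure_prevalence(records))
```

The output of a pass like this (e.g. "wrong tool use in 25% of cases") is exactly the kind of quantified finding that feeds back into reward design, data curation, and benchmark design.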

What We’re Looking For

• Strong applied research background, with a focus on model evaluation, benchmarking, and/or failure analysis.

• Strong coding skills and hands-on experience with ML models and evaluation code.

• Solid grasp of data structures, algorithms, and backend systems.

• Comfort with APIs, SQL/NoSQL, and cloud platforms for running and storing eval results.

• Ability to reason about model behavior, experimental results, and data quality from evals and failure analyses.

• Excitement to work in person in San Francisco five days a week in a high-intensity, high-ownership environment.
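On the "SQL for running and storing eval results" point: a minimal sketch of persisting per-case scores and comparing runs, using Python's standard-library `sqlite3`. The schema and run IDs are invented for illustration and say nothing about Mercor's actual stack.

```python
import sqlite3

# In-memory database with one table of per-case eval scores.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE eval_results (
        run_id    TEXT,
        benchmark TEXT,
        case_id   TEXT,
        score     REAL
    )
    """
)

rows = [
    ("run-001", "tool_use_v1", "case-1", 1.0),
    ("run-001", "tool_use_v1", "case-2", 0.0),
    ("run-002", "tool_use_v1", "case-1", 1.0),
]
conn.executemany("INSERT INTO eval_results VALUES (?, ?, ?, ?)", rows)

# Compare runs at a glance: mean score per run.
for run_id, mean in conn.execute(
    "SELECT run_id, AVG(score) FROM eval_results GROUP BY run_id ORDER BY run_id"
):
    print(run_id, mean)  # run-001 0.5, then run-002 1.0
```

Storing scores per case (rather than only per run) is what makes later failure analysis possible: you can join back to the individual transcripts that dragged a run's average down.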

Nice To Have

• Industry experience on a post-training or evaluation/benchmarking team (highest priority).

• Publications at top-tier venues (NeurIPS, ICML, ACL), especially in evaluation or benchmarking.

• Experience building or running LLM evaluations, benchmarks, or failure-analysis pipelines.

• Experience with synthetic data generation, rubric design, or RL-style workflows that use evals for reward shaping.

• Work samples or code (e.g., eval frameworks, benchmark suites, failure-analysis reports or tooling) that demonstrate relevant skills.

Benefits

• Generous equity grant vested over 4 years

• A $10K housing bonus (if you live within 0.5 miles of our office)

• A $1.5K monthly stipend for meals

• Free Equinox membership

• Health insurance
