Research, Evals
Exa
Posted: October 15, 2025
Quick Summary
We are looking for an ML evals engineer to join our ML organization. You will design and build Exa's eval stack, defining how we measure search quality for the foundational and embedding models we train on our $5M H200 GPU cluster and serve through our high-performance Rust-based vector database.
Job Description
Exa is building the search engine for the age of AI — from the silicon up. We run one of the most ambitious indexing operations in the world: crawling the open web at massive scale, training state-of-the-art embedding models to understand it, and powering everything through our own high-performance Rust-based vector database and a $5M H200 GPU cluster that regularly lights up tens of thousands of machines.
The ML organization sits at the heart of this mission. We train foundational models for search. Our goal is to build systems that can instantly filter the world's knowledge to exactly what you want, no matter how complex your query. Basically, put the web into an extremely powerful database.
And to do that well, we need to measure what “good search” actually means. That’s where you come in.
We're looking for an ML evals engineer to design and build our eval stack at Exa. The role involves investigating how to evaluate search engines in an LLM world and then building the most comprehensive, creative, and effective eval suite. You will be deciding the future of search through the evals we choose to optimize for.
Desired Experience
• Have hands-on ML experience training, fine-tuning, or evaluating models (bonus if related to embeddings or LLMs)
• Have strong engineering fundamentals and can build reliable systems (Python, Rust, distributed pipelines, GPU/cluster jobs, etc.)
• Enjoy diving into data: building eval sets, inspecting edge cases, and designing creative measurement strategies
Example Projects
• Write a manifesto of what perfect search means
• Design and implement evaluation frameworks that probe the limits of search
• Build scalable, reliable eval pipelines that track regressions, drift, and quality signals across billions of documents
• Create golden datasets, synthetic benchmarks, agentic tasks, and real-world test suites that reflect how developers, agents, and humans actually use Exa
• Partner closely with ML researchers, data engineers, infra engineers, and product to shape the feedback loops that improve our search models
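To give a flavor of the kind of measurement work involved, here is a minimal sketch of scoring a search system against a small golden dataset using two standard retrieval metrics, recall@k and mean reciprocal rank. The names `search_fn` and `golden_set` are hypothetical placeholders, not Exa's actual API; a production eval suite would go far beyond this.

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of the relevant documents that appear in the top-k results."""
    if not relevant:
        return 0.0
    return len(set(retrieved[:k]) & set(relevant)) / len(relevant)

def mean_reciprocal_rank(retrieved, relevant):
    """1/rank of the first relevant result, or 0.0 if none is retrieved."""
    for rank, doc_id in enumerate(retrieved, start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0

def evaluate(search_fn, golden_set, k=10):
    """Average recall@k and MRR over (query, relevant_ids) pairs.

    search_fn: callable mapping a query string to a ranked list of doc ids.
    golden_set: list of (query, list_of_relevant_doc_ids) pairs.
    """
    recalls, mrrs = [], []
    for query, relevant in golden_set:
        retrieved = search_fn(query)
        recalls.append(recall_at_k(retrieved, relevant, k))
        mrrs.append(mean_reciprocal_rank(retrieved, relevant))
    n = len(golden_set)
    return {"recall@k": sum(recalls) / n, "mrr": sum(mrrs) / n}
```

Aggregate numbers like these are only the starting point; much of the role is deciding which queries, relevance judgments, and failure modes belong in `golden_set` in the first place.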
This is an in-person opportunity in San Francisco. We're happy to sponsor international candidates (e.g., STEM OPT, OPT, H-1B, O-1, E-3).