Intelligence Systems Researcher
Maincode
Posted: February 24, 2026
Quick Summary
We are seeking a highly motivated and detail-oriented Intelligence Systems Researcher to join our team. The ideal candidate will have expertise in AI systems and machine learning, along with the ability to design and implement complex algorithms. The successful candidate will be responsible for researching and developing new AI systems from first principles, with a focus on understanding how intelligence emerges and how to design learning processes.
Job Description
About the role
Maincode builds advanced AI systems from first principles. We design architectures, run our own infrastructure, shape our own training signal, and study how learning systems behave under real constraints.
This is not a product research role. It is not about feature velocity or incremental model tuning. It is about understanding how intelligence emerges, where current systems break, and how to design learning processes that are more coherent, efficient, and grounded.
We are looking for researchers who think in systems. People who care about how models actually learn. People who are willing to stay inside a problem long enough for structure to reveal itself.
The work changes as the frontier moves. There is no stable playbook. What matters is depth of reasoning, speed of learning, and the ability to turn abstract questions into disciplined experiments.
What you would actually do
You would work directly on the mechanics of learning systems across their full lifecycle.
This includes:
• Designing and testing new model architectures and training regimes at scale across large compute clusters and extensive real and synthetic datasets
• Studying failure modes in reasoning, generalisation, and representation
• Probing how objective functions, data distributions, and optimisation dynamics shape behaviour
• Running tightly scoped experiments to isolate causal effects
• Building research tooling and experimental pipelines to support large-scale training and analysis
• Iterating quickly while maintaining intellectual discipline
You will spend substantial time inside code, experiments, logs, and model outputs. The goal is not polish. The goal is clarity.
You will collaborate closely with engineers to scale promising ideas, but your primary responsibility is to generate insight through structured experimentation.
The kind of person who does well here
Success in this role is driven more by cognitive style than by specific prior credentials.
People who thrive here:
• Obsess over mechanism rather than surface behaviour
• Care deeply about precision in thinking and language
• Are comfortable sitting with ambiguity while building a mental model
• Enjoy tracing small changes through complex systems
• Prefer depth over novelty
• Can metabolise long experimental cycles into intuition
• Derive satisfaction from understanding why something works, not just that it works
You may come from machine learning, physics, neuroscience, applied mathematics, control systems, or another quantitative field. What matters most is your ability to reason about dynamical systems and extract signal from noisy feedback.
We do not expect you to have already solved frontier AI. No one has. We are hiring for learning gradient, systems intuition, and intellectual stamina.
How you would work
You will use code as a thinking tool.
You should be comfortable:
• Writing and modifying experimental training loops
• Working in Python with frameworks like PyTorch or JAX
• Designing controlled experiments rather than running large undirected sweeps
• Inspecting model internals and outputs with care
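To give a concrete sense of what "using code as a thinking tool" means here, the sketch below shows the shape of a controlled experiment: a minimal training loop where the seed and data are held fixed and only one variable (the learning rate) changes, so any difference in outcome is attributable to that change. It is a toy illustration in plain NumPy rather than PyTorch or JAX, and all names in it are ours, not part of any Maincode codebase.

```python
import numpy as np

def run_experiment(lr, seed=0, steps=200):
    """Toy experimental training loop: fit y = 3x + 1 with gradient descent.

    Holding the seed (and therefore the data) fixed while varying `lr`
    makes this a controlled comparison rather than an undirected sweep.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(256, 1))
    y = 3.0 * x + 1.0 + 0.05 * rng.normal(size=x.shape)

    w, b = 0.0, 0.0
    losses = []
    for _ in range(steps):
        pred = w * x + b
        err = pred - y
        losses.append(float(np.mean(err ** 2)))
        # Analytic gradients of mean squared error w.r.t. w and b.
        gw = float(np.mean(2.0 * err * x))
        gb = float(np.mean(2.0 * err))
        w -= lr * gw
        b -= lr * gb
    return losses

# Same data, same seed; only the learning rate differs.
slow = run_experiment(lr=0.01)
fast = run_experiment(lr=0.3)
```

The point is not the model, which is trivial, but the discipline: one factor varies, everything else is pinned, and the loss curves become evidence about that factor rather than noise.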
This is hands-on research. You will move between theory and implementation fluidly. Abstraction is valuable, but it must eventually meet experiment.
Speed matters, but so does rigour. We care about reducing uncertainty in a disciplined way.
What this role is not
• It is not primarily about shipping user-facing features
• It is not about benchmarking for leaderboard gains in isolation
• It is not about incremental prompt engineering
We are building internal capability in understanding and shaping learning systems. The standard is not external validation. It is whether our models actually become more coherent, more sample-efficient, and more robust under pressure.
Why Maincode
We are a small team building advanced AI systems end to end. We run our own GPU clusters. We train our own models. We study how they behave under real constraints.
You will work with people who:
• Care about mechanism, not just metrics
• Treat experiments as instruments for extracting signal
• Value depth, craftsmanship, and intellectual honesty
• Are building long-term capability rather than short-term artefacts
If you want to work on intelligence as a systems problem, are motivated by precision, feedback, and sustained inquiry, and genuinely enjoy doing the hard thing when the path is unclear, this is the environment for it.