Member of Technical Staff: AI Research & Engineering
Synhawk
Posted: April 9, 2026
Quick Summary
We're looking for a Member of Technical Staff with expertise in AI research and engineering to help build omnimodal foundation models for communication integrity in the media industry.
Job Description
Member of Technical Staff: AI Research & Engineering in Media Integrity
About Synhawk
Synhawk builds omnimodal foundation models for communication integrity, aimed at infrastructure-side deployment in the telco and banking sectors. Our platform analyzes the integrity of audio and video and protects platforms against AI threats. This currently includes detecting synthetic speech and voice cloning, video and image manipulation, social engineering, and identity impersonation. We're building our models to generalize to AI threats yet to come, especially as AI agents are integrated into the workforce and society.
We're past the initial research phase. We're actively deploying with major telco customers: building and operating our own GPU clusters, working in air-gapped environments, and meeting strict production SLAs. Your work with us will have an immediate, measurable impact on systems that defend real communications infrastructure.
We're a small, highly technical, founder-led team. You'll play an integral part in shaping our research agenda while building foundation models that help real customers from day one.
The Role
You'll own and advance Synhawk's core engine, spanning foundation-model development and media integrity methods in a full research-to-production pipeline. This isn't a pure research role: the models and methods you build will feed directly into production with our telco partners. That is to say, you'll be working under real latency and reliability constraints. But no pressure.
While practical, real-world applicability grounds how we build, we also invest deeply in forward-looking research into emerging AI communication threats. In practice, we balance both: staying focused on protecting customers today while preparing for threats that don’t yet exist.
You'll have direct influence over our research agenda from day one. As we scale beyond early deployments, this role also provides opportunities to publish novel research and engage with the broader community through conferences and events.
What You'll Work On
Foundation Models
• Large-scale distributed training (>10B parameters, multi-GPU)
• Data curation: filtering, deduplication, pre/mid/post-training data mixtures
• Post-training alignment: SFT, DPO/PPO, RLHF; managing distribution shift and catastrophic forgetting
• Multimodal augmentation strategies; CUDA-level optimization where needed
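To give a flavor of the post-training work: below is a minimal, self-contained sketch of the DPO objective for a single preference pair, written with scalar sequence log-probabilities. The function name and the β default are illustrative assumptions, not Synhawk's implementation.

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair.

    logp_*  : log-prob of the chosen/rejected response under the policy
    ref_*   : log-prob of the same responses under the frozen reference model
    beta    : temperature controlling deviation from the reference
    """
    # Implicit reward margin: how much the policy prefers the chosen
    # response relative to the reference model's preference.
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    # Loss is -log sigmoid(margin); zero margin gives -log(0.5) = log 2.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference exactly, the margin is zero and the loss sits at log 2; increasing the chosen response's log-probability relative to the reference drives the loss down, which is the distribution-shift pressure that catastrophic-forgetting mitigations then have to counterbalance.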
Media Integrity & Deepfake Detection
• Detection architectures: constrained convolutions, SRM filtering, frequency-domain transformers
• Signal processing: codecs, compression artifacts, forensic fingerprints
• Adversarial robustness against PGD, FGSM, re-compression, downsampling
• Multilingual and multicultural support
• Hands-on with red teaming (open-source + proprietary deepfake generators, attack agents) integrated into the R&D cycle
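As a taste of the robustness side: a one-step FGSM perturbation against a toy logistic "detector", using the analytic input gradient of binary cross-entropy. The model, names, and ε value are illustrative assumptions for the sketch; real evaluation targets full detection architectures, not a linear probe.

```python
import numpy as np

def bce(x: np.ndarray, w: np.ndarray, b: float, y: float) -> float:
    """Binary cross-entropy of a logistic detector p = sigmoid(w.x + b)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)))

def fgsm_perturb(x: np.ndarray, w: np.ndarray, b: float, y: float,
                 eps: float = 0.1) -> np.ndarray:
    """x_adv = x + eps * sign(grad_x loss): one gradient-sign step that
    pushes the input in the direction of increasing detector loss."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w  # analytic input gradient of BCE for a linear logit
    return x + eps * np.sign(grad_x)
```

A detector that is robust to this attack should keep its loss roughly flat under such bounded perturbations; the same harness generalizes to PGD by iterating the step with projection back into the ε-ball.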
Who We're Looking For
We're looking for folks with PhD-level depth in AI research; the degree itself isn't required, but the experience, and something to show for it, is: published work, open-source contributions, or direct startup/company experience in this space (ideally a mix). You should have strong foundations across both large-scale training and media forensics. Production experience matters, too: we want someone who's taken research past the prototype stage into something people can use.
Most importantly, though, we're looking for folks who are curious and purpose-driven. If you don't meet every criterion above but are eager to become an expert in them, we want to hear from you!
What We Offer
• Ownership of core AI research at an early-stage company with real traction
• H100 GPU clusters and serious compute
• Competitive comp + equity (discussed individually)
• Small, founder-led team - minimal process, maximum trust