AI/Computer Vision Intern - Onboard Detection & Tracking
Harmattan AI
Posted: March 19, 2026
Quick Summary
We are hiring an AI/Computer Vision Intern to develop onboard detection and tracking systems at a next-generation defense prime building autonomous and scalable defense systems in Paris, France.
Job Description
About Us
Harmattan AI is a next-generation defense prime building autonomous and scalable defense systems. Following the close of a $200M Series B, valuing the company at $1.4 billion, we are expanding our teams and capabilities to deliver mission-critical systems to allied forces.
Our work is guided by clear values: building technologies with real-world impact, pursuing excellence in everything we do, setting ambitious goals, and taking on the hardest technical challenges. We operate in a demanding environment where rigor, ownership, and execution are expected.
About the Role
As an AI/Computer Vision Intern, you will join the Perception team to solve one of our most critical challenges: enabling UAVs to "see" and "follow" in real time. You will be responsible for developing a robust video-based detector (as opposed to a frame-based detector) that works in combination with an existing tracking algorithm optimized for onboard execution. Your work will bridge the gap between high-level deep learning research and efficient embedded implementation, culminating in live flight tests where your code will drive the drone’s behavior.
Responsibilities
• Detector Development: Design and train state-of-the-art object detection models (e.g., YOLO variants, lightweight Transformers) based on a sequence of frames, tailored for specific mission-critical targets.
• Visual Tracking: Integrate this model into an existing end-to-end tracking algorithm (e.g., SORT/DeepSORT, CSRT) that maintains lock under high dynamics and occlusion.
• Edge Optimization: Profile and optimize models using TensorRT or NPU-specific toolchains to achieve real-time inference on low-power onboard hardware (IMX, Jetson Nano, or similar).
• Data Pipeline: Curate, augment, and manage high-quality datasets, utilizing both real-world flight footage and synthetic data from simulation.
• System Integration: Integrate your vision pipeline into our flight stack (C++/Python) and collaborate with the GNC team to turn your detections into actionable flight commands.
• Validation: Benchmark performance using quantitative metrics and participate in field testing to validate your algorithms in diverse environmental conditions.
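To make the detector-to-tracker hand-off above concrete, here is a minimal, library-free sketch of the per-frame loop: detections come in as bounding boxes and are greedily associated to existing tracks by intersection-over-union, in the spirit of SORT but without its Kalman motion model. All names here (`GreedyIoUTracker`, `iou`) are illustrative assumptions, not part of Harmattan AI's actual flight stack.

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class GreedyIoUTracker:
    """Toy tracker: greedily match each frame's detections to existing
    tracks by IoU; unmatched detections spawn new track IDs."""

    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}   # track_id -> last known box
        self.next_id = 0

    def update(self, detections):
        """detections: list of (x1, y1, x2, y2). Returns {track_id: box}."""
        assigned = {}
        unmatched = list(detections)
        for tid, box in self.tracks.items():
            if not unmatched:
                break
            best = max(unmatched, key=lambda d: iou(box, d))
            if iou(box, best) >= self.iou_threshold:
                assigned[tid] = best      # track keeps its identity
                unmatched.remove(best)
        for det in unmatched:             # leftovers become new tracks
            assigned[self.next_id] = det
            self.next_id += 1
        self.tracks = assigned
        return assigned
```

In a real onboard pipeline the greedy matching would typically be replaced by Hungarian assignment plus a motion model, and the detector would run as an optimized (e.g., TensorRT) engine feeding this loop each frame.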
Requirements
• Education: Currently pursuing or recently completed a Master’s degree in Computer Science, Robotics, Electrical Engineering, or a related field with a focus on Computer Vision.
• Deep Learning: Strong understanding of CNN architectures, object detection frameworks, and modern loss functions, as well as the visual tracking domain and its challenges.
• Software Engineering: Proficiency in Python (PyTorch/TensorFlow) and comfortable working in C++.
• Linux/Embedded: Experience working in a Linux environment; familiarity with Git is a plus.
• Problem Solving: A rigorous approach to debugging and an "engineering first" mindset that values performance over theoretical complexity.
• Language: Fluency in English; French is a plus.
Bonus
• Experience with vision model development.
• Experience with NVIDIA Jetson platforms and hardware-accelerated inference.
• FPV pilot experience or hobbyist interest in UAVs.
• Previous experience with synthetic data generation (e.g., NVIDIA Isaac Sim, Gazebo).
We look forward to hearing how you can help shape the future of autonomous defense systems at Harmattan AI.