Perception Engineer – Computer Vision & AI/ML
Confidential
Posted: March 25, 2026
Quick Summary
Design, develop, and deploy camera-based perception systems for autonomous security applications.
Job Description
About Knightscope
Knightscope is a security technology company building the Nation’s First Autonomous Security Force. The Company combines autonomous machines, advanced software, and human expertise to help protect people, property, and critical infrastructure. Knightscope’s long-term mission is to make the United States of America the safest country in the world.
Location: Knightscope HQ, Sunnyvale, CA (This position is not remote)
Job Summary:
Knightscope is seeking a Perception Engineer with strong expertise in computer vision and AI/ML to design, develop, and deploy camera-based perception systems for autonomous security robots.
In this role, you will contribute to the development of Knightscope’s vision perception stack, enabling robots to understand and interpret complex real-world environments. Your work will directly impact the safety, reliability, and operational effectiveness of deployed robotic platforms.
You will collaborate closely with controls, simulation, and test engineering teams to deliver perception capabilities that support safe and intelligent robot behavior in real-world deployments.
About the Role
You will work on end-to-end perception system development, including algorithm design, model development, system integration, and deployment on embedded robotic platforms.
This role requires a balance of deep AI/ML expertise and strong systems understanding, including working with camera hardware, embedded compute (SoCs), and real-time constraints. You will contribute to building robust, scalable, and production-ready perception pipelines that operate reliably in diverse and dynamic environments.
Key Responsibilities
Design, develop, and optimize camera-based perception systems for real-time robotics applications
Develop and deploy visual perception and environment understanding algorithms using modern AI/ML techniques
Implement and improve capabilities for detection, recognition, tracking, and contextual understanding in real-world environments
Work with large-scale datasets to train, evaluate, and deploy machine learning models for perception tasks
Understand and account for camera hardware characteristics, sensor limitations, and ISP pipelines in algorithm design
Evaluate trade-offs between accuracy, latency, compute, and power consumption in deployed systems
Leverage fleet-scale data to evaluate perception performance, identify edge cases, and improve algorithms for robustness, accuracy, and real-world reliability
Define and maintain metrics and evaluation frameworks to continuously measure and improve perception system performance
Collaborate cross-functionally with controls, simulation, and test teams to ensure seamless integration into the autonomy stack
Support on-robot integration, debugging, and validation in real-world environments
Required Qualifications
M.S. or Ph.D. in Computer Science, Robotics, Electrical Engineering, Machine Learning, Computer Vision, or a related field; B.S. with strong industry experience may be considered
Prior internship or full-time industry experience in computer vision, AI/ML, robotics, autonomous systems, or related domains
Strong background in computer vision and deep learning for image-based perception
Experience developing visual perception systems for real-world applications
Experience with modern ML frameworks (e.g., PyTorch, TensorFlow)
Strong programming skills in Python and/or C++
Experience developing software in large, distributed development environments
Experience with version control systems (e.g., Git) and collaborative software development workflows (code reviews, branching, merging)
Understanding of camera systems, image processing, and sensor characteristics
Ability to work in a hands-on robotics development and testing environment
Preferred Qualifications
Experience with real-time perception systems and low-latency inference
Experience deploying models on embedded/edge platforms (e.g., NVIDIA, Qualcomm SoCs)
Understanding of camera SoC hardware, ISP pipelines, and system-level trade-offs
Familiarity with multi-camera systems or sensor fusion
Experience with visual-language models (VLMs) or multimodal AI systems
Experience with video analytics or large-scale perception systems
Familiarity with ROS / ROS2
Experience working with large-scale datasets and data pipelines
Background in safety-critical or production AI systems
Compensation & Benefits
Base Salary: $150,000 to $210,000 (depending on experience)
Equity: Stock options
Benefits: Medical, dental, vision, 401(k), paid time off
Location Requirement: Full-time, on-site at Sunnyvale HQ