Staff Autonomy Safety Engineer, Robot Safety
Confidential
Posted: March 18, 2026
Quick Summary
We are hiring a Staff Autonomy Safety Engineer to lead safety assurance for machine learning–driven autonomy systems, ensuring perception, prediction, and decision-making systems operate safely under real-world conditions.
Job Description
Location: San Carlos, CA (On-site)
Type: Full-time
About 1X
At 1X, we are building humanoid robots that work alongside humans to solve labor shortages and create abundance. Our robots operate in real-world, human environments—bringing AI out of simulation and into everyday life.
The Role
We are hiring a Staff Autonomy Safety Engineer to lead safety assurance for machine learning–driven autonomy systems. You will ensure that perception, prediction, and decision-making systems operate safely under real-world conditions, degrade gracefully under uncertainty, and remain robust in complex, human-facing environments.
This is a high-impact, deeply technical role focused on advancing AI safety for embodied systems. You will work at the intersection of autonomy, safety engineering, and real-world deployment, partnering closely with AI, robotics, and security teams.
You will report to the Director of Robot Safety.
What You Will Do
• Identify and assess AI-specific hazards in end-to-end autonomy systems
• Define and enforce safety constraints for AI-driven robot behavior involving humans, objects, and environments
• Partner with Functional Safety to translate AI risks into system-level requirements and mitigations
• Collaborate with AI teams to build runtime guardrails that validate and constrain AI-generated actions
• Evaluate risks from dataset bias, distribution shift, model drift, and rare, edge-case failure modes
• Work with Cybersecurity to assess risks from adversarial inputs, prompt injection, and misuse scenarios
• Provide input on residual risk, uncertainty, and confidence levels in AI behavior
• Help define safety strategies for real-world deployment of autonomous systems
The Kind of Seniority We Mean
This is a staff-level, hands-on technical role. We are looking for someone who can deeply analyze AI system behavior, define safety frameworks, and work directly with engineering teams to implement safeguards in production systems. You will be expected to operate with a high degree of autonomy, influence cross-functional teams, and contribute directly to the safety of deployed robots.
What Success Looks Like
• AI systems operate within well-defined safety envelopes
• Safety constraints are enforced in real-time decision-making systems
• Robust mitigations exist for edge cases and failure modes
• AI behavior is measurable, explainable, and auditable from a safety perspective
• Strong collaboration between AI, Safety, and Security teams
• Reduced risk from distribution shifts, adversarial inputs, and unexpected environments