Senior Principal Researcher – AI Agent & Multimodal Interaction System
Confidential
Posted: April 8, 2026
Quick Summary
We're looking for a Senior Principal Researcher to lead the development of innovative AI agent and multimodal interaction systems in a cutting-edge research lab.
Job Description
Huawei Canada has an immediate permanent opening for a Senior Principal Researcher.
About the team:
The Human-Machine Interaction Lab unites global talent to redefine the relationship between humans and technology. Focused on innovation and user-centered design, the lab strives to advance human-computer interaction research. Our team of researchers, engineers, and designers collaborates across disciplines in areas spanning novel interactive systems, sensing technologies, wearable and IoT systems, human factors, computer vision, and multimodal interfaces. Through high-impact products and cutting-edge research, we aim to enhance how users experience and interact with technology.
About the job:
• Define and drive the end-to-end vision for next-generation, agent-native multimodal interaction systems, specifying how agents perceive, reason, and act through multimodal signals (voice, vision, touch, context).
• Design interaction models centered on agent autonomy and collaboration, moving beyond traditional GUI/VUI toward intent-driven and context-aware experiences.
• Lead the design of seamless multimodal experiences across platforms (mobile, AI PC, automotive, AI glasses, wearable devices, and smart environments), ensuring consistency and adaptability.
• Lead the integration of multimodal foundation models (VLMs, multimodal LLMs) into real-world interaction systems.
• Iterate quickly from concept to production-ready solutions, evaluating usability, latency, feedback loops, and real-world behavior before launch.
• Identify opportunities to unify interaction frameworks across modalities, reducing fragmentation between voice, visual, and physical interfaces.
• Drive end-to-end experience thinking, from perception and reasoning to response and user feedback, rather than isolated component optimization.
• Stay at the forefront of industry trends (e.g., AI agents, multimodal foundation models, embodied AI) and translate them into product innovations and high-value IP / patents.