AI Integration Engineer
Confidential
Posted: April 7, 2026
Quick Summary
Builds the connective tissue between LLM APIs, existing product systems, and end users by integrating, orchestrating, and deploying AI capabilities into production software reliably, securely, and at scale.
Job Description
The AI Integration Engineer builds the connective tissue between LLM APIs, existing product systems, and end users. This role does not involve training models; it involves integrating, orchestrating, and deploying AI capabilities into production software reliably, securely, and at scale.
Key Responsibilities:
● Integrate LLM APIs (Anthropic Claude, OpenAI GPT-4o, AWS Bedrock, Google Gemini) into backend services and user-facing products
● Design and implement RAG pipelines: document ingestion, chunking strategy, vector store selection, retrieval tuning
● Build agentic workflows using frameworks such as AgentCore, LangChain, LlamaIndex, or custom orchestration patterns
● Manage prompt engineering, prompt versioning, and prompt evaluation frameworks
● Implement guardrails for LLM outputs: validation, content filtering, fallback logic
● Monitor AI system performance: latency, cost-per-query, accuracy drift, token usage
● Collaborate with frontend engineers to surface AI capabilities in product UIs
● Own the AI integration layer in the SDLC, from spec to CI/CD to production observability
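To illustrate the "chunking strategy" portion of the RAG pipeline work above, here is a minimal sketch of a fixed-size chunker with overlap, one of the simplest ingestion strategies. The function name and default sizes are illustrative, not a required API:

```python
# Illustrative fixed-size chunker with character overlap, a basic RAG
# ingestion strategy. chunk_size and overlap defaults are placeholders;
# production pipelines often chunk by tokens or document structure instead.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows ready for embedding."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap  # each window starts this far after the last
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

Overlap preserves context that would otherwise be cut at chunk boundaries, at the cost of some duplicated tokens in the vector store.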
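The guardrails responsibility (validation plus fallback logic) can be sketched as a parse-and-validate wrapper around model output. The expected fields, schema, and fallback values below are hypothetical assumptions for illustration:

```python
import json

# Hypothetical guardrail for a model response expected to be JSON with a
# "summary" string and a "confidence" score in [0, 1]. Field names and the
# fallback payload are illustrative assumptions, not a real product schema.
REQUIRED_FIELDS = {"summary", "confidence"}
FALLBACK = {"summary": "Unable to generate a summary.", "confidence": 0.0}

def guarded_parse(raw_response: str) -> dict:
    """Validate an LLM response; return a safe fallback on any violation."""
    try:
        data = json.loads(raw_response)
    except json.JSONDecodeError:
        return FALLBACK  # model returned non-JSON text
    if not isinstance(data, dict) or not REQUIRED_FIELDS <= data.keys():
        return FALLBACK  # wrong top-level shape or missing fields
    if not isinstance(data["confidence"], (int, float)) or not 0.0 <= data["confidence"] <= 1.0:
        return FALLBACK  # confidence outside the allowed range
    return data
```

The key design choice is that every failure mode degrades to a deterministic fallback rather than surfacing raw model output to users.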