ARCHIVED
This job listing has been archived and is no longer accepting applications.
MisuJob - AI Job Search Platform

Coding - Adversarial Prompt Expert

Reinforce Labs Inc

United States · Remote · Part-time

Posted: February 17, 2026


Job Description

We are seeking an Adversarial Prompt Security Specialist with strong technical instincts and coding proficiency to join our Trust & Safety team. In this role, you will use your knowledge of LLM behavior and scripting skills to probe, bypass, and stress-test safety systems. Your focus will be on discovering vulnerabilities—crafting prompt injection sequences, writing scripts to automate exploit attempts, manipulating API interactions, and identifying novel attack vectors that evade existing safeguards. This is a hands-on offensive testing role that rewards creativity, persistence, and an attacker’s mindset over formal engineering credentials.

Key Responsibilities

• Code-Assisted Adversarial Probing: Write and execute scripts (primarily Python) to systematically test LLM safety boundaries. This includes automating prompt injection chains, encoding and obfuscating payloads, manipulating conversation context through API calls, and iterating on attack strategies programmatically rather than relying solely on manual interaction.

• Jailbreak Discovery and Development: Design multi-step jailbreak sequences that exploit model behavior through technical means, such as token-level manipulation, system prompt extraction, role-play escalation, instruction hierarchy subversion, and context window exploitation. Identify bypass vectors that circumvent safety classifiers and content filters.

• Cross-Vector Exploitation: Test attack surfaces that span code generation, tool use, multi-turn conversation, and multi-modal inputs. Explore how code-mediated interactions—such as requesting the model to write, execute, or interpret code—can be leveraged to bypass safety controls that apply to natural language interactions.

• Vulnerability Documentation: Document discovered vulnerabilities with clear severity assessments, step-by-step reproduction instructions, and sample exploit code. Provide context on why a given bypass is dangerous and recommend potential mitigations for the alignment and engineering teams.

• Attack Landscape Monitoring: Stay current with emerging adversarial techniques from the AI security research community, open-source exploit repositories, academic publications, and real-world misuse patterns. Adapt and apply novel methods to internal testing workflows.

• Safety Policy Input: Provide technical feedback to content policy and safety classification teams based on observed model behaviors. Flag gaps between intended safety enforcement and actual model output, particularly in edge cases involving code generation, indirect prompt injection, and agentic tool-use scenarios.
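The code-assisted probing workflow in the responsibilities above can be sketched as a minimal test harness. Everything here is a hypothetical illustration: the endpoint URL, request/response schema, and refusal markers are assumptions, and the probes are benign placeholders; a real red-team suite would load curated payloads and use a proper response classifier rather than substring matching.

```python
import json
import urllib.request

# Hypothetical internal endpoint and JSON schema -- placeholders for
# illustration, not a real API.
API_URL = "https://llm.example.internal/v1/generate"

# Crude surface markers of a refusal; substring matching is only a first-pass
# triage, not a reliable safety classifier.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "not able to")


def classify_response(text: str) -> str:
    """Triage a model reply: 'refused' if a refusal marker appears,
    otherwise 'review' so a human can check whether the probe succeeded."""
    lowered = text.lower()
    return "refused" if any(m in lowered for m in REFUSAL_MARKERS) else "review"


def run_probe(prompt: str) -> dict:
    """Send one probe to the (hypothetical) endpoint and triage the reply."""
    body = json.dumps({"prompt": prompt}).encode()
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp).get("text", "")
    return {"prompt": prompt, "verdict": classify_response(reply), "reply": reply}
```

In practice a harness like this would iterate over a versioned probe corpus, log every verdict with timestamps and model identifiers, and feed the `review` cases into the vulnerability-documentation workflow described above.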

Candidate Profile

• Adversarial Mindset: You instinctively look for ways to break systems. You approach LLM safety from an attacker’s perspective and can creatively combine technical and social engineering techniques to find vulnerabilities others miss.

• Technically Resourceful: You are comfortable writing scripts to test ideas quickly, interacting with APIs, and using code as a tool for exploration—even if you don’t identify as a traditional software engineer. You solve problems by building things, not just describing them.

• Persistent and Methodical: You approach red-teaming as a structured practice. You systematically vary your attack strategies, document what works and what doesn’t, and iterate methodically rather than relying on luck.

• Clear Communicator: You can explain complex technical exploits to non-technical stakeholders—including policy, legal, and leadership teams—in a way that conveys both the mechanism and the real-world risk.

• Ethically Grounded: You understand the responsibility inherent in this work. You are motivated by strengthening AI safety and operate with integrity within established testing protocols.

Qualifications

• Proficiency in Python scripting, with the ability to write functional scripts for task automation, API interaction, and data manipulation. Formal software engineering training is not required.

• Demonstrated experience in adversarial prompt engineering, jailbreak development, or LLM red-teaming—whether in a professional, academic, independent research, or community context (e.g., bug bounties, CTFs, responsible disclosure).

• Working familiarity with LLM APIs (e.g., OpenAI, Anthropic, open-source model endpoints) and a practical understanding of how large language models process input, generate output, and enforce safety constraints.

• Knowledge of common LLM attack vectors, including direct and indirect prompt injection, payload encoding and obfuscation, context window manipulation, system prompt leakage, and role-play exploitation.

• Strong written communication skills, with the ability to produce clear vulnerability reports that include reproduction steps, severity context, and mitigation recommendations.
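As one small, benign illustration of the payload encoding and obfuscation skills listed above: surface-level content filters that match on literal text can miss the same string delivered in a different encoding, so testers often generate variants systematically. The function name and variant set below are illustrative, not a prescribed toolkit, and the probe string is a harmless placeholder.

```python
import base64
import codecs


def encode_variants(probe: str) -> dict:
    """Generate common encoded forms of a probe string for filter-coverage
    testing. Each variant is semantically identical to the plain probe but
    looks different to a classifier that matches on surface text."""
    return {
        "plain": probe,
        "base64": base64.b64encode(probe.encode()).decode(),
        "rot13": codecs.encode(probe, "rot13"),
        "hex": probe.encode().hex(),
        "reversed": probe[::-1],
    }


# Example with a benign placeholder probe:
variants = encode_variants("test probe")
```

Pairing a generator like this with the probing harness lets a tester check, per filter, which encodings are normalized before classification and which slip through.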

Preferred

• Background in cybersecurity, penetration testing, or application security—formal or self-taught. Relevant certifications (e.g., OSCP, CEH) are valued but not required.

• Familiarity with AI safety evaluation frameworks such as the OWASP Top 10 for LLM Applications, NIST AI RMF, or MITRE ATLAS.

• Understanding of LLM alignment techniques (e.g., RLHF, constitutional AI) and their known failure modes and exploitable edge cases.

• Experience with multi-modal model testing (vision, code generation, tool use) and awareness of cross-modal attack surfaces.

• Proficiency in additional scripting or programming languages (e.g., JavaScript, Bash, Go) that expand testing capabilities.
