Pospíšil Petr | CyberPOPE Independent Consultant | Cyber Security Architect & vCISO
> ./ai_redteam.sh --target llm --mode adversarial

AI & LLM
Penetration Testing

Your AI application accepts natural language as input.
That makes every user interaction a potential attack vector.

01 // Why This Matters Now

EU AI Act

Full compliance required by August 2, 2026. High-risk AI systems require mandatory adversarial testing — non-compliance penalties reach €35M or 7% of global turnover.

NIS2 & DORA

AI systems automating critical decisions are increasingly in-scope under NIS2 and DORA. Security testing of AI components is no longer optional in regulated industries.

The Methodology Gap

Traditional pentesting doesn't test whether your RAG pipeline leaks documents or whether an AI agent can be tricked into unauthorised API calls. This gap is active and unaddressed in most organisations.

02 // What I Test

Your architecture determines your attack surface. We establish which layers are in scope during scoping.

Every AI App

Always in scope

Prompt injection, jailbreaking, system prompt extraction, and sensitive data disclosure — present in any LLM-based system.

LLM01 Prompt Injection LLM07 System Prompt Leakage

Knowledge Base / RAG

If your AI retrieves documents

Indirect prompt injection via retrieved content, cross-tenant knowledge leakage, and knowledge base poisoning.

LLM04 Data Poisoning LLM08 Vector Weaknesses
Highest risk

Tool Access / Agentic

If your AI can take actions

Tool call hijacking, privilege escalation through chained calls, and confused deputy attacks reaching real systems.

LLM06 Excessive Agency OWASP Agentic Top 10

Multi-Agent Orchestration

If agents delegate to agents

Cross-agent prompt injection, inter-agent auth weaknesses, and trust boundary violations between orchestrator and sub-agents.

Agentic Top 10 — Multi-Agent Trust LLM01 (cross-agent)

Running a self-hosted or fine-tuned model? A fifth layer covering model integrity, training data exposure, and membership inference is available on request.

Findings mapped to: OWASP LLM Top 10 (2025) OWASP Agentic Top 10 (2025) MITRE ATLAS EU AI Act / NIS2 / DORA

03 // Indicative Pricing

Exact quote after scoping call. Complexity of the AI architecture — number of attack surfaces, agentic integrations, and tool connections — determines the final price.

Limited offer

40% Portfolio Discount — Building My AI Testing Practice

I am offering a 40% discount on all AI testing engagements to clients who agree to be referenced as an anonymised case study. Your organisation's name is never disclosed — only the industry, AI architecture type, and anonymised findings. This offer is time-limited as my portfolio fills.

Security Assessment
Automated scan + limited manual validation
from €1,400
€2,300
40% portfolio discount applied
Delivery: ~1 week
  • Automated prompt injection & jailbreak battery
  • System prompt extraction attempts
  • Sensitive information disclosure testing
  • Basic output validation
  • RAG leakage check (if applicable)
  • Written report + debrief call

Ideal for: organisations with a chatbot or RAG assistant that has never been tested

Most common
Penetration Test
Full manual testing across all applicable layers
from €3,200
€5,300
40% portfolio discount applied
Delivery: 1–2 weeks
  • Manual attack chain development
  • RAG pipeline attacks & knowledge base poisoning
  • Tool call hijacking & privilege escalation
  • Cross-agent injection (if applicable)
  • EU AI Act / NIS2 / DORA regulatory mapping
  • Retest of critical findings + debrief call

Ideal for: agentic systems, RAG with tool access, regulated sectors requiring documented testing

Red Team Exercise
Goal-based adversarial simulation — custom scope
Custom scope
Delivery: 3+ weeks, custom timeline
  • Custom threat modelling workshop
  • Multi-layer attack chain development
  • Goal-based adversarial simulation
  • Multiple AI systems / business units
  • Executive business impact analysis
  • Custom delivery timeline
Discuss scope
Continuous Testing Retainer
from €800/month
€1,300/month
40% portfolio discount

AI systems change with every model update, new tool integration, and prompt revision. Monthly or quarterly automated retesting with a brief update report gives you an ongoing audit trail for compliance. The most cost-effective way to maintain coverage over time.

Ask about retainer

All prices indicative. Exact quote after scoping call. Each additional tool integration or agentic pipeline adds independent attack paths and is scoped separately.

Book Free Scoping Call

04 // How We Collaborate

01
Scoping & threat modelling

We map your AI architecture, identify which attack surfaces apply, and establish the blast radius of a potential compromise.

02
Passive reconnaissance

I fingerprint guardrails, attempt system prompt extraction, and build the attack plan before active testing begins.

03
Active testing

Systematic prompt injection, jailbreak, RAG, and tool-call testing — each finding confirmed through multiple reproductions.

04
Report & debrief

Executive summary with regulatory mapping + technical appendix with reproduction steps and remediation guidance. Debrief call included.

Is your AI deployment tested?

Most organisations deploying AI in 2026 have never had their AI systems tested by a security professional. With EU AI Act deadlines approaching, that's a compliance gap that won't stay quiet.

Have questions? See the FAQ →