Security Blog

Research, findings, and frameworks for securing AI agents in the enterprise.


I Threat Modeled a Customer Support Agent. Here's What I Found.

A walkthrough of a real agent assessment using the ATLAS framework, including the exfiltration path nobody saw coming.

Read more →

The 7 Ways AI Agents Get Compromised

A practical attack taxonomy for AI agents, with a concrete scenario for each vector. Not theoretical, not hand-waving.

Read more →

The Tool Combination Problem Nobody's Auditing

Individual tools look safe. Combined, they create exfiltration paths, fraud chains, and privilege escalation.

Read more →
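The core idea can be sketched in a few lines: audit tool *pairs*, not individual tools. This is a minimal illustration, not a real audit standard; the tool names and the `DANGEROUS_PAIRS` list are assumptions for the example.

```python
# Hypothetical sketch: flag risky tool combinations in an agent's toolset.
# A read capability plus an egress capability forms an exfiltration path,
# even when each tool looks safe on its own.
from itertools import combinations

# Illustrative deny-list of capability pairs (assumed names, not a standard).
DANGEROUS_PAIRS = {
    frozenset({"read_database", "send_email"}),
    frozenset({"get_customer", "send_email"}),
    frozenset({"read_database", "execute_code"}),
}

def audit_toolset(tools):
    """Return every tool pair that combines into a known risky capability."""
    return [
        tuple(sorted(pair))
        for pair in map(frozenset, combinations(tools, 2))
        if pair in DANGEROUS_PAIRS
    ]

findings = audit_toolset(["get_customer", "send_email", "search_docs"])
print(findings)  # [('get_customer', 'send_email')]
```

Each tool would pass a per-tool review; only the pairwise check surfaces the exfiltration path.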

System Prompts Are Not Guardrails

Why "don't share customer data" in a system prompt is a suggestion, not a security control, and what to use instead.

Read more →
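The distinction is between asking the model and enforcing outside it. A minimal sketch of the latter, assuming a deterministic output filter that runs after generation; the patterns and block message are illustrative, and a real control would be policy-driven and far broader.

```python
# Hedged sketch: enforcement outside the model. The filter runs on every
# response, so "ignore previous instructions" in the prompt cannot bypass it.
import re

# Illustrative deny-list of customer-data patterns the agent must never emit.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like
    re.compile(r"\b\d{16}\b"),             # card-number-like
]

def enforce(model_output: str) -> str:
    """Withhold the response if it matches a denied pattern."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_output):
            return "[BLOCKED: response withheld by policy]"
    return model_output

# The system prompt *asked* the model not to share data; this layer enforces it.
print(enforce("The customer's SSN is 123-45-6789."))
```

The point is architectural: the check is code the attacker's prompt never touches, which is what makes it a control rather than a suggestion.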

Shadow Agents: The AI Security Problem You Can't See

Your employees are running AI agents with personal API keys and company data. How to find them and what to do about it.

Read more →

5 Questions Every CISO Should Ask About AI Agents

A quick diagnostic for security leaders to assess whether their organization's AI agents are governed, monitored, and controlled.

Read more →
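The diagnostic above can be run as a simple yes/no self-check. This is a sketch under assumptions: the question wording paraphrases the post's themes, and the scoring thresholds are invented for illustration.

```python
# Hypothetical readiness self-check for the five CISO questions.
# Thresholds ("governed" / "needs attention" / "at risk") are assumptions.
QUESTIONS = [
    "Do you know how many AI agents are running in your organization?",
    "Do you know what data and tools each agent can access?",
    "Have your agents been tested against prompt injection?",
    "Can you detect a compromised agent quickly?",
    "Do you have a process for finding shadow agents?",
]

def readiness(answers):
    """answers: five booleans, one per question, in order."""
    score = sum(answers)
    if score == 5:
        return "governed"
    if score >= 3:
        return "needs attention"
    return "at risk"

print(readiness([True, True, False, False, False]))  # at risk
```

Answering "yes" to fewer than three of the five is a signal to treat agent governance as an active gap rather than a backlog item.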