HomeCompaniesSuperagent

Security and compliance for AI products

Superagent provides guardrails that make AI products secure and compliant. It's a suite of purpose-trained small language models that run at runtime to protect, verify, and redact data flowing through AI systems. - GUARD detects and blocks unsafe inputs, prompt injections, and malicious tool calls before they reach your models - VERIFY grounds and validates outputs against enterprise sources to ensure every generation is factual and policy-aligned - REDACT removes sensitive data (PII, PHI, secrets) to maintain privacy and compliance Superagent integrates via API, SDKs, CLI, MCP, and web Playground, with a unified dashboard for policies, logs, and compliance visibility.
Active Founders
Alan Zabihi
Alan Zabihi
Founder
CEO & Co-founder of Superagent
Ismail Pelaseyed
Ismail Pelaseyed
Founder
CTO & Co-founder of Superagent.
Company Launches
Superagent — Defender of AI agents 🥷
See original launch post

Hi YC 👋 We’re Alan and Ismail, founders of Superagent (YC W24).

We’re building open-source defense for AI agents. Our product protects agents from prompt injections, malicious tool calls, and customer data leaks — in production, in CI/CD, and wherever they run.

The Problem

AI agents introduce new attack surfaces that traditional security practices don’t cover:

  • At runtime: users can inject adversarial prompts that hijack an agent or force it to run harmful commands.
  • At the model layer: unsafe or poisoned outputs can embed backdoors into your stack.
  • In CI/CD: AI-generated code can contain harmful logic that slips through review and ships to production.

Without protection, agents can leak customer data or trigger destructive actions that impact your product and your users.

How Superagent Works

At the core is SuperagentLM, our small language model trained specifically for agentic security. Unlike static rules or regex filters, it reasons about inputs and outputs to catch subtle and novel attacks.

Superagent integrates at three key points:

  1. Inference providers — filter requests and responses at the API layer
  2. Agent frameworks — run runtime checks on every input, output, and tool call
  3. CI/CD pipelines — fail risky builds before unsafe code ships

Here’s a quick example of how to use it with Exa (YC S21):

Every request is inspected in real time. Unsafe ones are blocked. Safe ones go through — with reasoning logs you can audit later.

Why We’re Building This

We’ve been working closely with builders of AI agents for the last couple of years, building tools for them. What we noticed is that many teams are basically trying to system-prompt their way to security. Vibe security (VibeSec) obviously doesn’t work.

Some of the most popular agentic apps today are surprisingly unsafe in this way. So we decided to see if we could fix it. That’s the motivation behind Superagent: giving teams a real way to ship fast and ship safe.

Get Involved

📖 Read the docs

📅 Book a call

We’d love your feedback: what’s your biggest concern about running agents in production? Book a call or drop a comment!

Alan & Ismail 🥷🥷
Superagent (YC W24)

Previous Launches
Stops prompt injections, backdoors, and data leaks.
Open-source sandboxing and observability for Claude Code, Gemini CLI, et al.
Embed OpenAI Codex or Claude Code directly into your app.
Open-source platform that allows anyone to build AI-agent workflows
YC Photos
Superagent
Founded:2024
Batch:Winter 2024
Team Size:2
Status:
Active
Location:San Francisco
Primary Partner:Nicolas Dessaigne