HomeCompaniesSuperagent

Making AI apps safe

Superagent helps developers make their AI apps safe. We provide: - An SDK with guardrails for runtime protection — stops data leaks, blocks harmful actions - Red teaming assessments to surface failures - A safety page to prove it to your customers Powered by a safety agent that runs in your environment — works with any model.
Active Founders
Alan Zabihi
Alan Zabihi
Founder
CEO & Co-founder of Superagent
Ismail Pelaseyed
Ismail Pelaseyed
Founder
CTO & Co-founder of Superagent.
Company Launches
Superagent — Defender of AI agents 🥷
See original launch post

Hi YC 👋 We’re Alan and Ismail, founders of Superagent (YC W24).

We’re building open-source defense for AI agents. Our product protects agents from prompt injections, malicious tool calls, and customer data leaks — in production, in CI/CD, and wherever they run.

The Problem

AI agents introduce new attack surfaces that traditional security practices don’t cover:

  • At runtime: users can inject adversarial prompts that hijack an agent or force it to run harmful commands.
  • At the model layer: unsafe or poisoned outputs can embed backdoors into your stack.
  • In CI/CD: AI-generated code can contain harmful logic that slips through review and ships to production.

Without protection, agents can leak customer data or trigger destructive actions that impact your product and your users.

How Superagent Works

At the core is SuperagentLM, our small language model trained specifically for agentic security. Unlike static rules or regex filters, it reasons about inputs and outputs to catch subtle and novel attacks.

Superagent integrates at three key points:

  1. Inference providers — filter requests and responses at the API layer
  2. Agent frameworks — run runtime checks on every input, output, and tool call
  3. CI/CD pipelines — fail risky builds before unsafe code ships

Here’s a quick example of how to use it with Exa (YC S21):

uploaded image

Every request is inspected in real time. Unsafe ones are blocked. Safe ones go through — with reasoning logs you can audit later.

Why We’re Building This

We’ve been working closely with builders of AI agents for the last couple of years, building tools for them. What we noticed is that many teams are basically trying to system-prompt their way to security. Vibe security (VibeSec) obviously doesn’t work.

Some of the most popular agentic apps today are surprisingly unsafe in this way. So we decided to see if we could fix it. That’s the motivation behind Superagent: giving teams a real way to ship fast and ship safe.

Get Involved

📖 Read the docs

📅 Book a call

We’d love your feedback: what’s your biggest concern about running agents in production? Book a call or drop a comment!

Alan & Ismail 🥷🥷
Superagent (YC W24)

uploaded image

Previous Launches
Stops prompt injections, backdoors, and data leaks.
Open-source sandboxing and observability for Claude Code, Gemini CLI, et al.
Embed OpenAI Codex or Claude Code directly into your app.
Open-source platform that allows anyone to build AI-agent workflows
YC Photos
Superagent
Founded:2024
Batch:Winter 2024
Team Size:2
Status:
Active
Location:San Francisco
Primary Partner:Nicolas Dessaigne