Hi YC 👋 We’re Alan and Ismail, founders of Superagent (YC W24).
We’re building open-source defense for AI agents. Our product protects agents from prompt injections, malicious tool calls, and customer data leaks — in production, in CI/CD, and wherever they run.
AI agents introduce new attack surfaces that traditional security practices don’t cover: untrusted content flowing into prompts, tool calls that can take real-world actions, and sensitive data passing through model context.
Without protection, agents can leak customer data or trigger destructive actions that impact your product and your users.
At the core is SuperagentLM, our small language model trained specifically for agentic security. Unlike static rules or regex filters, it reasons about inputs and outputs to catch subtle and novel attacks.
Superagent integrates at three key points: live in production (inspecting requests as they happen), in your CI/CD pipeline (catching issues before deploy), and anywhere else your agents run.
Here’s a quick example of how to use it with Exa (YC S21):
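A minimal sketch of the guard pattern in Python. To be clear about what’s illustrative here: the names (`guard`, `GuardResult`, `safe_search`) and the keyword heuristic are stand-ins for demonstration, not the actual SDK — the real product calls SuperagentLM, a model that reasons about inputs, rather than matching patterns.

```python
# Illustrative sketch of the inspect-then-execute guard pattern.
# NOTE: `guard`, `GuardResult`, and the keyword check below are
# hypothetical stand-ins -- the real inspection is done by
# SuperagentLM, not by string matching.
from dataclasses import dataclass


@dataclass
class GuardResult:
    safe: bool
    reasoning: str  # audit-log entry explaining the decision


# Toy stand-in for model-based inspection.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "reveal your system prompt",
)


def guard(text: str) -> GuardResult:
    """Stand-in for a SuperagentLM inspection call."""
    lowered = text.lower()
    for marker in INJECTION_MARKERS:
        if marker in lowered:
            return GuardResult(False, f"blocked: matched {marker!r}")
    return GuardResult(True, "allowed: no injection patterns found")


def safe_search(query: str) -> str:
    """Wrap a tool call (e.g. an Exa web search) with pre-flight inspection."""
    verdict = guard(query)
    if not verdict.safe:
        # Unsafe requests never reach the tool; the reasoning is auditable.
        raise PermissionError(verdict.reasoning)
    # ...here you would invoke the real tool, e.g. an Exa search client...
    return f"results for: {query}"
```

The shape is the point: inspection sits between the agent and the tool, every decision carries a reasoning string you can audit later, and blocked requests never reach the tool at all.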
Every request is inspected in real time. Unsafe ones are blocked. Safe ones go through — with reasoning logs you can audit later.
We’ve spent the last couple of years building tools for teams that ship AI agents, and we noticed a pattern: many of them are trying to system-prompt their way to security. That kind of vibe security obviously doesn’t work.
Some of the most popular agentic apps today are surprisingly unsafe in this way. So we decided to see if we could fix it. That’s the motivation behind Superagent: giving teams a real way to ship fast and ship safe.
We’d love your feedback: what’s your biggest concern about running agents in production? Book a call or drop a comment!
Alan & Ismail 🥷🥷
Superagent (YC W24)