
compliant-llm

Detect every data leak into third-party GenAI tools

CompliantLLM detects every data leak into third-party GenAI tools used in your company by monitoring employee interactions. Security and tech leaders get visibility and control over the data flowing into the various GenAI apps used in their organizations.
Active Founders
Neha Nupoor
Founder
Building CompliantLLM - preventing data exfiltration into GenAI apps. Previously built AI customer support at Uber.
Kaushik Srinivasan
Founder
Working on preventing data leaks into GenAI apps. Previously built payment systems at Google Pay and AI customer support at Uber.
Company Launches
CompliantLLM - Detect Data Leaking into 3rd Party GenAI tools
See original launch post

Hi YC,

We’re Kaushik and Neha, cofounders of CompliantLLM.

CompliantLLM detects every data leak into third-party GenAI tools used in your company.

If you’re worried about data exfiltration risks at your organization, reach out to us at founders@fiddlecube.ai. We would love to learn more!

https://www.youtube.com/watch?v=XbheLR6CIMc

TL;DR

  • GenAI products introduce new surfaces for data leaks – a growing concern with increasing AI usage in companies.
  • CompliantLLM analyzes AI logs and finds GenAI-specific attacks and anomalies to detect data breaches.
  • It surfaces every PII leak and every unauthorized or malicious data access, across approved and unapproved AI workflows.

Problem

  • GenAI introduces new ways to leak sensitive data or breach data access controls. The problem is getting worse as AI agents connect to more workflows with increasing autonomy.
  • Currently, leaders lack visibility into which GenAI apps are in use and into the data-exfiltration risks those apps create.

Source: https://www.ibm.com/reports/data-breach



Solution

CompliantLLM monitors employee interactions with third-party GenAI tools and detects every data exfiltration incident.

We analyze users’ past actions, roles, and data-access permissions across teams to identify unauthorized data access and prompt-injection attacks.

We segment violations into meaningful and malicious types, and give teams the option to take an unobtrusive approach to AI governance in the company.
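To make the idea concrete, here is a minimal, hypothetical sketch of this kind of per-event log analysis in Python. The event schema (InteractionEvent), the PII regexes, the role-to-permission map, and the scan_event helper are illustrative assumptions, not CompliantLLM's actual detection logic, which also draws on users' past actions and anomaly signals.

# Illustrative sketch only: a toy scanner over GenAI interaction logs.
# All names here (InteractionEvent, PII_PATTERNS, ROLE_PERMISSIONS, scan_event)
# are hypothetical and not part of CompliantLLM's product.
import re
from dataclasses import dataclass, field

# Toy PII patterns: email addresses and US SSN-like strings.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Toy role-based access map: which data sources each role may pull into a prompt.
ROLE_PERMISSIONS = {
    "support_agent": {"tickets"},
    "finance_analyst": {"tickets", "payments"},
}

@dataclass
class InteractionEvent:
    user_role: str
    tool: str                      # e.g. "chatgpt", "copilot"
    prompt: str
    data_sources: set = field(default_factory=set)

def scan_event(event):
    """Return violation descriptions for one logged employee interaction."""
    violations = []
    # 1. PII leaked into the prompt sent to a third-party tool.
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(event.prompt):
            violations.append(f"PII leak ({label}) into {event.tool}")
    # 2. Data pulled from sources the user's role is not permitted to access.
    allowed = ROLE_PERMISSIONS.get(event.user_role, set())
    for source in event.data_sources - allowed:
        violations.append(f"unauthorized access to '{source}' by {event.user_role}")
    return violations

if __name__ == "__main__":
    event = InteractionEvent(
        user_role="support_agent",
        tool="chatgpt",
        prompt="Summarize the refund for jane.doe@example.com, SSN 123-45-6789",
        data_sources={"payments"},
    )
    for violation in scan_event(event):
        print(violation)

Regex checks like these would only be a first pass; the approach described above layers role and access-permission context and users' past behavior on top of per-event scanning, so that violations can be segmented before anyone is interrupted.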

Ask

If this is on your mind, we would love to talk more. Reach out to us at founders@fiddlecube.ai.

Team

Kaushik previously led payments engineering at Google Pay, where he faced the problem of building compliant payment gateway solutions. Before that, he worked at Uber on their customer-support AI efforts, where he met Neha.

Neha previously led the automation of customer support at Uber using AI agents. She brings her experience building real-time AI workflows at Uber’s scale and complexity to CompliantLLM.

Previous Launches
Synthetic data platform to streamline dataset generation for custom LLM training
Create high-quality datasets for fine-tuning and reinforcement learning.
compliant-llm
Founded: 2023
Batch: Winter 2023
Team Size: 3
Status: Active
Location: San Francisco
Primary Partner: Harj Taggar