LiteLLM

Call every LLM API like it's OpenAI [100+ LLMs]

Founding Backend Engineer

$160K - $220K · 0.50% - 3.00% equity · San Francisco, CA, US
Job type
Full-time
Role
Engineering, Backend
Experience
1+ years
Visa
US citizen/visa only
Skills
Python, Natural Language Processing, PostgreSQL
Krrish Dholakia
Founder

About the role

TLDR

LiteLLM is an open-source LLM Gateway with 28K+ stars on GitHub, trusted by companies like NASA, Rocket Money, Samsara, Lemonade, and Adobe. We're rapidly expanding and seeking a founding backend engineer to help scale the platform. We're based in San Francisco.

What is LiteLLM

LiteLLM provides an open-source Python SDK and a Python FastAPI server that allow calling 100+ LLM APIs (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic) in the OpenAI format.
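
For a sense of what that unification looks like, here is a minimal sketch of an SDK call in the OpenAI format (the completion() entrypoint and the model names below are assumptions for illustration, not taken from this posting):

  from litellm import completion

  messages = [{"role": "user", "content": "Summarize LiteLLM in one sentence."}]

  # Same request shape for OpenAI...
  openai_response = completion(model="gpt-4o-mini", messages=messages)

  # ...and for Anthropic; only the model string changes.
  anthropic_response = completion(model="claude-3-haiku-20240307", messages=messages)

  print(openai_response.choices[0].message.content)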

We just hit $2.5M ARR and have raised a $1.6M seed round from Y Combinator, Gravity Fund, and Pioneer Fund. You can find more information on our website, GitHub, and technical documentation.

Why do companies use LiteLLM Enterprise

Companies adopt LiteLLM Enterprise once they put LiteLLM into production and need enterprise features such as Prometheus metrics (production monitoring), or need to give LLM access to a large number of people via SSO (single sign-on) or JWT (JSON Web Tokens).

What you will be working on

Skills: Python, LLM APIs, FastAPI, high-throughput/low-latency systems

As a Founding Backend Engineer, you'll be responsible for ensuring LiteLLM unifies the format for calling LLM APIs under the OpenAI spec. This involves writing transformations that convert API requests from the OpenAI spec into each LLM provider's format (a simplified sketch follows the list below). You'll work directly with the CEO and CTO on critical projects, including:

  • Migrating key systems from httpx to aiohttp for 10x higher throughput
  • Adding support for Anthropic and Bedrock Anthropic 'thinking' parameter
  • Handling provider-specific quirks like OpenAI o1 streaming limitations
  • Scaling aggregate spend computation for 1M+ logs
  • Implementing cost tracking and logging for Anthropic API
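
To make the transformation work concrete, here is a simplified, illustrative sketch of mapping an OpenAI-style chat request to an Anthropic Messages-style payload. This is not LiteLLM's actual internal code; field names follow the public provider docs:

  # Illustrative only: not LiteLLM's real transformation code.
  def openai_to_anthropic(openai_request: dict) -> dict:
      messages = openai_request["messages"]

      # Anthropic expects the system prompt as a top-level field,
      # not as a message in the list.
      system_parts = [m["content"] for m in messages if m["role"] == "system"]
      chat_messages = [m for m in messages if m["role"] != "system"]

      return {
          "model": "claude-3-haiku-20240307",  # provider-specific model id (placeholder)
          "system": "\n".join(system_parts) or None,
          "messages": chat_messages,  # role/content pairs carry over unchanged
          # Anthropic requires max_tokens; OpenAI treats it as optional.
          "max_tokens": openai_request.get("max_tokens", 1024),
          "temperature": openai_request.get("temperature"),
      }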

What is our tech stack

The tech stack includes Python, FastAPI, JS/TS, Redis, Postgres, S3, GCS Storage, Datadog, and Slack API.

Who we are looking for

  • 1-2 years of backend/full-stack experience with production systems
  • Passion for open source and user engagement
  • Experience scaling high-performance infrastructure
  • Strong work ethic and ability to thrive in small teams
  • Eagerness to shape growing infrastructure

About the interview

Interview Process

Our interview process is:

  • Intro call - 30 min

    Behavioral discussion about your working style, expectations, and the company’s direction.

  • HackerRank - 1 hr

    A HackerRank assessment covering basic Python questions.

  • Virtual Onsite - 3 hrs

    A virtual onsite with the founders, which involves solving an issue from LiteLLM's GitHub together, a presentation of a technical project, and a system design question.

About LiteLLM

LiteLLM (https://github.com/BerriAI/litellm) is a Python SDK and Proxy Server (LLM Gateway) for calling 100+ LLM APIs in the OpenAI format [Bedrock, Azure, OpenAI, VertexAI, Cohere], and is used by companies like Rocket Money, Adobe, Twilio, and Siemens.

LiteLLM
Founded: 2023
Batch: W23
Team Size: 2
Status: Active
Founders
Krrish Dholakia
Founder
Ishaan Jaffer
Founder