LiteLLM

Call every LLM API like it's OpenAI [100+ LLMs]

Technical Support Engineer

$80K - $100K
San Francisco, CA, US
Job type: Full-time
Role: Support
Experience: Any (new grads ok)
Visa: US citizen/visa only
Ishaan Jaffer
Founder

About the role

TLDR

LiteLLM is an open-source LLM Gateway with 28K+ stars on GitHub, trusted by companies like NASA, Rocket Money, Samsara, Lemonade, and Adobe. We're expanding rapidly and looking for a Technical Support Engineer to help our customers succeed as we scale the platform to 5K RPS (requests per second). We're based in San Francisco.

What is LiteLLM

LiteLLM provides an open-source Python SDK and a Python FastAPI server that let you call 100+ LLM APIs (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic) in the OpenAI format.
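For context, the SDK mirrors the OpenAI chat-completions interface; here is a minimal sketch (the API keys and model names below are placeholders, not customer values):

```python
# Minimal sketch: two providers, one OpenAI-style call shape.
import os
from litellm import completion

os.environ["OPENAI_API_KEY"] = "sk-..."         # placeholder
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # placeholder

messages = [{"role": "user", "content": "Hello from LiteLLM"}]

# Only the model string changes between providers.
openai_resp = completion(model="gpt-4o", messages=messages)
claude_resp = completion(model="claude-3-5-sonnet-20240620", messages=messages)

# Responses come back in the OpenAI response format.
print(openai_resp.choices[0].message.content)
print(claude_resp.choices[0].message.content)
```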

We just hit $2.5M ARR and have raised a $1.6M seed round from Y Combinator, Gravity Fund, and Pioneer Fund. You can find more information on our website, GitHub, and technical documentation.

About the Role

We’re looking for a Technical Support Engineer to help our customers troubleshoot issues, optimize their setup, and get the most value out of our product. You’ll be the first line of defense for incoming technical questions, ensuring timely resolution and a great customer experience.

This role is ideal for someone who enjoys solving complex technical problems, communicating clearly with both technical and non-technical users, and continuously improving internal processes and documentation.

Responsibilities

  • Diagnose and troubleshoot technical problems across LiteLLM, on both the backend and the frontend.
  • Reproduce issues and create clear, actionable bug reports for engineering.
  • Submit small pull requests for simple fixes, documentation updates, or configuration changes.
  • Escalate bugs to product engineering and close the loop with the customer once shipped.
  • Maintain a high standard of customer empathy—clear communication, active listening, and ownership of problems until resolution.
  • Keep support documentation up to date.

Why Work At LiteLLM?

  • You love being on the front lines with customers at a fast-growing startup, quickly scoping their issues, leading troubleshooting discussions, and driving them to successful outcomes.
  • You love working with developers (30K+ GitHub stars).
  • You want to work hard (996 work culture).
  • You want to learn how AI is transforming businesses (businesses run all of their LLM calls through LiteLLM).

About the interview

  1. Intro call - 30 mins.
  2. Take-Home Exercise - You'll work with real customer tickets to demonstrate your troubleshooting approach. This exercise evaluates your ability to respond to customer issues, debug effectively, and quickly set up LiteLLM to reproduce reported problems (see the sketch after this list).
  3. 3-Day Onsite - You'll join us in person to work directly with our team. This collaborative period allows us to assess how you support customers in real-time and how well we work together.
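For a feel of that reproduction step, here is a hedged sketch of checking a reported issue against a locally running LiteLLM proxy through the standard OpenAI client. The base URL, key, and model name are placeholder assumptions; the proxy listens on port 4000 by default.

```python
# Hedged sketch: reproduce a customer-reported issue against a local
# LiteLLM proxy via its OpenAI-compatible endpoint.
# Base URL, key, and model name are placeholders, not customer values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

resp = client.chat.completions.create(
    model="gpt-4o",  # the proxy routes this to the configured provider
    messages=[{"role": "user", "content": "Repro for reported ticket"}],
)
print(resp.choices[0].message.content)
```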

About LiteLLM

LiteLLM (https://github.com/BerriAI/litellm) is a Python SDK and Proxy Server (LLM Gateway) for calling 100+ LLM APIs in the OpenAI format (Bedrock, Azure, OpenAI, VertexAI, Cohere). It is used by companies like Rocket Money, Adobe, Twilio, and Siemens.

LiteLLM
Founded: 2023
Batch: W23
Team Size: 2
Status: Active
Founders
Krrish Dholakia, Founder
Ishaan Jaffer, Founder