StackAI

AI Agents for the Enterprise

AI Infrastructure Engineer

SF Office - 171 2nd, 4th floor
Job type
Full-time
Role
Engineering, Full stack
Visa
US citizen/visa only
Bernard Aceituno
Founder

About the Role

We’re hiring an AI Infrastructure Engineer to shape and scale the backend systems that power our AI platform. As an early engineer at a Series A company, your work will be foundational, enabling safe, efficient, and reliable AI workflows from end to end.

What You’ll Do

  • Design and implement scalable backend architectures for AI workloads (inference, orchestration, monitoring).

  • Own distributed job orchestration with Temporal and related systems.

  • Improve data pipeline performance by designing smarter caching strategies (e.g., file deduplication, hot/cold storage, Redis caching layers) to reduce redundant compute and API calls.

  • Build observability, monitoring, retries, and fault tolerance into all workflows.

  • Manage infrastructure reliability, incident response, and performance.

  • Develop tooling and platform infrastructure to support rapid growth.

  • Partner with ML engineers to bring models to production at scale.

What We’re Looking For

  • 4+ years of backend engineering experience (Python is a must).

  • Strong background in distributed systems, job orchestration, and task queues.

  • Deep knowledge of concurrency, parallelism, and multithreading, including async/await, event loops, thread pools, synchronization primitives, deadlocks, and race conditions. You should know how to design systems that maximize throughput without sacrificing correctness or safety.

  • Hands-on experience with Temporal, Redis, Airflow, Celery, RabbitMQ (or similar).

  • Experience with LLM serving and routing fundamentals (rate limiting, streaming, load balancing, budgets).

  • Comfortable with containers & orchestration: Docker, Kubernetes.

  • Familiarity with cloud platforms (AWS/GCP) and IaC (Terraform).

  • Experience with multiple storage systems: S3, Postgres, MongoDB, Redis, and Elasticsearch.

  • Track record scaling systems in startups or fast-paced environments.

  • Understanding of deploying, monitoring, and optimizing AI/ML systems in production with strong CI/CD practices.
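As one illustration of the concurrency skills listed above, here is a hedged sketch of semaphore-bounded async fan-out with retries: the semaphore caps in-flight tasks so throughput scales without overwhelming a downstream service, and each task retries on transient failure. `flaky_call` is a hypothetical stand-in for a real downstream request, not an API we use.

```python
import asyncio

async def flaky_call(i: int, fail_once: set[int]) -> int:
    """Hypothetical downstream call that fails transiently for some inputs."""
    if i in fail_once:
        fail_once.discard(i)
        raise RuntimeError(f"transient error on task {i}")
    return i * 2

async def run_with_limit(n: int, limit: int, retries: int = 2) -> list[int]:
    sem = asyncio.Semaphore(limit)
    fail_once = {3}  # simulate one transient failure

    async def worker(i: int) -> int:
        async with sem:  # at most `limit` tasks run concurrently
            for attempt in range(retries + 1):
                try:
                    return await flaky_call(i, fail_once)
                except RuntimeError:
                    if attempt == retries:
                        raise
                    await asyncio.sleep(0)  # yield before retrying
        raise RuntimeError("unreachable")

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(worker(i) for i in range(n)))

results = asyncio.run(run_with_limit(5, limit=2))
```

The same shape (bounded concurrency plus per-task retry) is the basic building block behind rate limiting and fault-tolerant orchestration.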

Why You’ll Love Working Here

  • Play a foundational role at a fast-growing Series A startup that is shaping the future of AI in enterprise workflows.

  • Collaborate across Product, ML, and Platform teams, serving as the bridge between AI logic and scalable execution.

  • Build infrastructure that enables real value for large enterprises: low-code, secure, and scalable AI workflows.

  • Join a company that’s scaling thoughtfully and values developer experience.

About StackAI

Stack AI is a no-code drag-and-drop tool to quickly design, test, and deploy AI workflows that leverage Large Language Models (LLMs), such as ChatGPT, to automate any business process.

Our core goal is to make it extremely easy to build arbitrarily complex AI pipelines using a visual interface that lets you connect different data sources with different AI models.

Our customers use Stack AI to build applications such as:

  • Chatbots and Assistants: AI agents that interact with users, answer questions, and complete tasks, using your internal data and APIs.
  • Document Processing: apps to answer questions, summarize, and extract insights from any document, no matter how long.
  • Database Q&A: connect GPT-like models to databases (such as Notion, Airtable, or Postgres) and ask questions about them.
  • Content Creation: generate tags, summaries, and transfer styles or formats between documents and data sources.
StackAI
Founded: 2022
Batch: W23
Team Size: 21
Status: Active
Location: San Francisco
Founders
Bernard Aceituno, Founder
Toni Rosinol, Co-Founder & CEO