Deploy AI models locally, privately, and offline in any app using Cactus. Cactus is a blazing-fast inference engine optimized for smartphones and comes with React Native, Flutter, and Kotlin bindings.
Cactus is a cross-platform, open-source framework for running inference on smartphones, wearables, and other low-power devices. It directly supports any LLM or VLM available on Hugging Face.
The recently released Google AI Edge and Apple Foundation Models frameworks are platform-specific and primarily support their makers' own models.
To this end, Cactus:
- Is available in Flutter and React Native for cross-platform developers, since most apps are built with these today.
- Supports any GGUF model you can find on Hugging Face: Qwen, Gemma, Llama, DeepSeek, Phi, Mistral, SmolLM, SmolVLM, InternVLM, Jan Nano, etc. (a usage sketch follows this list).
- Accommodates everything from full FP32 models down to 2-bit quantized ones, for better efficiency and less device strain.
- Offers MCP tool calls that make models genuinely useful in practice (setting reminders, searching the gallery, replying to messages), and more.
- Falls back to big cloud models for complex, constrained, or large-context tasks, ensuring robustness and high availability (see the fallback sketch below).
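To give a feel for the developer experience, here is a minimal sketch of running an on-device completion from the React Native binding. The names (CactusLM, init, completion, release), the option fields, and the model file are illustrative assumptions rather than the verbatim shipped API; check the repo for the real one.

    // Hypothetical sketch only: CactusLM, init, completion, and release
    // are assumed names, not necessarily the shipped API (see the repo).
    import { CactusLM } from 'cactus-react-native';

    async function localChat(): Promise<void> {
      // Point at any GGUF file pulled from Hugging Face; a 4-bit Qwen
      // build is assumed here purely as an example.
      const lm = await CactusLM.init({
        model: '/data/models/Qwen3-0.6B-Q4_K_M.gguf',
        contextSize: 2048, // a small context keeps memory low on phones
      });

      const result = await lm.completion({
        messages: [{ role: 'user', content: 'Write a haiku about cacti.' }],
        maxTokens: 64,
      });
      console.log(result.text);

      await lm.release(); // free the native context when done
    }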
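And here is a sketch of the local-first, cloud-fallback routing mentioned above. Both helper functions are hypothetical placeholders, not Cactus API: one stands in for the on-device call, the other for whatever hosted provider you wire up.

    // Local-first with cloud fallback. Both helpers are placeholders:
    // wire them to the on-device binding and a hosted provider.
    async function localCompletion(prompt: string): Promise<string> {
      // Placeholder: replace with the on-device Cactus call.
      return `local answer to: ${prompt}`;
    }

    async function cloudCompletion(prompt: string): Promise<string> {
      // Placeholder: replace with a hosted model API call.
      return `cloud answer to: ${prompt}`;
    }

    const LOCAL_CONTEXT_TOKENS = 2048; // assumed on-device context budget

    async function complete(prompt: string): Promise<string> {
      // Rough token estimate (~4 chars/token): route oversized prompts
      // straight to the cloud, since the local context can't hold them.
      if (prompt.length / 4 > LOCAL_CONTEXT_TOKENS) {
        return cloudCompletion(prompt);
      }
      try {
        return await localCompletion(prompt); // private, offline path first
      } catch (err) {
        console.warn('Local inference failed, falling back to cloud:', err);
        return cloudCompletion(prompt);
      }
    }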
So far, our customers have built apps using:
- LLMs and embedding models (a hedged embedding sketch follows below)
- Real-time vision inference
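As a taste of the embedding side, here is a sketch of on-device semantic search over a user's notes. The lm.embedding method, its return shape (a plain number array), and the model file are assumptions about the API, not confirmed names.

    // Hypothetical on-device semantic search with embeddings. The
    // CactusLM.init / lm.embedding names are assumed, not confirmed API.
    import { CactusLM } from 'cactus-react-native';

    // Cosine similarity between two embedding vectors.
    function cosine(a: number[], b: number[]): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
    }

    async function rankNotes(query: string, notes: string[]) {
      const lm = await CactusLM.init({
        model: '/data/models/nomic-embed-text-v1.5.Q4_K_M.gguf', // example
      });
      const q = await lm.embedding(query);
      const scored = await Promise.all(
        notes.map(async (note) => ({
          note,
          score: cosine(q, await lm.embedding(note)),
        })),
      );
      return scored.sort((x, y) => y.score - x.score); // best match first
    }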
Tell us how we can make it great!
Repo: https://github.com/cactus-compute/cactus
Discord: https://discord.gg/nPGWGxXSwr
We met through YC co-founder matching in London four years ago!