CrateRunner

Ready-to-ship apps

Production-ready, self-hosted applications delivered as containers. Deploy to your infrastructure in minutes with full governance and fleet operations.

Private AI · Edge/On-prem · Self-hosted

OpenClaw (HOT)

Self-hosted gateway for AI agents across your chat apps

  • What it's for: Route messages from WhatsApp/Telegram/Discord/iMessage to AI agents.
  • Why self-hosted: Run on your infrastructure—keep data under your control.
  • Typical deployment: Gateway + Control UI on a server, on-prem, or edge node (Docker optional).
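A minimal sketch of that gateway-plus-Control-UI layout as a Docker Compose file. The image names, ports, and environment variable here are illustrative placeholders, not official OpenClaw artifacts:

```yaml
# Hypothetical docker-compose.yml for an OpenClaw-style deployment.
# All image names and settings below are illustrative, not official.
services:
  gateway:
    image: openclaw/gateway:latest        # placeholder image name
    restart: unless-stopped
    environment:
      - AGENT_BACKEND_URL=http://agent:8000   # hypothetical setting
    ports:
      - "8080:8080"                       # chat-app webhooks land here
  control-ui:
    image: openclaw/control-ui:latest     # placeholder image name
    restart: unless-stopped
    ports:
      - "3000:3000"                       # admin UI
    depends_on:
      - gateway
```

The same two services can run directly on a host without Docker, which is why the card notes Docker is optional.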
Private AI

Open WebUI

Self-hosted ChatGPT-style interface for local LLMs

  • What it's for: ChatGPT-style interface for your team.
  • Why self-hosted: Conversations stay on your infrastructure.
  • Typical users: Engineering teams, knowledge workers, customer support.
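A minimal Compose sketch for this kind of deployment. It follows Open WebUI's published Docker image and default data path, but verify tags and flags against the project's current docs:

```yaml
# Minimal docker-compose.yml sketch for Open WebUI (check current docs).
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    restart: always
    ports:
      - "3000:8080"                        # UI on http://localhost:3000
    volumes:
      - open-webui:/app/backend/data       # persists chats and settings locally
volumes:
  open-webui:
```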
Private AI · Internal Tools

AnythingLLM

All-in-one private AI workspace

  • What it's for: Chat with documents and build RAG pipelines.
  • Why self-hosted: Sensitive documents never leave your network.
  • Typical use cases: Enterprise knowledge bases, internal support bots.
Private AI · Developer Platform

Dify

Open-source LLM app development platform

  • What it's for: Build LLM-powered apps with visual workflows.
  • Why self-hosted: Protect prompts and training data.
  • Typical users: AI product teams building internal tooling and chatbots.
Observability · Developer Platform

Langfuse

Self-hosted LLM observability and analytics

  • What it's for: Trace, debug, and monitor your LLM applications.
  • Why self-hosted: Full telemetry data stays on your servers.
  • Typical users: AI product teams and ML engineers running production LLM apps.
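Once a self-hosted Langfuse instance is running, applications point the Langfuse SDK at it through environment variables instead of the hosted cloud. A sketch (the app service, image, and key values are placeholders; LANGFUSE_HOST and the public/secret key variables are the SDK's standard settings):

```yaml
# Fragment: wiring an app's Langfuse SDK to a self-hosted instance.
# Service name, image, and key values are placeholders.
services:
  my-llm-app:
    image: my-org/my-llm-app:latest          # placeholder image
    environment:
      - LANGFUSE_HOST=http://langfuse:3000   # self-hosted Langfuse, not cloud
      - LANGFUSE_PUBLIC_KEY=pk-lf-...        # from your Langfuse project settings
      - LANGFUSE_SECRET_KEY=sk-lf-...
```

With this in place, all traces and analytics flow to your own server, matching the "telemetry stays on your servers" point above.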

More apps coming soon

We're adding new self-hosted apps and bundles every week.

  • Package your app into a CrateRunner bundle
  • Edge/on-prem friendly by default
  • Governance, upgrades, and fleet ops included
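As a rough illustration of how those three points could come together in a bundle (this manifest format is hypothetical; the actual CrateRunner bundle spec is not shown here):

```yaml
# Hypothetical CrateRunner bundle manifest -- illustrative only.
name: my-app
version: 1.0.0
runtime: docker
services:
  app:
    image: registry.example.com/my-app:1.0.0   # placeholder registry/image
    ports:
      - 8080
governance:
  upgrades: rolling        # fleet-wide rolling upgrades
  telemetry: local-only    # edge/on-prem friendly default
```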
Join Beta