Deployment patterns

This page outlines common ways to run KanbanAI beyond local development: a single-origin Bun server, compiled binaries, and running behind a reverse proxy or process manager.

Single-origin Bun server

For small self-hosted setups, you can run the Bun-based server directly:
  1. Build and start:
    bun install
    bun run prod
    
    This will:
    • Build the client and server.
    • Copy client/dist into server/static (generated output; don’t commit it).
    • Generate an embedded static bundle.
    • Start a single Hono/Bun server that serves:
      • API under /api/v1 (shim at /api).
      • The React app and static assets with SPA fallback.
  2. Configure environment and options:
    • HOST / PORT – listening interface and port (default 127.0.0.1:3000).
    • --no-auto-open – CLI flag; skip opening the browser automatically when the server starts.
    • DATABASE_URL – SQLite file path.
    • KANBANAI_MIGRATIONS_DIR – optional external migrations directory.
    • KANBANAI_STATIC_DIR – optional external client/dist directory.
    • LOG_LEVEL, KANBANAI_DEBUG, DEBUG – logging behavior.
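The options above can be combined in a small launch wrapper. This is a sketch only: the HOST/PORT defaults mirror the documented 127.0.0.1:3000, and the database path is a placeholder.

```shell
#!/usr/bin/env sh
# Sketch of a launch wrapper. HOST/PORT defaults mirror the documented
# 127.0.0.1:3000; the DATABASE_URL value is a placeholder path.
HOST="${HOST:-127.0.0.1}"
PORT="${PORT:-3000}"
DATABASE_URL="${DATABASE_URL:-/var/lib/kanban-ai/kanban.db}"
export HOST PORT DATABASE_URL
echo "KanbanAI will listen on ${HOST}:${PORT} (db: ${DATABASE_URL})"
# exec bun run prod -- --no-auto-open   # uncomment for a real deployment
```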

Compiled binaries + CLI wrapper

If you prefer a single executable:
  1. Build binaries (for development or custom deployment):
    bun run build:binary
    
    This emits:
    • dist/kanban-ai-linux-x64
    • dist/kanban-ai-linux-arm64
    • dist/kanban-ai-darwin-arm64
    • dist/kanban-ai-win-x64.exe
  2. Use the npm CLI wrapper in production:
    npx kanban-ai -- --help
    npx kanban-ai -- --port 3000 --no-auto-open
    
    The wrapper:
    • Resolves the correct binary for your platform (downloading if needed).
    • Caches binaries under ~/.cache/kanban-ai/binary (or KANBANAI_HOME/.cache/kanban-ai/binary).
    • Passes any arguments after -- through to the binary.
  3. Configure environment:
    • Use the same env vars as bun run prod (HOST, PORT, DATABASE_URL, etc.).
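The platform-to-binary mapping can be sketched roughly as below; the uname-based matching is illustrative only and is not the wrapper's actual implementation.

```shell
# Rough sketch of mapping the current platform to one of the dist/ artifacts
# listed above. Illustrative only -- not the wrapper's real resolution logic.
case "$(uname -s)-$(uname -m)" in
  Linux-x86_64)   BIN=kanban-ai-linux-x64 ;;
  Linux-aarch64)  BIN=kanban-ai-linux-arm64 ;;
  Darwin-arm64)   BIN=kanban-ai-darwin-arm64 ;;
  *)              BIN=kanban-ai-win-x64.exe ;;  # Windows (and unmatched platforms)
esac
echo "would run: dist/${BIN}"
```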

Process managers (systemd, supervisord, pm2)

For long-running services, wrap the binary or Bun command in a process manager. Example systemd service:
[Unit]
Description=KanbanAI
After=network.target

[Service]
Type=simple
User=kanban
WorkingDirectory=/opt/kanban-ai
Environment=PORT=3000
Environment=HOST=127.0.0.1
ExecStart=/opt/kanban-ai/dist/kanban-ai-linux-x64
Restart=on-failure

[Install]
WantedBy=multi-user.target
Adjust the paths and environment variables to match your setup.

Graceful shutdown

KanbanAI handles termination signals (SIGTERM, SIGINT) gracefully:
  • When the process receives SIGTERM or SIGINT, it initiates a graceful shutdown.
  • OpenCode servers are closed cleanly before the process exits.
  • The graceful shutdown has a 5-second timeout; if it doesn’t complete in time, the process exits with code 1.
  • No database corruption occurs from sudden termination, as SQLite handles this safely.
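The stop sequence can be exercised from a shell. The snippet below uses a sleep process as a stand-in for the server, purely to show the SIGTERM round trip; the real process additionally closes OpenCode servers before exiting.

```shell
# Stand-in demo of a graceful stop: 'sleep' takes the place of the KanbanAI
# process here, purely to show the signal round trip.
sleep 30 &
PID=$!
kill -TERM "$PID"       # the same signal a process manager sends on stop
wait "$PID"
STATUS=$?
echo "exited with status ${STATUS}"   # 128 + 15 (SIGTERM) = 143 for sleep
```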

Reverse proxy (TLS & domains)

When exposing KanbanAI on the internet:
  • Run the server/binary on an internal port (e.g. 127.0.0.1:3000).
  • Put a reverse proxy in front (Nginx, Caddy, Traefik, etc.) to:
    • Terminate TLS.
    • Handle custom domains and HTTP/2.
    • Optionally enforce HTTP auth or IP allowlists.
  • Ensure WebSocket upgrade is forwarded for:
    • /api/v1/ws
    • /api/v1/ws/dashboard
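As a sketch, an Nginx server block that terminates TLS and forwards WebSocket upgrades might look like the following; the domain, certificate paths, and upstream port are placeholders.

```nginx
server {
    listen 443 ssl http2;
    server_name kanban.example.com;                     # placeholder domain

    ssl_certificate     /etc/ssl/kanban/fullchain.pem;  # placeholder paths
    ssl_certificate_key /etc/ssl/kanban/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        # Required so /api/v1/ws and /api/v1/ws/dashboard can upgrade:
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```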

Backups

  • Include in your backup strategy:
    • The SQLite database file (DATABASE_URL path).
    • Project Git repositories (managed separately from KanbanAI).
    • Optionally ~/.cache/kanban-ai if you want to preserve cached binaries and worktrees.
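For the database file, sqlite3's .backup command produces a consistent copy even while the server is running. The snippet below demonstrates it against a throwaway database; substitute your real DATABASE_URL path in production.

```shell
# Demo of an online SQLite backup via sqlite3's .backup command.
# DB here is a throwaway database; point it at your DATABASE_URL in production.
DB="$(mktemp -d)/kanban.db"
sqlite3 "$DB" "CREATE TABLE demo(x); INSERT INTO demo VALUES (1);"
sqlite3 "$DB" ".backup '${DB}.bak'"    # consistent even under concurrent writes
sqlite3 "${DB}.bak" "SELECT count(*) FROM demo;"
```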

For CLI specifics and environment options, see CLI & binaries. For data layout, see Data & storage.