What This Stack Does
This stack builds a fully self-hosted AI agent infrastructure that keeps all your data on your own hardware. OpenClaw serves as the agent gateway — connecting your preferred LLM to messaging apps, system tools, and automation skills. Supermemory adds the persistent memory layer that OpenClaw lacks natively, ensuring your agent remembers your preferences, projects, and patterns across sessions. Lume provides macOS and Linux VMs for safely sandboxing agent operations, and Beszel monitors the entire stack with minimal overhead.
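As one concrete piece of this wiring, Beszel is typically deployed as a hub plus a lightweight agent on each monitored machine. A minimal Docker Compose sketch follows; the image names, port, and environment variables track Beszel's published examples, but treat them as assumptions and verify against the current Beszel docs before deploying.

```yaml
# Monitoring sketch: Beszel hub + one agent on the same host.
# Image names and ports follow Beszel's published examples; verify before use.
services:
  beszel:
    image: henrygd/beszel
    ports:
      - "8090:8090"            # hub web UI
    volumes:
      - ./beszel_data:/beszel_data
  beszel-agent:
    image: henrygd/beszel-agent
    network_mode: host
    volumes:
      # read-only Docker socket lets the agent report container metrics
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      LISTEN: "45876"          # agent port (example value)
      KEY: "<hub public key>"  # paste from the hub UI when adding the system
```

The OpenClaw and Supermemory services would sit alongside these in the same Compose file, while Lume runs on the host itself since it manages VMs rather than containers.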
The Bottom Line
The workflow starts with OpenClaw receiving tasks through WhatsApp, Telegram, or Discord. When the agent needs to execute code or system commands, it does so inside a Lume VM rather than on your host machine, protecting your system from accidental destructive operations. Supermemory's MCP server gives the agent persistent context about your projects and preferences, making every interaction more relevant. Beszel tracks CPU, memory, and Docker container metrics across all components and alerts you when the stack's resource usage spikes unexpectedly. Together, these four tools create a private, monitored, sandboxed AI assistant that runs entirely on your infrastructure.
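The routing decision at the heart of this workflow can be sketched in a few lines of Python. Everything here is illustrative, not part of any of the four projects' actual APIs: the action names, the `AgentTask` type, and the `"lume-vm"` target are hypothetical placeholders for the real dispatch logic inside the gateway.

```python
from dataclasses import dataclass

# Hypothetical action names for things that touch the filesystem or shell.
# In a real deployment this set would come from the gateway's tool registry.
RISKY_ACTIONS = {"run_shell", "write_file", "install_package"}


@dataclass
class AgentTask:
    channel: str   # e.g. "telegram", "whatsapp", "discord"
    action: str    # what the agent wants to do
    payload: str   # command or message body


def execution_target(task: AgentTask) -> str:
    """Route risky actions into the sandbox VM; keep pure chat on the host."""
    return "lume-vm" if task.action in RISKY_ACTIONS else "host"


if __name__ == "__main__":
    chat = AgentTask("telegram", "reply", "summarize my notes")
    shell = AgentTask("discord", "run_shell", "rm -rf ./build")
    print(execution_target(chat))   # host
    print(execution_target(shell))  # lume-vm
```

The design choice this illustrates is default-deny: anything that can mutate state goes to the VM, so a forgotten entry in the risky set is the only way a destructive command reaches the host, and that set stays small and auditable.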