One command. Private AI assistant. Your hardware.
The one-click installer for OpenClaw on NVIDIA hardware. Turn your DGX Spark, Jetson, or RTX GPU into a private AI assistant you control via WhatsApp and Telegram, with local voice transcription and full privacy.
$ curl -fsSL https://clawspark.dev/install.sh | bash
Everything you need to go from bare hardware to a fully private AI assistant, handled automatically.
Auto-detects your NVIDIA hardware: DGX Spark, Jetson, or RTX GPUs. No manual configuration needed.
Picks the optimal AI model for your specific hardware, maximizing performance without running out of memory.
OpenClaw, inference engine, and all skills deployed in a single command. No Docker headaches, no dependency hell.
Send WhatsApp or Telegram voice notes. Whisper transcribes them locally on your GPU, keeping conversations private.
Built-in chat UI and ClawMetry metrics dashboard. Track tokens, costs, sessions, and agent activity in real time.
Optional Tailscale integration. Access your AI from your phone in another country, securely over your Tailnet.
Your AI runs on your hardware and stays locked down. Every layer is hardened by default.
Gateway binds to 127.0.0.1. Your AI is not accessible from the network.
Random 256-bit token generated at install. Every API call requires it.
Dangerous tools (exec, write, browser, process) blocked by default. Bot cannot run commands on your machine.
SOUL.md and TOOLS.md are read-only. The AI cannot modify its own guardrails.
UFW configured automatically. Deny incoming, allow outgoing, SSH only.
Optional complete network isolation via `clawspark airgap on`. Zero outbound traffic.
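As a rough illustration of the token step above: a random 256-bit API token is just 32 bytes of cryptographic randomness, which Python's standard `secrets` module produces directly. This is a hedged sketch of the idea, not clawspark's actual implementation.

```python
import secrets

def generate_install_token() -> str:
    """Generate a random 256-bit API token as 64 hex characters.

    Illustrative sketch; clawspark's installer may generate its
    token differently.
    """
    return secrets.token_hex(32)  # 32 bytes == 256 bits

token = generate_install_token()
print(len(token))  # 64 hex characters
```

Any client calling the gateway would then pass this token with every request; a request without it is rejected.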
A clean stack from messaging layer to GPU. Everything runs locally on a single machine.
```
        WhatsApp / Telegram / Web UI
                     |
         OpenClaw Gateway (port 18789)
          |           |           |
        Agent        Node        Host
        (LLM)       (Tools)    Baileys
          |           |       (WhatsApp)
        Ollama    read, web_fetch,
     (port 11434)    message
          |
    Qwen 3.5 35B
     (GB10 GPU)
```
clawspark automatically detects your GPU and selects the best configuration.
| Hardware | VRAM | Recommended Model | Performance |
|---|---|---|---|
| NVIDIA DGX Spark | 128 GB | Qwen 3.5 35B-A3B (MoE) / Qwen 3.5 122B | ~59 tok/s |
| Jetson AGX Orin | 64 GB | Nemotron 3 Nano 30B | ~25 tok/s |
| RTX 5090 | 32 GB | Qwen 3.5 35B-A3B (Q4) | ~40 tok/s |
| RTX 4090 | 24 GB | GLM 4.7 Flash | ~35 tok/s |
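The table above amounts to a mapping from detected hardware and VRAM to a recommended model. A hypothetical sketch of that selection logic, using only the rows shown (clawspark's real selector may weigh more factors, such as quantization and engine choice):

```python
def pick_model(gpu_name: str, vram_gb: int) -> str:
    """Map detected hardware to a recommended model, per the table above.

    Illustrative only; thresholds and names mirror the published table.
    """
    if "DGX Spark" in gpu_name or vram_gb >= 128:
        return "Qwen 3.5 35B-A3B (MoE)"
    if "Orin" in gpu_name or vram_gb >= 64:
        return "Nemotron 3 Nano 30B"
    if vram_gb >= 32:
        return "Qwen 3.5 35B-A3B (Q4)"
    if vram_gb >= 24:
        return "GLM 4.7 Flash"
    raise RuntimeError("No supported configuration for this GPU")

print(pick_model("NVIDIA GeForce RTX 4090", 24))  # GLM 4.7 Flash
```

The thresholds are deliberately conservative: the goal stated above is maximum performance without exhausting VRAM, so each tier picks the largest model known to fit.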
From bare hardware to a working private AI assistant in minutes.
Scans your system for NVIDIA GPUs and identifies the exact hardware model and available VRAM.
Selects the best AI model, quantization level, and inference engine for your specific hardware.
Pulls the model weights and required containers. Progress bars keep you informed.
Deploys OpenClaw with the inference engine, Whisper for voice, and all configured skills.
Locks down the installation: localhost-only access, firewall rules, and optional air gap mode.
Guides you through linking WhatsApp or Telegram so you can start messaging your AI assistant.
Your private AI assistant is live. Talk to it via voice or text, entirely on your own hardware.
Skills, tools, and CLI commands that ship with every clawspark installation.
One command is all it takes. Run it on any supported NVIDIA machine.
$ curl -fsSL https://clawspark.dev/install.sh | bash
Requires an NVIDIA GPU with CUDA drivers installed. Works on Ubuntu 22.04+ and JetPack 6+.