
Install ClawMux — macOS Split

The hub runs locally on your Mac. TTS (text-to-speech) and STT (speech-to-text) run on a remote GPU server (Linux with an NVIDIA GPU).

Requirements: macOS 12+, Python 3.10+, Claude Code, and a remote Linux server with an NVIDIA GPU reachable over the network

1. System Check (Mac)

# Python
python3 --version  # Must be 3.10+

# tmux
which tmux || brew install tmux

# Claude Code
claude --version  # Must be installed and authenticated
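The three checks above can be rolled into a single preflight function; a sketch (the function name and messages are illustrative):

```shell
# preflight — report whether this Mac meets the requirements above
preflight() {
  local status=0 cmd

  # Python 3.10+
  if python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 10) else 1)' 2>/dev/null; then
    echo "ok: $(python3 --version)"
  else
    echo "missing: Python 3.10+"
    status=1
  fi

  # tmux and Claude Code on PATH
  for cmd in tmux claude; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "ok: $cmd"
    else
      echo "missing: $cmd"
      status=1
    fi
  done
  return "$status"
}
```

preflight returns nonzero if anything is missing, so it can gate the rest of an install script: preflight || exit 1.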

2. Clone Repository (Mac)

if [ ! -d "$HOME/GIT/clawmux" ]; then
    mkdir -p "$HOME/GIT"
    git clone https://github.com/zeulewan/clawmux.git "$HOME/GIT/clawmux"
fi
cd "$HOME/GIT/clawmux"

3. Python Environment (Mac)

cd "$HOME/GIT/clawmux"
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

4. GPU Server Setup

SSH into your GPU server and set up TTS/STT:

# On the GPU server — clone the repo and install services:
git clone https://github.com/zeulewan/clawmux.git ~/GIT/clawmux
cd ~/GIT/clawmux

# Install Whisper and Kokoro services
bash services/whisper/install.sh
bash services/kokoro/install.sh

# Start Whisper STT (port 2022)
bash services/whisper/start.sh

# Start Kokoro TTS (port 8880)
bash services/kokoro/start.sh
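Model loading can take a while on first start. A small polling helper (a sketch; wait_for and the default timeout are illustrative) avoids moving on before the services actually answer:

```shell
# wait_for URL [TIMEOUT_SECONDS] — poll until the URL responds or time runs out
wait_for() {
  local url=$1 timeout=${2:-120} elapsed=0
  until curl -sf -o /dev/null --max-time 5 "$url"; do
    sleep 2
    elapsed=$((elapsed + 2))
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out waiting for $url" >&2
      return 1
    fi
  done
  echo "up: $url"
}

# On the GPU server, after starting both services:
# wait_for http://127.0.0.1:2022/v1/models   # Whisper STT
# wait_for http://127.0.0.1:8880/v1/models   # Kokoro TTS
```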

Make the services accessible from your Mac. Options:

  • Tailscale (recommended): put both machines on the same tailnet. Either reach the services directly at the server's Tailscale IP (they must listen on that interface, not only 127.0.0.1), or publish them with tailscale serve:
    # On GPU server — expose via Tailscale serve
    sudo tailscale serve --bg --https=8881 http://127.0.0.1:8880
    sudo tailscale serve --bg --https=2023 http://127.0.0.1:2022
    
  • SSH tunnel: Forward ports from Mac to GPU server.
    # On Mac — tunnel TTS and STT
    ssh -NL 8880:127.0.0.1:8880 -L 2022:127.0.0.1:2022 user@gpu-server &
    

5. Configure Remote TTS/STT

Once the hub is running (step 9), point it at the remote services:

# Replace GPU_SERVER with the Tailscale IP or hostname of your GPU server
GPU_SERVER="100.x.x.x"  # Or hostname.ts.net

curl -X PUT http://localhost:3460/api/settings \
  -H "Content-Type: application/json" \
  -d "{\"tts_url\": \"http://${GPU_SERVER}:8880\", \"stt_url\": \"http://${GPU_SERVER}:2022\"}"

If using SSH tunnels, the URLs are http://127.0.0.1:8880 and http://127.0.0.1:2022. If you published the services with tailscale serve as in step 4, use HTTPS URLs on the server's tailnet hostname with the ports chosen there (8881 for TTS, 2023 for STT).
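Hand-escaping JSON inside a double-quoted shell string is fragile. Since Python 3 is already required, the body can be built with python3 instead; a sketch:

```shell
GPU_SERVER="100.x.x.x"  # Or hostname.ts.net

# Build the JSON body with python3 so shell quoting can't mangle it
payload=$(python3 -c 'import json, sys
host = sys.argv[1]
print(json.dumps({"tts_url": f"http://{host}:8880",
                  "stt_url": f"http://{host}:2022"}))' "$GPU_SERVER")

curl -s -X PUT http://localhost:3460/api/settings \
  -H "Content-Type: application/json" \
  -d "$payload" \
  || echo "hub not reachable on localhost:3460 (start it first; see step 9)"
```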

6. Register MCP Server

INSTALL_DIR="$HOME/GIT/clawmux"
claude mcp add -s user clawmux -- "$INSTALL_DIR/.venv/bin/python" "$INSTALL_DIR/server/mcp_server.py"
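To confirm the registration took, list the configured MCP servers:

```shell
# List registered MCP servers and confirm clawmux is among them
claude mcp list 2>/dev/null | grep -i clawmux \
  || echo "clawmux not registered yet"
```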

7. Install Slash Commands

mkdir -p ~/.claude/commands
cp "$HOME/GIT/clawmux/.claude/commands/clawmux.md" ~/.claude/commands/clawmux.md

8. Install CLI

# Review the script first
head -20 "$HOME/GIT/clawmux/clawmux"

# Option A: User-local install (no sudo needed)
mkdir -p ~/.local/bin
cp "$HOME/GIT/clawmux/clawmux" ~/.local/bin/clawmux
chmod +x ~/.local/bin/clawmux
# Ensure ~/.local/bin is on PATH (add the export line to ~/.zshrc if it is not)
case ":$PATH:" in
  *":$HOME/.local/bin:"*) ;;
  *) echo 'add to ~/.zshrc: export PATH="$HOME/.local/bin:$PATH"' ;;
esac

# Option B: System-wide install
sudo cp "$HOME/GIT/clawmux/clawmux" /usr/local/bin/clawmux
sudo chmod +x /usr/local/bin/clawmux

9. Start the Hub

cd "$HOME/GIT/clawmux"
./start-hub.sh

Or in tmux:

tmux new-session -d -s clawmux "cd $HOME/GIT/clawmux && ./start-hub.sh"
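To check that the session is alive (the session name clawmux comes from the command above):

```shell
# Check the hub's tmux session and show how to attach
if tmux has-session -t clawmux 2>/dev/null; then
  echo "hub session running; attach with: tmux attach -t clawmux"
else
  echo "no clawmux session found"
fi
```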

10. Tailscale HTTPS (Optional)

For remote access from phone/tablet:

sudo tailscale serve --bg --https=3460 http://127.0.0.1:3460

11. Verify

# Hub running
curl -s http://localhost:3460/api/sessions | python3 -c "import sys,json; print(len(json.load(sys.stdin)), 'sessions')"

# TTS working (via configured URL)
curl -s http://${GPU_SERVER}:8880/v1/models | head -c 100

# STT working (via configured URL)
curl -s http://${GPU_SERVER}:2022/v1/models | head -c 100

# Browser UI
echo "Open http://localhost:3460 in your browser"
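The individual checks above can be wrapped in one helper (check is an illustrative name; GPU_SERVER as in step 5):

```shell
GPU_SERVER="100.x.x.x"  # Or hostname.ts.net

# check NAME URL — report whether an endpoint answers within 5 seconds
check() {
  local name=$1 url=$2
  if curl -sf -o /dev/null --max-time 5 "$url"; then
    echo "ok:   $name ($url)"
  else
    echo "FAIL: $name ($url)"
  fi
}

check hub "http://localhost:3460/api/sessions"
check tts "http://${GPU_SERVER}:8880/v1/models"
check stt "http://${GPU_SERVER}:2022/v1/models"
```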

Troubleshooting

Problem                      Fix
TTS/STT connection refused   On the GPU server, confirm the services answer locally:
                             curl -s http://127.0.0.1:2022/v1/models and
                             curl -s http://127.0.0.1:8880/v1/models
Tailscale not connecting     Ensure both machines are on the same tailnet: tailscale status
SSH tunnel dies              Use autossh for a persistent tunnel (brew install autossh),
                             with the same -L forwards as the ssh command in step 4
High latency                 A direct Tailscale connection is faster than a relayed one;
                             check with tailscale ping gpu-server
MCP tools not found          Wait ~10 seconds after starting Claude Code, then retry