Build remote deployment system using Headscale (self-hosted Tailscale)

Agent-Logs-Url: https://github.com/naturallaw777/staging_alpha/sessions/7fa16927-250f-4af4-bb11-e22ef7b2c997

Co-authored-by: naturallaw777 <99053422+naturallaw777@users.noreply.github.com>
Committed by copilot-swe-agent[bot] on 2026-04-11 23:33:35 +00:00 via GitHub. Parent 9ec8618f7d, commit 8f97aa416f.
7 changed files with 849 additions and 0 deletions

.gitignore (vendored)

@@ -6,3 +6,5 @@ role-state.nix
__pycache__/
*.pyc
*.pyo
iso/secrets/enroll-token
iso/secrets/provisioner-url

@@ -0,0 +1,378 @@
# Remote Deployment via Headscale (Self-Hosted Tailscale)
This guide covers the Sovran Systems remote deployment system built on [Headscale](https://headscale.net) — a self-hosted, open-source implementation of the Tailscale coordination server. Freshly booted ISOs automatically join a private WireGuard mesh VPN without any per-machine key pre-generation.
---
## Architecture Overview
```
┌─────────────────────────────────────────────────────────┐
│ Internet │
└────────────┬─────────────────────┬──────────────────────┘
│ │
▼ ▼
┌────────────────────┐ ┌─────────────────────────────────┐
│ Admin Workstation │ │ Sovran VPS │
│ │ │ ┌─────────────────────────────┐ │
│ tailscale up │ │ │ Headscale (port 8080) │ │
│ --login-server │◄──┼─►│ Coordination server │ │
│ hs.example.com │ │ ├─────────────────────────────┤ │
│ │ │ │ Provisioning API (9090) │ │
└────────────────────┘ │ │ POST /register │ │
│ │ GET /machines │ │
│ │ GET /health │ │
│ ├─────────────────────────────┤ │
│ │ Caddy (80/443) │ │
│ │ hs.example.com → :8080 │ │
│ │ prov.example.com → :9090 │ │
│ └─────────────────────────────┘ │
└─────────────────────────────────┘
│ WireGuard mesh (Tailnet)
┌─────────────────────────────────┐
│ Deploy Target Machine │
│ │
│ Boot live ISO → │
│ sovran-auto-provision → │
│ POST /register → │
│ tailscale up --authkey=... │
└─────────────────────────────────┘
```
**Components:**
- **`sovran-provisioner.nix`** — NixOS module deployed on a separate VPS; runs Headscale + provisioning API + Caddy.
- **Live ISO** (`iso/common.nix`) — Auto-registers with the provisioning server and joins the Tailnet on boot.
- **`remote-deploy.nix`** — Post-install NixOS module that uses Tailscale/Headscale for ongoing access (plus the existing reverse SSH tunnel as a fallback).
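The register/join handshake the ISO performs against the provisioning API can be sketched in Python. The endpoint path, `Authorization` header, and response fields (`headscale_key`, `login_server`) are taken from this guide; the helper functions themselves are illustrative, not shipped code.

```python
import json
import urllib.request


def build_register_request(prov_url: str, token: str, hostname: str, mac: str):
    """Build the POST /register request the live ISO sends on boot."""
    body = json.dumps({"hostname": hostname, "mac": mac}).encode()
    return urllib.request.Request(
        f"{prov_url}/register",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def parse_register_response(raw: str):
    """Extract the pre-auth key and login server, mirroring the ISO's jq checks."""
    data = json.loads(raw)
    key = data.get("headscale_key")
    server = data.get("login_server")
    if not key or key == "null":
        raise ValueError(f"No Headscale key in response: {raw}")
    return key, server
```

From here the ISO simply runs `tailscale up --login-server=<server> --authkey=<key>`.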
---
## Part 1: VPS Setup — Deploy `sovran-provisioner.nix`
### Prerequisites
- A NixOS VPS (any provider) with a public IP
- Two DNS A records pointing to your VPS:
- `hs.yourdomain.com` → VPS IP (Headscale coordination server)
- `prov.yourdomain.com` → VPS IP (Provisioning API)
- Ports 80, 443 (TCP) and 3478 (UDP, STUN/DERP) open in your VPS firewall
### DNS Records
| Type | Name | Value |
|------|-----------------------|------------|
| A | `hs.yourdomain.com` | `<VPS IP>` |
| A | `prov.yourdomain.com` | `<VPS IP>` |
### NixOS Configuration
Add the following to your VPS's `/etc/nixos/configuration.nix`:
```nix
{ config, lib, pkgs, ... }:
{
imports = [
./hardware-configuration.nix
/path/to/sovran-provisioner.nix # or fetch from the repo
];
sovranProvisioner = {
enable = true;
domain = "prov.yourdomain.com";
headscaleDomain = "hs.yourdomain.com";
# Optional: set a static token instead of auto-generating one
# enrollToken = "your-secret-token-here";
# Optional: customise defaults
headscaleUser = "sovran-deploy"; # namespace for deploy machines
adminUser = "admin"; # namespace for your workstation
keyExpiry = "1h"; # pre-auth keys expire after 1 hour
rateLimitMax = 10; # max registrations per window
rateLimitWindow = 60; # window in seconds
};
# Required for Caddy ACME (Let's Encrypt)
networking.hostName = "sovran-vps";
system.stateVersion = "24.11";
}
```
### Deploy
```bash
nixos-rebuild switch
```
Caddy will automatically obtain TLS certificates via Let's Encrypt.
### Retrieve the Enrollment Token
```bash
cat /var/lib/sovran-provisioner/enroll-token
```
Keep this token secret — it is used to authenticate ISO registrations. If you set `enrollToken` statically in `configuration.nix`, that value is used directly (but avoid committing secrets to version control).
---
## Part 2: Admin Workstation Setup
Join your Tailnet as an admin so you can reach deployed machines:
### Install Tailscale
Follow the [Tailscale installation guide](https://tailscale.com/download) for your OS, or on NixOS:
```nix
services.tailscale.enable = true;
```
### Join the Tailnet
```bash
sudo tailscale up --login-server https://hs.yourdomain.com
```
Tailscale prints a URL. Open it and copy the node key (starts with `mkey:`).
### Approve the Node in Headscale
On the VPS:
```bash
headscale nodes register --user admin --key mkey:xxxxxxxxxxxxxxxx
```
Your workstation is now on the Tailnet. You can list nodes:
```bash
headscale nodes list
```
---
## Part 3: Building the Deploy ISO
### Add Secrets (gitignored)
The secrets directory `iso/secrets/` is gitignored. Populate it before building:
```bash
# Copy the enrollment token from the VPS
ssh root@<VPS> cat /var/lib/sovran-provisioner/enroll-token > iso/secrets/enroll-token
# Set the provisioner URL
echo "https://prov.yourdomain.com" > iso/secrets/provisioner-url
```
These files are baked into the ISO at build time. If the files are absent the ISO still builds — the auto-provision service exits cleanly with "No enroll token found, skipping auto-provision", leaving DIY users unaffected.
### Build the ISO
```bash
nix build .#nixosConfigurations.sovran_systemsos-iso.config.system.build.isoImage
```
The resulting ISO is in `./result/iso/`.
---
## Part 4: Deployment Workflow
### Step-by-Step
1. **Hand the ISO to the remote person** — they burn it to a USB drive and boot.
2. **ISO boots and auto-registers.** `sovran-auto-provision.service` runs automatically:
- Reads `enroll-token` and `provisioner-url` from `/etc/sovran/`
- `POST https://prov.yourdomain.com/register` with hostname + MAC
- Receives a Headscale pre-auth key
- Runs `tailscale up --login-server=... --authkey=...`
- The machine appears in `headscale nodes list` within ~30 seconds
3. **Approve the node (if not using auto-approve)** — on the VPS:
```bash
headscale nodes list
# Note the node key for the new machine
```
4. **SSH from your workstation** — once the machine is on the Tailnet:
```bash
# Get the machine's Tailscale IP
headscale nodes list | grep sovran-deploy-
# SSH in
ssh root@100.64.x.x # password: sovran-remote (live ISO default)
```
5. **Run the headless installer**:
```bash
# Basic install (relay tunnel)
sudo sovran-install-headless.sh \
--disk /dev/sda \
--role server \
--deploy-key "ssh-ed25519 AAAA..." \
--relay-host relay.yourdomain.com
# With Tailscale for post-install access
sudo sovran-install-headless.sh \
--disk /dev/sda \
--role server \
--deploy-key "ssh-ed25519 AAAA..." \
--headscale-server "https://hs.yourdomain.com" \
--headscale-key "$(headscale preauthkeys create --user sovran-deploy --expiration 2h --output json | jq -r '.key')"
```
6. **Machine reboots into Sovran_SystemsOS** — `deploy-tailscale-connect.service` runs:
- Reads `/var/lib/secrets/headscale-authkey`
- Joins the Tailnet with a deterministic hostname (`sovran-<hostname>`)
- The reverse SSH tunnel also activates if `relayHost` was set
7. **Post-install SSH and RDP**:
```bash
# SSH over Tailnet
ssh root@<tailscale-ip>
# RDP over Tailnet (if desktop role)
xfreerdp /v:<tailscale-ip> /u:free /p:free
```
8. **Disable deploy mode** — edit `/etc/nixos/custom.nix` on the target, set `enable = false`, then:
```bash
sudo nixos-rebuild switch
```
---
## Part 5: Post-Install Access
### SSH
```bash
# Over Tailnet
ssh root@100.64.x.x
# Over reverse tunnel (if configured)
ssh -p 2222 root@relay.yourdomain.com
```
### RDP (desktop/server roles)
```bash
# Over Tailnet
xfreerdp /v:100.64.x.x /u:free /p:free /dynamic-resolution
```
---
## Security Model
| Concern | Mitigation |
|---------|-----------|
| Enrollment token theft | Token only triggers key generation; it does not grant access to the machine itself |
| Rogue device joins Tailnet | Visible in `headscale nodes list`; removable instantly with `headscale nodes delete` |
| Pre-auth key reuse | Keys are ephemeral and expire in 1 hour (configurable via `keyExpiry`) |
| Rate limiting | Provisioning API limits to 10 registrations/minute by default (configurable) |
| SSH access | Requires ed25519 key injected at install time; password authentication disabled |
| Credential storage | Auth key written to `/var/lib/secrets/headscale-authkey` (mode 600) on the installed OS |
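The rate limiting row refers to the fixed-window counter in `sovran-provisioner.nix`'s API. Simplified here with an injectable clock so it can be exercised without sleeping (the real service reads the limits from `RATE_LIMIT_MAX` / `RATE_LIMIT_WINDOW`):

```python
import threading


class FixedWindowLimiter:
    """Allow at most `max_requests` per `window` seconds, pooled across all clients."""

    def __init__(self, max_requests: int = 10, window: int = 60):
        self.max_requests = max_requests
        self.window = window
        self._lock = threading.Lock()
        self._count = 0
        self._window_start = 0.0

    def allow(self, now: float) -> bool:
        with self._lock:
            # Start a fresh window once the old one has fully elapsed
            if now - self._window_start > self.window:
                self._count = 0
                self._window_start = now
            self._count += 1
            return self._count <= self.max_requests
```

Note this is a single shared bucket, not per-IP: a burst from one source exhausts the window for everyone, which is acceptable here because registrations are rare.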
### Token Rotation
To rotate the enrollment token:
1. On the VPS:
```bash
openssl rand -hex 32 > /var/lib/sovran-provisioner/enroll-token
chmod 600 /var/lib/sovran-provisioner/enroll-token
```
2. Update `iso/secrets/enroll-token` and rebuild the ISO.
Old ISOs carrying the previous token will fail to register (the API returns 401 Unauthorized).
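The 401 for stale ISOs falls out of the API's constant-time token comparison. Generating a fresh token (the equivalent of `openssl rand -hex 32`) and checking a presented one can be sketched as:

```python
import secrets


def new_enroll_token() -> str:
    """32 random bytes, hex-encoded: same shape as `openssl rand -hex 32`."""
    return secrets.token_hex(32)


def token_valid(presented: str, expected: str) -> bool:
    """Constant-time check, as the provisioning API does.
    An empty/missing expected token fails closed."""
    if not expected:
        return False
    return secrets.compare_digest(presented, expected)
```

After rotation, `token_valid(old, new)` is False, so registrations with the old token get the 401.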
---
## Monitoring
### List Active Tailnet Nodes
```bash
# On the VPS
headscale nodes list
```
### List Registered Machines (Provisioning API)
```bash
curl -s -H "Authorization: Bearer $(cat /var/lib/sovran-provisioner/enroll-token)" \
https://prov.yourdomain.com/machines | jq .
```
### Health Check
```bash
curl https://prov.yourdomain.com/health
# {"status": "ok"}
```
### Provisioner Logs
```bash
journalctl -u sovran-provisioner -f
```
### Headscale Logs
```bash
journalctl -u headscale -f
```
---
## Cleanup
### Remove a Machine from the Tailnet
```bash
headscale nodes list
headscale nodes delete --identifier <id>
```
### Disable Deploy Mode on an Installed Machine
Edit `/etc/nixos/custom.nix`:
```nix
sovran_systemsOS.deploy.enable = false;
```
Then rebuild:
```bash
nixos-rebuild switch
```
This stops the reverse tunnel and Tailscale connect services.
### Revoke All Active Pre-Auth Keys
```bash
headscale preauthkeys list --user sovran-deploy
headscale preauthkeys expire --user sovran-deploy --key <key>
```
---
## Reference
| Component | Port | Protocol | Description |
|-----------|------|----------|-------------|
| Caddy | 80 | TCP | HTTP → HTTPS redirect |
| Caddy | 443 | TCP | HTTPS (Let's Encrypt) |
| Headscale | 8080 | TCP | Coordination server (proxied by Caddy) |
| Provisioner | 9090 | TCP | Registration API (proxied by Caddy) |
| DERP/STUN | 3478 | UDP | WireGuard relay fallback |
| Tailscale | N/A | WireGuard | Mesh VPN between nodes |


@@ -63,6 +63,9 @@ in
git
curl
openssh
tailscale
jq
xxd
];
# Remote install support — SSH on the live ISO
@@ -88,6 +91,88 @@ in
environment.etc."sovran/flake".source = sovranSource;
environment.etc."sovran/installer.py".source = ./installer.py;
# These files are gitignored — set at build time by placing them in iso/secrets/
environment.etc."sovran/enroll-token" = lib.mkIf (builtins.pathExists ./secrets/enroll-token) {
text = builtins.readFile ./secrets/enroll-token;
mode = "0600";
};
environment.etc."sovran/provisioner-url" = lib.mkIf (builtins.pathExists ./secrets/provisioner-url) {
text = builtins.readFile ./secrets/provisioner-url;
mode = "0644";
};
# Tailscale client for mesh VPN
services.tailscale.enable = true;
# Auto-provision service — registers with provisioning server and joins Tailnet
systemd.services.sovran-auto-provision = {
description = "Auto-register with Sovran provisioning server and join Tailnet";
after = [ "network-online.target" "tailscaled.service" ];
wants = [ "network-online.target" "tailscaled.service" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
path = [ pkgs.tailscale pkgs.curl pkgs.jq pkgs.coreutils pkgs.iproute2 pkgs.xxd ];
script = ''
TOKEN_FILE="/etc/sovran/enroll-token"
URL_FILE="/etc/sovran/provisioner-url"
[ -f "$TOKEN_FILE" ] || { echo "No enroll token found, skipping auto-provision"; exit 0; }
[ -f "$URL_FILE" ] || { echo "No provisioner URL found, skipping auto-provision"; exit 0; }
TOKEN=$(cat "$TOKEN_FILE")
PROV_URL=$(cat "$URL_FILE")
[ -n "$TOKEN" ] || exit 0
[ -n "$PROV_URL" ] || exit 0
# Wait for network + tailscaled
sleep 10
# Collect machine info
HOSTNAME="sovran-deploy-$(head -c 8 /dev/urandom | xxd -p)"
MAC=$(ip link show | grep ether | head -1 | awk '{print $2}' || echo "unknown")
echo "Registering with provisioning server at $PROV_URL..."
# Retry up to 6 times (covers slow DHCP)
RESPONSE=""
for i in $(seq 1 6); do
RESPONSE=$(curl -sf --max-time 15 -X POST \
"$PROV_URL/register" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d "{\"hostname\": \"$HOSTNAME\", \"mac\": \"$MAC\"}" 2>/dev/null) && break
echo "Attempt $i failed, retrying in 10s..."
sleep 10
done
if [ -z "$RESPONSE" ]; then
echo "ERROR: Failed to register with provisioning server after 6 attempts"
exit 1
fi
HS_KEY=$(echo "$RESPONSE" | jq -r '.headscale_key')
LOGIN_SERVER=$(echo "$RESPONSE" | jq -r '.login_server')
if [ -z "$HS_KEY" ] || [ "$HS_KEY" = "null" ]; then
echo "ERROR: No Headscale key in response: $RESPONSE"
exit 1
fi
echo "Joining Tailnet via $LOGIN_SERVER as $HOSTNAME..."
tailscale up \
--login-server="$LOGIN_SERVER" \
--authkey="$HS_KEY" \
--hostname="$HOSTNAME"
TAILSCALE_IP=$(tailscale ip -4)
echo "Successfully joined Tailnet as $HOSTNAME ($TAILSCALE_IP)"
'';
};
environment.etc."xdg/autostart/sovran-installer.desktop".text = ''
[Desktop Entry]
Type=Application

iso/secrets/.gitkeep (new empty file)


@@ -14,6 +14,8 @@ Options:
--relay-user USER Relay username (default: deploy)
--relay-port PORT Relay SSH port (default: 22)
--tunnel-port PORT Reverse tunnel port on relay (default: 2222)
--headscale-server URL Headscale login server for post-install Tailnet
--headscale-key KEY Headscale pre-auth key for the installed OS
USAGE
}
@@ -28,6 +30,8 @@ RELAY_HOST=""
RELAY_USER="deploy"
RELAY_PORT="22"
TUNNEL_PORT="2222"
HEADSCALE_SERVER=""
HEADSCALE_KEY=""
FLAKE="/etc/sovran/flake"
LOG="/tmp/sovran-headless-install.log"
@@ -58,6 +62,8 @@ while [[ $# -gt 0 ]]; do
--relay-user) RELAY_USER="$2"; shift 2 ;;
--relay-port) RELAY_PORT="$2"; shift 2 ;;
--tunnel-port) TUNNEL_PORT="$2"; shift 2 ;;
--headscale-server) HEADSCALE_SERVER="$2"; shift 2 ;;
--headscale-key) HEADSCALE_KEY="$2"; shift 2 ;;
-h|--help)
usage
exit 0
@@ -225,6 +231,7 @@ if [[ -n "$DEPLOY_KEY" ]]; then
relayUser = "${RELAY_USER}";
relayPort = ${RELAY_PORT};
reverseTunnelPort = ${TUNNEL_PORT};
$([ -n "${HEADSCALE_SERVER}" ] && echo " headscaleServer = \"${HEADSCALE_SERVER}\";")
};
}
EOF
@@ -232,6 +239,14 @@ else
cp /mnt/etc/nixos/custom.template.nix /mnt/etc/nixos/custom.nix
fi
# ── Write Headscale auth key if provided ─────────────────────────────────────
if [[ -n "$HEADSCALE_KEY" ]]; then
mkdir -p /mnt/var/lib/secrets
echo "$HEADSCALE_KEY" > /mnt/var/lib/secrets/headscale-authkey
chmod 600 /mnt/var/lib/secrets/headscale-authkey
log "Headscale auth key written to /mnt/var/lib/secrets/headscale-authkey"
fi
# ── Step 11: Copy configs to host for flake evaluation ───────────────────────
log "=== Copying config files to host /etc/nixos for flake evaluation ==="
mkdir -p /etc/nixos
@@ -252,3 +267,5 @@ log "You can now reboot into Sovran_SystemsOS."
log "After reboot, the machine will be accessible via SSH on port 22 (if --deploy-key was provided)."
[[ -n "$RELAY_HOST" ]] && \
log "Reverse tunnel will connect to ${RELAY_USER}@${RELAY_HOST}:${RELAY_PORT} — forward port ${TUNNEL_PORT} maps to the machine's SSH."
[[ -n "$HEADSCALE_SERVER" ]] && \
log "Tailscale will connect to Headscale at ${HEADSCALE_SERVER} on first boot."


@@ -37,6 +37,18 @@ in
default = "";
description = "Deployer's SSH public key for root access";
};
headscaleServer = lib.mkOption {
type = lib.types.str;
default = "";
description = "Headscale login server URL (e.g. https://hs.sovransystems.com). If set, Tailscale is used for post-install connectivity.";
};
headscaleAuthKeyFile = lib.mkOption {
type = lib.types.str;
default = "/var/lib/secrets/headscale-authkey";
description = "Path to file containing the Headscale pre-auth key for post-install enrollment";
};
};
config = lib.mkIf cfg.enable {
@@ -69,6 +81,45 @@ in
ignoreIP = [ "127.0.0.0/8" ];
};
# ── Tailscale / Headscale VPN (only when headscaleServer is configured) ──
services.tailscale = lib.mkIf (cfg.headscaleServer != "") {
enable = true;
};
environment.systemPackages = lib.mkIf (cfg.headscaleServer != "") [ pkgs.tailscale ];
systemd.services.deploy-tailscale-connect = lib.mkIf (cfg.headscaleServer != "") {
description = "Connect to Headscale Tailnet for post-install remote access";
wantedBy = [ "multi-user.target" ];
after = [ "network-online.target" "tailscaled.service" ];
wants = [ "network-online.target" "tailscaled.service" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
script = ''
AUTH_KEY_FILE="${cfg.headscaleAuthKeyFile}"
if [ ! -f "$AUTH_KEY_FILE" ]; then
echo "Headscale auth key file not found: $AUTH_KEY_FILE; skipping Tailscale enrollment"
exit 0
fi
AUTH_KEY=$(cat "$AUTH_KEY_FILE")
[ -n "$AUTH_KEY" ] || { echo "Auth key file is empty, skipping"; exit 0; }
HOSTNAME_SUFFIX=$(hostname | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9-]/-/g; s/-\{2,\}/-/g; s/^-//; s/-$//')
HOSTNAME="sovran-$HOSTNAME_SUFFIX"
echo "Joining Tailnet via ${cfg.headscaleServer} as $HOSTNAME..."
${pkgs.tailscale}/bin/tailscale up \
--login-server="${cfg.headscaleServer}" \
--authkey="$AUTH_KEY" \
--hostname="$HOSTNAME"
echo "Tailscale IP: $(${pkgs.tailscale}/bin/tailscale ip -4 2>/dev/null || echo 'pending')"
'';
path = [ pkgs.tailscale pkgs.coreutils ];
};
# ── Reverse tunnel service (only when relayHost is configured) ───────────
systemd.services.deploy-reverse-tunnel = lib.mkIf (cfg.relayHost != "") {
description = "Deploy reverse SSH tunnel to ${cfg.relayHost}";

sovran-provisioner.nix (new file)

@@ -0,0 +1,316 @@
{ config, lib, pkgs, ... }:
let
cfg = config.sovranProvisioner;
# ── Python provisioning API ──────────────────────────────────────────────────
provisionerScript = pkgs.writeTextFile {
name = "sovran-provisioner-app.py";
text = ''
#!/usr/bin/env python3
"""Sovran Systems Machine Provisioning Server"""
import subprocess, secrets, json, time, os, fcntl, threading
from datetime import datetime, timezone
from flask import Flask, request, jsonify
app = Flask(__name__)
STATE_DIR = os.environ.get("SOVRAN_STATE_DIR", "/var/lib/sovran-provisioner")
TOKEN_FILE = os.environ.get("SOVRAN_ENROLL_TOKEN_FILE", f"{STATE_DIR}/enroll-token")
HEADSCALE_USER = os.environ.get("HEADSCALE_USER", "sovran-deploy")
KEY_EXPIRY = os.environ.get("KEY_EXPIRY", "1h")
HEADSCALE_DOMAIN = os.environ.get("HEADSCALE_DOMAIN", "localhost")
RATE_LIMIT_MAX = int(os.environ.get("RATE_LIMIT_MAX", "10"))
RATE_LIMIT_WINDOW = int(os.environ.get("RATE_LIMIT_WINDOW", "60"))
_rate_lock = threading.Lock()
rate_state = {"count": 0, "window_start": time.time()}
def get_enroll_token():
try:
with open(TOKEN_FILE, "r") as f:
return f.read().strip()
except FileNotFoundError:
return ""
def check_rate_limit():
now = time.time()
with _rate_lock:
if now - rate_state["window_start"] > RATE_LIMIT_WINDOW:
rate_state["count"] = 0
rate_state["window_start"] = now
rate_state["count"] += 1
return rate_state["count"] <= RATE_LIMIT_MAX
def validate_token(req):
token = req.headers.get("Authorization", "").replace("Bearer ", "")
expected = get_enroll_token()
if not expected:
return False
return secrets.compare_digest(token, expected)
def create_headscale_key():
result = subprocess.run(
["headscale", "preauthkeys", "create",
"--user", HEADSCALE_USER,
"--expiration", KEY_EXPIRY,
"--ephemeral",
"--output", "json"],
capture_output=True, text=True
)
if result.returncode != 0:
raise RuntimeError(f"headscale error: {result.stderr}")
data = json.loads(result.stdout)
return data.get("key", data.get("preAuthKey", {}).get("key", ""))
_reg_lock = threading.Lock()
def load_registrations():
path = f"{STATE_DIR}/registrations.json"
try:
with open(path, "r") as f:
return json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
return []
def save_registration(entry):
path = f"{STATE_DIR}/registrations.json"
with _reg_lock:
regs = load_registrations()
regs.append(entry)
# Keep last 1000 entries
regs = regs[-1000:]
with open(path, "w") as f:
fcntl.flock(f, fcntl.LOCK_EX)
json.dump(regs, f, indent=2)
@app.route("/health", methods=["GET"])
def health():
return jsonify({"status": "ok"})
@app.route("/register", methods=["POST"])
def register():
if not check_rate_limit():
return jsonify({"error": "rate limited"}), 429
if not validate_token(request):
return jsonify({"error": "unauthorized"}), 401
body = request.get_json(silent=True) or {}
try:
key = create_headscale_key()
except RuntimeError as e:
app.logger.error(f"Headscale key creation failed: {e}")
return jsonify({"error": "internal server error"}), 500
entry = {
"hostname": body.get("hostname", "unknown"),
"mac": body.get("mac", "unknown"),
"ip": request.remote_addr,
"registered_at": datetime.now(timezone.utc).isoformat(),
"key_prefix": key[:12] + "..." if key else "none",
}
save_registration(entry)
app.logger.info(f"Machine registered: {entry}")
return jsonify({
"headscale_key": key,
"login_server": f"https://{HEADSCALE_DOMAIN}",
})
@app.route("/machines", methods=["GET"])
def list_machines():
if not validate_token(request):
return jsonify({"error": "unauthorized"}), 401
return jsonify(load_registrations())
if __name__ == "__main__":
app.run(host="127.0.0.1", port=9090)
'';
};
provisionerPython = pkgs.python3.withPackages (ps: [ ps.flask ]);
provisionerApp = pkgs.writeShellScriptBin "sovran-provisioner" ''
exec ${provisionerPython}/bin/python3 ${provisionerScript}
'';
in
{
options.sovranProvisioner = {
enable = lib.mkEnableOption "Sovran Systems provisioning server";
domain = lib.mkOption {
type = lib.types.str;
description = "Domain for the provisioning API (e.g. prov.sovransystems.com)";
};
headscaleDomain = lib.mkOption {
type = lib.types.str;
description = "Domain for the Headscale coordination server (e.g. hs.sovransystems.com)";
};
enrollToken = lib.mkOption {
type = lib.types.str;
default = "";
description = "Static enrollment token. If empty, one is auto-generated on first boot.";
};
headscaleUser = lib.mkOption {
type = lib.types.str;
default = "sovran-deploy";
description = "Headscale user/namespace for deployed machines";
};
adminUser = lib.mkOption {
type = lib.types.str;
default = "admin";
description = "Headscale user/namespace for admin workstations";
};
keyExpiry = lib.mkOption {
type = lib.types.str;
default = "1h";
description = "How long each auto-generated Headscale pre-auth key lives";
};
rateLimitMax = lib.mkOption {
type = lib.types.int;
default = 10;
description = "Max registrations per rate-limit window";
};
rateLimitWindow = lib.mkOption {
type = lib.types.int;
default = 60;
description = "Rate-limit window in seconds";
};
stateDir = lib.mkOption {
type = lib.types.str;
default = "/var/lib/sovran-provisioner";
description = "Directory for provisioner state (enrollment token, logs)";
};
};
config = lib.mkIf cfg.enable {
# ── Headscale ────────────────────────────────────────────────────────────
services.headscale = {
enable = true;
address = "127.0.0.1";
port = 8080;
settings = {
server_url = "https://${cfg.headscaleDomain}";
db_type = "sqlite3";
db_path = "/var/lib/headscale/db.sqlite";
prefixes = {
v4 = "100.64.0.0/10";
v6 = "fd7a:115c:a1e0::/48";
};
derp = {
server = {
enabled = true;
region_id = 999;
stun_listen_addr = "0.0.0.0:3478";
};
urls = [];
auto_update_enabled = false;
};
dns = {
magic_dns = true;
base_domain = "sovran.tail";
nameservers.global = [ "1.1.1.1" "9.9.9.9" ];
};
};
};
# ── Caddy reverse proxy ───────────────────────────────────────────────────
services.caddy = {
enable = true;
virtualHosts = {
"${cfg.headscaleDomain}" = {
extraConfig = "reverse_proxy localhost:8080";
};
"${cfg.domain}" = {
extraConfig = "reverse_proxy localhost:9090";
};
};
};
# ── Provisioner init service (generate token + create headscale users) ────
systemd.services.sovran-provisioner-init = {
description = "Initialize Sovran provisioner state";
wantedBy = [ "multi-user.target" ];
before = [ "sovran-provisioner.service" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
script = ''
mkdir -p ${cfg.stateDir}
# Generate enrollment token if not exists and not set statically
TOKEN_FILE="${cfg.stateDir}/enroll-token"
${if cfg.enrollToken != "" then ''
echo "${cfg.enrollToken}" > "$TOKEN_FILE"
'' else ''
if [ ! -f "$TOKEN_FILE" ]; then
${pkgs.openssl}/bin/openssl rand -hex 32 > "$TOKEN_FILE"
chmod 600 "$TOKEN_FILE"
echo "Generated new enrollment token: $(cat $TOKEN_FILE)"
fi
''}
# Ensure headscale users exist
${pkgs.headscale}/bin/headscale users create ${cfg.headscaleUser} 2>/dev/null || true
${pkgs.headscale}/bin/headscale users create ${cfg.adminUser} 2>/dev/null || true
# Initialize registrations log
[ -f "${cfg.stateDir}/registrations.json" ] || echo "[]" > "${cfg.stateDir}/registrations.json"
'';
path = [ pkgs.headscale pkgs.openssl pkgs.coreutils ];
};
# ── Provisioning API service ──────────────────────────────────────────────
systemd.services.sovran-provisioner = {
description = "Sovran Systems Provisioning API";
wantedBy = [ "multi-user.target" ];
after = [ "network-online.target" "headscale.service" "sovran-provisioner-init.service" ];
wants = [ "network-online.target" ];
environment = {
SOVRAN_ENROLL_TOKEN_FILE = "${cfg.stateDir}/enroll-token";
SOVRAN_STATE_DIR = cfg.stateDir;
HEADSCALE_USER = cfg.headscaleUser;
KEY_EXPIRY = cfg.keyExpiry;
HEADSCALE_DOMAIN = cfg.headscaleDomain;
RATE_LIMIT_MAX = toString cfg.rateLimitMax;
RATE_LIMIT_WINDOW = toString cfg.rateLimitWindow;
};
serviceConfig = {
ExecStart = "${provisionerApp}/bin/sovran-provisioner";
User = "sovran-provisioner";
Group = "sovran-provisioner";
StateDirectory = "sovran-provisioner";
Restart = "always";
RestartSec = "5s";
# Give access to headscale CLI
SupplementaryGroups = [ "headscale" ];
};
path = [ pkgs.headscale ];
};
# ── System user for provisioner ───────────────────────────────────────────
users.users.sovran-provisioner = {
isSystemUser = true;
group = "sovran-provisioner";
home = cfg.stateDir;
};
users.groups.sovran-provisioner = {};
# ── Firewall ──────────────────────────────────────────────────────────────
networking.firewall.allowedTCPPorts = [ 80 443 ];
networking.firewall.allowedUDPPorts = [ 3478 ]; # STUN for DERP
};
}