Data Plane Separation
Today we separated GhostPort's control and data planes into independent WireGuard tunnels running on dedicated infrastructure. This eliminates a single point of failure and unlocks the full bandwidth of the relay.
The Problem
GhostPort originally ran everything through a single WireGuard tunnel (wg0) on one EC2 instance. Fleet management API, Claude-to-Claude bridge messages, device check-ins, and all user internet traffic shared the same pipe. If that instance went down, every GhostPort device lost internet access and fleet communication simultaneously.
The Architecture
We now run two independent tunnels to two separate EC2 instances:
wg0 — Control Plane (EC2, Virginia)
Fleet API, bridge messaging, device registration, command queue, Stripe webhooks. Lightweight traffic — API calls and heartbeats only.
wg1 — Data Plane (EC2 ARM64, Virginia)
Internet relay for DoubleHop and Z-HOP privacy modes. All user browsing, streaming, and gaming traffic exits here. Dedicated Unbound DNS resolver on the tunnel interface.
The Pi routes traffic intelligently: default internet goes through wg1, fleet control stays on wg0, and Tailscale management runs on its own interface. A control plane outage no longer kills user internet — and a data plane restart doesn't disrupt fleet operations.
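Concretely, the Pi-side split looks roughly like the sketch below. This is illustrative, not GhostPort's actual config: keys, subnets, interface names, and the routing-table number are all placeholders.

```bash
# Pi-side split, minimal sketch. Keys, subnets, interface names, and
# table numbers are placeholders, not GhostPort's real values.

# wg0 -- control plane: only the fleet-control subnet rides this peer.
wg set wg0 peer <CONTROL_PUBKEY> endpoint <CONTROL_EC2_IP>:51820 \
    allowed-ips 10.100.0.0/24 persistent-keepalive 25
ip route add 10.100.0.0/24 dev wg0     # fleet API + bridge stay on wg0

# wg1 -- data plane: accepts any destination, but we route it by hand
# so it can't clobber wg0 or the WAN link.
wg set wg1 peer <DATA_PUBKEY> endpoint <DATA_EC2_IP>:51820 \
    allowed-ips 0.0.0.0/0 persistent-keepalive 25

# LAN clients default out through wg1 via a dedicated routing table;
# the Pi itself keeps its normal default route for handshakes.
ip route add default dev wg1 table 200
ip rule add iif eth1 lookup 200        # eth1 = the Pi's LAN side
```

Keeping the wg1 default route in its own table means the Pi's main table, including the WAN route that carries the encrypted WireGuard packets themselves, never gets displaced.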
Performance Results
Raw throughput on the data plane measured at 3,330 Mbps down / 3,132 Mbps up from Ashburn, VA with 2.7ms latency to the test server. That's essentially unlimited headroom — the bottleneck will always be the user's ISP, never our relay.
Tunnel jitter measured at 0.023ms, which makes competitive gaming viable through the VPN. For context, most gamers consider anything under 1ms jitter acceptable.
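For reproducibility, here's a sketch of how figures like these can be gathered, assuming an iperf3 server listening on the relay's tunnel address (10.200.0.1 is a placeholder, and iperf3 itself is an assumption, not necessarily the tooling used here):

```bash
# 10.200.0.1 stands in for the relay's wg1 address; assumes iperf3 -s
# is running there.
iperf3 -c 10.200.0.1 -P 4              # upload throughput, 4 streams
iperf3 -c 10.200.0.1 -P 4 -R           # download (reverse direction)
ping -c 100 10.200.0.1                 # latency; mdev approximates jitter
iperf3 -c 10.200.0.1 -u -b 50M         # UDP mode reports jitter directly
```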
What We Fixed Along the Way
- MTU optimization — Data plane WireGuard was defaulting to 8921 (the EC2 instance's 9001-byte jumbo-frame MTU minus the 80 bytes wg-quick reserves for WireGuard overhead). Set to 1420 to match the Pi's ISP link, resulting in a 3x download speed improvement by eliminating fragmentation (see the sketches after this list).
- Persistent keepalive — Set to 25 seconds on all peers across both planes. Prevents NAT timeouts on residential connections where the Pi sits behind an ISP gateway.
- Endpoint pinning — Pi-side routing fix ensures host routes to both EC2 endpoints are pinned via the WAN gateway before the tunnels come up. Prevents AllowedIPs = 0.0.0.0/0 from hijacking the default route and killing the tunnel's own connectivity.
- DNS isolation — Unbound on the data plane only accepts queries from the WireGuard subnet ([internal subnet]). Zero DNS leak confirmed.
- QUIC blocking — UDP 443 blocked in the forward chain to prevent browsers from bypassing the DNS resolver via QUIC.
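In sketch form, the first three fixes reduce to a handful of standard wg and ip commands. Every name and address below is a placeholder:

```bash
# On the data-plane relay: clamp the wg interface to 1420 so packets
# fit the Pi's ISP link instead of inheriting EC2's jumbo-frame MTU.
ip link set dev wg1 mtu 1420

# On every peer, both planes: a 25s keepalive keeps the ISP gateway's
# NAT mapping warm on residential connections.
wg set wg1 peer <PEER_PUBKEY> persistent-keepalive 25

# On the Pi, before the tunnels come up: pin host routes to both EC2
# endpoints via the real WAN gateway, so AllowedIPs = 0.0.0.0/0 can
# never swallow the tunnel's own encrypted packets.
ip route add <CONTROL_EC2_IP>/32 via <WAN_GATEWAY> dev eth0
ip route add <DATA_EC2_IP>/32 via <WAN_GATEWAY> dev eth0
```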
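The DNS and QUIC lockdown is similarly small. A sketch for the data-plane relay, with a stand-in for the redacted subnet:

```bash
# 10.200.0.0/24 stands in for the redacted WireGuard subnet;
# 10.200.0.1 for the relay's tunnel address.
cat >/etc/unbound/unbound.conf.d/wg-only.conf <<'EOF'
server:
    interface: 10.200.0.1
    access-control: 0.0.0.0/0 refuse       # deny the world...
    access-control: 10.200.0.0/24 allow    # ...allow only the tunnel
EOF
systemctl restart unbound

# Drop QUIC in the forward chain so browsers fall back to TCP 443 and
# keep resolving through Unbound instead of carrying DNS inside QUIC.
iptables -A FORWARD -i wg1 -p udp --dport 443 -j DROP
```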
Security Hardening
- SSH locked to Tailscale — Data plane SSH only listens on the Tailscale interface and VPC private IP. No public SSH exposure.
- Minimal attack surface — Only the WireGuard port is open to the internet on the data plane. No web server, no API, no management ports.
- Separate blast radius — Compromising the data plane gives no access to fleet credentials, bridge secrets, or device management.
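The SSH lockdown is plain ListenAddress pinning; a sketch with hypothetical addresses:

```bash
# Placeholders: 100.64.0.10 for the Tailscale IP, 172.31.0.10 for the
# VPC private IP. Assumes an OpenSSH with an Include'd sshd_config.d
# directory (standard on recent distros).
cat >/etc/ssh/sshd_config.d/private-only.conf <<'EOF'
ListenAddress 100.64.0.10
ListenAddress 172.31.0.10
EOF
systemctl restart sshd    # service may be named "ssh" on Debian/Ubuntu
```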
What's Next
The remaining tunnel latency is down to physics: the distance between the Pi and the exit node. A West Coast exit node would cut that to 20–25ms for West Coast users. That's the next infrastructure upgrade on the roadmap.
WiFi WAN also shipped today — GhostPort can now connect to upstream WiFi instead of requiring an Ethernet backhaul. Combined with dual-tunnel, GhostPort is now a fully portable privacy device.