Tailscale Internals

Interactive architecture map of Tailscale — the zero-config mesh VPN built on WireGuard. It covers the coordination server, DERP relays, NAT traversal, MagicDNS, ACL policies, Taildrop, subnet routers, exit nodes, Funnel/Serve, SSH, key management, and Headscale.

Founded 2019 · WireGuard protocol · Go / TypeScript · Open-source client (BSD-3) · Mesh VPN (peer-to-peer)
01

System Overview

Tailscale creates a flat, encrypted mesh network (a "tailnet") connecting all your devices. Unlike traditional VPNs that route all traffic through a central gateway, Tailscale establishes direct peer-to-peer WireGuard tunnels between devices. The coordination server handles authentication and key exchange, but never touches your data traffic.

P2P: direct mesh tunnels
100 ms: typical setup time
100.x.y.z: CGNAT address space
~20: global DERP regions
02

WireGuard Protocol

WireGuard is the cryptographic foundation of Tailscale. It's a modern VPN protocol with only ~4,000 lines of kernel code (vs. OpenVPN's ~100K). Tailscale wraps WireGuard in a userspace implementation (wireguard-go) and manages all the key distribution and endpoint discovery that WireGuard itself doesn't handle.

Noise Protocol Framework

WireGuard uses the Noise_IKpsk2 handshake pattern — combining Curve25519 key exchange, ChaCha20-Poly1305 encryption, and BLAKE2s hashing. The handshake completes in a single round trip (1-RTT), establishing a session in milliseconds.

Data Plane

Cryptokey Routing

Each WireGuard peer has a public key and a list of allowed IP ranges. This "cryptokey routing table" maps IP destinations to peer public keys, ensuring packets are always encrypted for the correct recipient. Tailscale auto-configures these tables.

Data Plane
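The mapping above can be sketched as a longest-prefix lookup. This is a hypothetical Python model, not Tailscale's implementation; the peer keys and CIDRs are illustrative:

```python
import ipaddress

# Hypothetical cryptokey routing table: allowed-IP prefix -> peer public key.
ROUTES = {
    "100.64.1.5/32": "pubkey-laptop",
    "100.64.2.7/32": "pubkey-server",
    "192.168.1.0/24": "pubkey-subnet-router",
}

def peer_for(dst):
    """Longest-prefix match: encrypt for the most specific matching peer."""
    addr = ipaddress.ip_address(dst)
    best = None
    for cidr, key in ROUTES.items():
        net = ipaddress.ip_network(cidr)
        if addr in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, key)
    return best[1] if best else None  # no matching peer -> packet is dropped
```

A destination with no matching entry simply has no peer to encrypt for, which is how cryptokey routing doubles as an enforcement point.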

wireguard-go

Tailscale uses a userspace WireGuard implementation written in Go instead of the kernel module. This allows cross-platform operation (macOS, Windows, iOS, Android) and lets Tailscale control tunnel management, NAT traversal, and key rotation from a single process.

Data Plane

Zero Overhead Design

WireGuard adds minimal overhead: 32-byte header per packet, no session negotiation state for silent peers, and near-native throughput. Idle peers consume zero bandwidth — no keepalive traffic unless explicitly configured. The protocol is "silent" by default.

Data Plane
Why Userspace?

Running WireGuard in userspace (wireguard-go) rather than as a kernel module costs some throughput (~2-3 Gbps vs. kernel's ~10+ Gbps) but gains enormous flexibility. Tailscale can intercept packets for MagicDNS, manage NAT traversal, hot-swap endpoints, and rotate keys — all without kernel module loading or root privileges on some platforms.

03

Coordination Server (Control Plane)

The coordination server is Tailscale's central brain. It authenticates devices, distributes public keys, pushes network maps, and enforces ACL policies. Critically, it never sees your actual traffic — the data plane is entirely peer-to-peer.

Control plane message flow: device client (tailscaled) → auth via IdP login (SSO/OIDC) → coordination server → key exchange → network map with peer endpoints → direct P2P WireGuard tunnel.

Network Map (NetMap)

The coordination server sends each client a "network map" — a JSON/protobuf document listing every peer in the tailnet, their public keys, advertised IPs, DERP home regions, and capabilities. The client uses this to configure its local WireGuard interface.

Control Plane
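A toy model of that translation, with illustrative field names (the real MapResponse schema is considerably richer, with endpoints, capabilities, expiry, and more):

```python
# Simplified, hypothetical network map; field names are illustrative.
netmap = {
    "self": {"name": "mylaptop", "addresses": ["100.64.1.5/32"]},
    "peers": [
        {"name": "server1", "node_key": "nodekey:aaaa",
         "addresses": ["100.64.2.7/32"], "derp_home": 10},
        {"name": "phone", "node_key": "nodekey:bbbb",
         "addresses": ["100.64.3.9/32"], "derp_home": 4},
    ],
}

def to_wireguard_peers(nm):
    """Derive the local WireGuard peer config: public key + allowed IPs."""
    return [
        {"public_key": p["node_key"], "allowed_ips": p["addresses"]}
        for p in nm["peers"]
    ]
```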

Long-Poll Updates

Clients maintain a persistent HTTPS connection to the coordination server via long-polling (MapRequest). When the network topology changes — a device joins, leaves, or changes IP — an incremental map update is pushed to all affected peers within seconds.

Control Plane
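Applying an incremental update amounts to merging a delta into a peer set keyed by node key. A simplified sketch; the delta shape here is assumed for illustration, not the wire format:

```python
def apply_map_delta(peers, delta):
    """Merge an incremental map update into the peer set (keyed by node key)."""
    merged = dict(peers)
    for p in delta.get("added", []) + delta.get("changed", []):
        merged[p["node_key"]] = p          # upsert new or modified peers
    for key in delta.get("removed", []):
        merged.pop(key, None)              # drop peers that left the tailnet
    return merged
```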

Identity Provider Integration

Authentication delegates to existing identity providers: Google Workspace, Microsoft Entra ID, Okta, GitHub, or any OIDC provider. Tailscale never stores passwords — it trusts the IdP's assertion. Device identity is then bound to a node key.

Auth

Tailnet

A "tailnet" is the private mesh network belonging to one organization or personal account. All devices in a tailnet share the 100.x.y.z CGNAT address space. Each device gets a stable Tailscale IP that doesn't change even if its physical network changes.

Control Plane
Control Plane Separation

This is the key architectural insight: the coordination server handles who can talk to whom (authentication, authorization, key distribution), but never touches the data. Even if Tailscale's servers were compromised, an attacker would only get public keys and endpoint metadata — they could not decrypt traffic, because the WireGuard private keys never leave the device.

04

NAT Traversal

The hardest problem in peer-to-peer networking is getting through NATs and firewalls. Tailscale implements a sophisticated NAT traversal stack combining STUN, ICE-like candidate discovery, birthday paradox UDP hole punching, and hard NAT probing — achieving direct connections ~94% of the time.

NAT traversal decision tree: discover NAT type (STUN probing) → exchange candidates via the coordination server → UDP hole punch (both peers punch) → direct path, or DERP fallback.

STUN (Session Traversal)

Tailscale runs STUN servers in every DERP region. When a client starts, it sends STUN binding requests to discover its public IP and port mapping. This reveals the NAT type (endpoint-independent, address-dependent, or port-dependent) and whether UDP is available at all.

NAT Traversal
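The classification step can be approximated by comparing the public port mappings that different STUN servers observe. A simplified sketch; real detection also distinguishes address-dependent from port-dependent mapping:

```python
def classify_nat(mappings):
    """Guess the NAT's mapping behavior from (stun_server, public_port) pairs.
    Simplified: real probing distinguishes more behaviors per RFC 4787."""
    if not mappings:
        return "udp-blocked"               # no STUN response: UDP unusable
    ports = {port for _, port in mappings}
    if len(ports) == 1:
        return "endpoint-independent"      # "easy" NAT: one stable mapping
    return "endpoint-dependent"            # "hard" NAT: mapping varies per peer
```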

UDP Hole Punching

Both peers simultaneously send UDP packets to each other's discovered endpoints. For endpoint-independent NATs, this works immediately. For harder NATs, Tailscale uses the "birthday paradox" technique — probing many port candidates to find one that both NATs will accept.

NAT Traversal
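The birthday-paradox arithmetic is easy to sketch: if one side opens n random source ports and the other fires m random guesses into a ~65,535-port space, the chance of at least one hit is 1 - (1 - n/65535)^m. With 256 ports on each side that comes to roughly 0.63 per round, in line with the figures in Tailscale's NAT traversal write-up:

```python
def hit_probability(open_ports, probes, space=65535):
    """Chance that at least one probe lands on one of the NAT's open
    mappings, assuming independent, uniformly random port choices."""
    return 1.0 - (1.0 - open_ports / space) ** probes
```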

Hard NAT Detection

Some NATs assign random ports per destination ("hard NATs"). Tailscale detects this via multiple STUN servers and applies aggressive probing techniques. For double-hard-NAT (both peers behind hard NATs), DERP relay is the fallback, but this is rare (~6% of connections).

NAT Traversal

Endpoint Discovery

Each device discovers all its potential endpoints: local LAN addresses, public STUN-mapped addresses, and DERP relay addresses. These candidates are reported to the coordination server and shared with peers via the network map, enabling multi-path connectivity.

NAT Traversal
94% Direct Connection Rate

Tailscale reports that ~94% of peer connections establish direct UDP paths without relaying. This is achieved through aggressive NAT traversal techniques and the observation that most real-world NATs are endpoint-independent (easy NATs). When direct paths fail, DERP ensures 100% connectivity.

05

DERP Relay Servers

DERP (Designated Encrypted Relay for Packets) is Tailscale's fallback relay system. When direct peer-to-peer connections fail due to restrictive NATs or firewalls blocking UDP, DERP relays traffic over HTTPS — which works everywhere, even on corporate networks that only allow port 443.

HTTPS-Based Relay

DERP uses a custom protocol tunneled over HTTP/HTTPS connections. This ensures it works through the most restrictive firewalls and web proxies. Packets remain WireGuard-encrypted end-to-end; DERP only sees opaque encrypted blobs.

DERP Relay

Global Relay Network

Tailscale operates ~20 DERP regions worldwide (US, EU, Asia, Oceania, South America). Each device has a "home DERP" region (the closest one). Peers always know how to reach each other via their home DERP, providing a guaranteed fallback path.

DERP Relay
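Home-region selection reduces to picking the region with the lowest measured round-trip latency; a minimal sketch with made-up region IDs:

```python
def pick_home_derp(region_latency_ms):
    """Choose the DERP region with the lowest measured latency (ms)."""
    return min(region_latency_ms, key=region_latency_ms.get)

# e.g. pick_home_derp({1: 45.0, 10: 12.5, 17: 80.2}) picks region 10
```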

Encrypted Relay

DERP cannot read your traffic. Packets are WireGuard-encrypted before reaching DERP, and only the destination peer has the private key to decrypt them. DERP is a dumb packet forwarder — it routes based on node public keys, not IP addresses.

DERP Relay

Custom DERP Servers

Organizations can run their own DERP relay servers for lower latency or data sovereignty. The DERP server code is open-source (part of the Tailscale client repo). Custom DERP servers are configured via the admin console and distributed to clients via the network map.

DERP Relay
DERP as Connectivity Guarantee

DERP is not just a fallback — it's the initial communication path. When two peers first connect, they immediately relay through DERP while NAT traversal happens in the background. This means connections start working in milliseconds, then seamlessly upgrade to direct paths once hole punching succeeds.
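The resulting path-selection policy can be sketched as: use DERP until a direct candidate works, then prefer the fastest direct path. The candidate shape below is assumed for illustration:

```python
def select_path(candidates):
    """Prefer the fastest working direct path; otherwise relay via DERP.
    Each candidate is a dict with 'kind', 'working', and 'latency_ms'."""
    direct = [c for c in candidates if c["kind"] == "direct" and c["working"]]
    if direct:
        return min(direct, key=lambda c: c["latency_ms"])
    # DERP is always reachable, so connectivity never fully fails
    return next(c for c in candidates if c["kind"] == "derp")
```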

06

MagicDNS

MagicDNS automatically assigns DNS names to every device in your tailnet. Instead of remembering 100.x.y.z addresses, you reach machines by name: laptop.tail1234.ts.net. The DNS resolver runs locally in the Tailscale client, intercepting DNS queries for tailnet names.

Local DNS Resolver

The Tailscale client (tailscaled) runs a local DNS proxy. It intercepts queries for *.ts.net domains and resolves them to Tailscale IPs from the network map. Non-tailnet queries are forwarded to configured upstream resolvers (or the default OS resolver).

Service
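The split between tailnet names and everything else can be modeled in a few lines; a hypothetical sketch, not tailscaled's actual resolver:

```python
def resolve(name, tailnet_hosts, upstream):
    """Answer *.ts.net names from the network map; forward the rest."""
    if name.endswith(".ts.net"):
        return tailnet_hosts.get(name)   # authoritative for the tailnet
    return upstream(name)                # e.g. the OS default resolver
```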

Automatic Naming

Devices are named using their OS hostname, sanitized and uniquified: mylaptop.tail1234.ts.net. The tailnet domain (tail1234.ts.net) is unique per account. Short names (just mylaptop) also work within the same tailnet.

Service
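Name sanitization and uniquification might look roughly like this (an approximation of the observed behavior, not Tailscale's actual algorithm):

```python
import re

def magicdns_name(hostname, taken):
    """Lowercase, replace invalid DNS-label characters with '-', then
    uniquify with a numeric suffix if the name is already taken."""
    base = re.sub(r"[^a-z0-9-]+", "-", hostname.lower()).strip("-")
    name, n = base, 1
    while name in taken:
        n += 1
        name = f"{base}-{n}"
    return name
```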

Split DNS

Split DNS routes specific domains to designated nameservers. For example, *.corp.example.com queries can be sent to the corporate DNS server (reachable through a subnet router), while everything else uses public DNS. Configured in the admin console.

Service

HTTPS Certificates

Tailscale can provision Let's Encrypt certificates for your *.ts.net hostnames via the tailscale cert command. This enables HTTPS on internal services without managing a CA — the coordination server handles the ACME DNS-01 challenge automatically.

Service
07

ACL Policies

Tailscale's access control system is a centralized, declarative JSON policy (HuJSON with comments and trailing commas). ACLs define which users and devices can reach which services. They are enforced at the network layer — packets that violate ACLs are never delivered.

HuJSON Policy File

ACLs are written in HuJSON (human JSON) — JSON with comments and trailing commas. Policies define ACL rules, groups, tag owners, auto-approvers for routes, and SSH policies. The file is version-controlled in the admin console with a test/preview workflow.

Auth
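A minimal HuJSON reader illustrates the format: strip comments, drop trailing commas, then hand off to a normal JSON parser. This sketch handles only // comments and misses edge cases (e.g. trailing-comma-like sequences inside strings); real parsers such as Tailscale's hujson package do much more:

```python
import json
import re

def parse_hujson(text):
    """Minimal HuJSON reader: drop // comments and trailing commas,
    then parse as plain JSON. Sketch only, not a complete parser."""
    out, in_str, i = [], False, 0
    while i < len(text):
        c = text[i]
        if in_str:
            out.append(c)
            if c == "\\" and i + 1 < len(text):   # keep escapes verbatim
                out.append(text[i + 1])
                i += 1
            elif c == '"':
                in_str = False
        elif c == '"':
            in_str = True
            out.append(c)
        elif text[i:i + 2] == "//":               # comment runs to end of line
            while i < len(text) and text[i] != "\n":
                i += 1
            continue
        else:
            out.append(c)
        i += 1
    cleaned = re.sub(r",(\s*[}\]])", r"\1", "".join(out))
    return json.loads(cleaned)
```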

Tags (Device Roles)

Tags (tag:server, tag:prod) label devices by role rather than owner. Tagged devices are owned by the tag, not a user — enabling server-to-server policies independent of who provisioned the machine. Tag owners are defined in ACLs.

Auth

Groups and Autogroups

Groups bundle users (group:engineering) for cleaner policies. Autogroups are built-in: autogroup:member (all human users), autogroup:admin (admins), autogroup:owner (device owners). ACL rules reference these groups in source/destination fields.

Auth

Network-Layer Enforcement

ACLs are distributed to every client via the network map. Each device enforces ACLs locally by configuring its WireGuard allowed-IPs. Denied traffic never gets a WireGuard peer entry, so it's silently dropped at the network layer — not at the application layer.

Auth
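Conceptually, each device derives its visible peer set from the rules; a simplified model with assumed rule and peer shapes:

```python
def allowed_ips_for(user, acls, peers):
    """Default deny: a peer's IP enters this device's WireGuard config
    only if some rule allows user -> that peer. Shapes are simplified."""
    visible = set()
    for rule in acls:
        if user not in rule["src"] and "*" not in rule["src"]:
            continue
        for peer in peers:
            if peer["tag"] in rule["dst"] or "*" in rule["dst"]:
                visible.add(peer["ip"])
    return visible
```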
Default Deny Model

Tailscale ACLs follow a default-deny model: if no rule explicitly allows a connection, it's blocked. This is the opposite of traditional VPNs that give full LAN access once connected. New devices join the tailnet with zero access until ACL rules grant specific permissions.

08

Services & Features

Beyond the core mesh VPN, Tailscale provides a suite of networking services that solve common infrastructure problems: file sharing, subnet routing, public ingress, SSH, and exit nodes.

Taildrop (File Sharing)

Taildrop sends files directly between devices over the encrypted mesh. No cloud upload — files travel peer-to-peer via WireGuard tunnels. Works across platforms (macOS, Windows, Linux, iOS, Android). Large files transfer at full link speed since it's a direct connection.

Service

Subnet Routers

A subnet router advertises a physical network's CIDR range (e.g., 192.168.1.0/24) to the tailnet. Other Tailscale clients can then reach devices on that subnet without installing Tailscale on each one — the subnet router acts as a gateway, forwarding packets bidirectionally.

Service

Exit Nodes

Any Tailscale device can be an exit node — routing all of another device's internet traffic through it. This is a full VPN mode: useful for accessing region-locked content or securing traffic on untrusted WiFi. Traffic exits to the internet from the exit node's location.

Service

Tailscale Funnel

Funnel exposes a local service to the public internet via a *.ts.net URL. Traffic enters Tailscale's edge, gets routed to your device's WireGuard tunnel, and arrives at the local port. No port forwarding, no static IP needed. Funnel provisions a public DNS name and TLS certificate automatically.

Service

Tailscale Serve

Serve is Funnel's private counterpart: it exposes local services to your tailnet (not the public internet). It acts as a reverse proxy, accepting HTTPS on port 443 and forwarding to a local port. It can proxy to local HTTP, HTTPS, or TCP services, or serve static files and directories.

Service

Tailscale SSH

Tailscale SSH replaces traditional SSH key management. The Tailscale client acts as an SSH server, authenticating users via their Tailscale identity (no SSH keys needed). Access is controlled by ACL SSH rules. Supports session recording and check mode (requiring periodic re-authentication).

Service
Funnel ingress flow: public internet (HTTPS request) → Tailscale edge (routes by TLS SNI without decrypting) → WireGuard tunnel (encrypted) → your device (localhost:port).
09

Key Management

Tailscale uses multiple key types to separate concerns: machine identity, node authorization, session encryption, and control plane trust. Understanding the key hierarchy is essential to understanding how Tailscale's security model works.

Machine Key — Identifies the physical machine to the coordination server. Generated on first run, stored on disk. Used to encrypt control plane traffic. Lifetime: permanent (per device).
Node Key — The WireGuard public key for this device, rotated periodically. This is what appears in the network map and is used by peers for encryption. Lifetime: rotated (key expiry).
Session Key — Ephemeral symmetric keys derived during each WireGuard handshake. Perfect forward secrecy: compromising a node key doesn't reveal past sessions. Lifetime: ~2 minutes per handshake.
Auth Key — Pre-authentication keys for automated device enrollment. Used by CI/CD, Docker containers, and Kubernetes operators to join the tailnet without interactive login. Lifetime: configurable (single-use or reusable).
API Key — Personal or OAuth API tokens for the Tailscale management API. Used to programmatically manage devices, ACLs, DNS, and routes. Lifetime: configurable expiry.

Key Expiry & Rotation

Node keys have configurable expiry (default: 180 days). When a key expires, the device must re-authenticate with the coordination server. This ensures compromised devices don't retain access indefinitely. Admins can disable key expiry for servers.

Auth
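The expiry check itself is simple date arithmetic; a sketch assuming the 180-day default:

```python
from datetime import datetime, timedelta, timezone

DEFAULT_EXPIRY = timedelta(days=180)  # Tailscale's default node-key lifetime

def needs_reauth(created, now, expiry_disabled=False):
    """True once the node key has aged out; admins may disable expiry
    (e.g. for servers), in which case the key never forces re-auth."""
    if expiry_disabled:
        return False
    return now - created >= DEFAULT_EXPIRY
```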

Control Plane Encryption

Communication between the client and coordination server is encrypted twice: once at the TLS transport layer, and again at the application layer using the machine key (current clients use a Noise-based control protocol; earlier versions used NaCl box). This "double encryption" protects against TLS MITM and ensures only the device can read control messages.

Auth

Forward Secrecy

WireGuard provides perfect forward secrecy through ephemeral Diffie-Hellman exchanges during each handshake (every ~2 minutes for active peers). Past traffic cannot be decrypted even if the node's long-term private key is later compromised.

Data Plane
10

Headscale — Open Source Alternative

Headscale is an open-source, self-hosted implementation of the Tailscale coordination server. It allows you to run your own control plane while using the official Tailscale clients. Everything stays on your infrastructure — no dependency on Tailscale's SaaS.

Self-Hosted Control Plane

Headscale reimplements the coordination server API in Go. It handles device registration, key distribution, network map generation, and node management. Runs as a single binary with SQLite or PostgreSQL storage. Supports the same client protocol as Tailscale's servers.

Open Source

Compatible Clients

Headscale works with official Tailscale clients — you just point them at your Headscale server's URL instead of Tailscale's. This is possible because the client is open-source (BSD-3). All core features work: NAT traversal, DERP, MagicDNS, subnet routes.

Open Source

Feature Differences

Headscale covers core functionality (mesh networking, ACLs, DNS, subnet routing) but lacks some Tailscale SaaS features: Funnel, admin console UI (community web UIs exist), SSO integration (uses OIDC or pre-auth keys), and automatic DERP map updates.

Open Source

Data Sovereignty

The primary motivation for Headscale is full control over the control plane metadata: device keys, network topology, ACL policies, and DNS configuration never leave your infrastructure. Ideal for airgapped environments, regulated industries, and privacy-conscious deployments.

Open Source
Community Ecosystem

Headscale has a growing ecosystem: headscale-ui (community web admin), headscale-admin (another web UI), integration guides for Docker, NixOS, and Kubernetes. The project is not affiliated with Tailscale Inc. but benefits from the open-source client protocol being well-documented.
