[Network diagram — all nodes on 172.12.10.0/24; corosync runs over the shared LAN with no dedicated cluster link]
[Active Directory diagram — cranberrylabs.net forest root domain with a single DC (violet-dc); VAdministration and VUsers OUs hold user accounts, with matching VAdministration and VUsers security groups under a VGroups container]
Projects · 5 entries
▶Bright/Pale VM Architecture — Media Isolation
Docker · WireGuard · Proxmox · Security
Overview
Designed and implemented a two-VM architecture on the Petal node to isolate download activity from media serving. violet-bright handles clean serving (Jellyfin, Caddy, Sonarr library management) while violet-pale handles all download-side operations (qBittorrent, Sonarr indexing, Jackett) with all egress routed through a Mullvad WireGuard tunnel. A kill switch ensures that if the VPN tunnel drops, violet-pale loses internet access entirely rather than falling back to clearnet.
What I Learned
Learned how to reason about network-level isolation as a security property rather than just a convenience. Routing all egress through a VPN with a kill switch is straightforward in WireGuard config, but it requires thinking carefully about what happens on failure. The project also reinforced the value of separating concerns at the VM boundary: if violet-pale were ever compromised or misbehaving, violet-bright and everything else on the network would be unaffected.
Configuration / Code
Mullvad WireGuard client config with kill switch — violet-pale
[Interface]
Address = 10.x.x.x/32
PrivateKey = <private_key>
DNS = 10.x.x.x
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
[Peer]
PublicKey = <mullvad_peer_key>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <mullvad_endpoint>:51820
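A quick way to confirm the kill switch actually blocks clearnet fallback is to try to bypass the tunnel while the rules are active. These commands are illustrative (the physical interface name eth0 is an assumption); am.i.mullvad.net is Mullvad's own connection-check service:

```shell
# Confirm the tunnel is up and has a recent handshake
wg show

# With the tunnel up, this should report a Mullvad exit
curl https://am.i.mullvad.net/connected

# Attempt to bypass the tunnel via the physical NIC — the PostUp
# REJECT rule should block this, since the packet is not marked
# with the WireGuard fwmark and is not leaving via the tunnel
curl --max-time 5 --interface eth0 https://am.i.mullvad.net/connected
```

Note that `wg-quick down` is not a valid test of the kill switch: PreDown removes the REJECT rule before teardown, so traffic would legitimately flow over clearnet afterwards. The kill switch protects against tunnel *failure*, not deliberate shutdown.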
▶Three-Node Proxmox VE Cluster Build & Recovery
Proxmox · HA · Cluster · Debian
Overview
Built a three-node Proxmox VE cluster (strand, petal, filament) running Debian 12 as the base OS. The cluster suffered a catastrophic failure caused by joining a node while HA was active, which split the quorum and took down the entire environment. Performed a full cluster rebuild from scratch, migrating all VMs and reconfiguring cluster networking, storage, and HA settings correctly.
What I Learned
Learned the hard way that Proxmox cluster operations require quorum to be in a known-good state first. HA and corosync are sensitive to the order of operations. The rebuild reinforced structured documentation habits: having a record of each VM config, network assignments, and storage layout made recovery significantly faster. Also migrated VMs from Ubuntu interim releases to Debian stable during the rebuild for better long-term support.
Configuration / Code
Check cluster quorum and node status
pvecm status
pvecm nodes
Force quorum on surviving node after split (emergency only)
# Only run if quorum is lost and you are sure of the state
pvecm expected 1
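Given the lesson above, a safer join procedure freezes HA before touching cluster membership. This is a hedged sketch using standard pvecm/ha-manager tooling (the member IP is a placeholder; exact HA-freeze steps can vary by PVE version):

```shell
# On an existing node: confirm the cluster is quorate and healthy
pvecm status

# Check that no HA resources are mid-migration before changing membership
ha-manager status

# Stop the HA services so a transient quorum change cannot trigger
# fencing (run on every node; restart them once the join completes)
systemctl stop pve-ha-lrm pve-ha-crm

# On the new node: join via an existing member's address
pvecm add 172.12.10.X

# Verify membership, then bring HA back
pvecm nodes
systemctl start pve-ha-crm pve-ha-lrm
```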
▶Self-Hosted WireGuard Remote Access VPN
WireGuard · Networking · VPN · Port Forwarding
Overview
Deployed a dedicated WireGuard VPN server on violet-vpn (172.12.10.7) to allow secure remote access to the homelab from outside the LAN. Configured split tunneling so that LAN-destined traffic routes through the tunnel while internet traffic exits locally on the remote client. Port forwarded the WireGuard UDP port at the router and integrated with the Cloudflare DDNS setup so the endpoint address stays current despite a dynamic public IP.
What I Learned
Gained practical experience with WireGuard key management, peer configuration, and the difference between full-tunnel and split-tunnel setups. Split tunneling requires careful AllowedIPs configuration on the client: only the LAN subnet belongs in AllowedIPs, rather than 0.0.0.0/0. Also reinforced how DDNS and port forwarding interact: the endpoint in the WireGuard peer config uses the dynamic DNS hostname, not a raw IP.
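The split-tunnel point above can be sketched as a client-side config. Key material and the hostname are placeholders; the LAN subnet matches the 172.12.10.0/24 network described elsewhere on this page:

```ini
[Interface]
Address = 10.x.x.x/32
PrivateKey = <client_private_key>

[Peer]
PublicKey = <server_public_key>
# Split tunnel: only LAN-bound traffic enters the tunnel;
# all other traffic exits locally on the remote client.
AllowedIPs = 172.12.10.0/24
# A full tunnel would instead use: AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <ddns_hostname>:51820
```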
Configuration / Code
Server config — violet-vpn (/etc/wireguard/wg0.conf)
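The config block itself did not survive the page export, so this is a hedged reconstruction of what a wg0.conf for this setup typically looks like; the tunnel subnet, LAN interface name, and NAT rules are assumptions:

```ini
[Interface]
Address = 10.x.x.x/24
ListenPort = 51820
PrivateKey = <server_private_key>
# NAT tunnel traffic out the LAN interface so VPN clients can
# reach 172.12.10.0/24; requires net.ipv4.ip_forward=1 on the host
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
# One [Peer] section per remote client
PublicKey = <client_public_key>
AllowedIPs = 10.x.x.x/32
```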
▶Caddy Reverse Proxy with SSL Termination & Fail2Ban
Caddy · SSL · Fail2Ban · Reverse Proxy
Overview
Configured Caddy as a reverse proxy on violet-http to terminate SSL for the portfolio website and restrict admin access at the proxy level. SSL certificates are issued via Let's Encrypt using the Cloudflare DNS challenge, which avoids needing port 80 open for ACME validation. Added Fail2Ban to monitor Caddy logs and automatically ban IPs that trigger repeated auth failures.
What I Learned
Caddy's Caddyfile syntax makes reverse proxy config much simpler than equivalent Apache or Nginx config. Combining Caddy-level access restrictions with Fail2Ban creates two independent layers of protection that do not rely on the application itself.
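A minimal sketch of the Fail2Ban side of that second layer, assuming Caddy writes access logs to /var/log/caddy/access.log and a custom filter exists that matches repeated 401/403 responses (the filter name, paths, and thresholds are illustrative):

```ini
# /etc/fail2ban/jail.d/caddy.local (illustrative)
[caddy-auth]
enabled  = true
port     = http,https
filter   = caddy-auth
logpath  = /var/log/caddy/access.log
maxretry = 5
findtime = 10m
bantime  = 1h
```

A matching filter definition (a failregex for auth-failure log lines) would live in /etc/fail2ban/filter.d/caddy-auth.conf.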
Configuration / Code
Caddyfile — reverse proxy with LAN/VPN access restriction
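The Caddyfile did not survive the export; below is a hedged sketch consistent with the description above. The subdomain, VPN subnet, admin path, and upstream address are assumptions, and the DNS challenge requires the Cloudflare DNS plugin for Caddy:

```caddyfile
example.cranberrylabs.net {
	# Let's Encrypt via the Cloudflare DNS challenge — no port 80 needed
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}

	# Restrict admin routes to LAN and VPN subnets at the proxy layer
	@admin {
		path /admin*
		not remote_ip 172.12.10.0/24 10.x.x.x/24
	}
	respond @admin 403

	reverse_proxy 172.12.10.X:8080
}
```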
▶Active Directory Domain Services — violet-dc
ADDS · Windows Server · DNS · Group Policy
Overview
Deployed a Windows Server VM (violet-dc) as a domain controller for the cranberrylabs.net domain. Configured ADDS with a custom OU structure separating administrative accounts from standard users, and security groups for RBAC. All LAN hosts point to violet-dc for internal DNS resolution. Applied Group Policy for password policy and account lockout thresholds.
What I Learned
Designing OU structure before deploying ADDS saves significant rework. Separating user accounts from groups into distinct OUs makes GPO scoping clean and predictable. Running a separate admin account from a daily-use account enforces least privilege in practice, not just on paper. Internal DNS through the DC means all hostnames resolve correctly on the LAN without needing hosts file entries on every machine.
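The OU and group layout described above could be provisioned with the ActiveDirectory PowerShell module along these lines. Names mirror the structure shown at the top of the page; this is a sketch, not the exact commands used:

```powershell
# Requires the ActiveDirectory module (RSAT, or run directly on the DC)
Import-Module ActiveDirectory

$base = "DC=cranberrylabs,DC=net"

# OUs separating admin accounts from standard users
New-ADOrganizationalUnit -Name "VOUs" -Path $base
New-ADOrganizationalUnit -Name "VAdministration" -Path "OU=VOUs,$base"
New-ADOrganizationalUnit -Name "VUsers" -Path "OU=VOUs,$base"

# Security groups for RBAC, kept in their own container so
# GPO scoping on the user OUs stays clean
New-ADOrganizationalUnit -Name "VGroups" -Path $base
New-ADGroup -Name "VAdministration" -GroupScope Global -GroupCategory Security -Path "OU=VGroups,$base"
New-ADGroup -Name "VUsers" -GroupScope Global -GroupCategory Security -Path "OU=VGroups,$base"
```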